Lawrence R. Griffing
Biology Department
Texas A&M University
Texas, United States
Copyright © 2023 by John Wiley & Sons, Inc. All rights reserved.
No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical,
photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act,
without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright
Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, (978) 750–8400, fax (978) 750–4470, or on the web at www.copyright.com.
Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street,
Hoboken, NJ 07030, (201) 748–6011, fax (201) 748–6008, or online at http://www.wiley.com/go/permission.
Trademarks: Wiley and the Wiley logo are trademarks or registered trademarks of John Wiley & Sons, Inc. and/or its affiliates in the United
States and other countries and may not be used without written permission. All other trademarks are the property of their respective owners.
John Wiley & Sons, Inc. is not associated with any product or vendor mentioned in this book.
Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no
representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied
warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written
sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where
appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited
to special, incidental, consequential, or other damages. Further, readers should be aware that websites listed in this work may have changed or
disappeared between when this work was written and when it is read. Neither the publisher nor authors shall be liable for any loss of profit or
any other commercial damages, including but not limited to special, incidental, consequential, or other damages.
For general information on our other products and services or for technical support, please contact our Customer Care Department within the
United States at (800) 762–2974, outside the United States at (317) 572–3993 or fax (317) 572–4002.
Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic formats.
For more information about Wiley products, visit our web site at www.wiley.com.
A catalogue record for this book is available from the Library of Congress
Set in 9.5/12.5pt STIXTwoText by Integra Software Services Pvt. Ltd., Pondicherry, India
Contents
Preface xii
Acknowledgments xiv
About the Companion Website xv
6.3 Scientific-Grade Flatbed Scanners Can Detect Chemiluminescence, Fluorescence, and Phosphorescence 114
6.4 Scientific-Grade Scanning Systems Often Use Photomultiplier Tubes and Avalanche Photodiodes as the
Camera 118
6.5 X-ray Planar Radiography Uses Both Scanning and Camera Technologies 119
6.6 Medical Computed Tomography Scans Rotate the X-ray Source and Sensor in a Helical Fashion
Around the Body 121
6.7 Micro-CT and Nano-CT Scanners Use Both Hard and Soft X-Rays and Can Resolve Cellular Features 123
6.8 Macro Laser Scanners Acquire Three-Dimensional Images by Time-of-Flight or Structured Light 125
6.9 Laser Scanning and Spinning Disks Generate Images for Confocal Scanning Microscopy 126
6.10 Electron Beam Scanning Generates Images for Scanning Electron Microscopy 128
6.11 Atomic Force Microscopy Scans a Force-Sensing Probe Across the Sample 128
9.3 The Light Emission and Contrast of Small Objects Limits Their Visibility 194
9.4 Use the Image Histogram to Adjust the Trade-off Between Depth of Field and Motion Blur 194
9.5 Use the Camera’s Light Meter to Detect Intrascene Dynamic Range and Set Exposure Compensation 196
9.6 Light Sources Produce a Variety of Colors and Intensities That Determine the Quality of the Illumination 197
9.7 Lasers and LEDs Provide Lighting with Specific Color and High Intensity 199
9.8 Change Light Values with Absorption, Reflectance, Interference, and Polarizing Filters 200
9.9 Köhler-Illuminated Microscopes Produce Conjugate Planes of Collimated Light from the Source and
Specimen 203
9.10 Reflectors, Diffusers, and Filters Control Lighting in Macro-imaging 207
15.8 Faraday Induction Produces the Magnetic Resonance Imaging Signal (in Volts) with Coils in the x-y Plane 343
15.9 Magnetic Gradients and Selective Radiofrequency Frequencies Generate Slices in the x, y, and z Directions 343
15.10 Acquiring a Gradient Echo Image Is a Highly Repetitive Process, Getting Information Independently
in the x, y, and z Dimensions 344
15.11 Fast Low-Angle Shot Gradient Echo Imaging Speeds Up Imaging for T1-Weighted Images 346
15.12 The Spin-Echo Image Compensates for Magnetic Heterogeneities in the Tissue in T2-Weighted Images 346
15.13 Three-Dimensional Imaging Sequences Produce Higher Axial Resolution 347
15.14 Echo Planar Imaging Is a Fast Two-Dimensional Imaging Modality But Has Limited Resolving Power 347
15.15 Magnetic Resonance Angiography Analyzes Blood Velocity 347
15.16 Diffusion Tensor Imaging Visualizes and Compares Directional (Anisotropic) Diffusion Coefficients
in a Tissue 349
15.17 Functional Magnetic Resonance Imaging Provides a Map of Brain Activity 350
15.18 Magnetic Resonance Imaging Contrast Agents Detect Small Lesions That Are Otherwise Difficult to Detect 351
17.11 The Confocal Microscope Has Higher Axial and Lateral Resolving Power Than the Widefield Epi-illuminated
Microscope, Some Designs Reaching Superresolution 415
17.12 Multiphoton Microscopy and Other Forms of Non-linear Optics Create Conditions for Near-Simultaneous
Excitation of Fluorophores with Two or More Photons 419
18 Extending the Resolving Power of the Light Microscope in Time and Space 427
18.1 Superresolution Microscopy Extends the Resolving Power of the Light Microscope 427
18.2 Fluorescence Lifetime Imaging Uses a Temporal Resolving Power that Extends to Gigahertz Frequencies
(Nanosecond Resolution) 428
18.3 Spatial Resolving Power Extends Past the Diffraction Limit of Light 429
18.4 Light Sheet Fluorescence Microscopy Achieves Fast Acquisition Times and Low Photon Dose 432
18.5 Lattice Light Sheets Increase Axial Resolving Power 435
18.6 Total Internal Reflection Microscopy and Glancing Incident Microscopy Produce a Thin Sheet of Excitation
Energy Near the Coverslip 437
18.7 Structured Illumination Microscopy Improves Resolution with Harmonic Patterns That Reveal Higher Spatial
Frequencies 440
18.8 Stimulated Emission Depletion and Reversible Saturable Optical Linear Fluorescence Transitions
Superresolution Approaches Use Reversibly Saturable Fluorescence to Reduce the Size
of the Illumination Spot 447
18.9 Single-Molecule Excitation Microscopies, Photo-Activated Localization Microscopy, and Stochastic Optical
Reconstruction Microscopy Also Rely on Switchable Fluorophores 452
18.10 MINFLUX Combines Single-Molecule Localization with Structured Illumination to Get Resolution
below 10 nm 455
Index 497
Preface
Imaging Life Has Three Sections: Image Acquisition, Image Analysis, and Imaging
Modalities
The first section, Image Acquisition, lays the foundation for imaging by extending prior knowledge about image struc-
ture (Chapter 1), image contrast (Chapter 2), and proper image representation (Chapter 3). The chapters on imaging by eye
(Chapter 4), by camera (Chapter 5), and by scanners (Chapter 6) relate to prior knowledge of sight, digital (e.g., cell phone)
cameras, and flatbed scanners.
The second section, Image Analysis, starts with how to select features in an image and measure them (Chapter 7). With
this knowledge comes the realization that there are limits to image measurement set by the optics of the system (Chapter 8),
a system that includes the sample and the light- and radiation-gathering properties of the instrumentation. For light-based
imaging, the nature of the lighting and its ability to generate contrast (Chapter 9) optimize the image data acquired for
analysis. A wide variety of image filters (Chapter 10) that operate in real and reciprocal space make it possible to display or
measure large amounts of data or data with low signal. Spatial measurement in two dimensions (Chapter 11), measurement
in time (Chapter 12), and processing and measurement in three dimensions (Chapter 13) cover many of the tenets of image
analysis at the macro and micro levels.
The third section, Imaging Modalities, builds on some of the modalities necessarily introduced in previous chapters,
such as computed tomography (CT) scanning, basic microscopy, and camera optics. Many students interested in biological
imaging are particularly interested in biomedical modalities. Unfortunately, most classes in biomedical imaging are taught not in standard biology curricula but in biomedical engineering. Likewise, students in biomedical engineering often get less exposure to microscopy-related modalities. This section brings the two together.
The book does not use examples from materials science, although some materials science students may find it useful.
This book can stand alone as a text for a lecture course on biological imaging intended for junior or senior undergraduates
or first- and second-year graduate students in life sciences. The annotated references section at the end of each chapter
provides the URLs for supplementary videos available from iBiology.com and other recommended sites. In addition, the
recommended text-based internet, print, and electronic resources, such as microscopyu.com, provide expert and in-depth
materials on digital imaging and light microscopy. However, these resources focus on particular imaging modalities and
exclude some (e.g., single-lens reflex cameras, ultrasound, CT scanning, magnetic resonance imaging [MRI], structure
from motion). The objective of this book is to serve as a solid foundation in imaging, emphasizing the shared concepts of
these imaging approaches. In this vein, the book does not attempt to be encyclopedic but instead provides a gateway to the
ongoing advances in biological imaging.
The author’s biology course builds on this text non-linearly, with weekly computer sessions. Every third class session
covers practical image processing, analysis, and presentations with still, video, and three-dimensional (3D) images.
Although these computer labs may introduce Adobe Photoshop and Illustrator and MATLAB and Simulink (available on
our university computers), the class primarily uses open-source software (i.e., GIMP2, Inkscape, FIJI [FIJI Is Just ImageJ],
Icy, and Blender). The course emphasizes open-source imaging. Many open-source software packages use published and
archived algorithms. This is better for science because it makes image processing more reproducible. Open-source packages are also free, or at least cheaper, for students and university labs.
The images the students acquire on their own with their cell phones, in the lab (if taught as a lab course), or from online
scientific databases (e.g., Morphosource.org) are the subjects of these tutorials. The initial tutorials simply introduce basic
features of the software that are fun, such as 3D model reconstruction in FIJI of CT scans from Morphosource, and infor-
mative, such as how to control image size, resolving power, and compression for analysis and publication. Although simple,
the tutorials address major pedagogical challenges caused by the casual, uninformed use of digital images. The tutorials
combine the opportunity to judge and analyze images acquired by the students with the opportunity to learn about the
software. They are the basis for weekly assignments. Later tutorials provide instruction on video and 3D editing, as well as
more advanced image processing (filters and deconvolution) and measurement. An important learning outcome for the
course is that the students can use this software to rigorously analyze and manage imaging data, as well as generate publi-
cation-quality images, videos, and presentations.
This book can also serve as a text for a laboratory course, along with an accompanying lab manual that contains protocols
for experiments and instructions for the operation of particular instruments. The current lab manual is available on request,
but it has instructions for equipment at Texas A&M University. Besides cell phones, digital single-lens reflex cameras, flat-
bed scanners, and stereo-microscopes, the first quarter of the lab includes brightfield transmitted light microscopy and
fluorescence microscopy. Assigning Chapter 16 on transmitted light microscopy and Chapter 17 on epi-illuminated light
microscopy early in the course supplements the lab manual information and introduces the students to microscopy before
covering it during class time. Almost all the students have worked with microscopes before, but many have not captured
images that require better set-up (e.g., Köhler illumination with a sub-stage condenser) and a more thorough under-
standing of image acquisition and lighting.
The lab course involves students using imaging instrumentation. All the students have access to cameras on their cell
phones, and most labs have access to brightfield microscopy, perhaps with various contrast-generating optical configura-
tions (darkfield, phase contrast, differential interference contrast). Access to fluorescence microscopy is also important.
One of the anticipated learning outcomes for the lab course is that students can troubleshoot optical systems. For this
reason, it is important that they take apart, clean, and correctly reassemble and align some optical instruments for cali-
brated image acquisition. With this knowledge, they can become responsible users of more expensive, multi-user equip-
ment. Some might even learn how to build their own!
Access to CT scanning, confocal microscopy, multi-photon microscopy, ultrasonography, MRI, light sheet microscopy,
superresolution light microscopy, and electron microscopy will vary by institution. Students can use remote learning to
view demonstrations of how to set up and use them. Many of these instruments are connected to the internet. Zoom (or other live video) presentations let the entire class watch the operator at work and are therefore preferable for larger classes that need to see the operation of a machine with restricted access. Several instrument companies provide video
demonstrations of the use of their instruments. Live video is more informative, particularly if the students read about the
instruments first with a distilled set of instrument-operating instructions, so they can then ask questions of the operators.
Example images from the tutorials for most of these modalities should be available for student analysis.
Acknowledgments
Peter Hepler and Paul Green taught a light and electron microscopy course at Stanford University that introduced me to
the topic while I was a graduate student of Peter Ray. After working in the lab of Ralph Quatrano, I acquired additional
expertise in light and electron microscopy as a post-doc with Larry Fowke and Fred Constabel at the University of
Saskatchewan and collaborating with Hilton Mollenhauer at Texas A&M University. They were all great mentors.
I created a light and electron microscopy course for upper-level undergraduates with Kate VandenBosch, who had taken
a later version of Hepler’s course at the University of Massachusetts. However, with the widespread adoption of digital
imaging, I took the course in a different direction. The goals were to introduce students to digital image acquisition,
processing, and analysis while they learned about the diverse modalities of digital imaging. The National Science
Foundation and the Biology Department at Texas A&M University provided financial support for the course. No single
textbook existing for such a course, I decided to write one. Texas A&M University graciously provided one semester of
development leave for its completion.
Martin Steer at University College Dublin and Chris Hawes at Oxford Brookes University, Oxford, read and made con-
structive comments on sections of the first half of the book, as did Kate VandenBosch at the University of Wisconsin. I
thank them for their help, friendship, and encouragement.
I give my loving thanks to my children. Alexander Griffing contributed a much-needed perspective on all of the chapters,
extensively copy edited the text, and provided commentary and corrections on the math. Daniel Griffing also provided
helpful suggestions. Beth Russell was a constant source of enthusiasm.
My collaborators, Holly Gibbs and Alvin Yeh at Texas A&M University, read several chapters and made comments and
contributions that were useful and informative. Jennifer Lippincott-Schwartz, senior group leader and head of Janelia’s
four-dimensional cellular physiology program, generously provided comment and insight on the chapters on temporal
operations and superresolution microscopy. I also wish to thank the students in my lab who served as teaching assis-
tants and provided enthusiastic and welcome feedback, particularly Kalli Landua, Krishna Kumar, and Sara Maynard.
The editors at Wiley, particularly Rosie Hayden and Julia Squarr, provided help and encouragement. Any errors, of
course, are mine.
The person most responsible for the completion of this book is my wife, Margaret Ezell, who motivates and enlightens
me. In addition to her expertise and authorship on early modern literary history, including science, she is an accomplished
photographer. Imaging life is one of our mutual joys. I dedicate this book to her, with love and affection.
www.wiley.com/go/griffing/imaginglife
Section 1
Image Acquisition
Images have structure. They have a certain arrangement of small and large objects. The large objects are often compos-
ites of small objects. The Roman mosaic from the House VIII.1.16 in Pompeii, the House of Five Floors, has incredible
structure (Figure 1.1). It has lifelike images of a bird on a reef, fishes, an electric eel, a shrimp, a squid, an octopus, and
a rock lobster. It illustrates Aristotle’s natural history account of a struggle between a rock lobster and an octopus. In
fact, the species are identifiable and are common to certain bays along the Italian coast, a remarkable example of early biological imaging.
It is a mosaic of uniformly sized square colored tiles. Each tile is the smallest picture element, or pixel, of the mosaic. At
a certain appropriate viewing distance from the mosaic, the individual pixels cannot be distinguished, or resolved, and
what is a combination of individual tiles looks solid or continuous, taking the form of a fish, or lobster, or octopus. When
viewed closer than this distance, the individual tiles or pixels become apparent (see Figure 1.1); the image is pixelated.
Beyond the viewing distance set by the height of a person standing on the mosaic, pixelation in this scene was probably further reduced by the shallow pool of water that covered it in the House of Five Floors.
The order in which the image elements come together, or render, also describes the image structure. This mosaic was
probably constructed by tiling the different objects in the scene, then surrounding the objects with a single layer of tiles of
the black background (Figure 1.2), and finally filling in the background with parallel rows of black tiles. This form of image
construction is object-order rendering. The background rendering follows the rendering of the objects. Vector graphic
images use object-ordered rendering. Vector graphics define the object mathematically with a set of vectors and render it
in a scene, with the background and other objects rendered separately.
Vector graphics are very useful because any number of pixels can represent the mathematically defined objects. This is why programs that use vector graphics for fonts and illustrated objects, such as Adobe Illustrator, are so useful: the user chooses the number (and, therefore, the size) of the pixels that represent the image, depending on the type of media that will display it. This number can be set so that the fonts and objects never appear pixelated. Vector graphics are resolution independent; an object scaled to any size loses no sharpness to pixelation.
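The resolution independence of vector graphics can be sketched in a few lines of Python. A circle is defined mathematically (a stand-in for a vector object; the function name is hypothetical, not from any real graphics library) and rasterized at whatever pixel count the display medium needs:

```python
def rasterize_circle(cx, cy, r, n_pixels):
    """Render a mathematically defined circle (a minimal 'vector object')
    onto an n_pixels x n_pixels grid. The scene spans 0..1 in each axis,
    so the same definition can target any output resolution."""
    grid = []
    for row in range(n_pixels):
        line = []
        for col in range(n_pixels):
            # Map each pixel center into scene coordinates.
            x = (col + 0.5) / n_pixels
            y = (row + 0.5) / n_pixels
            line.append(1 if (x - cx) ** 2 + (y - cy) ** 2 <= r ** 2 else 0)
        grid.append(line)
    return grid

# The same circle rendered for a small thumbnail and a large display:
thumb = rasterize_circle(0.5, 0.5, 0.4, 8)
large = rasterize_circle(0.5, 0.5, 0.4, 512)
```

Both grids depict the same object; only the pixel budget differs, so the large rendering never inherits the thumbnail's pixelation.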
Another way to make the mosaic would be to start at the top left and tile in rows. One row near the top of the mosaic contains parts of three fishes, a shrimp, and the background. This form of image structure is image-order rendering. Many scanning systems construct images using this form of rendering. A horizontal scan line is a raster. Almost all computer displays and televisions are raster based. They display a rasterized grid of data, and because the data are in the form of bits (see Section 2.2), the image is a bitmap image. As described later, bitmap graphics are resolution dependent; that is, as they scale larger, the pixels become larger, and the images become pixelated.
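Bitmap resolution dependence can be shown the same way: a minimal nearest-neighbor upscaler (a sketch, not any particular program's resampling code) only turns each pixel into a bigger block; no new detail appears.

```python
def upscale_nearest(bitmap, factor):
    """Enlarge a 2D bitmap by an integer factor using nearest-neighbor
    sampling: every source pixel simply becomes a factor x factor block."""
    out = []
    for row in bitmap:
        scaled_row = []
        for value in row:
            scaled_row.extend([value] * factor)          # repeat horizontally
        out.extend([list(scaled_row) for _ in range(factor)])  # repeat vertically
    return out

tiny = [[0, 1],
        [1, 0]]
big = upscale_nearest(tiny, 3)  # 6 x 6; each pixel is now a 3 x 3 block
```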
Even though the pixel is the smallest discrete unit of the picture, it still has structure. The fundamental unit of visualization is the cell (Figure 1.3). A pixel is a two-dimensional (2D) cell described by an ordered list of four points (its corners or vertices), and geometric constraints make it square. In three-dimensional (3D) images, the smallest discrete unit of the volume is the voxel. A voxel is the 3D cell described by an ordered list of eight points (its vertices), and geometric constraints make it a cube.
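This cell description is easy to make concrete. The sketch below (generic Python, not the data structure of any specific visualization package) stores a pixel and a voxel as ordered lists of their corner vertices:

```python
def pixel_cell(x, y, size=1.0):
    """A pixel as a 2D cell: an ordered list of its 4 corner vertices."""
    return [(x, y), (x + size, y), (x + size, y + size), (x, y + size)]

def voxel_cell(x, y, z, size=1.0):
    """A voxel as a 3D cell: an ordered list of its 8 corner vertices."""
    return [(x + dx, y + dy, z + dz)
            for dz in (0.0, size)
            for dy in (0.0, size)
            for dx in (0.0, size)]

square = pixel_cell(0, 0)   # 4 vertices, geometrically a square
cube = voxel_cell(0, 0, 0)  # 8 vertices, geometrically a cube
```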
Imaging Life: Image Acquisition and Analysis in Biology and Medicine, First Edition. Lawrence R. Griffing.
© 2023 John Wiley & Sons, Inc. Published 2023 by John Wiley & Sons, Inc.
Companion Website: www.wiley.com/go/griffing/imaginglife
1 Image Structure and Pixels
1.2 The Resolving Power of a Camera or Display Is the Spatial Frequency of Its Pixels
In biological imaging, we use powerful lenses to resolve details of faraway or very small objects. The round plant protoplasts in Figure 1.5 are invisible to the naked eye. To get an image of them, we need to use lenses that collect a lot of light
from a very small area and magnify the image onto the chip of a camera. Not only is the power of the lens important but
also the power of the camera. Naively, we might think that a powerful camera will have more pixels (e.g., 16 megapixels
[MP]) on its chip than a less powerful one (e.g., 4 MP). Not necessarily! The 4-MP camera could actually be more powerful
(require less magnification) if the pixels are smaller. The size of the chip and the pixels in the chip matter.
The power of a lens or camera chip is its resolving power, the number of pixels per unit length (assuming a square pixel).
It is not the number of total pixels but the number of pixels per unit space, the spatial frequency of pixels. For example, the
eye on the bird in the mosaic in Figure 1.1 is only 1 pixel (one tile) big. There is no detail to it. Adding more tiles to give the
eye some detail requires smaller tiles, that is, the number of tiles within that space of the eye increases – the spatial frequency
of pixels has to increase. Just adding more tiles of the original size will do no good at all. Common measures of spatial fre-
quency and resolving power are pixels per inch (ppi) or lines per millimeter (lpm – used in printing).
Another way to think about resolving power is to take its inverse, the inches or millimeters per pixel. Pixel size, the
inverse of the resolving power, is the image resolution. One bright pixel between two dark pixels resolves the two dark
pixels. Resolution is the minimum separation distance for distinguishing two objects, dmin. Resolving power is 1/dmin.
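These definitions are just reciprocals with a unit conversion. A minimal helper (illustrative names, assuming square pixels):

```python
MM_PER_INCH = 25.4

def resolution_mm(ppi):
    """Pixel size (mm per pixel), the inverse of resolving power."""
    return MM_PER_INCH / ppi

def resolving_power_ppi(pixel_size_mm):
    """Resolving power (pixels per inch) from pixel size: 1/d_min."""
    return MM_PER_INCH / pixel_size_mm

# A 0.1-mm pixel corresponds to 10 pixels/mm, i.e., about 254 ppi:
ppi = resolving_power_ppi(0.1)
```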
Note: Usage of the terms resolving power and resolution is not universal. For example, Adobe Photoshop and Gimp use res-
olution to refer to the spatial frequency of the image. Using resolving power to describe spatial frequencies facilitates the
discussion of spatial frequencies later.
As indicated by the example of the bird eye in the mosaic and as shown in Figure 1.5, the resolving power is as impor-
tant in image display as it is in detecting the small features of the object. To eliminate pixelation detected by eye, the
resolving power of the eye should be less than the pixel spatial frequency on the display medium when viewed from an
appropriate viewing distance. The eye can resolve objects separated by about 1 minute (one 60th) of 1 degree of the
almost 140-degree field of view for binocular vision. Because things appear smaller with distance, that is, occupy a
smaller angle in the field of view, even things with large pixels look non-pixelated at large distances. Hence, the pixels on roadside signs and billboards can have very low spatial frequencies, and the signs will still look non-pixelated when viewed from the road.

Figure 1.5 Soybean protoplasts (cells with their cell walls digested away with enzymes) imaged with differential interference contrast microscopy and displayed at different resolving powers. The scale bar is 10 μm long. The mosaic pixelation filter in Photoshop generated these images. This filter divides the spatial frequency of pixels in the original by the "cell size" in the dialog box (Filter > Pixelate > Mosaic). The original is 600 ppi. The 75-ppi image used a cell size of 8, the 32-ppi image used a cell size of 16, and the 16-ppi image used a cell size of 32. Photo by L. Griffing.

Table 1.1 Laptop, Netbook, and Tablet Monitor Sizes, Resolving Power, and Resolution.

Diagonal (device) | Pixels | Resolving power (ppi) | Pixel size (mm) | Aspect ratio | Megapixels
6.8 inches (Kindle Paperwhite 5) | 1236 × 1648 | 300 | 0.0846 | 4:3 | 2.03
11 inches (iPad Pro) | 2388 × 1668 | 264 (Retina display) | 0.1087 | 4:3 | 3.98
10.1 inches (Amazon Fire HD 10 e) | 1920 × 1200 | 224 | 0.1134 | 16:10 | 2.3
12.1 inches (netbook) | 1400 × 1050 | 144.6 | 0.1756 | 4:3 | 1.4
13.3 inches (laptop) | 1920 × 1080 | 165.6 | 0.153 | 16:9 | 2.07
14 inches (laptop) | 1920 × 1080 | 157 | 0.161 | 16:9 | 2.07
 | 2560 × 1440 | 209.8 | 0.121 | 16:9 | 3.6
15.2 inches (laptop) | 1152 × 768 | 91 | 0.278 | 3:2 | 0.8
15.6 inches (laptop) | 1920 × 1200 | 147 | 0.1728 | 8:5 | 2.2
 | 3840 × 2160 | 282.4 | 0.089 | 16:9 | 8.2
17 inches (laptop) | 1920 × 1080 | 129 | 0.196 | 16:9 | 2.07
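The mosaic filter described in the Figure 1.5 caption can be approximated with simple block averaging. The sketch below is a plain-Python stand-in, not Photoshop's actual algorithm: averaging each cell_size × cell_size block reduces the spatial frequency of pixels by that factor.

```python
def mosaic(image, cell_size):
    """Pixelate a 2D grayscale image (list of lists) by replacing each
    cell_size x cell_size block with its mean, lowering the resolving
    power of the image by the cell size."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for top in range(0, h, cell_size):
        for left in range(0, w, cell_size):
            block = [image[r][c]
                     for r in range(top, min(top + cell_size, h))
                     for c in range(left, min(left + cell_size, w))]
            mean = sum(block) / len(block)
            for r in range(top, min(top + cell_size, h)):
                for c in range(left, min(left + cell_size, w)):
                    out[r][c] = mean
    return out

# A 4 x 4 gradient reduced to 2 x 2 effective pixels:
img = [[c + 4 * r for c in range(4)] for r in range(4)]
coarse = mosaic(img, 2)
```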
Appropriate viewing distances vary with the display device. Presumably, the floor mosaic (it was an interior shallow
pool, so it would have been covered in water) has an ideal viewing distance, the distance to the eye, of about 6 feet. At this
distance, the individual tiles would blur enough to be indistinguishable. For printed material, the closest point at which
objects come into focus is the near point, or 25 cm (10 inches) from your eyes. Ideal viewing for typed text varies with the
size of font but is between 25 and 50 cm (10 and 20 inches). The ideal viewing distance for a television display, with 1080
horizontal raster lines, is four times the height of the screen or two times the diagonal screen dimension. When
describing a display or monitor, we use its diagonal dimension (Table 1.1). We also use numbers of pixels. A 14-inch mon-
itor with the same number of pixels as a 13.3-inch monitor (2.07 × 106 in Table 1.1) has larger pixels, requiring a slightly
farther appropriate viewing distance. Likewise, viewing a 24-inch HD 1080 television from 4 feet is equivalent to viewing
a 48-inch HD 1080 television from 8 feet.
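The resolving-power column in Table 1.1 follows directly from the pixel counts and the diagonal size. Assuming square pixels, the diagonal in pixels divided by the diagonal in inches gives ppi; a quick check against the 13.3-inch laptop entry:

```python
import math

def monitor_ppi(width_px, height_px, diagonal_in):
    """Resolving power of a display with square pixels: the pixel-count
    diagonal divided by the physical diagonal in inches."""
    return math.hypot(width_px, height_px) / diagonal_in

ppi = monitor_ppi(1920, 1080, 13.3)   # ~165.6 ppi, as in Table 1.1
megapixels = 1920 * 1080 / 1e6        # ~2.07 MP, as in Table 1.1
```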
There are different display standards, based on aspect ratio, the ratio of width to height of the displayed image (Table 1.2).
For example, the 15.6-inch monitors in Table 1.1 have different aspect ratios (Apple has 8:5 or 16:10, while Windows has
16:9). They also use different standards: a 1920 × 1200 monitor uses the WUXGA standard (see Table 1.2), and the
3840 × 2160 monitor uses the UHD-1 standard (also called 4K, but true 4K is different; see Table 1.2). The UHD-1 monitor
has half the pixel size of the WUXGA monitor. Even though these monitors have the same diagonal dimension, they have
different appropriate viewing distances. The standards in Table 1.2 are important when generating video (see Sections 5.8
and 5.9) because different devices have different sizes of display (see Table 1.1). Furthermore, different video publication
sites such as YouTube and Facebook and professional journals use standards that fit multiple devices, not just devices with
high resolving power. We now turn to this general problem of different resolving powers for different media.
1.3 Image Legibility Is the Ability to Recognize Text in an Image by Eye

Image legibility, or the ability to recognize text in an image, is another way to think about resolution (Table 1.3). This
concept incorporates not only the resolution of the display medium but also the resolution of the recording medium, in this
case, the eye. Image legibility depends on the eye’s inability to detect pixels in an image. In a highly legible image, the eye
does not see the individual pixels making up the text (i.e., the text “looks” smooth). In other words, for text to be highly
legible, the pixels should have a spatial frequency near to or exceeding the resolving power of the eye.
At near point (25 cm), it is difficult for the eye to resolve two points separated by 0.1 mm or less. An image that resolves
0.1 mm pixels has a resolving power of 10 pixels per mm (254 ppi). Consequently, a picture reproduced at 300 ppi would
have excellent text legibility (see Table 1.3). However, there are degrees of legibility; some early computer displays had a resolving power, also called dot pitch, of only 72 ppi. As seen in Figure 1.5, some of the small particles in the cytoplasm of the cell vanish at that resolving power. Nevertheless, 72 ppi is the borderline between good and fair legibility (see Table 1.3) and provided enough legibility for people to read text on early computers.
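The 254-ppi threshold can be rederived from the eye's roughly 1-arcminute angular resolution applied at the 25-cm near point; the 0.1-mm figure in the text is a slightly conservative rounding of this result.

```python
import math

NEAR_POINT_MM = 250.0   # 25-cm near point
EYE_ARCMIN = 1.0        # ~1 minute of arc angular resolution of the eye

# Smallest separation the eye resolves at near point (~0.073 mm,
# conventionally rounded up to ~0.1 mm):
d_min_mm = NEAR_POINT_MM * math.tan(math.radians(EYE_ARCMIN / 60.0))

# With 0.1-mm pixels, the required resolving power is 25.4 / 0.1 = 254 ppi.
required_ppi = 25.4 / 0.1
```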
The average computer is now a platform for image display. Circulation of electronic images via the web presents something
of a dilemma. What should the resolving power of web-published images be? To include computer users who use old displays,
1.4 Magnification Reduces Spatial Frequencies While Making Bigger Images 9
Table 1.4 Resolving Power Required for Excellent Images from Different Media.
the solution is to make it equal to the lowest resolving power of any monitor (i.e., 72 ppi). Images at this resolving power also
have a small file size, which is ideal for web communication. However, most modern portable computers have larger resolving
powers (see Table 1.1) because as the numbers of horizontal and vertical pixels increase, the displays remain a physical size
that is portable. A 72-ppi image displayed on a 144-ppi screen becomes half the size in each dimension. Likewise, high-ppi
images become much bigger on low-ppi screens. This same problem necessitates reduction of the resolving power of a photo-
graph taken with a digital camera when published on the web. A digital camera may have 600 ppi as its default output reso-
lution. If a web browser displays images at 72 ppi, the 600-ppi image looks eight times its size in each dimension.
This brings us to an important point. Different imaging media have different resolving powers. For each type of media, the
final product must look non-pixelated when viewed by eye (Table 1.4). These values are representative of those required
for publication in scientific journals. Journals generally require grayscale images to be 300 ppi, and color images should be
350–600 ppi. The resolving power of the final image is not the same as the resolving power of the newly acquired image
(e.g., that on the camera chip). The display of images acquired on a small camera chip requires enlargement. How much
enlargement is appropriate is the topic of the next section.
1.4 Magnification Reduces Spatial Frequencies While Making Bigger Images
As discussed earlier, images acquired at high resolving power are quite large on displays that have small resolving
power, such as a 72-ppi web page. We have magnified the image! As long as decreasing the spatial frequency of the
display does not result in pixelation, the process of magnification can reveal more detail to the eye. As soon as the
image becomes pixelated, any further magnification is empty magnification. Instead of seeing more detail in the
image, we just see bigger image pixels.
In film photography, the enlargement latitude is a measure of the amount of negative enlargement before empty mag-
nification occurs and the image pixel, in this case the photographic grain, becomes obvious. Likewise, for chip cameras, it
is the amount of enlargement before pixelation occurs. Enlargement latitude is
E = R / L, (1.1)
in which E is the enlargement latitude (the maximum acceptable enlargement magnification), R is the resolving power
(spatial frequency of pixels) of the original, and L is the lowest acceptable legibility.
For digital cameras, it is how much digital zoom is acceptable (Figure 1.6). A sixfold magnification reducing the resolving
power from 600 to 100 ppi produces interesting detail: the moose calves become visible, and markings on the female
become clear. However, further magnification produces pixelation and empty magnification. Digital zoom magnification
is common in cameras. It is very important to realize that digital zoom reduces the resolving power of the image. For
scientific applications, it is best to use only optical zoom in the field and then perform digital zoom when analyzing or
presenting the image.
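Equation 1.1 applied to the moose example: a 600-ppi original with an acceptable legibility of 100 ppi gives a sixfold enlargement latitude, matching the digital zoom in Figure 1.6B. A minimal sketch:

```python
def enlargement_latitude(original_ppi: float, acceptable_ppi: float) -> float:
    """E = R / L: how far an image can be enlarged before pixelation (Eq. 1.1)."""
    return original_ppi / acceptable_ppi

# 600-ppi original, 100-ppi acceptable legibility -> sixfold latitude.
E = enlargement_latitude(600, 100)   # 6.0
```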
The amount of final magnification makes a large difference in the displayed image content. The image should be magnified
to the extent that the subject or region of interest (ROI) fills the frame but without pixelation. The ROI is the image area of
the most importance, whether for display, analysis, or processing. Sometimes showing the environmental context of a feature
is important. Figure 1.7 is a picture of a female brown bear being “herded” by or followed by a male in the spring (depending
10 1 Image Structure and Pixels
Figure 1.6 (A) A photograph of a moose at 600 ppi. (B) When A is enlarged sixfold by keeping the same information and dropping the
resolving power to 100 ppi, two calves become clear (and a spotted rump on the female). (C) Further magnification of 1.6× produces
pixelation and blur. (D) Even further magnification of 2× produces empty magnification. Photo by L. Griffing.
Figure 1.7 (A) A 600-ppi view of two grizzlies in Alaska shows the terrain and the distance between the two grizzlies. Hence, even
though the grizzlies themselves are not clear, the information about the distance between them is clear. (B) A cropped 100-ppi
enlargement of A that shows a clearly identifiable grizzly, which fills the frame. Although the enlargement latitude is acceptable,
resizing for journal publication to 600 ppi would use pixel interpolation. Photo by L. Griffing.
on who is being selective for their mate, the male or the female). The foliage in the alders on the hillside shows that it is spring.
Therefore, showing both the bears and the time of year requires most of the field of view in Figure 1.7A as the ROI. On the
other hand, getting a more detailed view of the behavior of the female responding to the presence of the male requires the
magnified image in Figure 1.7B. Here, the position of the jaw (closed) and ears (back) are clear, but they were not in the
original image. This digital zoom is at the limit of pixelation. If a journal were to want a 600 ppi image of the female, it would
be necessary to resize the 100 ppi image by increasing the spatial frequency to 600 ppi using interpolation (see Section 1.7).
1.5 Technology Determines Scale and Resolution
Recording objects within vast or small spaces, changing over long or very short times, requires technology that aids the eye
(Figure 1.8). Limits of resolution set the boundaries of scale intrinsic to the eye (see Section 4.1) or any sensing device. The
spatial resolution limit is the shortest distance between two discrete points or lines. To extend the spatial resolution of
the eye, these devices provide images that resolve distances less than 0.1 mm apart at near point (25 cm) or angles of sepa-
ration less than 1 arc-minute (objects farther away have smaller angles of separation). The temporal resolution limit is
the shortest time between two separate events. To extend the temporal resolution of the eye, devices detect changes that
are faster than about one-twentieth of a second.
Figure 1.8 Useful range for imaging technologies. 3D, three dimensional; CT, computed tomography. Diagram by L. Griffing.
The devices that extend our spatial resolution limit include a variety of lens and scanning systems based on light, elec-
trons, or sound and magnetic pulses (see Figure 1.8), described elsewhere in this book. In all of these technologies, to
resolve an object, the acquisition system must have a resolving power that is double the spatial frequency of the smallest
objects to be resolved. The technologies provide magnification that lowers the spatial frequency of these objects to half (or
less) that of the spatial frequency of the recording medium. Likewise, to record temporally resolved signals, the recording
medium must sample at a rate twice (or more) the frequency of the fastest recordable event. Both of these rules
are a consequence of the Nyquist criterion.
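Both rules reduce to the same check: the sampling frequency, spatial or temporal, must be at least twice the highest frequency to be resolved. A sketch with illustrative numbers:

```python
def meets_nyquist(sampling_freq: float, signal_freq: float) -> bool:
    """True if the sampling frequency is at least twice the signal frequency."""
    return sampling_freq >= 2 * signal_freq

# Spatial: a 144-ppi capture just resolves 72-ppi detail.
spatial_ok = meets_nyquist(144, 72)    # True

# Temporal: a 25 frames-per-second camera cannot resolve a 20-Hz event.
temporal_ok = meets_nyquist(25, 20)    # False
```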
1.6 The Nyquist Criterion: Capture at Twice the Spatial Frequency of the Smallest
Object Imaged
In taking an image of living cells (see Figure 1.5), there are several components of the imaging chain: the microscope lenses
and image modifiers (the polarizers, analyzers, and prisms for differential interference contrast), the lens that projects the
image onto the camera (the projection lens), the camera chip, and the print from the camera. Each one of these links in the
image chain has a certain resolving power. The lenses are particularly interesting because they magnify (i.e., reduce the
spatial frequency). They detect a high spatial frequency and produce a lower one over a larger area. Our eyes can then see
these small features.
We use still more powerful cameras to detect these lowered spatial frequencies. The diameter of small organelles, such as
mitochondria, is about half of a micrometer, not far from the diffraction limit of resolution with light microscopy (see
Sections 5.14, 8.4, and 18.3), about a fifth of a micrometer. To resolve mitochondria with a camera that has a resolving power
of 4618 ppi (5.5-μm pixels, Orca Lightning; see Section 5.3, Table 5.1), the spatial frequency of the mitochondrial diameter
must be lowered by magnification until the image of each mitochondrion spans at least two camera pixels.
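The magnification this requires can be worked out from the Nyquist criterion: the magnified object must cover at least two camera pixels. A worked sketch using the numbers above (0.5-μm mitochondrion, 5.5-μm camera pixels):

```python
def min_magnification(object_size_um: float, pixel_size_um: float,
                      samples_per_object: int = 2) -> float:
    """Smallest magnification at which the object spans enough camera
    pixels to satisfy the Nyquist criterion (two samples per object)."""
    return samples_per_object * pixel_size_um / object_size_um

# A 0.5-um mitochondrion imaged on 5.5-um camera pixels:
mag = min_magnification(0.5, 5.5)   # 22.0x magnification needed
```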
Figure 1.9 (A) and (B) Capture when the resolving power of the capture device is equal to the spatial frequency of the object pixels.
(A) When pixels of the camera and the object align, the object is resolved. (B) When the pixels are offset, the object “disappears.” (C)
and (D) Doubling the resolving power of the capture device resolves the stripe pattern of the object even when the pixels are offset.
(C) Aligned pixels completely reproduce the object. (D) Offset pixels still reproduce the alternating pattern, with peaks (white) at the
same spatial frequency as the object. Diagram by L. Griffing.
1.7 Archival Time, Storage Limits, and the Resolution of the Display Medium Influence
Capture and Scan Resolving Power
Flatbed scanners archive photographs, slides, gels, and radiograms (see Sections 6.1 and 6.2). Copying with scanners
should use the Nyquist criterion. For example, most consumer-grade electronic scanners for printed material now come
with a 1200 × 1200 dpi resolving power because half this spatial frequency, 600 ppi, is optimal for printed color photo-
graphs (see Table 1.4). For slide scanners, the highest resolving power should be 1500 to 3000 dpi, 1500 dpi for black and
white and 3000 dpi for color slides (see Table 1.4).
Figure 1.11 (A) Image of the central region of a diatom scanned at 72 ppi. The vertical stripes, the striae, on the shell of the diatom
are prominent, but the bumps, or spicules, within the striae are not. (B) Scanning the images at 155 ppi reveals the spicules. However,
this may be too large for web presentation. (C) Resizing the image using interpolation (bicubic) to 72 ppi maintains the view of the
spicules and is therefore better than the original scan at 72 ppi. This is a scan of an image in Inoue, S. and Spring, K. 1997. Video
Microscopy. Second Edition. Plenum Press New York, NY. p. 528.
When setting scan resolving power in dots per inch, consider the final display medium of the copy. For web display, the
final ppi of the image is 72. However, the scan should meet or exceed the Nyquist criterion of 144 ppi. In the example
shown in Figure 1.11, there is a clear advantage to using a higher resolving power, 155 ppi, in the original scan even when
software rescales the image to 72 ppi.
If the output resolution can only accommodate a digital image of low resolving power, then saving the image as a low-
resolving-power image will conserve computer disk space. However, if scanning time and storage limits allow, it is always
best to save the original scan that used the Nyquist criterion. This fine-resolution image is then available for analysis and
display on devices with higher resolving powers.
1.8 Digital Image Resizing or Scaling Match the Captured Image Resolution to the Output
Resolution
If the final output resolution is a print, there are varieties of printing methods, each with its own resolving power. Laser
prints with a resolving power of 300 dpi produce high-quality images of black and white text with excellent legibility, as
would be expected from Table 1.1. However, in printers that report their dpi to include the dots inside half-tone cells
(Figure 1.12), which are the pixels of the image, the dpi set for the scan needs to be much higher than the value listed in
Table 1.4. Printers used by printing presses have the size of their half-tone screens pre-set. The resolution of these printers
is in lines per inch or lines per millimeter, each line being a row of half-tone cells. For these printers, half-tone images of
the highest quality come from a captured image resolving power (ppi) that is two times (i.e., the Nyquist criterion) the
printer half-tone screen frequency. Typical screen frequencies are 65 lpi (grocery coupons), 85 lpi (newsprint), 133 lpi
(magazines), and 177 lpi (art books).
Figure 1.12 Half-tone cells for inkjet and laser printers. (A) Two half-tone cells composed of a 5 × 5 grid of dots. A 300-dpi printer
with 5 × 5 half-tone cells would print at 300/5 or 60 cells per inch (60 ppi). This is lower resolution than all computer screens. These
cells could represent 26 shades of gray. (B) Two half-tone cells composed of a 16 × 16 grid of dots. A 1440-dpi printer with 16 × 16
half-tone cells would print at 90 cells per inch. This is good legibility but not excellent. These cells (90 ppi) could represent 257
shades of gray. Diagram by L. Griffing.
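The arithmetic behind Figure 1.12 can be sketched directly: cells per inch is the printer's dots per inch divided by the half-tone cell width, and an n × n cell can render n² + 1 gray levels (from zero dots on through all dots on):

```python
def cells_per_inch(printer_dpi: int, cell_width_dots: int) -> float:
    """Half-tone cells (image pixels) per inch for a given printer."""
    return printer_dpi / cell_width_dots

def gray_levels(cell_width_dots: int) -> int:
    """An n x n half-tone cell renders n*n + 1 gray levels (0..n*n dots on)."""
    return cell_width_dots ** 2 + 1

laser = cells_per_inch(300, 5)       # 60 cells per inch (Figure 1.12A)
inkjet = cells_per_inch(1440, 16)    # 90 cells per inch (Figure 1.12B)
levels_5 = gray_levels(5)            # 26 shades of gray
```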
Figure 1.13 Information loss during resizing. (A) The original image (2.3 inches × 1.6 inches). (B) The result of reducing A
about fourfold (to 0.5 inch in width) and re-enlarging, using interpolation during both shrinking and enlarging. Note the complete
blurring of fine details and the text in the header. Photo by L. Griffing.
To display an image at the same size in both a web browser and a printed presentation, scan it at the resolution needed
for printing and then rescale it for display on the web. In other words, always acquire images at the resolving power
required for the display with the higher resolving power and rescale for the display with the lower resolving power (see
Figure 1.11).
Digital image resolving power diminishes when resizing or scaling produces fewer pixels in the image. Reducing the
image to half size could just remove every other pixel. However, this does not produce a satisfactory image because it
discards much of the information in the scene that the image could otherwise incorporate. A more satisfactory way is
to group several pixels together and make a single new pixel from them, assigning the new pixel a value derived from
the values of the grouped pixels. Even with this form of reduction, however, some resolving power is lost (compare
Figure 1.11C with 1.11B and Figure 1.13B with 1.13A). Computationally resizing and rescaling a fine-resolution image
(Figure 1.11C) is better than capturing the image at lower resolving power (Figure 1.11A).
Enlarging an image can either make the pixels bigger or interpolate new pixels between the old pixels. The accuracy of
interpolation depends on the sample and the process used. Three approaches for interpolating new pixel values, in order
of increasing accuracy and processing time, are the near-neighbor process, the bilinear process, and the bicubic process
(see also Section 11.3 for 3D objects). Generating new pixels might produce a higher pixels-per-inch value, but all of the
information available to generate the scene resides in the original, smaller image. True resolving power is not improved; in
fact, some information might be lost. Even simply reducing the image is problematic because shrinking it by the
pixel-grouping process described earlier changes the information content of the image.
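The "group several pixels together" reduction described above is block averaging. A minimal numpy sketch of a 2× reduction (real software such as ImageJ or Photoshop uses more elaborate filters):

```python
import numpy as np

def downscale_by_averaging(img: np.ndarray, factor: int) -> np.ndarray:
    """Reduce a 2D grayscale image by averaging factor x factor pixel blocks."""
    h, w = img.shape
    h, w = h - h % factor, w - w % factor            # trim to a multiple of factor
    blocks = img[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))                  # one new pixel per block

img = np.arange(16, dtype=float).reshape(4, 4)       # toy 4x4 "image"
small = downscale_by_averaging(img, 2)               # 2x2 result
```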
1.9 Metadata Describes Image Content, Structure, and Conditions of Acquisition
Recording the settings for acquiring an image in scientific work (pixels per inch of acquisition device, lenses, exposure, date
and time of acquisition, and so on) is very important. Sometimes this metadata is in the image file itself (Figures 1.13 and
1.14). In the picture of the bear (Figure 1.13), the metadata is a header stating the time and date of image acquisition. In the
picture of the plant meristem (Figure 1.14), the metadata is a footer stating the voltage of the scanning electron microscope,
the magnification, a scale bar, and a unique numbered identifier. Including the metadata as part of the image has advan-
tages. A major advantage is that an internal scale bar provides accurate calibration of the image upon reduction or rescal-
ing. A major disadvantage is that resizing the image can make the metadata unreadable as the resolving power of the image
decreases (Figure 1.13B, header). Because digital imaging can rescale the x and y dimensions differently (without a specific
command such as holding down the shift key), a 2D internal scale bar would be best, but this is rare.
For digital camera and recording systems, the image file stores the metadata separately from the image pixel information.
The standard metadata format is EXIF (Exchangeable Image File) format. Table 1.5 provides an example of some of the
recorded metadata from a consumer-grade digital camera. However, not all imaging software recognizes and uses the same
codes for metadata. The software that comes with the camera can read all of that camera's metadata codes, but other, more
general image processing software may not. This makes metadata somewhat volatile: just opening and saving an image in
a new software package can remove it.
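As a sketch of how EXIF metadata travels with (and can be lost from) an image file, here is a round trip using the Pillow library (an assumption of this example; Pillow is not software discussed in the text). Tag 271 is the standard EXIF "Make" field:

```python
import io
from PIL import Image, ExifTags

# Write a tiny JPEG carrying one EXIF tag, then read it back.
im = Image.new("RGB", (8, 8))
exif = im.getexif()
exif[271] = "ExampleCam"            # 271 is the standard "Make" tag
buf = io.BytesIO()
im.save(buf, format="JPEG", exif=exif.tobytes())

# Reopen and translate numeric tag codes into human-readable names.
buf.seek(0)
reread = Image.open(buf).getexif()
tag_names = {ExifTags.TAGS.get(k, k): v for k, v in reread.items()}
```

Software that does not copy the EXIF block on save would lose `tag_names` entirely, which is the volatility described above.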
Several images may share metadata. Image scaling (changing the pixels per inch) is a common operation in image
processing, making it very important that there be internal measurement calibration on digital scientific images. Fiducial
markers are calibration standards of known size contained within the image, such as a ruler or coin (for macro work), a
stage micrometer (for microscopy), or gold beads (fine resolution electron microscopy). However, their inclusion as an
internal standard is not always possible. A separate picture of such calibration standards taken under identical conditions
as the picture of the object produces a fiducial image, and metadata can refer to the fiducial image for scaling information
of the object of interest.
Image databases use metadata. A uniform EXIF format facilitates integration of this information into databases. There
are emerging standards for the integration of metadata into databases, but for now, many different standards exist. For
example, medical imaging metadata standards are different from the standards used for basic cell biology research. Hence,
the databases for these professions are different. However, in both these professions, it is important to record the condi-
tions of image acquisition in automatically generated EXIF files or in lab, field, and clinical notes.
Figure 1.14 Scanning electron micrograph with an internal scale bar and other metadata. This is an image of a telomerase-minus
mutant of Arabidopsis thaliana. The accelerating voltage (15 kV), the magnification (×150), a scale bar (100 µm), and a negative number
are included as an information strip below the captured image. Photo by L. Griffing.
Table 1.5 Partial Exchangeable Image File Information for an Image from a
Canon Rebel.
Title IMG_6505
1.2 The Resolving Power of a Camera or Display Is the Spatial Frequency of Its Pixels
The reciprocal relationship between resolving power and resolution is key to understanding the measurement of the
fidelity of optical systems. The concept of spatial frequency, also called reciprocal space or k space, is necessary for the
future treatments in this book of Fourier optics, found in Chapters 8 and 14–19.
For more on video display standards, see https://en.wikipedia.org/wiki/List_of_common_resolutions.
Appropriate viewing distance is in Anshel, J. 2005. Visual Ergonomics Handbook. CRC Press, Taylor and Francis Group,
Boca Raton, FL.
1.6 The Nyquist Criterion: Capture at Twice the Spatial Frequency of the Smallest Object Imaged
The Nyquist criterion is from Shannon, C. 1949. Communication in the presence of noise. Proceedings of the Institute of
Radio Engineers 37:10–21, and Nyquist, H. 1928. Certain topics in telegraph transmission theory. Transactions of the
American Institute of Electrical Engineers 47:617–644.
1.7 Archival Time, Storage Limits, and the Resolution of the Display Medium Influence Capture and Scan
Resolving Power
Figure 1.10 is a scan of diatom images in Inoue, S. and Spring, K. 1997. Video Microscopy. Second Edition. Plenum Press,
New York, NY. p. 528.
1.8 Digital Image Resizing or Scaling Match the Captured Image Resolution to the Output Resolution
See the half-tone cell discussion in Russ, J. 2007. The Image Processing Handbook. CRC Taylor and Francis, Boca Raton, FL.
p. 137.
Printer technology is now at the level where standard desk jet printers are satisfactory for most printing needs.
2.1 Contrast Compares the Intensity of a Pixel with That of Its Surround
How well we see a pixel depends not only on its size, as described in Chapter 1, but also on its contrast. If a ladybug’s spots
are black, then they stand out best on the part of the animal that is white, its thorax (Figure 2.1A and C). Black pixels have
the lowest pixel value, and white pixels have the highest (by most conventions); the difference between them is the con-
trast. In this case, the spots have positive contrast; subtracting the black spot value from the white background value
gives a positive number. Negative contrast occurs when white spots occur against a dark background. In the “negative” of
Figure 2.1A, Figure 2.1B shows the ladybug’s spots as white. They have high negative contrast against the darker wings;
subtracting the white spot value from the black background gives a negative number.
Figure 2.1 Grayscale and color contrast of ladybugs on a leaf. Positive-contrast images (A and C) compared with negative-contrast images
(B and D). In the positive-contrast images, the ladybugs’ spots appear dark against a lighter background. In the negative-contrast images, the
spots appear light against a darker background. The contrast between the ladybugs and the leaf in C is good because the colors red and
green are nearly complementary. A negative image (D) produces complementary colors, and the negative or complementary color to leaf
green is magenta. (E) Histograms display the number of pixels at each intensity. Grayscale positive and negative images have mirror-image
histograms. (F) The histograms of color images show the number of pixels at each intensity of the primary colors, red, green, and blue. A color
negative shows the mirror image of the histogram of the color positive: making a negative “flips” the histogram. Photo by L. Griffing.
Imaging Life: Image Acquisition and Analysis in Biology and Medicine, First Edition. Lawrence R. Griffing.
© 2023 John Wiley & Sons, Inc. Published 2023 by John Wiley & Sons, Inc.
Companion Website: www.wiley.com/go/griffing/imaginglife
The terms positive contrast and negative contrast come directly from the algebraic definition of percent contrast in
Figure 2.2. If pixels in the background have higher intensity than the pixels of the object, then the value of the numerator
is positive, and the object has positive contrast. If the object pixels have a higher intensity than the background pixels,
then the value in the numerator is negative, and the object has negative contrast. The negatives of black-and-white
photographs, or grayscale photographs, have negative contrast. Although the information content in the positive and
negative images in Figure 2.1 is identical, our ability to distinguish features in the two images depends on the perception
of shades of gray by eye and on psychological factors that may influence that perception.
Figure 2.2 Algebraic definition of percent contrast. If Ibkg > Iobj, there is positive contrast. If Iobj > Ibkg, there is negative
contrast. Diagram by L. Griffing.
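The definition in Figure 2.2 translates directly into code: percent contrast is 100 × (I_bkg − I_obj) / I_bkg, positive when the background is brighter than the object and negative otherwise. A minimal sketch using the ladybug values as examples:

```python
def percent_contrast(i_obj: float, i_bkg: float) -> float:
    """Percent contrast as defined in Figure 2.2: positive when the
    background is brighter than the object, negative otherwise."""
    return 100.0 * (i_bkg - i_obj) / i_bkg

# Black spots (0) on a white thorax (255): strong positive contrast.
spots = percent_contrast(0, 255)       # 100.0

# White spots (255) on dark wings (64): negative contrast.
negative = percent_contrast(255, 64)   # about -298.4
```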
In a color image, intensity values are combinations of the intensities of the primary colors, red, green, and blue. While
the human eye (see Section 4.2) can only distinguish 50–60 levels or tones of gray on a printed page (Figure 2.3), it can
distinguish millions of colors (Figure 2.4). Consequently, color images can have much more contrast and more information
than grayscale images. In Figure 2.1C, the distinction between the orange and red ladybugs is more apparent than
in Figure 2.1A. Figure 2.1D shows the negative color contrast image of Figure 2.1C. The negative of a color is its comple-
mentary color (see Figure 2.4). The highest contrast between colors occurs when the colors are complementary.
2.2 Pixel Values Determine Brightness and Color
That pixels have intensity values is implicit in the definition of contrast (see Figure 2.2). In a black-and-white, or grayscale,
image, intensity values are shades of gray. If the image has fewer than 60 shades of gray, adjacent regions (where the gray
values should blend) become discrete, producing a banded, or posterized, appearance to the image. Dropping the number
of gray values from 64 to 16 produces a posterized image, as shown in Figure 2.3B and C. Likewise, as the number of gray
values diminishes below 64, more posterization becomes evident (Figure 2.5).
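Posterization can be reproduced by quantizing pixel values to fewer levels. A numpy sketch that drops an 8-bit gray ramp to 16 gray values:

```python
import numpy as np

def posterize(img: np.ndarray, levels: int) -> np.ndarray:
    """Quantize an 8-bit grayscale image to the given number of gray levels."""
    step = 256 // levels
    return (img // step) * step          # collapse each band of values to one

ramp = np.arange(256, dtype=np.uint8)    # a smooth 0..255 gray ramp
banded = posterize(ramp, 16)             # only 16 distinct values remain
```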
22 2 Pixel Values and Image Contrast
In digital imaging, the image information comes in discrete information bits, the bit being a simple “on/off” switch hav-
ing a value of 0 or 1. The more bits in a pixel, the larger the amount of information and the greater the pixel depth.
Increasing the number of bits increases the information by powers of 2 for the two states of each bit. Consequently, an
image with 8 bits, or one byte, per pixel has 2⁸, or 256, combinations of the “on/off” switches. Because 0 is a value, the
grayscale values range from 0 to 255 in a 1-byte image.
Computers that process and display images with large pixel depth have a lot of information to handle. To calculate the
amount of information in a digital image, multiply the pixel width, the pixel height, and the pixel depth. A digitized
frame 640 pixels in width × 480 pixels in height × 1 byte deep requires 307.2 kilobytes (kB) for storage and display. A color
image with three 1-byte channels and the same pixel height and width will be 921.6 kB.
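The storage arithmetic above can be checked directly (1 kB here is 1000 bytes, the convention the text uses):

```python
def image_bytes(width: int, height: int, bytes_per_pixel: int) -> int:
    """Uncompressed storage for a raster image."""
    return width * height * bytes_per_pixel

gray = image_bytes(640, 480, 1)    # 307200 bytes = 307.2 kB
color = image_bytes(640, 480, 3)   # 921600 bytes = 921.6 kB
levels = 2 ** 8                    # an 8-bit pixel encodes 256 values (0-255)
```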
The pixel depth of the image limits the choice of software for processing the image (Table 2.1). A major distinguishing
feature of different general-purpose image processing programs is their ability to handle high pixel depth commonly found
in scientific cameras and applications (see Table 2.1). Regardless of the pixel depth, most software provides the image
histogram, a graph of how many pixels in the image have each pixel value.
Table 2.1 Raster Graphics Image Processing Programs Commonly Used for Contrast Analysis and Adjustment.(a)
Software Package | OS: Win/OSX/Lin(b) | Histogram | Editable Selection(c) | Layers(d) | Large Pixel Depth(e) | sRGB/aRGB(f) | CMYK(g) | Indexed | Grayscale | TIFF, PNG, JPG, RAW(h) | SVG | XCF
Proprietary–purchase
Adobe Photoshop | Yes/Yes/No | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | No
Corel Paint Shop Pro | Yes/No/No | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | No
Proprietary–freeware
IrfanView | Yes/No/No | Yes | Yes | No | No | sRGB | No | Yes | Yes | Yes | Plgin | No
Paint.Net | Yes/No/Yes | Yes | Yes | Yes | No | sRGB | Some | Some | Some | Plgin | Plgin | Yes
Google Photos | Yes/Yes/Yes | Yes | No | No | No | sRGB | No | Some | Some | Imprt | Yes | No
Open source
GIMP2 (or GIMPShop) | Yes/Yes/Yes | Yes | Yes | Yes | Yes | Yes | Some | Yes | Yes | Some | Yes | Yes
ImageJ | Yes/Yes/Yes | Yes | Yes | Some | Yes | Yes | No | Yes | Yes | Yes | Some | No
(a) There is no standard image processing software for the “simple” tasks of contrast enhancement in science. (The image analysis software
for image measurement is described in Chapter 8.) The demands of scientific imaging include being able to recognize images of many
different formats, including some RAW formats proprietary to certain manufacturers, and large pixel depths of as much as 32 bits per
channel. XCF is the native format for GIMP2. PNG, JPG, and RAW are all supported.
(b) Win = Windows (Microsoft), OSX = OSX (Apple), and Lin = Unix (The Open Group) or Linux (multiple distributors).
(c) Editable selections can be either raster or vector based. Vector based is preferred.
(d) Layers can include contrast adjustment layers, whereby the layer modifies the contrast in the underlying image.
(e) Usually includes 12-, 16-, and 32-bit grayscale and higher bit depth multi-channel images.
(f) aRGB = Adobe (1998) RGB colorspace, sRGB = standard RGB.
(g) CMYK = cyan, magenta, yellow, and black colorspace.
(h) PNG = portable network graphic, JPG = joint photographic experts group, RAW = digital negative, TIFF = tagged image file format,
SVG = scalable vector graphic, XCF = experimental computing facility format. Imprt = opens as an imported file; Plgin = opens with plugin.
2.3 The Histogram Is a Plot of the Number of Pixels in an Image at Each Level of Intensity
The image histogram is a plot of the number of pixels (y-axis) at each intensity level (x-axis) (see Figure 2.5a–e). As the bit
depth and intensity levels increase, the number on the x-axis of the histogram increases (see Figure 2.5a–e). For 8-bit images,
the brightest pixel is 255 (see Figure 2.5a). The blackest pixel is zero. Histograms of color images show the number of pixels at
each intensity value of each primary color (see Figure 2.1F). To produce a negative image, “flip” or invert the intensity values
of the histogram of grayscale images (see Figure 2.1E) and of each color channel in a color image (see Figure 2.1F).
As the pixel depth decreases, the number of histogram values along the x-axis (see Figure 2.5a–e) of the histograms
decreases. When there are only two values in the histogram, the image is binary (Figure 2.5E). Gray-level detail in the
image diminishes as pixel depth decreases, as can be seen in the posterization of the protoplast in Figure 2.5B–E.
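Inverting a grayscale image, and the mirror-image histogram that inversion produces, can be sketched with numpy:

```python
import numpy as np

img = np.array([[0, 64], [128, 255]], dtype=np.uint8)
negative = 255 - img                     # invert: each value v becomes 255 - v

# The negative's histogram is the original histogram flipped left-to-right.
hist, _ = np.histogram(img, bins=256, range=(0, 256))
neg_hist, _ = np.histogram(negative, bins=256, range=(0, 256))
mirrored = np.array_equal(neg_hist, hist[::-1])
```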
The histogram has no spatial information in it other than the total number of pixels in the region represented by the
histogram, which, for purposes of discussion in this chapter, is the whole image. To get spatial representations of pixel
intensity, intensities can be plotted along a selected line (the intensity plot; Figure 2.6A and B), or they can be mapped
across a two-dimensional region of interest (ROI) with a three-dimensional surface plot (Figure 2.6A and C).
Figure 2.6 Intensity plots of a grayscale light micrograph of a plant protoplast taken with differential interference contrast optics.
(A) Plant protoplast with a line selection (yellow) made across it in ImageJ. (B) Intensity plot of line selection (A) using the Analyze > Plot
Profile command in ImageJ. (C) Surface plot of the grayscale intensity of the protoplast in (A) using the Analyze > Surface Plot command
in ImageJ. Photo by L. Griffing.
Information about the histogram in standard histogram displays (Figure 2.7, insets B and D) includes the total number
of pixels in the image, as well as the median value, the mean value, and the standard deviation around the mean value. The
standard deviation is an important number because it shows the level of contrast or variation between dark and light,
higher standard deviations meaning higher contrast. One way to assess sharpening of the image is by pixel intensity
standard deviation, with higher standard deviations producing higher contrast and better “focus” (see Section 3.8).
However, there is a trade-off between contrast and resolution (see Section 5.15) – higher contrast images of the same scene
do not have higher resolving power.
2.4 Tonal Range Is How Much of the Pixel Depth Is Used in an Image
The number of gray levels represented in the image is its tonal range. The ideal situation for grayscale image recording
is that the tonal range matches the pixel depth, and there are pixels at all the gray values in the histogram. If only a small
region of the x-axis of the image histogram has values in the y-axis, then the tonal range of the image is too small. With
a narrow range of gray tones, the image becomes less recognizable, and features may be lost as in Figure 2.7A, in which
the toads are hard to see. Likewise, if the tonal range of the scene is greater than the pixel depth (as in over- and under-
exposure; see Section 2.5), information and features can be lost. In Figure 2.7C, the tonal range of the snail on the leaf is
good, but there are many pixels in the histogram at both the 0 and 255 values (Figure 2.7D). Objects with gray-level
values below zero, in deep shade, and above 255, in bright sunlight, have no contrast and are lost. The values above 255
saturate the 255 limit of the camera. The ROI, such as the snail on the leaf in Figure 2.7C, may have good tonal range
even though there are regions outside the ROI that are over- or underexposed. A histogram of that ROI would reveal its
tonal range.
Do not confuse the sensitivity of the recording medium with its pixel depth, its capacity to record light gray levels or
gradations. The ISO setting on cameras (high ISO settings, 800–3200, are for low light) adjusts pixel depth, but this does not
make the camera more sensitive to light; it just makes each pixel shallower, saturating at lower light levels (see Section
5.11). Scene lighting and camera exposure time are the keys to getting good tonal range (see Section 9.5). Many digital
single lens reflex (SLR) cameras display the image histogram of the scene in real time. The photographer can use that
information to match the tonal range of the scene to the pixel depth of the camera, looking for exposure and lighting con-
ditions where there are y-axis values for the entire x-axis on the image histogram. After taking the image, digital adjust-
ment (histogram stretching; see Section 2.10) of the tonal range can help images with limited tonal range that are not
over- or underexposed. However, it is always better to optimize tonal range through lighting and exposure control (see
Sections 9.5 and 9.10).
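Histogram stretching (treated fully in Section 2.10) amounts to a linear rescaling of the occupied tonal range onto the full pixel depth. A minimal sketch, with a function name of our own choosing:

```python
import numpy as np

def stretch(image, pixel_depth=256):
    """Linearly rescale gray values so the occupied tonal range
    fills the full pixel depth."""
    img = image.astype(float)
    lo, hi = img.min(), img.max()
    if hi == lo:                        # flat image: nothing to stretch
        return image.copy()
    out = (img - lo) * (pixel_depth - 1) / (hi - lo)
    return out.round().astype(np.uint8)

# Values 100-120 are spread out to span 0-255.
narrow = np.array([[100, 110], [120, 105]], dtype=np.uint8)
stretched = stretch(narrow)
```

Note that stretching spreads the existing gray levels apart; it cannot recover levels lost to over- or underexposure, which is why lighting and exposure control come first.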
Underexposed images contain more than 5% of pixels in the bottom four gray levels of a 256 grayscale. The ability of the
human eye to distinguish 60 or so gray levels (see Chapter 5) determines this; four gray levels is about 1/60th of 256. Hence,
differences within the first four darkest intensities are indistinguishable by eye. A value of zero means that the camera was
not sensitive enough to capture light during the exposure time. Assuming that the ROI is 100% of the pixels, then underex-
posure of 5% of the pixels is a convenient statistically acceptable limit. There are underexposed areas in Figure 2.7C but
none in Figure 2.7A. However, less than 5% of Figure 2.7C is underexposed. Underexposed Figure 2.8A has more than 40%
of the pixels in the bottom four gray levels.
The argument is the same for overexposure, the criterion being more than 5% of the pixels in the top four gray levels of
a 256 grayscale. For pixels with a value in the highest intensity setting, 255 in a 256–gray-level spectrum, the camera pixels
saturate with light, and objects within those areas have no information. Figure 2.7C has some bright areas that are overexposed,
as shown by the histogram in Figure 2.7D. However, the image as a whole is not overexposed. Figure 2.9A is bright but not
overexposed, with just about 5% of the pixels in the top four gray levels (Figure 2.9B).
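The 5% criterion for both tails of the histogram is easy to automate. The sketch below assumes an 8-bit image; the function name and threshold parameters are ours.

```python
import numpy as np

def exposure_check(image, tail=4, limit=0.05, pixel_depth=256):
    """Apply the 5% criterion: flag under-/overexposure when more than
    `limit` of all pixels fall in the bottom/top `tail` gray levels."""
    n = image.size
    under = np.count_nonzero(image < tail) / n
    over = np.count_nonzero(image >= pixel_depth - tail) / n
    return under > limit, over > limit

# Half the pixels saturated at 255: overexposed but not underexposed.
img = np.full((10, 10), 128, dtype=np.uint8)
img[:5] = 255
underexposed, overexposed = exposure_check(img)
```

Running the check on an ROI instead of the full frame implements the point made above: clipping outside the ROI is tolerable if the ROI itself passes.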
Figure 2.9 High key grayscale image and its histogram. (A) High-key image of polar bears. Most of the tonal range is in the region of
high intensity values, so it is very light. (B) Histogram of the polar bear image. Note that although there are primarily high intensity
values in the image, there are very few at the highest value; the image is not overexposed. Photo © Jenny Ross, used with permission.
2.6 High-Key Images Are Very Light, and Low-Key Images Are Very Dark
Figure 2.9A is high key because most of the pixels have an intensity level greater than 128, half of the 256-level
range. Figure 2.8A is low key because most of the pixels have intensity levels less than 128. For these scenes, the exposure – light
intensity times the length of time for the shutter on the camera to stay open – is set so that the interesting objects have
adequate tonal range. Exposure metering of the ROI (spot metering; see Sections 5.2 and 9.4) is necessary because taking
the integrated brightness of the images in Figures 2.8 and 2.10 for an exposure setting would result in overexposed
fluorescent cells. Hence, the metered region should only contain fluorescent cells. Over- or underexposure of regions that
are not of scientific interest is acceptable if the ROIs have adequate tonal range. Low-key micrographs of fluorescent (see
Section 17.3) or darkfield (see Sections 9.3 and 16.3) objects are quite common.
2.7 Color Images Have Various Pixel Depths
Pixels produce color in a variety of modes. Subpixel color (see Section 1.1) produces the impression that the entire pixel
has a color when it is composed of different intensities of three primary colors. The lowest pixel depth for color images
is 8 bit, in which the intensity of a single color ranges from 0 to 255. These are useful in some forms of monochromatic
imaging, such as fluorescence microscopy, in which a grayscale camera records a monochromatic image. To display the
monochromatic image in color, the entire image is converted to indexed color and pseudocolored, or false colored, with
a 256-level color table or look-up table (LUT) (Figure 2.10). Color combinations arise by assigning different colors, not
just one color, to the 256 different pixel intensities in an 8-bit image; 8-bit color images are indexed color images.
Indexed color mode uses less memory than other color modes because it has only one channel. Not all software supports
this mode (see Table 2.1).
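Pseudocoloring with a LUT, as in Figure 2.10, is just table lookup: each of the 256 gray values indexes a row of the color table. A minimal sketch of a green LUT in NumPy (array names are ours):

```python
import numpy as np

# Build a 256-entry green look-up table: each gray level maps to an
# RGB triple whose only nonzero component is green.
green_lut = np.zeros((256, 3), dtype=np.uint8)
green_lut[:, 1] = np.arange(256)

def apply_lut(gray, lut):
    """Pseudocolor a grayscale image by indexing the LUT with each pixel."""
    return lut[gray]                    # shape (h, w) -> (h, w, 3)

gray = np.array([[0, 128], [255, 64]], dtype=np.uint8)
rgb = apply_lut(gray, green_lut)
```

Substituting a table whose rows run through several colors, rather than shades of one, gives the multicolor pseudocoloring described above.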
Figure 2.10 Indexed color image of Figure 2.8 pseudocolored using the color table shown. The original sample emitted
monochromatic green light from green fluorescent protein captured with a grayscale camera. Converting the grayscale image to an
indexed color image and using a green lookup table mimics fluoresced green light. Photo by L. Griffing.
Figure 2.11 Color image and its associated red, green, and blue channels. (A) Full-color image of a benthic rock outcrop in Stetson
Bank, part of the Flower Gardens National Marine Sanctuary, shows all three channels, red, green, and blue. There is a circle around
the red fish. (B) Red channel of the color image in A, shows high intensities, bright reds, in the white and orange regions of the image.
Note the red fish (circle) produces negative contrast against the blue-green water, which has very low red intensity. (C) Green channel
of the image. Note that the fish (circle) is nearly invisible because it had the same intensity of green as the background. (D) Blue
channel of image. Note that the fish (circle) is now visible in positive contrast because it has very low intensities of blue compared
with the blue sea surrounding it. Photo by S. Bernhardt, used with permission.
A more common way of representing color is to combine channels of primary colors to make the final image, thereby
increasing the pixel depth with each added channel. The two most common modes are RGB (for red, green, and blue) and
CMYK (for cyan, magenta, yellow, and black). Figure 2.11 is an RGB image in which a red fish (circled) appears in the
red (negative contrast) and blue (positive contrast) channels but disappears in the green channel. The three 8-bit chan-
nels, each with its own histogram (see Figure 2.1), add together to generate a full-color final image, producing a pixel
depth of 24 bits (3 bytes, or three channels that are 8 bits each). Reducing the pixel depth in the channels produces a
color-posterized image (Figure 2.12B–D). At low pixel depths, background gradients become posterized (arrows in Figure
2.12), and objects such as the fish become unrecognizable (Figure 2.12D). Video and computer graphic displays use RGB
mode, whereby adding different channels makes the image brighter, producing additive colors. The CMYK mode uses
subtractive colors, whereby combining different cyan, magenta, and yellow channels makes the image darker, subtracting
intensity, as happens with inks and printing. Because the dark color made with these three channels never quite reaches
true black, the mix includes a separate black channel. Consequently, CMYK uses four 8-bit channels, or has a pixel
depth of 32 bits (4 bytes). Because these color spaces are different, they represent a different range, or gamut, of colors
(see Section 4.3). Even within a given color space, such as RGB, there are different gamuts, such as sRGB and Adobe RGB
(see Table 2.1).
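Channel splitting and depth reduction, as in Figures 2.11 and 2.12, can be sketched directly on a 24-bit RGB array; the function names and the synthetic red-dominated image are ours, for illustration.

```python
import numpy as np

def split_channels(rgb):
    """Return the red, green, and blue 8-bit channels of a 24-bit image."""
    return rgb[..., 0], rgb[..., 1], rgb[..., 2]

def posterize(rgb, bits=2):
    """Reduce each channel's pixel depth by keeping only the top `bits`
    bits of every 8-bit value, producing the banding of Figure 2.12."""
    keep = 8 - bits
    return (rgb >> keep) << keep

# A uniformly red-dominated image: bright in the red channel,
# dark in green and blue (compare the red fish of Figure 2.11).
img = np.zeros((4, 4, 3), dtype=np.uint8)
img[..., 0] = 200
r, g, b = split_channels(img)
poster = posterize(img, bits=2)
```

With `bits=2` each channel retains only 4 levels, so smooth gradients collapse into the posterized bands indicated by the arrows in Figure 2.12.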