
Imaging Life

Image Acquisition and Analysis in Biology and Medicine

First Edition
Lawrence R. Griffing
Biology Department
Texas A&M University
Texas, United States
Copyright © 2023 by John Wiley & Sons, Inc. All rights reserved.

Published by John Wiley & Sons, Inc., Hoboken, New Jersey.


Published simultaneously in Canada.

No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical,
photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act,
without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright
Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, (978) 750–8400, fax (978) 750–4470, or on the web at www.copyright.com.
Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street,
Hoboken, NJ 07030, (201) 748–6011, fax (201) 748–6008, or online at http://www.wiley.com/go/permission.

Trademarks: Wiley and the Wiley logo are trademarks or registered trademarks of John Wiley & Sons, Inc. and/or its affiliates in the United
States and other countries and may not be used without written permission. All other trademarks are the property of their respective owners.
John Wiley & Sons, Inc. is not associated with any product or vendor mentioned in this book.

Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no
representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied
warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written
sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where
appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited
to special, incidental, consequential, or other damages. Further, readers should be aware that websites listed in this work may have changed or
disappeared between when this work was written and when it is read.

For general information on our other products and services or for technical support, please contact our Customer Care Department within the
United States at (800) 762–2974, outside the United States at (317) 572–3993 or fax (317) 572–4002.

Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic formats.
For more information about Wiley products, visit our web site at www.wiley.com.

A catalogue record for this book is available from the Library of Congress

Hardback ISBN: 9781119949206; ePub ISBN: 9781119081579; ePDF ISBN: 9781119081593

Cover image(s): © Westend61/Getty Images; Yaroslav Kushta/Getty Images


Cover design: Wiley

Set in 9.5/12.5pt STIXTwoText by Integra Software Services Pvt. Ltd., Pondicherry, India




Contents

Preface xii
Acknowledgments xiv
About the Companion Website xv

Section 1 Image Acquisition 1

1 Image Structure and Pixels 3


1.1 The Pixel Is the Smallest Discrete Unit of a Picture 3
1.2 The Resolving Power of a Camera or Display Is the Spatial Frequency of Its Pixels 6
1.3 Image Legibility Is the Ability to Recognize Text in an Image by Eye 7
1.4 Magnification Reduces Spatial Frequencies While Making Bigger Images 9
1.5 Technology Determines Scale and Resolution 11
1.6 The Nyquist Criterion: Capture at Twice the Spatial Frequency of the Smallest Object Imaged 12
1.7 Archival Time, Storage Limits, and the Resolution of the Display Medium Influence Capture
and Scan Resolving Power 13
1.8 Digital Image Resizing or Scaling Matches the Captured Image Resolution to the Output Resolution 14
1.9 Metadata Describes Image Content, Structure, and Conditions of Acquisition 16

2 Pixel Values and Image Contrast 20


2.1 Contrast Compares the Intensity of a Pixel with That of Its Surround 20
2.2 Pixel Values Determine Brightness and Color 21
2.3 The Histogram Is a Plot of the Number of Pixels in an Image at Each Level of Intensity 24
2.4 Tonal Range Is How Much of the Pixel Depth Is Used in an Image 25
2.5 The Image Histogram Shows Overexposure and Underexposure 26
2.6 High-Key Images Are Very Light, and Low-Key Images Are Very Dark 27
2.7 Color Images Have Various Pixel Depths 27
2.8 Contrast Analysis and Adjustment Using Histograms Are Available in Proprietary
and Open-Source Software 29
2.9 The Intensity Transfer Graph Shows Adjustments of Contrast and Brightness Using Input and Output
Histograms 30
2.10 Histogram Stretching Can Improve the Contrast and Tonal Range of the Image without
Losing Information 32
2.11 Histogram Stretching of Color Channels Improves Color Balance 32
2.12 Software Tools for Contrast Manipulation Provide Linear, Non-linear, and Output-Visualized
Adjustment 34
2.13 Different Image Formats Support Different Image Modes 36
2.14 Lossless Compression Preserves Pixel Values, and Lossy Compression Changes Them 37

3 Representation and Evaluation of Image Data 42


3.1 Image Representation Incorporates Multiple Visual Elements to Tell a Story 42
3.2 Illustrated Confections Combine the Accuracy of a Typical Specimen with a Science Story 42
3.3 Digital Confections Combine the Accuracy of Photography with a Science Story 45
3.4 The Video Storyboard Is an Explicit Visual Confection 48
3.5 Artificial Intelligence Can Generate Photorealistic Images from Text Stories 48
3.6 Making Images Believable: Show Representative Images and State the Acquisition Method 50
3.7 Making Images Understood: Clearly Identify Regions of Interest with Suitable Framing, Labels,
and Image Contrast 51
3.8 Avoid Dequantification and Technical Artifacts While Not Hesitating to Take the Picture 55
3.9 Accurate, Reproducible Imaging Requires a Set of Rules and Guidelines 56
3.10 The Structural Similarity Index Measure Quantifies Image Degradation 57

4 Image Capture by Eye 61


4.1 The Anatomy of the Eye Limits Its Spatial Resolution 61
4.2 The Dynamic Range of the Eye Exceeds 11 Orders of Magnitude of Light Intensity, and Intrascene
Dynamic Range Is about 3 Orders 63
4.3 The Absorption Characteristics of Photopigments of the Eye Determine Its Wavelength Sensitivity 63
4.4 Refraction and Reflection Determine the Optical Properties of Materials 67
4.5 Movement of Light Through the Eye Depends on the Refractive Index and Thickness of the Lens,
the Vitreous Humor, and Other Components 69
4.6 Neural Feedback in the Brain Dictates Temporal Resolution of the Eye 69
4.7 We Sense Size and Distribution in Large Spaces Using the Rules of Perspective 70
4.8 Three-Dimensional Representation Depends on Eye Focus from Different Angles 71
4.9 Binocular Vision Relaxes the Eye and Provides a Three-Dimensional View in Stereomicroscopes 74

5 Image Capture with Digital Cameras 78


5.1 Digital Cameras Are Everywhere 78
5.2 Light Interacts with Silicon Chips to Produce Electrons 78
5.3 The Anatomy of the Camera Chip Limits Its Spatial Resolution 80
5.4 Camera Chips Convert Spatial Frequencies to Temporal Frequencies with a Series of Horizontal
and Vertical Clocks 82
5.5 Different Charge-Coupled Device Architectures Have Different Read-out Mechanisms 85
5.6 The Digital Camera Image Starts Out as an Analog Signal that Becomes Digital 87
5.7 Video Broadcast Uses Legacy Frequency Standards 88
5.8 Codecs Code and Decode Digital Video 89
5.9 Digital Video Playback Formats Vary Widely, Reflecting Different Means of Transmission and Display 91
5.10 The Light Absorption Characteristics of the Metal Oxide Semiconductor, Its Filters, and Its Coatings
Determine the Wavelength Sensitivity of the Camera Chip 91
5.11 Camera Noise and Potential Well Size Determine the Sensitivity of the Camera to Detectable Light 93
5.12 Scientific Camera Chips Increase Light Sensitivity and Amplify the Signal 97
5.13 Cameras for Electron Microscopy Use Regular Imaging Chips after Converting Electrons to Photons
or Detect the Electron Signal Directly with Modified CMOS 99
5.14 Camera Lenses Place Additional Constraints on Spatial Resolution 101
5.15 Lens Aperture Controls Resolution, the Amount of Light, the Contrast, and the Depth of Field
in a Digital Camera 106
5.16 Relative Magnification with a Photographic Lens Depends on Chip Size and Lens Focal Length 107

6 Image Capture by Scanning Systems 111


6.1 Scanners Build Images Point by Point, Line by Line, and Slice by Slice 111
6.2 Consumer-Grade Flatbed Scanners Provide Calibrated Color and Relatively High Resolution Over a Wide
Field of View 111

6.3 Scientific-Grade Flatbed Scanners Can Detect Chemiluminescence, Fluorescence, and Phosphorescence 114
6.4 Scientific-Grade Scanning Systems Often Use Photomultiplier Tubes and Avalanche Photodiodes as the
Camera 118
6.5 X-ray Planar Radiography Uses Both Scanning and Camera Technologies 119
6.6 Medical Computed Tomography Scans Rotate the X-ray Source and Sensor in a Helical Fashion
Around the Body 121
6.7 Micro-CT and Nano-CT Scanners Use Both Hard and Soft X-Rays and Can Resolve Cellular Features 123
6.8 Macro Laser Scanners Acquire Three-Dimensional Images by Time-of-Flight or Structured Light 125
6.9 Laser Scanning and Spinning Disks Generate Images for Confocal Scanning Microscopy 126
6.10 Electron Beam Scanning Generates Images for Scanning Electron Microscopy 128
6.11 Atomic Force Microscopy Scans a Force-Sensing Probe Across the Sample 128

Section 2 Image Analysis 135

7 Measuring Selected Image Features 137


7.1 Digital Image Processing and Measurements Are Part of the Image Metadata 137
7.2 The Subject Matter Determines the Choice of Image Analysis and Measurement Software 140
7.3 Recorded Paths, Regions of Interest, or Masks Save Selections for Measurement in Separate Images,
Channels, and Overlays 140
7.4 Stereology and Photoquadrat Sampling Measure Unsegmented Images 144
7.5 Automatic Segmentation of Images Selects Image Features for Measurement Based on Common Feature
Properties 146
7.6 Segmenting by Pixel Intensity Is Thresholding 146
7.7 Color Segmentation Looks for Similarities in a Three-Dimensional Color Space 147
7.8 Morphological Image Processing Separates or Connects Features 149
7.9 Measures of Pixel Intensity Quantify Light Absorption by and Emission from the Sample 153
7.10 Morphometric Measurements Quantify the Geometric Properties of Selections 155
7.11 Multi-dimensional Measurements Require Specific Filters 156

8 Optics and Image Formation 161


8.1 Optical Mechanics Can Be Well Described Mathematically 161
8.2 A Lens Divides Space Into Image and Object Spaces 161
8.3 The Lens Aperture Determines How Well the Lens Collects Radiation 163
8.4 The Diffraction Limit and the Contrast between Two Closely Spaced Self-Luminous Spots Give Rise
to the Limits of Resolution 164
8.5 The Depth of the Three-Dimensional Slice of Object Space Remaining in Focus Is the Depth of Field 167
8.6 In Electromagnetic Lenses, Focal Length Produces Focus and Magnification 170
8.7 The Axial, Z-Dimensional, Point Spread Function Is a Measure of the Axial Resolution of High Numerical
Aperture Lenses 171
8.8 Numerical Aperture and Magnification Determine the Light-Gathering Properties of the Microscope
Objective 172
8.9 The Modulation (Contrast) Transfer Function Relates the Relative Contrast to Resolving Power in Fourier,
or Frequency, Space 172
8.10 The Point Spread Function Convolves the Object to Generate the Image 176
8.11 Problems with the Focus of the Lens Arise from Lens Aberrations 177
8.12 Refractive Index Mismatch in the Sample Produces Spherical Aberration 182
8.13 Adaptive Optics Compensate for Refractive Index Changes and Aberration Introduced by Thick Samples 183

9 Contrast and Tone Control 189


9.1 The Subject Determines the Lighting 189
9.2 Light Measurements Use Two Different Standards: Photometric and Radiometric Units 190

9.3 The Light Emission and Contrast of Small Objects Limit Their Visibility 194
9.4 Use the Image Histogram to Adjust the Trade-off Between Depth of Field and Motion Blur 194
9.5 Use the Camera’s Light Meter to Detect Intrascene Dynamic Range and Set Exposure Compensation 196
9.6 Light Sources Produce a Variety of Colors and Intensities That Determine the Quality of the Illumination 197
9.7 Lasers and LEDs Provide Lighting with Specific Color and High Intensity 199
9.8 Change Light Values with Absorption, Reflectance, Interference, and Polarizing Filters 200
9.9 Köhler-Illuminated Microscopes Produce Conjugate Planes of Collimated Light from the Source and
Specimen 203
9.10 Reflectors, Diffusers, and Filters Control Lighting in Macro-imaging 207

10 Processing with Digital Filters 212


10.1 Image Processing Occurs Before, During, and After Image Acquisition 212
10.2 Near-Neighbor Operations Modify the Value of a Target Pixel 214
10.3 Rank Filters Identify Noise and Remove It from Images 215
10.4 Convolution Can Be an Arithmetic Operation with Near Neighbors 217
10.5 Deblurring and Background Subtraction Remove Out-of-Focus Features from Optical Sections 221
10.6 Convolution Operations in Frequency Space Multiply the Fourier Transform of an Image by the Fourier
Transform of the Convolution Mask 222
10.7 Tomographic Operations in Frequency Space Produce Better Back-Projections 224
10.8 Deconvolution in Frequency Space Removes Blur Introduced by the Optical System But Has
a Problem with Noise 224

11 Spatial Analysis 231


11.1 Affine Transforms Produce Geometric Transformations 231
11.2 Measuring Geometric Distortion Requires Grid Calibration 231
11.3 Distortion Compensation Locally Adds and Subtracts Pixels 231
11.4 Shape Analysis Starts with the Identification of Landmarks, Then Registration 232
11.5 Grid Transformations Are the Basis for Morphometric Examination of Shape Change in Populations 234
11.6 Principal Component Analysis and Canonical Variates Analysis Use Measures of Similarity as Coordinates 237
11.7 Convolutional Neural Networks Can Identify Shapes and Objects Using Deep Learning 238
11.8 Boundary Morphometrics Analyzes and Mathematically Describes the Edge of the Object 240
11.9 Measurement of Object Boundaries Can Reveal Fractal Relationships 245
11.10 Pixel Intensity–Based Colocalization Analysis Reports the Spatial Correlation of Overlapping Signals 246
11.11 Distance-Based Colocalization and Cluster Analysis Analyze the Spatial Proximity of Objects 250
11.12 Fluorescence Resonance Energy Transfer Occurs Over Small (1–10 nm) Distances 252
11.13 Image Correlations Reveal Patterns in Time and Space 253

12 Temporal Analysis 260


12.1 Representations of Molecular, Cellular, Tissue, and Organism Dynamics Require Video and Motion
Graphics 260
12.2 Motion Graphics Editors Use Key Frames to Specify Motion 262
12.3 Motion Estimation Uses Successive Video Frames to Analyze Motion 265
12.4 Optic Flow Compares the Intensities of Pixels, Pixel Blocks, or Regions Between Frames 266
12.5 The Kymograph Uses Time as an Axis to Make a Visual Plot of the Object Motion 268
12.6 Particle Tracking Is a Form of Feature-Based Motion Estimation 269
12.7 Fluorescence Recovery After Photobleaching Shows Compartment Connectivity and the Movement of
Molecules 273
12.8 Fluorescence Switching Also Shows Connectivity and Movement 276
12.9 Fluorescence Correlation Spectroscopy and Raster Image Correlation Spectroscopy Can Distinguish between
Diffusion and Advection 280
12.10 Fluorescent Protein Timers Provide Tracking of Maturing Proteins as They Move through Compartments 282

13 Three-Dimensional Imaging, Modeling, and Analysis 287


13.1 Three-Dimensional Worlds Are Scalable and Require Both Camera and Actor Views 287
13.2 Stacking Multiple Adjacent Slices Can Produce a Three-Dimensional Volume or Surface 291
13.3 Structure-from-Motion Photogrammetry Reconstructs Three-Dimensional Surfaces Using Multiple Camera
Views 292
13.4 Reconstruction of Aligned Images in Fourier Space Produces Three-Dimensional Volumes or Surfaces 295
13.5 Surface Rendering Produces Isosurface Polygon Meshes Generated from Contoured Intensities 296
13.6 Texture Maps of Object Isosurfaces Are Images or Movies 299
13.7 Ray Tracing Follows a Ray of Light Backward from the Eye or Camera to Its Source 300
13.8 Ray Tracing Shows the Object Based on Internal Intensities or Nearness to the Camera 300
13.9 Transfer Functions Discriminate Objects in Ray-Traced Three Dimensions 301
13.10 Four Dimensions, a Time Series of Three-Dimensional Volumes, Can Use Either Ray-Traced or Isosurface
Rendering 303
13.11 Volumes Rendered with Splats and Texture Maps Provide Realistic Object-Ordered Reconstructions 303
13.12 Analysis of Three-Dimensional Volumes Uses the Same Approaches as Two-Dimensional Area Analysis But
Includes Voxel Adjacency and Connectivity 305
13.13 Head-Mounted Displays and Holograms Achieve an Immersive Three-Dimensional Experience 307

Section 3 Image Modalities 313

14 Ultrasound Imaging 315


14.1 Ultrasonography Is a Cheap, High-Resolution, Deep-Penetration, Non-invasive Imaging Modality 315
14.2 Many Species Use Ultrasound and Infrasound for Communication and Detection 315
14.3 Sound Is a Compression, or Pressure, Wave 316
14.4 The Measurement of Audible Sound Intensity Is in Decibels 317
14.5 A Piezoelectric Transducer Creates the Ultrasound Wave 318
14.6 Different Tissues Have Different Acoustic Impedances 319
14.7 Sonic Wave Scatter Generates Speckle 321
14.8 Lateral Resolution Depends on Sound Frequency and the Size and Focal Length of the Transducer Elements 322
14.9 Axial Resolution Depends on the Duration of the Ultrasound Pulse 323
14.10 Scatter and Absorption by Tissues Attenuate the Ultrasound Beam 324
14.11 Amplitude Mode, Motion Mode, Brightness Mode, and Coherent Planar Wave Mode Are the Standard
Modes for Clinical Practice 324
14.12 Doppler Scans of Moving Red Blood Cells Reveal Changes in Vascular Flows with Time and Provide
the Basis for Functional Ultrasound Imaging 327
14.13 Microbubbles and Gas Vesicles Provide Ultrasound Contrast and Have Therapeutic Potential 329

15 Magnetic Resonance Imaging 334


15.1 Magnetic Resonance Imaging, Like Ultrasound, Performs Non-invasive Analysis without Ionizing
Radiation 334
15.2 Magnetic Resonance Imaging Is an Image of the Hydrogen Nuclei in Fat and Water 337
15.3 Magnetic Resonance Imaging Sets up a Net Magnetization in Each Voxel That Is in Dynamic Equilibrium
with the Applied Field 338
15.4 The Magnetic Field Imposed by Magnetic Resonance Imaging Makes Protons Spin Like Tops with
the Same Tilt and Determines the Frequency of Precession 338
15.5 Magnetic Resonance Imaging Disturbs the Net Magnetization Equilibrium and Then Follows the Relaxation
Back to Equilibrium 339
15.6 T2 Relaxation, or Spin–Spin Relaxation, Causes the Disappearance of Transverse (x-y Direction) Magnetization
Through Dephasing 342
15.7 T1 Relaxation, or Spin-Lattice Relaxation, Causes the Disappearance of Longitudinal (z-Direction)
Magnetization Through Energy Loss 342

15.8 Faraday Induction Produces the Magnetic Resonance Imaging Signal (in Volts) with Coils in the x-y Plane 343
15.9 Magnetic Gradients and Selective Radiofrequency Frequencies Generate Slices in the x, y, and z Directions 343
15.10 Acquiring a Gradient Echo Image Is a Highly Repetitive Process, Getting Information Independently
in the x, y, and z Dimensions 344
15.11 Fast Low-Angle Shot Gradient Echo Imaging Speeds Up Imaging for T1-Weighted Images 346
15.12 The Spin-Echo Image Compensates for Magnetic Heterogeneities in the Tissue in T2-Weighted Images 346
15.13 Three-Dimensional Imaging Sequences Produce Higher Axial Resolution 347
15.14 Echo Planar Imaging Is a Fast Two-Dimensional Imaging Modality But Has Limited Resolving Power 347
15.15 Magnetic Resonance Angiography Analyzes Blood Velocity 347
15.16 Diffusion Tensor Imaging Visualizes and Compares Directional (Anisotropic) Diffusion Coefficients
in a Tissue 349
15.17 Functional Magnetic Resonance Imaging Provides a Map of Brain Activity 350
15.18 Magnetic Resonance Imaging Contrast Agents Detect Small Lesions That Are Otherwise Difficult to Detect 351

16 Microscopy with Transmitted and Refracted Light 355


16.1 Brightfield Microscopy of Living Cells Uses Apertures and the Absorbance of Transmitted Light to Generate
Contrast 355
16.2 Staining Fixed or Frozen Tissue Can Localize Large Polymers, Such as Proteins, Carbohydrates, and Nucleic
Acids, But Is Less Effective for Lipids, Diffusible Ions, and Small Metabolites 361
16.3 Darkfield Microscopy Generates Contrast by Only Collecting the Refracted Light from the Specimen 365
16.4 Rheinberg Microscopy Generates Contrast by Producing Color Differences between Refracted
and Unrefracted Light 368
16.5 Wave Interference from the Object and Its Surround Generates Contrast in Polarized Light, Differential
Interference Contrast, and Phase Contrast Microscopies 369
16.6 Phase Contrast Microscopy Generates Contrast by Changing the Phase Difference Between the Light Coming
from the Object and Its Surround 369
16.7 Polarized Light Reveals Order within a Specimen and Differences in Object Thickness 374
16.8 The Phase Difference Between the Slow and Fast Axes of Ordered Specimens Generates Contrast in Polarized
Light Microscopy 376
16.9 Compensators Cancel Out or Add to the Retardation Introduced by the Sample, Making It Possible to Measure
the Sample Retardation 379
16.10 Differential Interference Contrast Microscopy Is a Form of Polarized Light Microscopy That Generates Contrast
Through Differential Interference of Two Slightly Separated Beams of Light 383

17 Microscopy Using Fluoresced and Reflected Light 390


17.1 Fluorescence and Autofluorescence: Excitation of Molecules by Light Leads to Rapid Re-emission of Lower
Energy Light 390
17.2 Fluorescence Properties Vary Among Molecules and Depend on Their Environment 391
17.3 Fluorescent Labels Include Fluorescent Proteins, Fluorescent Labeling Agents, and Vital and Non-vital
Fluorescence Affinity Dyes 394
17.4 Fluorescence Environment Sensors Include Single-Wavelength Ion Sensors, Ratio Imaging Ion Sensors, FRET
Sensors, and FRET-FLIM Sensors 399
17.5 Widefield Microscopy for Reflective or Fluorescent Samples Uses Epi-illumination 402
17.6 Epi-polarization Microscopy Detects Reflective Ordered Inorganic or Organic Crystallites and Uses
Nanogold and Gold Beads as Labels 405
17.7 To Optimize the Signal from the Sample, Use Specialized and Adaptive Optics 405
17.8 Confocal Microscopes Use Accurate, Mechanical Four-Dimensional Epi-illumination and Acquisition 408
17.9 The Best Light Sources for Fluorescence Match Fluorophore Absorbance 410
17.10 Filters, Mirrors, and Computational Approaches Optimize Signal While Limiting the Crosstalk Between
Fluorophores 411

17.11 The Confocal Microscope Has Higher Axial and Lateral Resolving Power Than the Widefield Epi-illuminated
Microscope, Some Designs Reaching Superresolution 415
17.12 Multiphoton Microscopy and Other Forms of Non-linear Optics Create Conditions for Near-Simultaneous
Excitation of Fluorophores with Two or More Photons 419

18 Extending the Resolving Power of the Light Microscope in Time and Space 427
18.1 Superresolution Microscopy Extends the Resolving Power of the Light Microscope 427
18.2 Fluorescence Lifetime Imaging Uses a Temporal Resolving Power that Extends to Gigahertz Frequencies
(Nanosecond Resolution) 428
18.3 Spatial Resolving Power Extends Past the Diffraction Limit of Light 429
18.4 Light Sheet Fluorescence Microscopy Achieves Fast Acquisition Times and Low Photon Dose 432
18.5 Lattice Light Sheets Increase Axial Resolving Power 435
18.6 Total Internal Reflection Microscopy and Glancing Incident Microscopy Produce a Thin Sheet of Excitation
Energy Near the Coverslip 437
18.7 Structured Illumination Microscopy Improves Resolution with Harmonic Patterns That Reveal Higher Spatial
Frequencies 440
18.8 Stimulated Emission Depletion and Reversible Saturable Optical Linear Fluorescence Transitions
Superresolution Approaches Use Reversibly Saturable Fluorescence to Reduce the Size
of the Illumination Spot 447
18.9 Single-Molecule Excitation Microscopies, Photo-Activated Localization Microscopy, and Stochastic Optical
Reconstruction Microscopy Also Rely on Switchable Fluorophores 452
18.10 MINFLUX Combines Single-Molecule Localization with Structured Illumination to Get Resolution
below 10 nm 455

19 Electron Microscopy 461


19.1 Electron Microscopy Uses a Transmitted Primary Electron Beam (Transmission Electron Micrography)
or Secondary and Backscattered Electrons (Scanning Electron Micrography) to Image the Sample 461
19.2 Some Forms of Scanning Electron Micrography Use Unfixed Tissue at Low Vacuums (Relatively High
Pressure) 462
19.3 Both Transmission Electron Micrography and Scanning Electron Micrography Use Frozen or Fixed Tissues 465
19.4 Critical Point Drying and Surface Coating with Metal Preserves Surface Structures and Enhances Contrast
for Scanning Electron Micrography 467
19.5 Glass and Diamond Knives Make Ultrathin Sections on Ultramicrotomes 468
19.6 The Filament Type and the Condenser Lenses Control Illumination in Scanning Electron Micrography
and Transmission Electron Micrography 471
19.7 The Objective Lens Aperture Blocks Scattered Electrons, Producing Contrast in Transmission Electron
Micrography 474
19.8 High-Resolution Transmission Electron Micrography Uses Large (or No) Objective Apertures 475
19.9 Conventional Transmission Electron Micrography Provides a Cellular Context for Visualizing Organelles
and Specific Molecules 479
19.10 Serial Section Transmitted Primary Electron Analysis Can Provide Three-Dimensional Cellular Structures 482
19.11 Scanning Electron Micrography Volume Microscopy Produces Three-Dimensional Microscopy at Nanometer
Scales and Includes In-Lens Detectors and In-Column Sectioning Devices 483
19.12 Correlative Electron Microscopy Provides Ultrastructural Context for Fluorescence Studies 488
19.13 Tomographic Reconstruction of Transmission Electron Micrography Images Produces Very Thin (10-nm)
Virtual Sections for High-Resolution Three-Dimensional Reconstruction 490
19.14 Cryo-Electron Microscopy Achieves Molecular Resolving Power (Resolution, 0.1–0.2 nm) Using Single-Particle
Analysis 492

Index 497

Preface

Imaging Life Has Three Sections: Image Acquisition, Image Analysis, and Imaging
Modalities

The first section, Image Acquisition, lays the foundation for imaging by extending prior knowledge about image structure (Chapter 1), image contrast (Chapter 2), and proper image representation (Chapter 3). The chapters on imaging by eye
(Chapter 4), by camera (Chapter 5), and by scanners (Chapter 6) relate to prior knowledge of sight, digital (e.g., cell phone)
cameras, and flatbed scanners.
The second section, Image Analysis, starts with how to select features in an image and measure them (Chapter 7). With
this knowledge comes the realization that there are limits to image measurement set by the optics of the system (Chapter 8),
a system that includes the sample and the light- and radiation-gathering properties of the instrumentation. For light-based
imaging, the nature of the lighting and its ability to generate contrast (Chapter 9) optimize the image data acquired for
analysis. A wide variety of image filters (Chapter 10) that operate in real and reciprocal space make it possible to display or
measure large amounts of data or data with low signal. Spatial measurement in two dimensions (Chapter 11), measurement
in time (Chapter 12), and processing and measurement in three dimensions (Chapter 13) cover many of the tenets of image
analysis at the macro and micro levels.
The third section, Imaging Modalities, builds on some of the modalities necessarily introduced in previous chapters,
such as computed tomography (CT) scanning, basic microscopy, and camera optics. Many students interested in biological
imaging are particularly interested in biomedical modalities. Unfortunately, most of the classes in biomedical imaging are
not part of standard biology curricula but are taught in biomedical engineering. Likewise, students in biomedical engineering often
get less exposure to microscopy-related modalities. This section brings the two together.
The book does not use examples from materials science, although some materials science students may find it useful.

Imaging Life Can Be Either a Lecture Course or a Lab Course

This book can stand alone as a text for a lecture course on biological imaging intended for junior or senior undergraduates
or first- and second-year graduate students in life sciences. The annotated references section at the end of each chapter
provides the URLs for supplementary videos available from iBiology.com and other recommended sites. In addition, the
recommended text-based internet, print, and electronic resources, such as microscopyu.com, provide expert and in-depth
materials on digital imaging and light microscopy. However, these resources focus on particular imaging modalities and
exclude some (e.g., single-lens reflex cameras, ultrasound, CT scanning, magnetic resonance imaging [MRI], structure
from motion). The objective of this book is to serve as a solid foundation in imaging, emphasizing the shared concepts of
these imaging approaches. In this vein, the book does not attempt to be encyclopedic but instead provides a gateway to the
ongoing advances in biological imaging.
The author’s biology course builds on this text non-linearly, with weekly computer sessions. Every third class session
covers practical image processing, analysis, and presentations with still, video, and three-dimensional (3D) images.
Although these computer labs may introduce Adobe Photoshop and Illustrator and MATLAB and Simulink (available on
our university computers), the class primarily uses open-source software (i.e., GIMP2, Inkscape, FIJI [FIJI Is Just ImageJ],
Icy, and Blender). The course emphasizes open-source imaging. Many open-source software packages use published and archived algorithms, which is better for science because it makes image processing more reproducible. They are also free, or at least cheaper, for students and university labs.
The images the students acquire on their own with their cell phones, in the lab (if taught as a lab course), or from online
scientific databases (e.g., Morphosource.org) are the subjects of these tutorials. The initial tutorials simply introduce basic
features of the software that are fun, such as 3D model reconstruction in FIJI of CT scans from Morphosource, and informative, such as how to control image size, resolving power, and compression for analysis and publication. Although simple,
the tutorials address major pedagogical challenges caused by the casual, uninformed use of digital images. The tutorials
combine the opportunity to judge and analyze images acquired by the students with the opportunity to learn about the
software. They are the basis for weekly assignments. Later tutorials provide instruction on video and 3D editing, as well as
more advanced image processing (filters and deconvolution) and measurement. An important learning outcome for the
course is that the students can use this software to rigorously analyze and manage imaging data, as well as generate publi-
cation-quality images, videos, and presentations.
This book can also serve as a text for a laboratory course, along with an accompanying lab manual that contains protocols
for experiments and instructions for the operation of particular instruments. The current lab manual is available on request,
but it has instructions for equipment at Texas A&M University. Besides cell phones, digital single-lens reflex cameras, flat-
bed scanners, and stereo-microscopes, the first quarter of the lab includes brightfield transmitted light microscopy and
fluorescence microscopy. Assigning Chapter 16 on transmitted light microscopy and Chapter 17 on epi-illuminated light
microscopy early in the course supplements the lab manual information and introduces the students to microscopy before
covering it during class time. Almost all the students have worked with microscopes before, but many have not captured
images that require better set-up (e.g., Köhler illumination with a sub-stage condenser) and a more thorough under-
standing of image acquisition and lighting.
The lab course involves students using imaging instrumentation. All the students have access to cameras on their cell
phones, and most labs have access to brightfield microscopy, perhaps with various contrast-generating optical configura-
tions (darkfield, phase contrast, differential interference contrast). Access to fluorescence microscopy is also important.
One of the anticipated learning outcomes for the lab course is that students can troubleshoot optical systems. For this
reason, it is important that they take apart, clean, and correctly reassemble and align some optical instruments for cali-
brated image acquisition. With this knowledge, they can become responsible users of more expensive, multi-user equip-
ment. Some might even learn how to build their own!
Access to CT scanning, confocal microscopy, multi-photon microscopy, ultrasonography, MRI, light sheet microscopy,
superresolution light microscopy, and electron microscopy will vary by institution. Students can use remote learning to
view demonstrations of how to set up and use them. Many of these instruments are connected to the internet. Zoom (or
other live video) presentations provide access to operator activity for the entire class and are therefore preferable for larger
classes that need to see the operation of a machine with restricted access. Several instrument companies provide video
demonstrations of the use of their instruments. Live video is more informative, particularly if the students read about the
instruments first with a distilled set of instrument-operating instructions, so they can then ask questions of the operators.
Example images from the tutorials for most of these modalities should be available for student analysis.
Acknowledgments

Peter Hepler and Paul Green taught a light and electron microscopy course at Stanford University that introduced me to
the topic while I was a graduate student of Peter Ray. After working in the lab of Ralph Quatrano, I acquired additional
expertise in light and electron microscopy as a post-doc with Larry Fowke and Fred Constabel at the University of
Saskatchewan and collaborating with Hilton Mollenhauer at Texas A&M University. They were all great mentors.
I created a light and electron microscopy course for upper-level undergraduates with Kate VandenBosch, who had taken
a later version of Hepler’s course at the University of Massachusetts. However, with the widespread adoption of digital
imaging, I took the course in a different direction. The goals were to introduce students to digital image acquisition,
processing, and analysis while they learned about the diverse modalities of digital imaging. The National Science
Foundation and the Biology Department at Texas A&M University provided financial support for the course. Since no single
textbook existed for such a course, I decided to write one. Texas A&M University graciously provided one semester of
development leave for its completion.
Martin Steer at University College Dublin and Chris Hawes at Oxford Brookes University, Oxford, read and made con-
structive comments on sections of the first half of the book, as did Kate VandenBosch at the University of Wisconsin. I
thank them for their help, friendship, and encouragement.
I give my loving thanks to my children. Alexander Griffing contributed a much-needed perspective on all of the chapters,
extensively copy edited the text, and provided commentary and corrections on the math. Daniel Griffing also provided
helpful suggestions. Beth Russell was a constant source of enthusiasm.
My collaborators, Holly Gibbs and Alvin Yeh at Texas A&M University, read several chapters and made comments and
contributions that were useful and informative. Jennifer Lippincott-Schwartz, senior group leader and head of Janelia’s
four-dimensional cellular physiology program, generously provided comment and insight on the chapters on temporal
operations and superresolution microscopy. I also wish to thank the students in my lab who served as teaching assis-
tants and provided enthusiastic and welcome feedback, particularly Kalli Landua, Krishna Kumar, and Sara Maynard.
The editors at Wiley, particularly Rosie Hayden and Julia Squarr, provided help and encouragement. Any errors, of
course, are mine.
The person most responsible for the completion of this book is my wife, Margaret Ezell, who motivates and enlightens
me. In addition to her expertise and authorship on early modern literary history, including science, she is an accomplished
photographer. Imaging life is one of our mutual joys. I dedicate this book to her, with love and affection.
About the Companion Website

This book is accompanied by a companion website:

www.wiley.com/go/griffing/imaginglife

Please note that the resources are password protected.

The resources include:

● Images and tables from the book
● Examples of the use of open-source software to introduce and illustrate important features with video tutorials on YouTube
● Data, and a description of its acquisition, for use in the examples
Section 1

Image Acquisition
1 Image Structure and Pixels

1.1 The Pixel Is the Smallest Discrete Unit of a Picture

Images have structure. They have a certain arrangement of small and large objects. The large objects are often compos-
ites of small objects. The Roman mosaic from the House VIII.2.16 in Pompeii, the House of Five Floors, has incredible
structure (Figure 1.1). It has lifelike images of a bird on a reef, fishes, an electric eel, a shrimp, a squid, an octopus, and
a rock lobster. It illustrates Aristotle’s natural history account of a struggle between a rock lobster and an octopus. In
fact, the species are identifiable and are common to certain bays along the Italian coast, a remarkable example of early
biological imaging.
It is a mosaic of uniformly sized square colored tiles. Each tile is the smallest picture element, or pixel, of the mosaic. At
a certain appropriate viewing distance from the mosaic, the individual pixels cannot be distinguished, or resolved, and
what is a combination of individual tiles looks solid or continuous, taking the form of a fish, or lobster, or octopus. When
viewed closer than this distance, the individual tiles or pixels become apparent (see Figure 1.1); the image is pixelated.
Beyond the viewing distance set by the height of a person standing on the mosaic, pixelation in this scene was probably
further reduced by the shallow pool of water that covered it in the House of Five Floors.
The order in which the image elements come together, or render, also describes the image structure. This mosaic was
probably constructed by tiling the different objects in the scene, then surrounding the objects with a single layer of tiles of
the black background (Figure 1.2), and finally filling in the background with parallel rows of black tiles. This form of image
construction is object-order rendering. The background rendering follows the rendering of the objects. Vector graphic
images use object-ordered rendering. Vector graphics define the object mathematically with a set of vectors and render it
in a scene, with the background and other objects rendered separately.
Vector graphics are very useful because any number of pixels can represent the mathematically defined objects. This is
why programs, such as Adobe Illustrator, with vector graphics for fonts and illustrated objects are so useful: the number
(and, therefore, size) of pixels that represent the image is chosen by the user and depends on the type of media that will
display it. This number can be set so that the fonts and objects never have to appear pixelated. Vector graphics are resolution
independent; scaling the object to any size will not lose its sharpness from pixelation.
Another way to make the mosaic would be to start from the top upper left of the mosaic and start tiling in rows. One row
near the top of the mosaic contains parts of three fishes, a shrimp, and the background. This form of image structure is
image-order rendering. Many scanning systems construct images using this form of rendering. A horizontal scan line is
a raster. Almost all computer displays and televisions are raster based. They display a rasterized grid of data, and because
the data are in the form of bits (see Section 2.2), it is a bitmap image. As described later, bitmap graphics are resolution
dependent; that is, as they scale larger, the pixels become larger, and the images become pixelated.
Even though the pixel is the smallest discrete unit of the picture, it still has structure. The fundamental unit of visuali-
zation is the cell (Figure 1.3). A pixel is a two-dimensional (2D) cell described by an ordered list of four points (its corners
or vertices), and geometric constraints make it square. In three-dimensional (3D) images, the smallest discrete unit of the
volume is the voxel. A voxel is the 3D cell described by an ordered list of eight points (its vertices), and geometric con-
straints make it a cube.

Imaging Life: Image Acquisition and Analysis in Biology and Medicine, First Edition. Lawrence R. Griffing.
© 2023 John Wiley & Sons, Inc. Published 2023 by John Wiley & Sons, Inc.
Companion Website: www.wiley.com/go/griffing/imaginglife
Figure 1.1 The fishes mosaic (second century BCE)


from House VIII.2.16, the House of Five Floors, in
Pompeii. The lower image is an enlargement of the fish
eye, showing that light reflection off the eye is a single
tile, or pixel, in the image. Photo by Wolfgang Rieger,
http://commons.wikimedia.org/wiki/File:Pompeii_-_
Casa_del_Fauno_-_MAN.jpg and is in the public domain
(PD-1996).

Figure 1.2 Detail from Figure 1.1. The line of black


tiles around the curved borders of the eel and the fish
are evidence that the mosaic employs object-order
rendering.
Color is a subpixel component of electronic displays; printed
material; and, remarkably, some paintings. Georges Seurat (1859–
1891) was a famous French post-impressionist painter. Seurat
communicated his impression of a scene by constructing his pic-
ture from many small dabs or points of paint (Figure 1.4); he was a
pointillist. However, each dab of paint is not a pixel. Instead,
when standing at the appropriate viewing distance, dabs of differ-
ently colored paint combine to form a new color. Seurat pioneered
this practice of subpixel color. Computer displays use it, each
pixel being made up of stripes (or dots) of red, green, and blue
color (see Figure 1.4). The intensity of the different stripes deter-
mines the displayed color of the pixel.
For many printed images, the half-tone cell is the pixel. A half-
tone cell contains an array of many black and white dots or dots of
different colors (see Figure 1.10); the more dots within the half-
tone cell, the more shades of gray or color that are possible. Chapter 2 is all about how different pixel values produce different shades of gray or color.

Figure 1.3 Cell types found in visualization systems that can handle two- and three-dimensional representation. Diagram by L. Griffing.

Figure 1.4 This famous picture A Sunday Afternoon on the Island


of La Grande Jatte (1884–1886) by Georges Seurat is made up of
small dots or dabs of paint, each discrete and with a separate
color. Viewed from a distance, the different points of color,
usually primary colors, blend in the mind of the observer and
create a canvas with a full spectrum of color. The lower panel
shows a picture of a liquid crystal display on a laptop that is
displaying a region of the Seurat painting magnified through a
lens. The view through the lens reveals that the image is
composed of differently illuminated pixels made up of parallel
stripes of red, green, and blue colors. The upper image is from
https://commons.wikimedia.org/wiki/File:A_Sunday_on_La_
Grande_Jatte,_Georges_Seurat,_1884.jpg. Lower photos by L.
Griffing.
1.2 The Resolving Power of a Camera or Display Is the Spatial Frequency of Its Pixels

In biological imaging, we use powerful lenses to resolve details of far away or very small objects. The round plant proto-
plasts in Figure 1.5 are invisible to the naked eye. To get an image of them, we need to use lenses that collect a lot of light
from a very small area and magnify the image onto the chip of a camera. Not only is the power of the lens important but
also the power of the camera. Naively, we might think that a powerful camera will have more pixels (e.g., 16 megapixels
[MP]) on its chip than a less powerful one (e.g., 4 MP). Not necessarily! The 4-MP camera could actually be more powerful
(require less magnification) if the pixels are smaller. The size of the chip and the pixels in the chip matter.
The power of a lens or camera chip is its resolving power, the number of pixels per unit length (assuming a square pixel).
It is not the number of total pixels but the number of pixels per unit space, the spatial frequency of pixels. For example, the
eye on the bird in the mosaic in Figure 1.1 is only 1 pixel (one tile) big. There is no detail to it. Adding more tiles to give the
eye some detail requires smaller tiles, that is, the number of tiles within that space of the eye increases – the spatial frequency
of pixels has to increase. Just adding more tiles of the original size will do no good at all. Common measures of spatial fre-
quency and resolving power are pixels per inch (ppi) or lines per millimeter (lpm – used in printing).
Another way to think about resolving power is to take its inverse, the inches or millimeters per pixel. Pixel size, the
inverse of the resolving power, is the image resolution. One bright pixel between two dark pixels resolves the two dark
pixels. Resolution is the minimum separation distance for distinguishing two objects, dmin. Resolving power is 1/dmin.
Note: Usage of the terms resolving power and resolution is not universal. For example, Adobe Photoshop and Gimp use res-
olution to refer to the spatial frequency of the image. Using resolving power to describe spatial frequencies facilitates the
discussion of spatial frequencies later.
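The reciprocal relationship between resolving power and resolution can be checked with a few lines of code (a sketch; the function names here are only illustrative):

```python
# Resolving power is the spatial frequency of pixels (in pixels per inch,
# ppi); resolution is its inverse, the pixel size. 1 inch = 25.4 mm.

def resolving_power_ppi(pixel_size_mm):
    """Resolving power (ppi) from pixel size (mm)."""
    return 25.4 / pixel_size_mm

def resolution_mm(ppi):
    """Resolution, d_min (mm), from resolving power (ppi)."""
    return 25.4 / ppi

print(resolving_power_ppi(0.1))  # 0.1-mm pixels give 254 ppi
print(resolution_mm(300))        # 300 ppi means ~0.085-mm pixels
```

The same two functions convert in either direction because each quantity is simply the inverse of the other.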
As indicated by the example of the bird eye in the mosaic and as shown in Figure 1.5, the resolving power is as impor-
tant in image display as it is in detecting the small features of the object. To eliminate pixelation detected by eye, the
resolving power of the eye should be less than the pixel spatial frequency on the display medium when viewed from an
appropriate viewing distance. The eye can resolve objects separated by about 1 minute (one 60th) of 1 degree of the
almost 140-degree field of view for binocular vision. Because things appear smaller with distance, that is, occupy a

Figure 1.5 Soybean protoplasts (cells with their cell walls digested away with enzymes) imaged with differential interference contrast
microscopy and displayed at different resolving powers. The scale bar is 10 μm long. The mosaic pixelation filter in Photoshop generated
these images. This filter divides the spatial frequency of pixels in the original by the “cell size” in the dialog box (filter > pixelate > mosaic).
The original is 600 ppi. The 75-ppi images used a cell size of 8, the 32-ppi image used a cell size of 16, and the 16-ppi image used a cell
size of 32. Photo by L. Griffing.
Table 1.1 Laptop, Netbook, and Tablet Monitor Sizes, Resolving Power, and Resolution.

Size (Diagonal); Horizontal × Vertical Pixel Number; Resolving Power: Dot Pitch (ppi); Resolution or Pixel Size (mm); Aspect Ratio (W:H); Pixel Number (×10^6)

6.8 inches (Kindle Paperwhite 5) 1236 × 1648 300 0.0846 4:3 2.03
11 inches (iPad Pro) 2388 × 1668 264 (retina display) 0.1087 4:3 3.98
10.1 inches (Amazon Fire HD 10 e) 1920 × 1200 224 0.1134 16:10 2.3
12.1 inches (netbook) 1400 × 1050 144.6 0.1756 4:3 1.4
13.3 inches (laptop) 1920 × 1080 165.6 0.153 16:9 2.07
14 inches (laptop) 1920 × 1080 157 0.161 16:9 2.07
2560 × 1440 209.8 0.121 16:9 3.6
15.2 inches (laptop) 1152 × 768 91 0.278 3:2 0.8
15.6 inches (laptop) 1920 × 1200 147 0.1728 8:5 2.2
3840 × 2160 282.4 0.089 16:9 8.2
17 inches (laptop) 1920 × 1080 129 0.196 16:9 2.07

smaller angle in the field of view, even things with large pixels look non-pixelated at large distances. Hence, the pixels
on roadside signs and billboards can have very low spatial frequencies, and the signs will still look non-pixelated when
viewed from the road.
Appropriate viewing distances vary with the display device. Presumably, the floor mosaic (it was an interior shallow
pool, so it would have been covered in water) has an ideal viewing distance, the distance to the eye, of about 6 feet. At this
distance, the individual tiles would blur enough to be indistinguishable. For printed material, the closest point at which
objects come into focus is the near point, or 25 cm (10 inches) from your eyes. Ideal viewing for typed text varies with the
size of font but is between 25 and 50 cm (10 and 20 inches). The ideal viewing distance for a television display, with 1080
horizontal raster lines, is four times the height of the screen or two times the diagonal screen dimension. When
describing a display or monitor, we use its diagonal dimension (Table 1.1). We also use numbers of pixels. A 14-inch monitor with the same number of pixels as a 13.3-inch monitor (2.07 × 10^6 in Table 1.1) has larger pixels, requiring a slightly
farther appropriate viewing distance. Likewise, viewing a 24-inch HD 1080 television from 4 feet is equivalent to viewing
a 48-inch HD 1080 television from 8 feet.
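The dot pitch values in Table 1.1 follow directly from the pixel counts and the diagonal size. A short calculation (sketched here in Python; the function name is illustrative) reproduces the 13.3-inch laptop entry:

```python
import math

def dot_pitch_ppi(h_pixels, v_pixels, diagonal_inches):
    """Resolving power of a display: diagonal pixel count over diagonal size."""
    return math.hypot(h_pixels, v_pixels) / diagonal_inches

ppi = dot_pitch_ppi(1920, 1080, 13.3)     # 13.3-inch laptop from Table 1.1
pixel_mm = 25.4 / ppi                     # resolution, the inverse of resolving power
print(round(ppi, 1), round(pixel_mm, 3))  # ~165.6 ppi, ~0.153-mm pixels
```

Running the same function on the other rows of Table 1.1 recovers their dot pitch and pixel size columns as well.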
There are different display standards, based on aspect ratio, the ratio of width to height of the displayed image (Table 1.2).
For example, the 15.6-inch monitors in Table 1.1 have different aspect ratios (Apple has 8:5 or 16:10, while Windows has
16:9). They also use different standards: a 1920 × 1200 monitor uses the WUXGA standard (see Table 1.2), and the
3840 × 2160 monitor uses the UHD-1 standard (also called 4K, but true 4K is different; see Table 1.2). The UHD-1 monitor
has half the pixel size of the WUXGA monitor. Even though these monitors have the same diagonal dimension, they have
different appropriate viewing distances. The standards in Table 1.2 are important when generating video (see Sections 5.8
and 5.9) because different devices have different sizes of display (see Table 1.1). Furthermore, different video publication
sites such as YouTube and Facebook and professional journals use standards that fit multiple devices, not just devices with
high resolving power. We now turn to this general problem of different resolving powers for different media.

1.3 Image Legibility Is the Ability to Recognize Text in an Image by Eye

Image legibility, or the ability to recognize text in an image, is another way to think about resolution (Table 1.3). This
concept incorporates not only the resolution of the display medium but also the resolution of the recording medium, in this
case, the eye. Image legibility depends on the eye’s inability to detect pixels in an image. In a highly legible image, the eye
does not see the individual pixels making up the text (i.e., the text “looks” smooth). In other words, for text to be highly
legible, the pixels should have a spatial frequency near to or exceeding the resolving power of the eye.
At near point (25 cm), it is difficult for the eye to resolve two points separated by 0.1 mm or less. An image that resolves
0.1 mm pixels has a resolving power of 10 pixels per mm (254 ppi). Consequently, a picture reproduced at 300 ppi would
Table 1.2 Display Standards.

Aspect Ratio (Width:Height in Pixels)

4:3 8:5 (16:10) 16:9 Various


QVGA CGA
320 × 240 320 × 200
SIF/CIF
384 × 288
352 × 288
VGA WVGA (5:3) WVGA
640 × 480 800 × 480 854 × 480
PAL PAL
768 × 576 1024 × 576
SVGA WSVGA
800 × 600 1024 × 600
XGA WXGA HD 720
1024 × 768 1280 × 800 1280 × 720
SXGA+ WXGA+ HD 1080 SXGA (5:4)
1400 × 1050 1680 × 1050 1920 × 1080 1280 × 1024
UXGA WUXGA 2K (17:9) UWHD (21:9)
1600 × 1200 1920 × 1200 2048 × 1080 2560 × 1080
QXGA WQXGA WQHD QSXGA (5:4)
2048 × 1536 2560 × 1600 2560 × 1440 2560 × 2048
UHD-1 UWQHD (21:9)
3840 × 2160 3440 × 1440
4K (17:9)
4096 × 2160
8K
7680 × 4320

Table 1.3 Image Legibility.

Resolving Power

ppi lpm Legibility Quality

200 8 Excellent High clarity
100 4 Good Clear enough for prolonged study
50 2 Fair Identity of letters questionable
25 1 Poor Writing illegible

lpm, lines per millimeter; ppi, pixels per inch.

have excellent text legibility (see Table 1.3). However, there are degrees of legibility; some early computer displays had a
resolving power, also called dot pitch, of only 72 ppi. As seen in Figure 1.5, some of the small particles in the cytoplasm of
the cell vanish at that resolving power. Nevertheless, 72 ppi is the borderline between good and fair legibility (see Table 1.3)
and provides enough legibility for people to read text on the early computers.
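The 254-ppi figure comes straight from the eye's 0.1-mm limit at near point, and the thresholds of Table 1.3 can be encoded the same way (a sketch; the function names are not from any standard library):

```python
def min_nonpixelated_ppi(eye_limit_mm=0.1):
    """Spatial frequency at which pixels reach the eye's resolution limit at
    near point (25 cm): 25.4 mm/inch divided by the resolvable separation."""
    return 25.4 / eye_limit_mm

def legibility(ppi):
    """Qualitative legibility thresholds from Table 1.3."""
    if ppi >= 200:
        return "excellent"
    if ppi >= 100:
        return "good"
    if ppi >= 50:
        return "fair"
    return "poor"

print(min_nonpixelated_ppi())           # 254 ppi
print(legibility(300), legibility(72))  # excellent fair
```

By these thresholds, a 72-ppi display sits just above "fair," which is why early computer text was readable but far from smooth.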
The average computer is now a platform for image display. Circulation of electronic images via the web presents something
of a dilemma. What should the resolving power of web-published images be? To include computer users who use old displays,
Table 1.4 Resolving Power Required for Excellent Images from Different Media.

Imaging Media Resolving Power (ppi)

Portable computer 90–180
Standard print text 200
Printed image 300 (grayscale)
350–600 (color)
Film negative scan 1500 (grayscale)
3000 (color)
Black and white line drawing 1500 (best done with vector graphics)

the solution is to make it equal to the lowest resolving power of any monitor (i.e., 72 ppi). Images at this resolving power also
have a small file size, which is ideal for web communication. However, most modern portable computers have larger resolving
powers (see Table 1.1) because as the numbers of horizontal and vertical pixels increase, the displays remain a physical size
that is portable. A 72-ppi image displayed on a 144-ppi screen becomes half the size in each dimension. Likewise, high-ppi
images become much bigger on low-ppi screens. This same problem necessitates reduction of the resolving power of a photo-
graph taken with a digital camera when published on the web. A digital camera may have 600 ppi as its default output reso-
lution. If a web browser displays images at 72 ppi, the 600-ppi image looks eight times its size in each dimension.
This brings us to an important point. Different imaging media have different resolving powers. For each type of media, the
final product must look non-pixelated when viewed by eye (Table 1.4). These values are representative of those required
for publication in scientific journals. Journals generally require grayscale images to be 300 ppi, and color images should be
350–600 ppi. The resolving power of the final image is not the same as the resolving power of the newly acquired image
(e.g., that on the camera chip). The display of images acquired on a small camera chip requires enlargement. How much is
the topic of the next section.

1.4 Magnification Reduces Spatial Frequencies While Making Bigger Images

As discussed earlier, images acquired at high resolving power are quite large on displays that have small resolving
power, such as a 72-ppi web page. We have magnified the image! As long as decreasing the spatial frequency of the
display does not result in pixelation, the process of magnification can reveal more detail to the eye. As soon as the
image becomes pixelated, any further magnification is empty magnification. Instead of seeing more detail in the
image, we just see bigger image pixels.
In film photography, the enlargement latitude is a measure of the amount of negative enlargement before empty mag-
nification occurs and the image pixel, in this case the photographic grain, becomes obvious. Likewise, for chip cameras, it
is the amount of enlargement before pixelation occurs. Enlargement latitude is

E = R / L,  (1.1)

in which E is enlargement magnification, R is the resolving power (spatial frequency of pixels) of the original, and L is the
acceptable legibility.
For digital cameras, it is how much digital zoom is acceptable (Figure 1.6). A sixfold magnification reducing the resolving
power from 600 to 100 ppi produces interesting detail: the moose calves become visible, and markings on the female
become clear. However, further magnification produces pixelation and empty magnification. Digital zoom magnification
is common in cameras. It is very important to realize that digital zoom reduces the resolving power of the image. For
scientific applications, it is best to use only optical zoom in the field and then perform digital zoom when analyzing or
presenting the image.
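Equation 1.1 makes the digital-zoom budget easy to compute. Using the values from Figure 1.6 (a sketch; the function name is illustrative):

```python
def enlargement_latitude(original_ppi, acceptable_legibility_ppi):
    """Equation 1.1: E = R / L, the enlargement possible before the image
    drops below an acceptable legibility."""
    return original_ppi / acceptable_legibility_ppi

# A 600-ppi original enlarged until it reaches 100 ppi, as in Figure 1.6B:
print(enlargement_latitude(600, 100))  # 6.0, a sixfold zoom
# Pushing on to 50 ppi ("fair" in Table 1.3) would allow 12x, at the cost
# of visible pixelation:
print(enlargement_latitude(600, 50))   # 12.0
```

Any zoom beyond E is empty magnification: the pixels get bigger, but no new detail appears.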
The amount of final magnification makes a large difference in the displayed image content. The image should be magnified
to the extent that the subject or region of interest (ROI) fills the frame but without pixelation. The ROI is the image area of
the most importance, whether for display, analysis, or processing. Sometimes showing the environmental context of a feature
is important. Figure 1.7 is a picture of a female brown bear being “herded” by or followed by a male in the spring (depending
Figure 1.6 (A) A photograph of a moose at 600 ppi. (B) When A is enlarged sixfold by keeping the same information and dropping the
resolving power to 100 ppi, two calves become clear (and a spotted rump on the female). (C) Further magnification of 1.6× produces
pixelation and blur. (D) Even further magnification of 2× produces empty magnification. Photo by L. Griffing.

Figure 1.7 (A) A 600-ppi view of two grizzlies in Alaska shows the terrain and the distance between the two grizzlies. Hence, even
though the grizzlies themselves are not clear, the information about the distance between them is clear. (B) A cropped 100-ppi
enlargement of A that shows a clearly identifiable grizzly, which fills the frame. Although the enlargement latitude is acceptable,
resizing for journal publication to 600 ppi would use pixel interpolation. Photo by L. Griffing.
on who is being selective for their mate, the male or the female). The foliage in the alders on the hillside shows that it is spring.
Therefore, showing both the bears and the time of year requires most of the field of view in Figure 1.7A as the ROI. On the
other hand, getting a more detailed view of the behavior of the female responding to the presence of the male requires the
magnified image in Figure 1.7B. Here, the position of the jaw (closed) and ears (back) are clear, but they were not in the
original image. This digital zoom is at the limit of pixelation. If a journal were to want a 600 ppi image of the female, it would
be necessary to resize the 100 ppi image by increasing the spatial frequency to 600 ppi using interpolation (see Section 1.7).

1.5 Technology Determines Scale and Resolution

To record objects within vast or small spaces, changing over long or very short times, requires technology that aids the eye
(Figure 1.8). Limits of resolution set the boundaries of scale intrinsic to the eye (see Section 4.1) or any sensing device. The
spatial resolution limit is the shortest distance between two discrete points or lines. To extend the spatial resolution of
the eye, these devices provide images that resolve distances less than 0.1 mm apart at near point (25 cm) or angles of sepa-
ration less than 1 arc-minute (objects farther away have smaller angles of separation). The temporal resolution limit is
the shortest time between two separate events. To extend the temporal resolution of the eye, devices detect changes that
are faster than about one 20th of a second.

Figure 1.8 Useful range for imaging technologies. 3D, three dimensional; CT, computed tomography. Diagram by L. Griffing.
The devices that extend our spatial resolution limit include a variety of lens and scanning systems based on light, elec-
trons, or sound and magnetic pulses (see Figure 1.8), described elsewhere in this book. In all of these technologies, to
resolve an object, the acquisition system must have a resolving power that is double the spatial frequency of the smallest
objects to be resolved. The technologies provide magnification that lowers the spatial frequency of these objects to half (or
less) that of the spatial frequency of the recording medium. Likewise, to record temporally resolved signals, the recording
medium has to run a timed frequency that is twice (or more) the speed of the fastest recordable event. Both of these rules
are a consequence of the Nyquist criterion.

1.6 The Nyquist Criterion: Capture at Twice the Spatial Frequency of the Smallest Object Imaged

In taking an image of living cells (see Figure 1.5), there are several components of the imaging chain: the microscope lenses
and image modifiers (the polarizers, analyzers, and prisms for differential interference contrast), the lens that projects the
image onto the camera (the projection lens), the camera chip, and the print from the camera. Each one of these links in the
image chain has a certain resolving power. The lenses are particularly interesting because they magnify (i.e., reduce the
spatial frequency). They detect a high spatial frequency and produce a lower one over a larger area. Our eyes can then see
these small features.
We use still more powerful cameras to detect these lowered spatial frequencies. The diameter of small organelles, such as
mitochondria, is about half of a micrometer, not far from the diffraction limit of resolution with light microscopy (see
Sections 5.14, 8.4, and 18.3), about a fifth of a micrometer. To resolve mitochondria with a camera that has a resolving power
of 4618 ppi (5.5-μm pixels, Orca Lightning; see Section 5.3, Table 5.1), the spatial frequency of the mitochondrial diameter

Figure 1.9 (A) and (B) Capture when the resolving power of the capture device is equal to the spatial frequency of the object pixels.
(A) When pixels of the camera and the object align, the object is resolved. (B) When the pixels are offset, the object “disappears.” (C)
and (D) Doubling the resolving power of the capture device resolves the stripe pattern of the object even when the pixels are offset.
(C) Aligned pixels completely reproduce the object. (D) Offset pixels still reproduce the alternating pattern, with peaks (white) at the
same spatial frequency as the object. Diagram by L. Griffing.
(127,000 ppi when sampled at the 0.2-μm diffraction limit) has to be reduced by at least a factor of 40 (3175 ppi) to capture the object pixel by pixel. Ah ha! To do this, we use a
40× magnification objective. However, we have to go up even further in
magnification because the resolving power of the capture device needs to be
double the new magnified image spatial frequency.
To see why the resolving power of the capture device has to be double
the spatial frequency of the object or image it captures, examine Figure 1.9.
In panel A, the object is a series of alternating dark and bright pixels. If the
capture device has the same resolving power as the spatial frequency of the
object and its pixels align with the object pixels, the alternating pixels are
visible in the captured image. However, if the pixels of the camera are not
aligned with the pixels of the object, then the white and black pixels com-
bine to make gray in each capture device pixel (bright + dark = gray), and
the object pattern disappears! If we double the resolving power of the capture device, as in Figure 1.9C and D, the alternating pixel pattern is very sharp when the pixels align (panel C). However, even if the capture device pixels do not align with the object pixels (panel D), an alternating pattern of light and dark pixels is still visible; it is not perfect, with gray between the dark and bright pixels.
To resolve the alternating pattern of the object pixels, the camera has to sample the object at twice its spatial frequency. This level of sampling uses the Nyquist criterion and comes from statistical sampling theory. If the camera has double the spatial frequency of the smallest object in the field, the camera faithfully captures the image details. In terms of resolution, the inverse of resolving power, the pixel size in the capturing device should be half the pixel size, or the smallest resolvable feature, of the object. The camera needs finer resolution than the projected image of the object.

Figure 1.10 An image of a coat captured with a color camera showing regions of moiré patterns. Image from Paul Roth. Used with permission.
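The sampling experiment of Figure 1.9 can be simulated numerically. A minimal sketch (using NumPy; the pixel counts are arbitrary): a striped object is averaged over camera pixels at two resolving powers, with and without a half-pixel offset.

```python
import numpy as np

# Object: 16 alternating dark/bright pixels, drawn on a fine grid of
# 8 samples per object pixel so camera pixels can straddle boundaries
obj = np.repeat([0, 255] * 8, 8).astype(float)

def capture(signal, cam_pixel, offset):
    """Average the fine-grid signal over camera pixels of width cam_pixel,
    starting at the given fine-grid offset (models pixel misalignment)."""
    s = signal[offset:]
    n = len(s) // cam_pixel
    return s[:n * cam_pixel].reshape(n, cam_pixel).mean(axis=1)

# Camera resolving power equal to the object's spatial frequency:
aligned = capture(obj, 8, 0)   # pixels line up: 0, 255, 0, 255, ...
offset = capture(obj, 8, 4)    # half-pixel shift: uniform 127.5 gray,
                               # the stripe pattern "disappears"

# Camera at twice the object's spatial frequency (Nyquist criterion):
nyquist = capture(obj, 4, 2)   # pattern survives misalignment:
                               # 0, 127.5, 255, 127.5, 0, ...
```

Printing the three arrays reproduces the panels of Figure 1.9: equal-resolution capture is an all-or-nothing gamble on alignment, whereas doubled resolution always preserves the alternation.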
Getting back to the magnification needed to resolve a 0.5-μm mitochondrion with a 4618-ppi camera, a 40× lens pro-
duces an image with mitochondria at a spatial frequency of about 3175 ppi. To reduce the spatial frequency in the image
even more, a projection lens magnifies the image 2.5 times, projecting an enlarged image onto the camera chip. This is the
standard magnification of projection lenses, which are a basic part of compound photomicroscopes. The projection lens
produces an image of mitochondria at a spatial frequency of 1270 ppi, well below the 2309-ppi maximum frequency that the 4618-ppi camera can sample under the Nyquist criterion.
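The arithmetic in this example can be checked directly (a short sketch using the chapter's numbers):

```python
# Spatial frequencies in pixels per inch (ppi) from the chapter's example
object_ppi = 127_000        # spatial frequency to capture at the specimen
camera_ppi = 4_618          # camera with 5.5-um pixels

magnification = 40 * 2.5    # 40x objective plus 2.5x projection lens
image_ppi = object_ppi / magnification   # frequency at the camera chip
nyquist_limit_ppi = camera_ppi / 2       # highest frequency the camera
                                         # can sample per Nyquist

print(image_ppi)            # 1270.0
print(nyquist_limit_ppi)    # 2309.0
assert image_ppi < nyquist_limit_ppi     # the criterion is satisfied
```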
Sampling at the Nyquist criterion reduces aliasing, in which the image pixel value changes depending on the alignment
of the camera with the object. Aliasing produces moiré patterns. In Figure 1.10, the woven pattern in the sports coat gen-
erates wavy moiré patterns when the spatial frequency of the woven pattern matches or exceeds the spatial frequency of
the camera and its lens. A higher magnification lens can eliminate moiré patterns by reducing the spatial frequency of the
weave. In addition, reducing the aperture of the lens will eliminate the pattern by only collecting lower spatial frequencies
(see Sections 5.14 and 8.3).
The presence of moiré patterns in microscopes reveals that there are higher spatial frequencies to capture. Capturing
higher frequencies than the diffraction limit (moiré patterns) by illuminating the sample with structured light (see Section
18.7) produces a form of superresolution microscopy, structured illumination microscopy.

1.7 Archival Time, Storage Limits, and the Resolution of the Display Medium Influence
Capture and Scan Resolving Power

Flatbed scanners archive photographs, slides, gels, and radiograms (see Sections 6.1 and 6.2). Copying with scanners
should use the Nyquist criterion. For example, most consumer-grade electronic scanners for printed material now come
with a 1200 × 1200 dpi resolving power because half this spatial frequency, 600 ppi, is optimal for printed color photo-
graphs (see Table 1.4). For slide scanners, the highest resolving power should be 1500 to 3000 dpi, 1500 dpi for black and
white and 3000 dpi for color slides (see Table 1.4).

Figure 1.11 (A) Image of the central region of a diatom scanned at 72 ppi. The vertical stripes, the striae, on the shell of the diatom are prominent, but the bumps, or spicules, within the striae are not. (B) Scanning the images at 155 ppi reveals the spicules. However, this may be too large for web presentation. (C) Resizing the image using interpolation (bicubic) to 72 ppi maintains the view of the spicules and is therefore better than the original scan at 72 ppi. This is a scan of an image in Inoue, S. and Spring, K. 1997. Video Microscopy. Second Edition. Plenum Press, New York, NY. p. 528.

When setting scan resolving power in dots per inch, consider the final display medium of the copy. For web display, the
final ppi of the image is 72. However, the scan should meet or exceed the Nyquist criterion of 144 ppi. In the example
shown in Figure 1.11, there is a clear advantage to using a higher resolving power, 155 ppi, in the original scan even when
software rescales the image to 72 ppi.
If the output resolution can only accommodate a digital image of low resolving power, then saving the image as a low-
resolving-power image will conserve computer disk space. However, if scanning time and storage limits allow, it is always
best to save the original scan that used the Nyquist criterion. This fine-resolution image is then available for analysis and
display on devices with higher resolving powers.

1.8 Digital Image Resizing or Scaling Match the Captured Image Resolution to the Output
Resolution

If the final output resolution is a print, there are varieties of printing methods, each with its own resolving power. Laser
prints with a resolving power of 300 dpi produce high-quality images of black and white text with excellent legibility, as
would be expected from Table 1.1. However, in printers that report their dpi to include the dots inside half-tone cells
(Figure 1.12), which are the pixels of the image, the dpi set for the scan needs to be much higher than the value listed in
Table 1.4. Printers used by printing presses have the size of their half-tone screens pre-set. The resolution of these printers
is in lines per inch or lines per millimeter, each line being a row of half-tone cells. For these printers, half-tone images of
the highest quality come from a captured image resolving power (ppi) that is two times (i.e., the Nyquist criterion) the
printer half-tone screen frequency. Typical screen frequencies are 65 lpi (grocery coupons), 85 lpi (newsprint), 133 lpi
(magazines), and 177 lpi (art books).
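As a sketch, the required capture resolving power for each print medium is simply twice its screen frequency:

```python
# Half-tone screen frequencies in lines per inch (lpi), from the text
screen_lpi = {
    "grocery coupons": 65,
    "newsprint": 85,
    "magazines": 133,
    "art books": 177,
}

# Nyquist criterion: capture at twice the printer's screen frequency
required_ppi = {medium: 2 * lpi for medium, lpi in screen_lpi.items()}

print(required_ppi["magazines"])  # 266
```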

Figure 1.12 Half-tone cells for inkjet and laser printers. (A) Two half-tone cells composed of a 5 × 5 grid of dots. A 300-dpi printer with 5 × 5 half-tone cells would print at 300/5 or 60 cells per inch (60 ppi). This is lower resolution than all computer screens. These cells could represent 26 shades of gray. (B) Two half-tone cells composed of a 16 × 16 grid of dots. A 1440-dpi printer with 16 × 16 half-tone cells would print at 90 cells per inch (90 ppi). This gives good but not excellent legibility. These cells could represent 256 shades of gray. Diagram by L. Griffing.

Figure 1.13 Information loss during resizing. (A) The original image (2.3 inches × 1.6 inches). (B) The result achieved after reducing A about fourfold (to 0.5 inches in width) and re-enlarging, using interpolation during both shrinking and enlarging. Note the complete blurring of fine details and the text in the header. Photo by L. Griffing.

For image display at the same size in both a web browser and a printed presentation, scan the image at the resolution needed
for printing and then rescale it for display on the web. In other words, always acquire images at the resolving power
required for the display with the higher resolving power and rescale it for the lower resolving power display (see
Figure 1.11).
Digital image resolving power diminishes when resizing or scaling produces fewer pixels in the image. Reducing the image to half size could just remove every other pixel, but this does not produce a satisfactory image because it discards much of the information in the scene. A more satisfactory way is to group several pixels together and make a single new pixel from them, assigning the new pixel a value derived from the values of the grouped pixels. Even with this form of reduction, however, resolving power is lost (compare Figure 1.11C with 1.11B and Figure 1.13B with 1.13A). Computationally resizing and rescaling a fine-resolution image (Figure 1.11C) is better than capturing the image at lower resolving power (Figure 1.11A).
Enlarging an image can either make the pixels bigger or interpolate new pixels between the old pixels. The accuracy of interpolation depends on the sample and the process used. Three approaches for interpolating new pixel values, in order of increasing accuracy and processing time, are the near-neighbor, bilinear, and bicubic processes (see also Section 11.3 for 3D objects). Generating new pixels might raise the pixels-per-inch value, but all of the information necessary to generate the scene resides in the original smaller image. True resolving power is not improved; in fact, some information might be lost. Even simply reducing the image is problematic because shrinking it by grouping pixels, as described earlier, changes the information content of the image.
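These reduction and enlargement strategies can be sketched with NumPy on a toy 8 × 8 image (real software performs the bilinear or bicubic interpolation named above through a library resize call):

```python
import numpy as np

img = np.arange(64, dtype=float).reshape(8, 8)   # toy 8x8 grayscale image

# Unsatisfactory reduction: keep every other pixel, discarding the rest
decimated = img[::2, ::2]

# Better reduction: average each 2x2 group of pixels into one new pixel
averaged = img.reshape(4, 2, 4, 2).mean(axis=(1, 3))

# Near-neighbor enlargement: each pixel is simply repeated; no new
# information is created, so true resolving power does not improve
enlarged = np.repeat(np.repeat(averaged, 2, axis=0), 2, axis=1)

print(averaged[0, 0])   # 4.5, the mean of pixels 0, 1, 8, and 9
```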

1.9 Metadata Describes Image Content, Structure, and Conditions of Acquisition

Recording the settings for acquiring an image in scientific work (pixels per inch of acquisition device, lenses, exposure, date
and time of acquisition, and so on) is very important. Sometimes this metadata is in the image file itself (Figures 1.13 and
1.14). In the picture of the bear (Figure 1.13), the metadata is a header stating the time and date of image acquisition. In the
picture of the plant meristem (Figure 1.14), the metadata is a footer stating the voltage of the scanning electron microscope,
the magnification, a scale bar, and a unique numbered identifier. Including the metadata as part of the image has advan-
tages. A major advantage is that an internal scale bar provides accurate calibration of the image upon reduction or rescal-
ing. A major disadvantage is that resizing the image can make the metadata unreadable as the resolving power of the image
decreases (Figure 1.13B, header). Because digital imaging can rescale the x and y dimensions differently (without a specific
command such as holding down the shift key), a 2D internal scale bar would be best, but this is rare.
For digital camera and recording systems, the image file stores the metadata separately from the image pixel information.
The standard metadata format is EXIF (Exchangeable Image File) format. Table 1.5 provides an example of some of the
recorded metadata from a consumer-grade digital camera. However, not all imaging software recognizes and uses the same
codes for metadata. Therefore, the software that comes with the camera can read all of the metadata codes from that
camera, but other more general image processing software may not. This makes metadata somewhat volatile because just
opening and saving images in a new software package can remove it.
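Reading EXIF tags programmatically can be sketched with the Pillow library (an assumption, not part of this chapter); which tags come back depends on the camera and on what previous software preserved:

```python
from PIL import Image
from PIL.ExifTags import TAGS

def read_exif(path):
    """Return the EXIF tags of an image as a human-readable dictionary.
    Returns an empty dict if the file carries no EXIF metadata."""
    exif = Image.open(path).getexif()
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

# read_exif("IMG_6505.jpg") might return entries such as
# {"Make": "Canon", "Model": "Canon EOS DIGITAL REBEL XT", ...}
```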
Several images may share metadata. Image scaling (changing the pixels per inch) is a common operation in image
processing, making it very important that there be internal measurement calibration on digital scientific images. Fiducial
markers are calibration standards of known size contained within the image, such as a ruler or coin (for macro work), a
stage micrometer (for microscopy), or gold beads (fine resolution electron microscopy). However, their inclusion as an
internal standard is not always possible. A separate picture of such calibration standards taken under identical conditions
as the picture of the object produces a fiducial image, and metadata can refer to the fiducial image for scaling information
of the object of interest.
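A sketch of how a fiducial image converts pixel measurements to real units (the numbers here are hypothetical):

```python
def microns_per_pixel(known_length_um, measured_pixels):
    """Calibration factor from a fiducial image, e.g., a stage micrometer
    whose known length spans a measured number of pixels."""
    return known_length_um / measured_pixels

# Hypothetical example: a 100-um scale bar spans 412 pixels in a fiducial
# image taken under the same conditions as the image of the specimen
scale = microns_per_pixel(100.0, 412)
feature_um = 85 * scale        # an 85-pixel feature is about 20.6 um
```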
Image databases use metadata. A uniform EXIF format facilitates integration of this information into databases. There
are emerging standards for the integration of metadata into databases, but for now, many different standards exist. For
example, medical imaging metadata standards are different from the standards used for basic cell biology research. Hence,
the databases for these professions are different. However, in both these professions, it is important to record the condi-
tions of image acquisition in automatically generated EXIF files or in lab, field, and clinical notes.

Figure 1.14 Scanning electron micrograph with an internal scale bar and other metadata. This is an image of a telomerase-minus
mutant of Arabidopsis thaliana. The accelerating voltage (15 kV), the magnification (×150), a scale bar (100 µm), and a negative number
are included as an information strip below the captured image. Photo by L. Griffing.

Table 1.5 Partial Exchangeable Image File Information for an Image from a
Canon Rebel.

Title IMG_6505

Image description Icelandic buttercups


Make Canon
Model Canon EOS DIGITAL REBEL XT
Orientation Left side, bottom
X resolution 72 dpi
Y resolution 72 dpi
Resolution unit Inch
Date/time 2008:06:15 01:42:00
YCbCr positioning Datum point
Exposure time 1/500 sec
F-number F5.6
Exposure program Program action (high-speed program)
ISO speed ratings 400
Exif version 2.21
Date/time original 2008:06:15 01:42:00
Date/time digitized 2008:06:15 01:42:00
Components configuration YCbCr
Shutter speed value 1/256 sec
Aperture value F5.6
Exposure Bias value 0
Metering mode Multi-segment
Flash Unknown (16)
Focal length 44.0 mm
User comment
FlashPix version 1
Color space sRGB
EXIF image width 3456 pixels
EXIF image height 2304 pixels
Focal plane X resolution 437/1728000 inches
Focal plane Y resolution 291/1152000 inches
Focal plane resolution unit Inches
Compression JPEG compression
Thumbnail offset 9716 bytes
Thumbnail length 12,493 bytes
Thumbnail data 12,493 bytes of thumbnail data
Macro mode Normal
Self-timer delay Self-timer not used
Unknown tag (0xc103) 3
Flash mode Auto and red-eye reduction
Continuous drive mode Continuous

Focus mode AI Servo


Image size Large
Easy shooting mode Sports
Contrast High
Saturation High
Sharpness High

Annotated Images, Video, Web Sites, and References

1.1 The Pixel Is the Smallest Discrete Unit of a Picture


The mosaic in Figures 1.1 and 1.2 resides in the Museo Archeologico Nazionale (Naples).
For image-order and object-order rendering, see Schroder, W., Martin, K., and Lorensen, B. 2002. The Visualization
Toolkit. Third Edition. Kitware Inc. p. 35–36.
For a complete list of the different cell types, see Schroder, W., Martin, K., and Lorensen, B. 2002. The Visualization
Toolkit. Third Edition. Kitware Inc. p. 115.
The original painting in Figure 1.4 resides at the Art Institute of Chicago.
More discussion of subpixel color is in Russ, J. 2007. The Image Processing Handbook. CRC Taylor and Francis, Boca
Raton, FL. p. 136.

1.2 The Resolving Power of a Camera or Display Is the Spatial Frequency of Its Pixels
The reciprocal relationship between resolving power and resolution is key to understanding the measurement of the
fidelity of optical systems. The concept of spatial frequency, also called reciprocal space or k space, is necessary for the
future treatments in this book of Fourier optics, found in Chapters 8 and 14–19.
For more on video display standards, see https://en.wikipedia.org/wiki/List_of_common_resolutions.
Appropriate viewing distance is in Anshel, J. 2005. Visual Ergonomics Handbook. CRC Press, Taylor and Francis Group,
Boca Raton, FL.

1.3 Image Legibility Is the Ability to Recognize Text in an Image by Eye


Williams, J. B. 1990. Image Clarity: High Resolution Photography. Focal Press, Boston, MA. p 56, further develops the
information in Table 1.3.
Publication guidelines in journals are the basis for the stated resolving power for different media. For camera resolving
powers, see Section 5.3 and Table 5.1.

1.4 Magnification Reduces Spatial Frequencies While Making Bigger Images


More discussion of the concept of enlargement latitude is in Williams, J.B. 1990. Image Clarity: High Resolution Photography.
Focal Press, Boston, MA. p 57.

1.5 Technology Determines Scale and Resolution


Chapters 8 and 14–19 discuss the resolution criteria for each imaging modality.

1.6 The Nyquist Criterion: Capture at Twice the Spatial Frequency of the Smallest Object Imaged
The Nyquist criterion is from Shannon, C. 1949. Communication in the presence of noise. Proceedings of the Institute of
Radio Engineers 37:10–21. and Nyquist, H. 1928. Certain topics in telegraph transmission theory. Transactions of the
American Institute of Electrical Engineers 47:617–644.

1.7 Archival Time, Storage Limits, and the Resolution of the Display Medium Influence Capture and Scan
Resolving Power
Figure 1.11 is a scan of diatom images in Inoue, S. and Spring, K. 1997. Video Microscopy. Second Edition. Plenum Press, New York, NY. p. 528.

1.8 Digital Image Resizing or Scaling Match the Captured Image Resolution to the Output Resolution
See the half-tone cell discussion in Russ, J. 2007. The Image Processing Handbook. CRC Taylor and Francis, Boca Raton, FL.
p. 137
Printer technology is now at the level where standard desktop inkjet printers are satisfactory for most printing needs.

1.9 Metadata Describes Image Content, Structure, and Conditions of Acquisition


Figure 1.14 is from the study reported in Riha, K., McKnight, T., Griffing, L., and Shippen, D. 2001. Living with genome instability: Plant responses to telomere dysfunction. Science 291: 1797–1800.
For a discussion of metadata and databases, see Chapter 7 on measurement.

Pixel Values and Image Contrast

2.1 Contrast Compares the Intensity of a Pixel with That of Its Surround

How well we see a pixel depends not only on its size, as described in Chapter 1, but also on its contrast. If a ladybug’s spots
are black, then they stand out best on the part of the animal that is white, its thorax (Figure 2.1A and C). Black pixels have
the lowest pixel value, and white pixels have the highest (by most conventions); the difference between them is the con-
trast. In this case, the spots have positive contrast; subtracting the black spot value from the white background value
gives a positive number. Negative contrast occurs when white spots occur against a dark background. In the “negative” of
Figure 2.1A, Figure 2.1B shows the ladybug’s spots as white. They have high negative contrast against the darker wings;
subtracting the white spot value from the black background gives a negative number.

Figure 2.1 Grayscale and color contrast of ladybugs on a leaf. Positive-contrast images (A and C) compared with negative-contrast images
(B and D). In the positive-contrast images, the ladybugs’ spots appear dark against a lighter background. In the negative-contrast images, the
spots appear light against a darker background. The contrast between the ladybugs and the leaf in C is good because the colors red and
green are nearly complementary. A negative image (D) produces complementary colors, and the negative or complementary color to leaf
green is magenta. (E) Histograms display the number of pixels at each intensity. Grayscale positive and negative images have mirror-image
histograms. (F) The histograms of color images show the number of pixels at each intensity of the primary colors, red, green, and blue. A color
negative shows the mirror image of the histogram of the color positive: making a negative “flips” the histogram. Photo by L. Griffing.

Imaging Life: Image Acquisition and Analysis in Biology and Medicine, First Edition. Lawrence R. Griffing.
© 2023 John Wiley & Sons, Inc. Published 2023 by John Wiley & Sons, Inc.
Companion Website: www.wiley.com/go/griffing/imaginglife

The terms positive contrast and negative contrast come directly from the algebraic definition of percent contrast in Figure 2.2. If pixels in the background have higher intensity than the pixels of the object, then the value of the numerator is positive, and the object has positive contrast. If the object pixels have a higher intensity than the background pixels, then the value in the numerator is negative, and the object has negative contrast. The negatives of black-and-white photographs, or grayscale photographs, have negative contrast. Although the information content in the positive and negative images in Figure 2.1 is identical, our ability to distinguish features in the two images depends on the perception of shades of gray by eye and on psychological factors that may influence that perception.

Figure 2.2 Algebraic definition of percent contrast. If Ibkg > Iobj, there is positive contrast. If Iobj > Ibkg, there is negative contrast. Diagram by L. Griffing.
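The algebraic definition referenced in Figure 2.2 can be written as a one-line function. A sketch, assuming the common form of percent contrast with the background intensity in the denominator:

```python
def percent_contrast(i_obj, i_bkg):
    """Percent contrast of an object against its background.
    Positive when the background is brighter (Ibkg > Iobj),
    negative when the object is brighter (Iobj > Ibkg)."""
    return 100.0 * (i_bkg - i_obj) / i_bkg

print(percent_contrast(0, 255))     # 100.0: black spot on white thorax
print(percent_contrast(255, 128))   # about -99.2: white spot on gray wing
```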
In a color image, intensity values are combinations of the intensities of the primary colors, red, green, and blue. While
the human eye (see Section 4.2) can only distinguish 50–60 levels or tones of gray on a printed page (Figure 2.3), it can
distinguish millions of colors (Figure 2.4). Consequently, color images can have much more contrast and more information
than grayscale images. In Figure 2.1C, the distinction between the orange and red ladybugs is more apparent than
in Figure 2.1A. Figure 2.1D shows the negative color contrast image of Figure 2.1C. The negative of a color is its comple-
mentary color (see Figure 2.4). The highest contrast between colors occurs when the colors are complementary.

Figure 2.3 Grayscale spectra. (A) Spectrum with 256 shades of gray. Each shade is a gray level or tone. The tones blend so that each individual tone is indistinguishable by eye. (B) Spectrum with 64 shades of gray. The tones at this point cease to blend, revealing some of the individual “slices” of gray. (C) Spectrum with 16 shades of gray. The individual bands or regions of one gray level are apparent. Such color or gray-level banding is “posterized.” Diagram by L. Griffing.

Figure 2.4 (A) Red, green, blue (RGB) color spectrum. (B) Color negative of (A), with complementary colors of the primary RGB shown with arrows. Diagram by L. Griffing.

2.2 Pixel Values Determine Brightness and Color

That pixels have intensity values is implicit in the definition of contrast (see Figure 2.2). In a black-and-white, or grayscale,
image, intensity values are shades of gray. If the image has fewer than 60 shades of gray, adjacent regions (where the gray
values should blend) become discrete, producing a banded, or posterized, appearance to the image. Dropping the number
of gray values from 64 to 16 produces a posterized image, as shown in Figure 2.3B and C. Likewise, as the number of gray
values diminishes below 64, more posterization becomes evident (Figure 2.5).

Figure 2.5 Grayscale images at various pixel depths of a plant protoplast (a plant cell with the outer wall removed) accompanied by their histograms. The differential interference contrast light micrographs show the object using a gradient of gray levels across it (see Section 16.10). Histograms can be evaluated in Photoshop by selecting Window > Histogram; for ImageJ, select Analyze > Histogram. (A) An 8-bit image (8 bits of information/pixel) has 2⁸ or 256 gray levels. Scale bar = 12 μm. (B) A 6-bit image has 2⁶ or 64 gray levels, and discernible gray-level “bands,” or posterization, begin to appear in the regions of shallow gray-level gradients. The features of the cell, like the cell cortical cytoplasm, are still continuous. (C) A 4-bit image has 2⁴ or 16 gray levels. Posterization is severe, but most of the cell, such as the cortical cytoplasm and cytoplasmic strands, is recognizable. (D) A 2-bit image has 2² or 4 gray levels. Much of the detail is lost; the cell is becoming unrecognizable. (E) A 1-bit image has 2¹ or 2 gray levels. The cell is unrecognizable. (a–e) Histograms of images A–E. Each plots the number of pixels occurring in the image at each gray level against the possible number of gray levels in the image, which is set by the pixel depth. This is just a plot of the number of pixels at each gray level, removing all of the spatial information. Note that this image has no pixels in the lowest and highest intensity ranges at pixel depths between 8 bit and 4 bit. In these images, the tonal range of the image is lower than the pixel depth of the image. Photo by L. Griffing.

In digital imaging, the image information comes in discrete information bits, the bit being a simple “on/off” switch hav-
ing a value of 0 or 1. The more bits in a pixel, the larger the amount of information and the greater the pixel depth.
Increasing the number of bits increases the information by powers of 2 for the two states of each bit. Consequently, an
image with 8 bits, or one byte, per pixel has 2⁸, or 256, combinations of the “on/off” switches. Because 0 is a value, the
grayscale values range from 0 to 255 in a 1-byte image.
Computers that process and display images with large pixel depth have a lot of information to handle. To calculate the
amount of information in a digital image, multiply the pixel dimensions in height, width, and pixel depth. A digitized
frame 640 pixels in height × 480 pixels in width × 1 byte deep requires 307.2 kilobytes (kB) for storage and display. A color
image with three 1-byte channels and the same pixel height and width will be 921 kB.
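The storage calculation can be sketched directly:

```python
def image_storage_bytes(width, height, channels=1, bits_per_channel=8):
    """Uncompressed storage for an image: one value per channel per
    pixel, at the given pixel depth."""
    return width * height * channels * bits_per_channel // 8

gray_levels = 2 ** 8                       # 256 values in an 8-bit pixel
mono = image_storage_bytes(640, 480)       # 307,200 bytes = 307.2 kB
rgb = image_storage_bytes(640, 480, 3)     # 921,600 bytes, about 921 kB
```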
The pixel depth of the image limits the choice of software for processing the image (Table 2.1). A major distinguishing
feature of different general-purpose image processing programs is their ability to handle high pixel depth commonly found
in scientific cameras and applications (see Table 2.1). Regardless of the pixel depth, a graph of how many pixels in the
image have a certain pixel value, or the image histogram, is in most software.

Table 2.1 Raster Graphics Image Processing Programs Commonly Used for Contrast Analysis and Adjustment.a

File Formats
Color Spaces and Image Modes Supported + PNG,
Supported Features Supported JPG, RAWh

Operating Large
Software Systemsb: Editable Pixel sRGB
Package Win OSX Lin Histogram Selectionc Layersd Depthe aRGBf CMYKg Indexed Grayscale TIFF SVG XCF

Proprietary–
purchase
Adobe Photoshop Yes Yes No Yes Yes Yes Yes Yes Yes Yes Yes Yes Yes No
Corel Paint Shop Pro Yes No No Yes Yes Yes Yes Yes Yes Yes Yes Yes Yes No
Proprietary–
freeware
IrfanView Yes No No Yes Yes No No sRGB No Yes Yes Yes Plgin No
Paint.Net Yes No Yes Yes Yes Yes No sRGB Some Some Some Plgin Plgin Yes
Google Yes Yes Yes Yes No No No sRGB No Some Some Imprt Yes No
Photos
Open
Source
GIMP2 (or GIMPShop) Yes Yes Yes Yes Yes Yes Yes Yes Some Yes Yes Some Yes Yes
ImageJ Yes Yes Yes Yes Yes Some Yes Yes No Yes Yes Yes Some No
a. There is no standard image processing software for the “simple” tasks of contrast enhancement in science. (The image analysis software for image measurement is described in Chapter 8.) The demands of scientific imaging include being able to recognize images of many different formats, including some RAW formats proprietary to certain manufacturers, and large pixel depths of as much as 32 bits per channel. XCF is the native format for GIMP2. PNG, JPG, and RAW are all supported.
b. Win = Windows (Microsoft), OSX = OSX (Apple), and Lin = Unix (The Open Group) or Linux (multiple distributors).
c. Editable selections can be either raster or vector based. Vector based is preferred.
d. Layers can include contrast adjustment layers, whereby the layer modifies the contrast in the underlying image.
e. Usually includes 12-, 16-, and 32-bit grayscale and higher bit depth multi-channel images.
f. aRGB = Adobe (1998) RGB colorspace; sRGB = standard RGB.
g. CMYK = cyan, magenta, yellow, and black colorspace.
h. PNG = portable network graphic; JPG = joint photographic experts group; RAW = digital negative; TIFF = tagged image file format; SVG = scalable vector graphic; XCF = experimental computing facility format.
Imprt = opens as an imported file; Plgin = opens with a plugin.

2.3 The Histogram Is a Plot of the Number of Pixels in an Image at Each Level of Intensity

The image histogram is a plot of the number of pixels (y-axis) at each intensity level (x-axis) (see Figure 2.5a–e). As the bit
depth and intensity levels increase, the number on the x-axis of the histogram increases (see Figure 2.5a–e). For 8-bit images,
the brightest pixel is 255 (see Figure 2.5a). The blackest pixel is zero. Histograms of color images show the number of pixels at
each intensity value of each primary color (see Figure 2.1F). To produce a negative image, “flip” or invert the intensity values
of the histogram of grayscale images (see Figure 2.1E) and of each color channel in a color image (see Figure 2.1F).
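The histogram and its “flip” can be sketched with NumPy (a random test image stands in for a photograph):

```python
import numpy as np

img = np.random.default_rng(0).integers(0, 256, (64, 64), dtype=np.uint8)

# Histogram: how many pixels occur at each of the 256 possible gray levels
counts, _ = np.histogram(img, bins=256, range=(0, 256))

# Inverting every pixel value produces the negative image; its histogram
# is the mirror image of the original histogram
negative = 255 - img
neg_counts, _ = np.histogram(negative, bins=256, range=(0, 256))

print(bool((neg_counts == counts[::-1]).all()))   # True
```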
As the pixel depth decreases, the number of histogram values along the x-axis (see Figure 2.5a–e) of the histograms
decreases. When there are only two values in the histogram, the image is binary (Figure 2.5E). Gray-level detail in the
image diminishes as pixel depth decreases, as can be seen in the posterization of the protoplast in Figure 2.5B–E.
The histogram has no spatial information in it other than the total number of pixels in the region represented by the histogram, which, for purposes of discussion in this chapter, is the whole image. To get spatial representations of pixel intensity, intensities can be plotted along a selected line (the intensity plot; Figure 2.6A and B), or they can be mapped across a two-dimensional region of interest (ROI) with a three-dimensional surface plot (Figure 2.6A and C).

Figure 2.6 Intensity plots of a grayscale light micrograph of a plant protoplast taken with differential interference contrast optics.
(A) Plant protoplast with a line selection (yellow) made across it in ImageJ. (B) Intensity plot of line selection (A) using the Analyze > Plot
Profile command in ImageJ. (C) Surface plot of the grayscale intensity of the protoplast in (A) using the Analyze > Surface Plot command
in ImageJ. Photo by L. Griffing.

Information about the histogram in standard histogram displays (Figure 2.7, insets B and D) includes the total number
of pixels in the image, as well as the median value, the mean value, and the standard deviation around the mean value. The
standard deviation is an important number because it shows the level of contrast or variation between dark and light,
higher standard deviations meaning higher contrast. One way to assess sharpening of the image is by pixel intensity
standard deviation, with higher standard deviations producing higher contrast and better “focus” (see Section 3.8).
However, there is a trade-off between contrast and resolution (see Section 5.15) – higher contrast images of the same scene
do not have higher resolving power.

2.4 Tonal Range Is How Much of the Pixel Depth Is Used in an Image

The number of gray levels represented in the image is its tonal range. The ideal situation for grayscale image recording
is that the tonal range matches the pixel depth, and there are pixels at all the gray values in the histogram. If only a small
region of the x-axis of the image histogram has values in the y-axis, then the tonal range of the image is too small. With
a narrow range of gray tones, the image becomes less recognizable, and features may be lost as in Figure 2.7A, in which
the toads are hard to see. Likewise, if the tonal range of the scene is greater than the pixel depth (as in over- and under-
exposure; see Section 2.5), information and features can be lost. In Figure 2.7C, the tonal range of the snail on the leaf is
good, but there are many pixels in the histogram at both the 0 and 255 values (Figure 2.7D). Objects with gray-level
values below zero, in deep shade, and above 255, in bright sunlight, have no contrast and are lost. The values above 255
saturate the 255 limit of the camera. The ROI, such as the snail on the leaf in Figure 2.7C, may have good tonal range
even though there are regions outside the ROI that are over- or underexposed. A histogram of that ROI would reveal its
tonal range.
Do not confuse the sensitivity of the recording medium with its pixel depth, its capacity to record light gray levels or
gradations. The ISO setting on cameras (high ISO settings, 800–3200, are for low light) adjusts pixel depth, but this does not
make the camera more sensitive to light; it just makes each pixel shallower, saturating at lower light levels (see Section
5.11). Scene lighting and camera exposure time are the keys to getting good tonal range (see Section 9.5). Many digital
single lens reflex (SLR) cameras display the image histogram of the scene in real time. The photographer can use that
information to match the tonal range of the scene to the pixel depth of the camera, looking for exposure and lighting con-
ditions where there are y-axis values for the entire x-axis on the image histogram. After taking the image, digital adjust-
ment (histogram stretching; see Section 2.10) of the tonal range can help images with limited tonal range that are not
over- or underexposed. However, it is always better to optimize tonal range through lighting and exposure control (see
Sections 9.5 and 9.10).
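The histogram stretching mentioned above (detailed in Section 2.10) amounts to remapping the occupied tonal range linearly onto the full pixel depth. A minimal sketch, assuming a simple min-max linear stretch rather than the book's exact Section 2.10 procedure:

```python
import numpy as np

def stretch_histogram(img):
    """Remap the occupied tonal range [min, max] onto the full 0-255 range."""
    lo, hi = int(img.min()), int(img.max())
    if hi == lo:  # flat image: nothing to stretch
        return img.copy()
    scaled = (img.astype(np.float64) - lo) * 255.0 / (hi - lo)
    return np.round(scaled).astype(np.uint8)

# Narrow tonal range (like Figure 2.7A's toads): values only in 100-150
narrow = np.linspace(100, 150, 256).astype(np.uint8).reshape(16, 16)
stretched = stretch_histogram(narrow)
assert stretched.min() == 0 and stretched.max() == 255
```

Stretching cannot recover pixels clipped at 0 or 255, which is why optimizing exposure and lighting first remains the better practice.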

Figure 2.7 Grayscale images and their histograms. (A) Grayscale image of toads. (B) Histogram of image in (A) shows a narrow tonal
range of the image with no pixels having high- or low-intensity values. (C) Image of a land snail on plants. (D) Pixels in (C) have a
wide tonal range. The histogram shows pixels occupy almost all of the gray values. Photos by L. Griffing.

2.5 The Image Histogram Shows Overexposure and Underexposure

Underexposed images contain more than 5% of pixels in the bottom four gray levels of a 256 grayscale. The ability of the
human eye to distinguish 60 or so gray levels (see Chapter 5) determines this; four gray levels is about 1/60th of 256. Hence,
differences within the first four darkest intensities are indistinguishable by eye. A value of zero means that the camera was
not sensitive enough to capture light during the exposure time. Assuming that the ROI is 100% of the pixels, then underex-
posure of 5% of the pixels is a convenient statistically acceptable limit. There are underexposed areas in Figure 2.7C but
none in Figure 2.7A. However, less than 5% of Figure 2.7C is underexposed. Underexposed Figure 2.8A has more than 40%
of the pixels in the bottom four gray levels.
The argument is the same for overexposure, the criterion being more than 5% of the pixels in the top four gray levels of
a 256 grayscale. For pixels with a value in the highest intensity setting, 255 in a 256–gray-level spectrum, the camera pixels
saturate with light, and objects within those areas carry no information. Figure 2.7C has some bright areas that are overexposed,
as shown by the histogram in Figure 2.7D. However, the image as a whole is not overexposed. Figure 2.9A is bright but not
overexposed, with just about 5% of the pixels in the top four gray levels (Figure 2.9B).
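The 5% criteria above lend themselves to a direct check on the histogram tails. A hedged NumPy sketch (the function names and threshold wiring are mine, following the rule exactly as stated: more than 5% of pixels in the bottom or top four gray levels of a 256-level scale):

```python
import numpy as np

def underexposed(img, frac=0.05):
    """More than frac of pixels in the bottom four gray levels (0-3)?"""
    return np.mean(img < 4) > frac

def overexposed(img, frac=0.05):
    """More than frac of pixels in the top four gray levels (252-255)?"""
    return np.mean(img > 251) > frac

rng = np.random.default_rng(1)
# Mostly mid-tone image with ~40% of pixels crushed to black,
# like the fluorescence image of Figure 2.8A
img = rng.integers(60, 200, size=(100, 100)).astype(np.uint8)
img[:40, :] = 0
assert underexposed(img) and not overexposed(img)
```

Running the same functions on an ROI rather than the whole array implements the point made above: a well-exposed ROI can sit inside a frame whose extremes are clipped.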

Figure 2.8 Low-key grayscale image and its histogram. (A) Low-key image of the upper epidermis of a tobacco leaf expressing green
fluorescent protein engineered so that it is retained in the endoplasmic reticulum. Like many fluorescence microscopy images, most
of the tonal range is in the region of low intensity values, so it is very dark. (B) Histogram of fluorescence image. The large
number of pixels at the lowest intensities shows that the image is low key and underexposed. Photo by L. Griffing.

Figure 2.9 High-key grayscale image and its histogram. (A) High-key image of polar bears. Most of the tonal range is in the region of
high intensity values, so it is very light. (B) Histogram of the polar bear image. Note that although there are primarily high intensity
values in the image, there are very few at the highest value; the image is not overexposed. Photo © Jenny Ross, used with permission.

2.6 High-Key Images Are Very Light, and Low-Key Images Are Very Dark

Figure 2.9A is high key because most of the pixels have an intensity level greater than 128, half of the 255 maximum
intensity value. Figure 2.8A is low key because most of the pixels have intensity values less than 128. For these scenes, the exposure – light
intensity times the length of time for the shutter on the camera to stay open – is set so that the interesting objects have
adequate tonal range. Exposure metering of the ROI (spot metering; see Sections 5.2 and 9.4) is necessary because taking
the integrated brightness of the images in Figures 2.8 and 2.10 for an exposure setting would result in overexposed
fluorescent cells. Hence, the metered region should only contain fluorescent cells. Over- or underexposure of regions that
are not of scientific interest is acceptable if the ROIs have adequate tonal range. Low-key micrographs of fluorescent (see
Section 17.3) or darkfield (see Sections 9.3 and 16.3) objects are quite common.
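By the same token, whether an image is high or low key can be read straight off the histogram. A minimal sketch (the helper is my own, using the half-scale value 128 as the text does):

```python
import numpy as np

def image_key(img):
    """'high' if most pixels exceed 128, 'low' if most fall below, else 'mid'."""
    if np.mean(img > 128) > 0.5:
        return "high"
    if np.mean(img < 128) > 0.5:
        return "low"
    return "mid"

polar_bears = np.full((32, 32), 200, dtype=np.uint8)   # mostly bright scene
fluorescence = np.full((32, 32), 20, dtype=np.uint8)   # mostly dark scene
assert image_key(polar_bears) == "high"
assert image_key(fluorescence) == "low"
```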

2.7 Color Images Have Various Pixel Depths

Pixels produce color in a variety of modes. Subpixel color (see Section 1.1) produces the impression that the entire pixel
has a color when it is composed of different intensities of three primary colors. The lowest pixel depth for color images
is 8 bit, in which the intensity of a single color ranges from 0 to 255. These are useful in some forms of monochromatic
imaging, such as fluorescence microscopy, in which a grayscale camera records a monochromatic image. To display the
monochromatic image in color, the entire image is converted to indexed color and pseudocolored, or false colored, with
a 256-level color table or look-up table (LUT) (Figure 2.10). Color combinations arise by assigning different colors, not
just one color, to the 256 different pixel intensities in an 8-bit image; 8-bit color images are indexed color images.
Indexed color mode uses less memory than other color modes because it has only one channel. Not all software supports
this mode (see Table 2.1).
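Pseudocoloring with a look-up table amounts to indexing a 256-entry color table with each 8-bit pixel value. A sketch of a green LUT like the one applied in Figure 2.10 (illustrative only, not ImageJ's LUT machinery):

```python
import numpy as np

def apply_lut(gray, lut):
    """Index a 256-entry RGB look-up table with each 8-bit pixel value."""
    return lut[gray]  # shape (H, W) -> (H, W, 3)

# Green LUT: gray intensity i maps to the color (0, i, 0)
green_lut = np.zeros((256, 3), dtype=np.uint8)
green_lut[:, 1] = np.arange(256)

gray = np.array([[0, 128], [200, 255]], dtype=np.uint8)
rgb = apply_lut(gray, green_lut)
assert rgb.shape == (2, 2, 3)
assert tuple(rgb[1, 1]) == (0, 255, 0)   # brightest pixel -> pure green
assert tuple(rgb[0, 0]) == (0, 0, 0)     # zero stays black
```

Swapping in a different 256-entry table recolors the image without touching the underlying intensities, which is why indexed color remains a single-channel mode.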

Figure 2.10 Indexed color image of Figure 2.8 pseudocolored using the color table shown. The original sample emitted
monochromatic green light from green fluorescent protein captured with a grayscale camera. Converting the grayscale image to an
indexed color image and using a green lookup table mimics fluoresced green light. Photo by L. Griffing.
28 2 Pixel Values and Image Contrast

Figure 2.11 Color image and its associated red, green, and blue channels. (A) Full-color image of a benthic rock outcrop in Stetson
Bank, part of the Flower Gardens National Marine Sanctuary, shows all three channels, red, green, and blue. There is a circle around
the red fish. (B) Red channel of the color image in (A) shows high intensities, bright reds, in the white and orange regions of the image.
Note the red fish (circle) produces negative contrast against the blue-green water, which has very low red intensity. (C) Green channel
of the image. Note that the fish (circle) is nearly invisible because it had the same intensity of green as the background. (D) Blue
channel of image. Note that the fish (circle) is now visible in positive contrast because it has very low intensities of blue compared
with the blue sea surrounding it. Photo by S. Bernhardt, used with permission.

A more common way of representing color is to combine channels of primary colors to make the final image, thereby
increasing the pixel depth with each added channel. The two most common modes are RGB (for red, green, and blue) and
CMYK (for cyan, magenta, yellow, and black). Figure 2.11 is an RGB image in which a red fish (circled) appears in the
red (negative contrast) and blue (positive contrast) channels but disappears in the green channel. The three 8-bit chan-
nels, each with its own histogram (see Figure 2.1), add together to generate a full-color final image, producing a pixel
depth of 24 bits (3 bytes, or three channels that are 8 bits each). Reducing the pixel depth in the channels produces a
color-posterized image (Figure 2.12B–D). At low pixel depths, background gradients become posterized (arrows in Figure
2.12), and objects such as the fish become unrecognizable (Figure 2.12D). Video and computer graphic displays use RGB
mode, whereby adding different channels makes the image brighter, producing additive colors. The CMYK mode uses
subtractive colors, whereby combining different cyan, magenta, and yellow channels makes the image darker, subtracting
intensity, as happens with inks and printing. Because the dark color made with these three channels never quite reaches
true black, the mix includes a separate black channel. Consequently, the CMYK mode uses four 8-bit channels, or has a pixel
depth of 32 bits (4 bytes). Because these color spaces are different, they represent a different range, or gamut, of colors
(see Section 4.3). Even within a given color space, such as RGB, there are different gamuts, such as sRGB and Adobe RGB
(see Table 2.1).
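Channel splitting and the posterization shown in Figure 2.12 can both be sketched directly. An illustrative NumPy version (the scene array and helper names are mine); reducing each channel to k bits quantizes its 256 levels down to 2^k:

```python
import numpy as np

def split_rgb(img):
    """Return the red, green, and blue channels of a 24-bit RGB image."""
    return img[..., 0], img[..., 1], img[..., 2]

def posterize(img, bits):
    """Reduce pixel depth per channel by zeroing the low-order bits."""
    shift = 8 - bits
    return ((img >> shift) << shift).astype(np.uint8)

# A "red fish" pixel next to a blue-green "water" pixel
scene = np.array([[[200, 40, 30], [10, 150, 160]]], dtype=np.uint8)
r, g, b = split_rgb(scene)
assert r[0, 0] > b[0, 0]        # the fish is bright in the red channel

coarse = posterize(scene, 2)    # 2 bits -> only 4 levels per channel
assert coarse[0, 0, 0] == 192   # 200 quantized down to the 192 level
```

At 2 bits per channel, smooth gradients collapse into bands, which is the posterization visible in the backgrounds of Figure 2.12.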