Adaptive Optics in Biology
Carl J Kempf
Iris AO Inc, Berkeley, California, US

IOP Publishing, Bristol, UK


© IOP Publishing Ltd 2017

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system
or transmitted in any form or by any means, electronic, mechanical, photocopying, recording
or otherwise, without the prior permission of the publisher, or as expressly permitted by law or
under terms agreed with the appropriate rights organization. Multiple copying is permitted in
accordance with the terms of licences issued by the Copyright Licensing Agency, the Copyright
Clearance Centre and other reproduction rights organisations.

Permission to make use of IOP Publishing content other than as set out above may be sought
at permissions@iop.org.

Carl J Kempf has asserted his right to be identified as the author of this work in accordance with
sections 77 and 78 of the Copyright, Designs and Patents Act 1988.

ISBN 978-0-7503-1548-7 (ebook)

DOI 10.1088/978-0-7503-1548-7

Version: 20171101

Physics World Discovery


ISSN 2399-2891 (online)

British Library Cataloguing-in-Publication Data: A catalogue record for this book is available
from the British Library.

Published by IOP Publishing, wholly owned by The Institute of Physics, London

IOP Publishing, Temple Circus, Temple Way, Bristol, BS1 6HG, UK

US Office: IOP Publishing, Inc., 190 North Independence Mall West, Suite 601, Philadelphia,
PA 19106, USA
Contents

Abstract
Acknowledgements
Author biography

Adaptive Optics in Biology

1 Introduction
2 Background
Importance of in vivo imaging
Imaging deep into tissue and specimen-induced aberrations
Diffraction limited imaging and the effect of aberrations
Wave-fronts and modal representations
Adaptive optics history and key concepts
3 Current directions
Requirements for adaptive optics in biological imaging
Wide-field microscopes
Confocal and multi-photon microscopes
Optical coherence tomography
Image post-processing and deconvolution
Adaptive optics technology: wave-front correctors
Adaptive optics technology: feedback alternatives
Adaptive optics technology: direct wave-front sensing
Adaptive optics technology: indirect wave-front sensing
Example application: indirect wave-front sensing
Example application: retinal imaging
4 Outlook
Additional resources

Abstract

Biological imaging at the cellular level, particularly in vivo, is a key enabling tool for
understanding biological processes, disease diagnosis and drug development.
Unfortunately, specimens under observation often introduce optical aberrations
that degrade the imaging resolution, particularly when imaging deep into tissue.
Adaptive optics counteracts these effects. This technology, originally developed and
applied in astronomy, has been increasingly applied to ophthalmics and microscopy.
This book introduces the key concepts and technologies behind adaptive optics. The
wide diversity of imaging methods used in biological imaging requires multiple
approaches to adaptive optics. Methods fall into two broad categories—those that
use a direct measurement of the aberration and those that do not. Relative
advantages of each are described. Application results are presented. Finally, current
technology trends and promising future application areas are discussed.

Acknowledgements

The author thanks Michael Helmbrecht and the staff at Iris AO for their support on this
project and Marinko Sarunic and Yifan Jian at Simon Fraser University for use of
images. Technologies developed by Iris AO featured in this e-book have been
supported in part by SBIR research grants from the National Science Foundation
(IIP-0750521), the National Institutes of Health (5R44EY015381-03) and NASA
(NNG07CA06C).

Author biography

Carl J Kempf
Carl Kempf PhD, PE is a senior systems engineer at Iris AO, Inc.
He has worked on sensing, actuation, and control systems for high
precision devices for over 30 years. His background spans data
storage devices, factory automation, laboratory instruments, and
adaptive optics. He has researched and published within the general
field of mechatronics, with emphasis on repetitive control,
disturbance observer techniques, wave-front control, system
identification and modeling. Kempf’s technical interests include real-world
implementation, meaning extensive digital, analog, and power electronics design,
construction and debugging. This goes hand-in-hand with low-level firmware
development on embedded systems such as digital signal processors and field
programmable logic devices. Kempf is a registered Professional Engineer in
California.

Physics World Discovery

Adaptive Optics in Biology


Carl J Kempf

1 Introduction
Observations at the cellular level are driving the need for imaging systems that can
perform at or beyond the resolution limits of many existing wide-field microscope
systems. Imaging in vivo is particularly useful for understanding basic biological
processes, disease diagnosis, and drug development. Observed regions are frequently
3D volumes and maintaining high resolution throughout the volume is a major
challenge.
New technologies have enabled major advances in biological imaging in recent
decades. Ultrasound and magnetic resonance imaging are now routine clinical
procedures. Impressive as these are for the ability to scan very deep into tissue, their
resolutions are typically no better than hundreds of microns. Optical microscopes of
various types still provide the highest levels of resolution. These commonly yield
stunning images with resolutions reaching sub-micron levels, enabling optical
microscopes to observe cellular and sub-cellular structures. However, the resulting
images are often degraded by optical aberrations occurring in the samples being
observed. These arise from variations in refractive index within the biological tissue
and the effects become more acute when peering deep into tissue.
A technology known as adaptive optics (AO) has the capability to counteract the
undesired effects of optical aberrations. Originally developed for astronomy, this
technology has taken hold in biological imaging over the past two decades. It is now
common in a few imaging modalities in research applications. Manufacturers of
adaptive-optics equipment have simultaneously adapted the size, ease of use, and
price of components to better fit biological imaging applications, thus enabling
wider adoption.
Within the broad field of biological imaging, my discussion in this ebook will be
aimed at high-resolution optical imaging. This means, essentially, microscopes and
ophthalmoscopes. But even when restricting discussion to these instruments, there
remains a wide variation in design and operation that has important consequences
when applying adaptive optics. This work reviews the important classes of instru-
ments and the corresponding challenges. It then moves on to key concepts and
technologies in adaptive optics that guide a practitioner interested in weighing
design options. Finally, results from representative systems are presented and future
directions are discussed.

2 Background
Importance of in vivo imaging
Imaging live subjects, both human and animal, is a key tool for advancing medical
care. A familiar example is clinical care of human patients—think of making a
detailed observation of a patient’s retina to check for the onset of disease. An even
larger application area involves imaging animal subjects—from fruit flies to
primates—in research settings. The mouse, in particular, receives a lot of attention
for its use as a model organism for treating human diseases. Regardless of the
organism, however, the goal is to observe ongoing physiological process in as much
detail as possible, as rapidly as possible, and with minimum distress or damage.
Further, the imaging system must be able to repeat measurements on a periodic basis
as consistently as possible. This is where a well-functioning adaptive-optics system
adds value. Ideally, such a system enables imaging at a level of resolution that would
otherwise be impossible, at depths that would otherwise be unattainable, and within
a time frame that is not detrimental to the subject.

Imaging deep into tissue and specimen-induced aberrations


Microscope builders trim out nearly all the aberration from the instrument itself to
give nearly diffraction-limited imaging, which is the maximum possible under ideal
conditions. The specimen itself is therefore the main contributor of aberrations that
degrade images. But since every specimen is unique, it is not possible to predict and
correct these aberrations a priori. Moreover, aberrations accumulate when imaging
deeper into samples to obtain 3D volumetric images. Maintaining imaging reso-
lutions at or near the micron level becomes very challenging. This problem is made
worse when looking deep into specimens: light is absorbed and scattered strongly, leaving less
available to form an image. Depending on the tissue being observed, the wavelength
of light used, and the particular imaging modality, the depth limits vary signifi-
cantly. Under favourable conditions, this maximum depth at which a useful image
can be taken is currently limited to hundreds of microns. Adaptive optics cannot
restore the light that has been lost to absorption and scattering, but the technique
can make the best possible use of what light is available—restoring contrast and
sharpness to what would otherwise be a faint and fuzzy image.
To compound the challenge, aberrations within the sample can vary with time
and location. For in vivo measurements, this time variation could be due to natural
movements or to biological processes. In ophthalmic imaging, for example, the eye
is seldom completely stationary. Good correction requires tracking
aberrations that can change at frequencies up to a few hertz. Even for ex vivo
imaging, samples frequently dry out or undergo other changes that require some
degree of tracking of slowly time-varying events. Fortunately, the time scales are slow
compared to atmospheric corrections, so technology already exists that can operate
at the requisite speeds. Spatial variations can present a different challenge. In optical
systems that scan a beam at high frequencies, the scanning frequency is often
kilohertz or tens of kilohertz—rates at which adaptive optics cannot keep up.
These can be handled by doing local optimizations within an image and, with
sufficiently fast electronics synchronized to the scanning, corrections determined by
the optimization process can be applied in an open-loop fashion during the scanning.

Diffraction limited imaging and the effect of aberrations


In any optical system, fundamental properties of light impose limits on how good
the imaging can be. Any optical detector—even something as fundamental as our
eyes—sees an image of the object being observed. This distinction between the image
and the underlying object is important; the image is never a perfect rendering of the
object. Exactly how well the image reveals the features present in the underlying
object depends on the resolution of the system. Resolution is simply the ability to
distinguish closely spaced objects. In a high-resolution image, closely spaced features
in the object can be seen as distinct, separate features. In a lower resolution image,
these blur together.
The limit to the resolving power of a system—i.e. the diffraction limit—is
imposed by the wave nature of light. As the light passes through an optical system,
constructive and destructive interference effects limit the sharpness of the image that
is formed. Consider a simple system in which a circular beam of coherent,
monochromatic light enters a single ideal lens. At the focal length of the lens, the
light is concentrated down to a region of high irradiance. This region is not an
infinitely small point, however, as this would imply an infinitely high energy density.
The reason why the light spreads goes back to how the light is collected by the lens.
Rays of light at all points throughout the circular beam travel slightly different
distances to reach the focal plane of the lens. These path differences lead to both
constructive and destructive wave interference. Only along the centre axis of the lens
do the waves have a predominantly reinforcing effect, leading to the region of high
irradiance that we think of as the focused spot of light. This region is therefore a 3D
shape that extends along the axis of the beam. The cross section through the region
of maximum intensity has a sharp, but not infinitely high, intensity peak. Known as
the ‘point spread function’ (PSF), this spreading of the light is key to understanding
optical systems.
The smaller and more sharply peaked the PSF, the higher the resolving power of
the optical system. The factors governing the shape of the PSF for this simple system are
the wavelength of light, the beam diameter, and the focal length of the ideal lens.
This notion of diffraction-limited resolution applies directly to more complex optical
instruments assembled out of multiple lenses and mirrors. In this case, the single
ideal lens is replaced by a series of real lenses and mirrors. Instrument manufacturers
strive to make this assembly of components function together as a near-perfect
system that approaches the diffraction-limited resolution. Even small deviations
from perfection in the shape and placement of lenses in the assembly cause small
additional variations in the optical path lengths known as aberrations. These, in
turn, lead to a PSF for the overall system that spreads the light out over a larger
region of lower intensity, thus falling short of the diffraction limit. The result is
images that have more blurring and less resolution.

Figure 1. Point spread function (PSF) images. On the left is a well-formed PSF, largely free of aberrations. The
centre image has a small amount of defocus aberration; note that the central core has spread evenly and peak
intensity is lower. The right-hand image has a small amount of coma aberration; note that the intensity
spreading is in the vertical direction.

Figure 1 shows images of three point spread functions taken from an optical test
bed. The image on the left is a relatively good PSF, largely free of aberrations. The
image in the center shows the same PSF with a small amount of focus aberration.
Note that the peak irradiance has dropped significantly and the light has spread out
over a large area. Focus aberrations impart a bowl-shaped optical path length
variation that is symmetric about the optical axis and thus the spreading of light at
the PSF is symmetric around the optical axis. The figure at the right shows the
original PSF, this time corrupted by a small amount of coma error. Coma, unlike
focus, imparts aberrations that are not symmetric. Note that the peak irradiance has
dropped again, but the light has spread out in a more complicated pattern along the
vertical direction in the image. Faint bands of slightly higher intensity are visible
below the bright region.
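
The factors noted earlier (the wavelength, the beam diameter and the focal length of the lens) fix the size of the diffraction-limited spot. As a rough numerical illustration, and not part of the original discussion, the short Python sketch below evaluates the familiar Airy-disk radius for a single ideal lens; the numbers chosen are purely illustrative.

```python
import numpy as np

def airy_radius(wavelength_m, focal_length_m, beam_diameter_m):
    """Radius of the first dark ring of the Airy pattern for a circular
    aperture: r = 1.22 * lambda * f / D. A smaller r means a sharper PSF
    and higher resolving power."""
    return 1.22 * wavelength_m * focal_length_m / beam_diameter_m

# Illustrative numbers (not from the text): green light, 10 mm beam, 50 mm lens.
r = airy_radius(532e-9, 50e-3, 10e-3)
print(f"Diffraction-limited spot radius: {r * 1e6:.2f} micron")  # about 3.2 micron
```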

Wave-fronts and modal representations


Diffraction-limited performance, which instrument makers go to great lengths to
achieve, is lost when the sample itself introduces aberrations. One well known issue
in microscopy is the matching of the refractive index between the cover glass and the
sample. If this index deviates from the anticipated value, it induces aberrations. With
careful preparation, these can be reduced or avoided. The more challenging cases are
aberrations induced by refractive-index variations within the sample. As these
variations are inherent in the specimen, they cannot normally be altered, particularly
for in vivo imaging where the goal is to observe with minimal intrusiveness.
One very useful way to understand and quantify effects of aberrations is to
consider the effect upon a wave-front. A wave-front simply represents how a cross
section of the beam propagates through space. For a beam that is neither converging
nor diverging and free of aberrations, the wave-front is simply a flat disk propagat-
ing along the beam axis. When aberrations are present, these add variations in the
optical path length. These variations in the path length are represented by the
retardation of portions of the wave-front, thus warping the otherwise flat disk. If a
focus aberration is applied to an otherwise flat wave-front it becomes bowl-shaped.
If astigmatism is applied to an otherwise flat wave-front, it becomes saddle-shaped.
Our eyes are a good example of an imperfect optical system. The cornea and lens
of the eye work together to focus light on the retina at the back of the eye. Unlike the
ideal lens example above, these elements are seldom free from aberrations. These
aberrations cause additional optical path-length changes over the wave-front. In
terms of the point spread function, these path-length changes result in further
spreading of the region of high irradiance. This, in turn, leads to blurred images. We
wear spectacles to apply the opposite aberrations to the light entering our eye. This
‘pre-warps’ the light so that after it has passed through the lens and cornea, the
aberrations cancel the pre-warping and thus restore the flat, error free wave-front.
Various methods exist to represent wave-fronts. One method that is widely used
in optics is ‘modal decomposition’. As in other fields of engineering, this involves
representing a function as a collection of basis functions, with each one scaled by a
weighting coefficient. This concept is usually introduced in engineering and physics
by showing how a square wave can be built up as a series of sine waves of
successively higher frequencies. In the case of a wave-front, the goal is to represent a
shape defined over a 2D circular domain. A modal basis set widely used in optics is
the Zernike polynomials. A description of these can be found in, for example, the
text by Born and Wolf (see ‘Additional resources’). Figure 2 shows how a wave-front
with aberration can be represented as a sum of modal basis functions.

Figure 2. Wave-front (top) generated by adding together three low-order basis functions from the set of
Zernike polynomials.
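
To make the idea of modal decomposition concrete, the Python sketch below builds a wave-front in the spirit of figure 2 by summing a few low-order Zernike-like basis shapes over a unit pupil. The basis forms are standard unnormalized polar expressions, and the coefficients are arbitrary illustrative values rather than ones taken from the figure.

```python
import numpy as np

# Sample the unit pupil on a grid.
x, y = np.meshgrid(np.linspace(-1, 1, 256), np.linspace(-1, 1, 256))
rho, theta = np.hypot(x, y), np.arctan2(y, x)
pupil = rho <= 1.0

# A few low-order Zernike basis functions (unnormalized polar forms).
defocus     = 2 * rho**2 - 1
astigmatism = rho**2 * np.cos(2 * theta)
coma        = (3 * rho**3 - 2 * rho) * np.cos(theta)

# Modal decomposition: the wave-front is a weighted sum of basis shapes.
# The coefficients here are arbitrary, for illustration only (microns).
coeffs = {"defocus": 0.5, "astigmatism": 0.3, "coma": 0.2}
wavefront = (coeffs["defocus"] * defocus
             + coeffs["astigmatism"] * astigmatism
             + coeffs["coma"] * coma) * pupil
```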


The shape of the wave-front strongly affects how easily aberrations can be
corrected with adaptive optics. Ignoring, for the moment, how these wave-front
errors vary in time, consider how they vary in space. Shapes that have fewer
curvature reversals (such as bowl shapes and saddle shapes) are generally easier to
correct. These are described as having a low ‘spatial frequency’. On the other hand,
wave-fronts that are very wavy are said to have high spatial frequency. In addition
to the shape, the size of the errors is significant. Any adaptive-optics system will
ultimately encounter limits in correcting wave-front errors as they become large in
magnitude or have high spatial frequencies.

Adaptive optics history and key concepts


Ever since the first telescopes were built, astronomers have noticed that the quality of the
images they observe depends greatly upon the prevailing winds. We now know
this is because light rays reaching the ground travel through regions of the
atmosphere with varying density. As the density changes, so does the refractive
index. These density changes are far from uniform and vary with time. In other
words, ground-based telescopes face a set of rapidly changing aberrations. These
nudge imaging performance away from the diffraction-limited imaging that is
potentially available with the best telescopes under the best possible observation
conditions.
In 1953 astronomer Horace Babcock proposed a clever solution, which included
the main elements of any modern adaptive-optics system. First, light is made to
reflect off a device that can rapidly impart optical path-length corrections at
different regions throughout the beam to flatten the wave-front and thus counteract
the effects of aberration. Second, the remaining wave-front errors are measured after
the correction. Finally, a feedback control loop uses the measurement to continu-
ously adjust the corrections applied to the wave-front. At the time this scheme was
first proposed, the requisite technology was in its infancy. The wave-front corrector,
for example, was adapted from an entirely different application and the develop-
ment of suitable technology progressed slowly at the outset.
Astronomers were not, however, the only people interested in looking up into the
sky. Various defence-related agencies were also very keen to look at items passing
overhead, with the advent of orbiting satellites strongly stimulating this interest.
Indeed, the technical challenges are very similar to those encountered when looking
at distant stars or planets. Unlike stars and planets, though, the items of interest to
defence agencies may pass overhead rather quickly—and simply waiting for weather
conditions to improve was not an option. By the late 1960s and throughout the
1970s, defence agencies began investing heavily in adaptive optics. With these larger
budgets, the development of adaptive-optics technology progressed rapidly and a
handful of pioneering systems with stunning levels of performance were built.
These systems heavily influenced the various systems that followed, even for
applications outside of atmospheric correction. In modern adaptive-optics systems,
the device most commonly used to impart optical path length changes is a
‘deformable mirror’ (DM). This device has a reflective surface that can be

6
Adaptive Optics in Biology

modulated rapidly by a computer. The key idea is simple—if the mirror takes on a
shape that is an inverted version of the wave-front, then after the beam is reflected
off the deformable mirror the optical path lengths will once again be uniform and
the effects of atmospheric aberrations will be cancelled. The sensing element in
modern adaptive-optics systems is a ‘wave-front sensor’, which is a specially
constructed camera that takes measurements that can be used to estimate the shape
of the wave-front. The feedback control system in modern adaptive-optics systems is
almost always implemented using some form of computer, with the processing
power determined largely by how fast the system needs to operate.
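
The measure, correct and repeat cycle described above can be summarized in a few lines of Python. This is only a minimal sketch of a classical integrator-style loop; the measure_residual and apply_dm callables are hypothetical stand-ins for a real wave-front sensor and deformable-mirror driver, not any particular product's interface.

```python
import numpy as np

def ao_control_loop(measure_residual, apply_dm, n_actuators, gain=0.3, n_steps=100):
    """Minimal integrator-style adaptive-optics loop (illustrative sketch only).

    measure_residual(): returns the residual wave-front error mapped onto the
                        DM actuators (e.g. microns of optical path).
    apply_dm(cmd):      sends the actuator commands to the deformable mirror.
    """
    dm_command = np.zeros(n_actuators)
    for _ in range(n_steps):
        residual = measure_residual()
        # Integrate a fraction of the inverse error each cycle; the factor of
        # 0.5 accounts for the mirror working in reflection, which doubles the
        # optical path change produced by a given surface displacement.
        dm_command -= gain * 0.5 * residual
        apply_dm(dm_command)
    return dm_command
```
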
Figure 3 shows a simplified representation of a telescope equipped with adaptive
optics. The aberrated wave-front entering the telescope is represented by a wavy line
and as the light proceeds through the telescope, it bounces off a deformable mirror.

Figure 3. Simplified astronomical telescope with adaptive optics. Note that the deformable mirror (DM) has a
shape that is the inverse of the wave-front error, scaled so that reflection from the DM imparts optical path-
length changes that exactly cancel the error.


The mirror is, however, shaped to cancel optical path-length variations. So when
shaped correctly, the wave-front leaving the mirror becomes flat again. A portion of
the beam directed to the camera used to record scientific images is split off and
passed to the wave-front sensor. Any residual aberrations present after correction
are detected and used to apply additional corrections to the DM.
By the late 1980s, the legacy of the defence spending was a fledgling industry
building deformable mirrors, making wave-front sensors, and supplying expertise in
implementing adaptive-optics systems. At first, these components were ill-suited to
biological imaging, being big, expensive and difficult to obtain. Nevertheless, by the
1990s the situation had improved enough that adaptive optics began to be applied in
ophthalmic research. Some of the first instruments to demonstrate this were
developed at the University of Rochester in the 1990s, with the technique
subsequently being adopted in various areas of microscopy.
The trend toward adaptive-optics application in life sciences is currently accel-
erating. In vision science and ophthalmic care, adaptive optics is firmly established
in high-end research systems. Microscopy has lagged ophthalmics, but is now a
rapidly growing application area. This trend is driven by growing demand for higher
resolution images and supported by the availability of correctly sized and reasonably
priced components. Importantly, individuals with experience in developing and
applying adaptive optics are no longer cloistered within the defense community. A
wealth of expertise has built up around the application of adaptive optics in
biological imaging. There is now a deep well of publications that validate the
technology and provide a basic roadmap for what is known to work effectively, with the
texts edited by Jason Porter and colleagues, as well as by Joel Kubby (see Additional
resources), being required reading for those serious about applying adaptive optics
in ophthalmics and microscopy, respectively.

3 Current directions
Requirements for adaptive optics in biological imaging
Capturing an exact set of requirements that an adaptive-optics system must meet in
biological imaging is difficult because there are so many different types of system. A
few general themes do, however, emerge.
First, the optical power levels are usually low. It might be tempting to apply high
levels of illumination in the quest for more returned light and thus a better image,
but this approach has its limits, particularly because many samples are degraded or
damaged by exposure to light. The light may be toxic to the system under
observation and thus cumulative exposure must be limited. Light will also often
have a bleaching effect that makes structures harder to resolve as the cumulative
exposure increases. Finally, low optical power levels may be a matter of safety and
comfort for live human or animal subjects. The one exception is in laser surgery, in
which case the adaptive-optic components need to stand up to average power levels
that may be as high as a few watts. This usually requires specialized optical coatings
that can withstand the high peak intensity of a pulsed laser with very short pulses.
Furthermore, the components may have to be actively cooled to ensure that optical
quality is not lost due to thermally induced deformations.
Second, optical aberrations in biological systems tend to change very slowly. In
spite of this, many biological imaging systems require fast update rates for the
deformable mirror or similar device used to modulate the wave-front. This is due to
the growing popularity of systems that sense the aberrations indirectly instead of
using a wave-front sensor. These systems will be described in detail later in this
ebook. A key property of these systems is that they are iterative and require tens or
even hundreds of shapes to be imparted to the wave-front in order to converge on a
shape that best cancels the effects of aberrations. To do this quickly and limit
cumulative exposure of the specimen, a deformable mirror with fast update rates is
needed.
Third, the wave-front correction often needs to be synchronized with cameras,
scanners and other elements of the system. For example, a camera’s image
acquisition may need to be precisely timed to occur immediately after a wave-front
correction has been applied to minimize the overall time required for an iterative
algorithm.
Finally, no discussion of adaptive optics is ever complete without talking about
what kind of deformable mirror or similar device is needed in order to apply optical-
phase adjustments to the wave-front. These requirements vary in terms of the overall
amplitude of the phase correction that is needed, the spatial fidelity (i.e. the level of
‘waviness’) that must be achieved, and how quickly the device needs to respond. This
really depends upon the particular application and the nature of the aberrations to
be cancelled. In general, however, correcting aberrations in biological imaging is less
demanding than correcting atmospheric turbulence in astronomy. Whereas astron-
omers usually need thousands of points of actuation (for adequate spatial fidelity),
those studying biological samples can make do with hundreds of points of actuation.
And whereas astronomers need to be closing a control loop at kilohertz rates,
biological imaging can usually perform well operating at tens of hertz. The overall
amount of phase correction required depends on what is being observed—the most
demanding cases are usually aberrations of the eye in which the requisite wave-front
correction can be on the order of 10 microns.
Importantly, the type of imaging system usually has a strong impact on the design
of an adaptive-optics system. A few of the key types of instruments and related
technologies are reviewed here as background before delving into more details of
adaptive-optics systems.

Wide-field microscopes
The wide-field microscope remains the fundamental high-resolution imaging system
in many labs. It involves illuminating a sample and then collecting light from it
through adjustable levels of magnification. Illumination can be projected onto the
back of the sample before collecting the transmitted light. Alternatively, illumina-
tion can be directed onto the top of the sample before collecting the reflected light.
One widely used variant is the fluorescence microscope. In such a system, the
illumination excites fluorescent emission from the sample. The emission can come
from naturally occurring proteins or, commonly, fluorescent dyes that bind to
specific sites within the specimen. Illumination is at a shorter wavelength (higher
energy) than the emission. The specificity of the binding and the light emission at
particular wavelengths make these systems particularly good at revealing biological
structures. The resolving power of wide-field systems has been extended through a
number of techniques, the most notable of which is ‘structured illumination’, where
the illumination is intentionally patterned in a small number of different ways. These
patterns are successively applied to the sample and the images for each are recorded.
These images are then computationally reconstructed to form an image at higher
resolution than would have been possible with a single image using ordinary illumination.
Other extensions to resolution have been obtained by carefully structuring the
stimulation of fluorescence (either in space or in time) and then capturing and
reconstructing images—these systems form a long and growing list of acronyms such
as STED, PALM, and STORM.
These various wide-field systems share some attributes. First, they can be used to
generate 3D volumetric images by stepping the focal plane through the object under
observation, taking digital images at each step. These image stacks are then
computationally combined into a 3D volume image.
One shortcoming in most wide-field applications is that any stray light that has
been reflected from or scattered off planes other than the current focal plane can turn
up in the image, which leads to a loss of contrast. In volumetric imaging, the images
taken deep into tissue are most strongly degraded by the effects of stray light from
other planes, with the light from the plane of interest becoming successively dimmer
while the noise from back reflections from other layers grows. As a result, features of
interest fade into the growing background noise. This problem is compounded by
aberration effects that degrade the true signal returning from the layer of interest.
Fluorescence techniques are popular because the wavelength separation between
stimulation and emission allows back reflections to be filtered. The fluorescent light
leaving the sample is still subject to aberration, however, so higher levels of imaging
resolution can be obtained by correcting these aberrations with adaptive optics.

Confocal and multi-photon microscopes


Confocal microscopes are so named because they block light returning from planes
other than the focal plane from reaching the detector. They do this thanks to a relay
telescope that is placed in the optical path, typically right before the detector that is
used to capture the image. The telescope makes the light converge and then diverge
in an hour-glass shape, with a precisely sized pinhole carefully aligned to match the
narrowest point of the hour-glass shape. Only the light that is correctly focused will
then make it through the pinhole. Light from other focal planes follows an hour-
glass shaped path, but the narrowest point will shift axially ahead of or behind the
pinhole, thus blocking the vast majority of light from these out-of-focus planes.
Confocal systems are often implemented as scanning systems that sweep the
focused beam through a rectangular area in the specimen. Only a small amount of
light is reflected back from the specimen, with the amount of returning light being
sometimes three or four orders of magnitude less than the illumination. A highly
sensitive detector is therefore an essential part of a confocal-microscopy system.
Aberrations on both the inbound and outbound paths are important. Correction
on the inbound path makes sure that the stimulus is localized to a small spot, thus
ensuring the reflections emulate a point source within the sample. Light collected
from the outbound path is directly affected by aberrations on the way to the
detector. Fortunately, the path traversed by the light in both directions is the same,
so if the aberrations on the outbound path can be correctly sensed and the correction
applied, it will also correct the inbound beam. When scanning deep into tissue,
these systems do lose some signal, but the returned images remain free from the stray
signals that plague wide-field systems. Most high-resolution ophthalmoscopes are
confocal microscopes that have been optimized to look at the structures in the back
of the eye.
Rejecting out-of-plane light in this way leads to wonderfully sharp images. The
downside, however, is that such systems are technically complex. As well as needing
a detector that is highly sensitive, the detector sampling must be synchronized with
the scanning mirrors. Confocal systems therefore need more electronic hardware
and software than other microscopes.
An alternative device is the two-photon microscope, which has received a lot of
attention in recent years. The name comes from the method of generating light
emission within the sample. The core idea is that if the illumination intensity can be
very highly concentrated in space and time, two photons can be absorbed simultaneously to excite
fluorescence. Each absorbed pair deposits twice the energy of a single excitation photon,
which means that the emitted light is at roughly twice the
frequency of the excitation.
This wavelength distinction is very helpful. First, it provides the usual benefit of
filtering out back reflections from the excitation. Second, this method allows
excitation at longer wavelengths, which is very useful because longer-wavelength
light scatters less when penetrating deep into tissue—the opposite of what
happens with a standard fluorescence microscope. Furthermore, the emission rises
with the square of the incident irradiance, so very little emission occurs outside the
focal plane of the incident stimulus. Two-photon systems are therefore inherently
confocal.
When it comes to adaptive-optics correction, the inbound stimulus is most critical
in two-photon systems. This ensures that the irradiance reaches the high levels
needed to strongly stimulate the two-photon emission. As with confocal systems,
two-photon systems use scanning techniques and can step the focal plane in the axial
direction in order to generate 3D volumetric images.

Optical coherence tomography


Optical coherence tomography (OCT) systems use interferometry to reject scattered
light when imaging deep into tissue. What happens is that after the light leaves the
source, it is split. Part of the light travels into the sample and is reflected back from
various depths within the tissue. The rest of the light, meanwhile, travels down a
reference arm, reflects back and then re-combines with the other beam. But whereas
an ordinary interferometer uses light with a very specific frequency, OCT systems
use light sources that span a range of frequencies. When the light recombines, it
creates an interference pattern if the optical path-length of the light reflected from
the sample matches that of the reference arm. Light reflected from other depths within the
sample is then ignored. So by sweeping the position of the reflector in the reference
arm, the depth into the sample from which reflections are considered can be swept in
the axial direction. The resulting interference pattern is then cast onto a photo-
detector and the resulting signal is processed to recover the strength of the
interference pattern, thus giving the strength of the reflected signal from within
the sample. By adding scanning in lateral directions, the OCT system can build up a
volumetric image.
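
The reason a broadband source provides this depth selection can be illustrated with a short numerical sketch. Summing the interference fringes over a band of wavelengths shows that they reinforce only when the sample and reference path lengths agree to within the coherence length; the wavelengths and bandwidth used below are illustrative values only, not taken from any particular OCT system.

```python
import numpy as np

# Broadband source: wavelengths spread around 850 nm with ~50 nm bandwidth
# (illustrative numbers).
wavelengths = np.linspace(825e-9, 875e-9, 200)
k = 2 * np.pi / wavelengths                       # wavenumbers

# Path-length mismatch between sample and reference arms.
delta = np.linspace(-40e-6, 40e-6, 2000)

# Average the interference terms over the band; fringes from different
# wavelengths stay in phase only near zero mismatch, so the envelope
# collapses once |delta| exceeds the coherence length (~lambda^2/bandwidth).
fringes = np.mean(np.cos(np.outer(delta, k)), axis=1)
coherence_gate = np.abs(fringes)
print("Envelope at 0 um mismatch:", coherence_gate[np.argmin(np.abs(delta))])
print("Envelope at 30 um mismatch:", coherence_gate[np.argmin(np.abs(delta - 30e-6))])
```
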
OCT systems can scan very deep into tissue, reaching depths of more than 1 mm.
This penetrating ability is due to the relatively long wavelengths that can be used and
the ability to completely reject reflections from layers other than the one of interest.
The resolution is lower than in most traditional forms of microscopy, but OCT
systems have nevertheless been widely applied in both clinical and research settings
for ophthalmic care. In the case of ophthalmics, adaptive optics has been success-
fully applied to OCT systems to increase resolution.

Image post-processing and deconvolution


Computerized post-processing of digital images is now standard practice in imaging
science, with one technique—deconvolution—being so widely used that it deserves
specific mention. A number of different implementations have evolved in recent decades
and it is a method that works particularly well with adaptive optics. Deconvolution
cleverly recognizes that an image of an object is formed through an optical process that
can be modelled and, to some degree, computationally reversed to obtain a higher
quality representation of the object. This method re-assigns light forming the image to
counteract the blurring effect of the PSF. Deconvolution can be applied to 2D images
or 3D volumes and is most commonly applied to wide-field images, as they are most
prone to the effects of out-of-focus light appearing in the image.
The name hints at the process. Image formation can be expressed as the
convolution of the system’s PSF with light coming from every point in the object.
Deconvolution is the reversal of this process. If we can somehow come up with a
digital representation of the object that, when convolved with the system PSF,
results in a computed image that matches the image actually captured then we know
that we have obtained a perfect representation of the underlying object.
This technique runs into practical limits. Because the PSF tends to blur the image,
the high spatial-frequency components of the image are lost. The deconvolution
process seeks to restore these. The limiting factor is the noise that is invariably
present in any image. Noise, typically a high-spatial frequency effect, eventually
becomes amplified to the point that the deconvolution method can no longer
improve the estimation of the underlying object.
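
A minimal frequency-domain sketch makes the convolution-reversal idea, and the role of noise, explicit. This is a generic Wiener-style inverse filter written purely for illustration; it is not the algorithm used by any particular deconvolution package, and the PSF is assumed to be an image-sized array with its peak at the centre.

```python
import numpy as np

def wiener_deconvolve(image, psf, noise_floor=1e-2):
    """Frequency-domain deconvolution sketch. The psf array is assumed to be
    the same shape as the image with its peak at the centre. Dividing by the
    PSF spectrum restores attenuated high spatial frequencies; the noise_floor
    term stops the division from amplifying noise where the PSF response is tiny."""
    otf = np.fft.fft2(np.fft.ifftshift(psf))        # PSF -> optical transfer function
    img_f = np.fft.fft2(image)
    restored_f = img_f * np.conj(otf) / (np.abs(otf) ** 2 + noise_floor)
    return np.real(np.fft.ifft2(restored_f))
```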


Since the deconvolution process depends on a model of the PSF, inaccuracies and
inconsistencies in this model further degrade the ability of deconvolution to improve
an image. Depending upon the particular deconvolution software implementation in
use, the computational model of the PSF could be based on theory, measurement, or
iteratively estimated during the deconvolution.
The net result is that deconvolution can restore some lost resolution, but can
never replace lost signal. Adaptive optics can complement deconvolution (or other
post-processing techniques) by removing aberrations and restoring signal that would
otherwise be indistinguishable from noise. Further, adaptive optics will sharpen the
PSF and, importantly, make it more consistent, thus assisting the post-processing.

Adaptive optics technology: wave-front correctors


The heart of any adaptive-optics system is the device used to modify the wave-front.
Recall from figure 3 that a DM was introduced to do this. There are a few other
technologies that can be used, but most have shortcomings that limit widespread use
and the only serious alternative to a DM is a spatial light modulator (SLM).
Most SLMs are based on liquid-crystal devices that contain lots of individual
pixels, each of which retards the phase of the light by a small amount. One
shortcoming of the SLM is that the retardation is usually less than one wavelength.
So, to compensate for larger path length differences, which are common in
biological imaging, a ‘phase-stepping algorithm’ is required. This algorithm is
applied to pixels for which the desired amount of phase adjustment exceeds the
capabilities of the device. A computational adjustment by an integer number of full
wavelengths is applied so that the remaining correction to be applied at that pixel
falls within the capabilities of the device.
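
In code, this wrapping step amounts to taking the desired optical path difference modulo one wavelength at every pixel, as in the following illustrative sketch (not a specific device's driver interface).

```python
import numpy as np

def wrap_phase(desired_opd, wavelength):
    """Phase wrapping for an SLM: subtract whole wavelengths so the command
    at every pixel falls within the one-wavelength stroke of the device.
    desired_opd and wavelength share the same units (e.g. microns)."""
    return np.mod(desired_opd, wavelength)

# Example: a 2.3-wavelength correction at one pixel becomes 0.3 wavelengths.
print(wrap_phase(np.array([2.3]), 1.0))   # [0.3]
```
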
Another shortcoming with SLMs is that the phase retardation depends upon the
wavelength, so they are best suited to monochromatic applications. These devices
are also sensitive to the polarization of the incident beam. Further, depending upon
the particular technology used, the devices can be optically lossy and thus inefficient.
Finally, the response speeds can be slow. On the other hand, the key benefit of SLMs
is the large number of actuation points—sometimes as many as thousands—at a
reasonable price. In systems where the light loss and polarization are not a concern
(such as correcting the inbound illumination in a multi-photon microscope) the SLM
can be an attractive option.
The most widely used phase modulation device is the DM. These are exactly what
they sound like—mirror surfaces that can be deformed rapidly under computer
control. The advantages are that these are highly reflective, insensitive to polarization,
do not require phase wrapping, and respond on millisecond or sub-millisecond time
scales. These devices typically have tens to hundreds (and sometimes even thousands)
of actuation points across the optical aperture. Early DMs developed for defense and
astronomy used piezoelectric actuators that push and pull on an optical surface,
typically a thin sheet of glass with an optical coating. Such devices were large with
diameters of a few hundred millimeters. This fits well with the very large primary
apertures in telescopes. On the other hand, this does not match well to the much
smaller beam sizes common in biological imaging. Smaller, cheaper versions of these
DMs have been developed recently. These newer devices have diameters approaching
the 10 mm range and so are better suited to biological applications. The piezoelectric
devices have some hysteresis and require moderate-to-high amounts of transient drive
currents. Both these factors add some difficulty to controlling the devices.
Other DMs use magnetic or electrostatic actuation. These systems are typically
tens of millimetres in diameter. The magnetic systems use a combination of magnets
and coils to push or pull on an optical surface, typically a polished metal membrane
with an optical coating. The main distinction is that the magnetically actuated
mirrors can push or pull with a lot of force over a long distance, allowing the surface
of the mirror to be moved further than with a piezoelectric or electrostatic system.
However, the magnetic forces rely upon the continuous application of drive
currents, so the thermal stability of the device can be a problem.
The last major class of DMs is electrostatically actuated devices built out of
silicon using a microelectronic mechanical system (MEMS) process. The surface
itself can be either a single, thin plate or sub-divided into segments. These devices are
usually very small, with diameters of ten millimeters or less. This small size is very
well matched to beam diameters used in biomedical optics. The surface is supported
on an elastic (spring) structure and electrostatic forces are used to pull on the optical
surface against the restoring force of the elastic structure. These DMs are free of the
hysteresis associated with piezoelectric DMs, take very little power to operate, and
avoid the thermal stability problems associated with magnetically actuated DMs.
An example of a MEMS deformable mirror with a segmented surface is shown in
figure 4. This device has an optical aperture of 3.5 mm and has 111 actuators driving
37 hexagonal segments. Each segment can be actuated in a full three degrees of
freedom: piston, tip and tilt. The segments can be addressed individually or as an
entire array. Positioning repeatability is at the nanometer level. The segments are
relatively thick, an advantage for high-power applications where high-reflectivity
dielectric coatings are required.

Figure 4. Example of a 111-actuator, 37-segment deformable mirror with segments arranged in a tightly
packed hexagonal pattern with an inscribed diameter of 3.5 mm, made with a MEMS process. (Image courtesy
of Iris AO, Inc.)

Adaptive optics technology: feedback alternatives


As introduced in figure 3, adaptive optics is simply a way of counteracting the
detrimental effects of optical aberrations by measuring the wave-front and feeding it
back to the DM. This is not, however, the only way of implementing an adaptive-
optics loop. An alternative is to omit the direct measurement of the wave-front. The
best shape to put on the DM is then determined by making a series of trial
adjustments to it and observing the effects upon the science image being acquired.
This method is called an ‘indirect’ or ‘sensorless’ approach.
The two concepts are summarized in the highly simplified system shown in
figure 5. The system is representative of a microscope in which light is directed into a
sample, with the figure depicting returned light emanating from a point within the
sample. As the light leaves the specimen, it picks up aberrations, represented by a
non-flat wave-front propagating out of the sample into the objective lens. The
optical path is highly simplified, showing only those elements related to the adaptive-
optics correction between the objective lens and the camera used to form the
scientific image of interest.
The upper part of figure 5 represents a direct sensing method and the lower part
shows an indirect method. In both cases, the shape applied to the surface of the DM
is an inverse of the aberrations. Upon reflection, the added optical path length
restores a flat wave-front. As the wave-front propagates back through the system, it
eventually arrives at the scientific imaging camera. In the upper part of the figure,
note the presence of a dedicated wave-front sensor—this is the distinguishing feature
of the sensor-based system.

Adaptive optics technology: direct wave-front sensing


Directly sensing the wave-front is standard practice in most adaptive-optics systems
used for astronomy. It is also commonly used in ophthalmic imaging. The chief
advantage is speed, which is needed for keeping up with rapidly fluctuating
disturbances.
The most popular device for direct wave-front sensing is the Shack–Hartmann
sensor, which is simply a camera with a lenslet array mounted in front of the detector
(see figure 6). The array divides the incoming beam into sub-regions, typically on a
rectangular or hexagonal pattern. Each sub-region has a small lenslet that focuses
the light to a spot on the camera detector. If the incoming wave-front is flat, the spot
pattern on the camera is perfectly regular and the spots exactly coincide with the
centers of the lenslets. If, however, the wave-front is aberrated, the slope in a sub-
region entering a lenslet shifts the location of the corresponding spot. The camera
image is processed on a frame-by-frame basis and the spot shifts relative to the flat
wave-front case are determined.


Figure 5. A sensor-based (direct) approach is shown in the upper drawing. A sensorless (indirect) approach is
shown in the lower drawing. The indirect approach involves less hardware and directs all the returned light to
the science camera, but is iterative and thus is better suited to slowly varying aberrations.

Figure 6. Shack–Hartmann wave-front sensor concept. The lenslets focus incoming light to a series of spots on
the camera detector. A non-flat wave-front entering a lenslet with a local tilt shifts the location of the
corresponding spot. This shift is detectable and can be used to compute the shape of the wave-front.

Once the spot shifts are known, the incoming slopes at the sub-regions can be
calculated. This is the first step in the process of reconstructing the incident wave-
front. There are various methods involved in reconstruction, but the basic concept is
shown in figure 7. One approach is to enforce continuity constraints at the sub-
region boundaries and then tile together flat facets representing the wave-front in
each sub-region. The displacements and slopes in each sub-region are chosen to best-
fit the slopes observed by the wave-front sensor. This is known as a zonal approach.
Another option is to represent the aberration to be identified as a collection of
modes. The weighting coefficients applied to each mode are chosen to best-fit first-
derivatives of each mode to the slopes actually observed at the wave-front sensor.
This is known as a modal approach. A fuller discussion of both approaches is offered
in the book by Hardy (see Additional resources).

Figure 7. Direct wave-front measurement and reconstruction. The upper drawing shows how a wave-front is
divided into sub-apertures and the local slope at each sub-aperture results in a spot shift on the detector array.
The middle drawing shows how the spot shifts are used to recover the local slopes (or, equivalently, normal
vectors) at each sub-aperture. The lower drawing shows the reconstructed wave-front.

Once the wave-front is reconstructed, the inverse of it is applied to the DM. A
factor of one-half is applied since the mirror operates in reflection, so the amount of
optical phase correction is twice the DM displacement.
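
The chain from spot shifts to slopes, to a modal estimate and then to a DM command can be sketched in a few lines of Python. The function below is illustrative only: the array names and the slope_basis matrix are hypothetical, and a practical reconstructor includes the safeguards discussed next.

```python
import numpy as np

def reconstruct_modal(spot_shifts, lenslet_focal_length, slope_basis):
    """Modal wave-front reconstruction sketch (not a specific product's API).

    spot_shifts:          (n_subapertures, 2) measured x/y spot displacements (m)
    lenslet_focal_length: focal length of the lenslets (m)
    slope_basis:          (2*n_subapertures, n_modes) matrix of the x and y
                          slopes that one unit of each mode produces at each
                          sub-aperture (built from the chosen modal basis).
    Returns the least-squares modal coefficients.
    """
    # Spot shift divided by lenslet focal length gives the local wave-front slope.
    slopes = (spot_shifts / lenslet_focal_length).reshape(-1)
    coeffs, *_ = np.linalg.lstsq(slope_basis, slopes, rcond=None)
    return coeffs

# The reconstructed wave-front (the modes weighted by coeffs) is then inverted
# and halved to form the DM surface command, since reflection doubles the
# optical path change produced by the mirror surface.
```
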
Although simple in concept, the control algorithm usually ends up being a lot
more complicated in practice, with extra features often needed to tune
performance. Another consideration is how to handle cases where the spot quality is
poor, for example, if the spots are too dim, too bright, or corrupted by noise. Worse
still, the spots could become biased by unwanted reflections from within the optical
system or layers within the sample that are not at the optical plane of interest. The
performance of the system depends upon ensuring the wave-front sensor, DM, and
source of the aberrations are optically conjugate to one another. Even small details
are critical, such as ensuring the system is aligned properly or maintaining the
correct algebraic sign and magnitude of the feedback in a system with multiple
mirrors and relay telescopes. Getting these details correct takes time, and the cause of a
problem is seldom immediately obvious; failing to get them right leaves you with a
system that does not work properly.
Beyond the difficulties in implementing a robust algorithm, there are optical
downsides to the sensor-based approach. First, precious photons returning out of the
sample need to be split off to run the wave-front sensor. Second, the wave-front
sensor simply responds to any incoming light. Since the inbound illumination beam
is usually much stronger than the return, any stray ‘back reflections’ from within the
optical system or an out-of-focus plane within the sample itself add noise to the wave-
front sensor measurement. They can often be tracked down and eliminated, but it all
adds to the effort required to get a system up and running. Third, the optical paths to
the science camera and the wave-front sensor are not identical, which gives rise to
non-common path errors. These errors can be tracked down and corrected by
adding the proper offsets to the wave-front sensor, but again this means more effort
to get a system up and running.
The bottom line is that the conceptual simplicity of the wave-front-sensor approach
can be overwhelmed by the actual effort required to get things running reliably in a
practical situation. The enduring attraction, however, is the ability to apply
corrections very quickly and thus cancel effects of rapidly fluctuating aberrations.


Adaptive optics technology: indirect wave-front sensing


An alternative approach is to skip the direct wave-front measurement and instead
use the scientific image that is being acquired to determine the shape for the DM.
This approach requires less complex hardware and is therefore cheaper.
Importantly, precious photons do not need to be diverted away from the imaging
camera to run a wave-front sensor.
The indirect approach works by applying a series of trial shapes to the DM and
measuring the effects upon the image. The trial shape that improves the image the
most is kept and used as the initial condition for a successive set of trial shapes.
The process continues until there is no further improvement in the image. Ideally,
the DM has converged on the shape that is optimal in rejecting the disturbance—in
other words an inverse of the wave-front error, with the usual factor of one-half to
account for reflection.
There are, however, two downsides to the indirect approach. First, the algorithms
for applying trial shapes, analyzing image ‘goodness’, and guiding the iteration
toward convergence can become very complex. Second, the iteration is time
consuming. Convergence of a direct sensor-based approach requires only a handful
of steps. An indirect approach, on the other hand, may take tens or even hundreds of
iteration steps for convergence. Fortunately, aberrations encountered in biological
samples usually change very slowly. Nevertheless, the time for measurements should
be kept as short as possible for reasons outlined earlier. Two main questions arise in
the development of an indirect sensing algorithm. First, what is the best metric to use
as a measure of quality in the image? And second, what sort of trial shapes should be
applied?
The image metric could be the contrast of some feature within some
region of the image, the brightness of some feature within the image, or simply the
overall brightness of the whole image. A good metric is one that, when
maximized, ensures that the quality of the image is improved as much as possible.
In practice, there may not be a way to know this with certainty. The best approach is
therefore to start with a simple metric that is easy to compute. Because of the
iterative nature of the solution, computational simplicity and speed are important.
In a well-designed system, the camera will be able to run at its full frame rate without
waiting for computations to complete.
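By way of illustration, here is a minimal sketch (in Python with NumPy) of the kind of simple,
cheap-to-compute metrics described above. The function names and the region-of-interest
arguments are illustrative choices, not part of any particular instrument's software, and the
image is assumed to be available as a 2D array.

```python
import numpy as np

def total_intensity(image):
    """'Light in a bucket': summed pixel intensity over the whole image."""
    return float(np.sum(image))

def sharpness(image):
    """Sum of squared intensities; rises as light is concentrated into fewer pixels."""
    img = image.astype(np.float64)
    return float(np.sum(img ** 2))

def roi_contrast(image, row_range, col_range):
    """Contrast (standard deviation over mean) inside a rectangular region,
    useful when a known bright feature lies within that region."""
    roi = image[row_range[0]:row_range[1], col_range[0]:col_range[1]].astype(np.float64)
    return float(roi.std() / (roi.mean() + 1e-12))
```

Any of these can be dropped into the iterative loop described below; the important property is
simply that a larger value reliably corresponds to a better image.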
In selecting the set of trial shapes to put on the DM, the considerations should be
ease of analysis and overall speed. Taking advantage of any a priori knowledge
about the expected aberrations is always helpful. So while it is possible to put a large
collection of random shapes on the wave-front corrector and check for the best
result, this would take a lot of time. Most algorithms therefore use some sort of
iterative technique to improve the image metric, often referred to as a 'hill-climbing'
algorithm. The key idea is that the shape of the DM evolves so that the change in the
metric is maximized at each step; in the hill-climbing analogy, the path of steepest
ascent is favored.
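As a rough sketch of the steepest-ascent idea, the snippet below estimates the gradient of the
image metric with respect to the modal coefficients by finite differences and then steps uphill.
The hooks apply_modes and acquire_image stand in for hardware-specific DM and camera calls,
and the probe amplitude and gain are arbitrary example values rather than recommendations.

```python
import numpy as np

def hill_climb_step(coeffs, apply_modes, acquire_image, metric,
                    probe=0.05, gain=0.5):
    """One steepest-ascent update of the modal coefficients.

    The gradient of the metric is estimated by finite differences: each mode is
    nudged by `probe` in turn while the others are held fixed.
    """
    apply_modes(coeffs)
    base = metric(acquire_image())
    grad = np.zeros_like(coeffs)
    for m in range(len(coeffs)):
        trial = coeffs.copy()
        trial[m] += probe
        apply_modes(trial)
        grad[m] = (metric(acquire_image()) - base) / probe
    # Move along the direction of steepest ascent of the metric.
    return coeffs + gain * grad
```

In practice the probe amplitude and the gain have to be balanced against measurement noise: too
small a probe buries the gradient estimate in noise, while too large a gain overshoots the optimum.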
One very straightforward method is to represent the optimal shape for the DM as
a series of modes with weighting coefficients that will be determined during the
iteration. On the first full step of the iteration, an initial best-guess of the ideal shape is
made. In the absence of any other knowledge, a guess of zero for all the coefficients
(i.e. a flat DM) can be used. The response in the image is recorded and the metric is
calculated. The modes are then perturbed slightly on a one-by-one basis. Those not
being perturbed are held at the current best guess. For each perturbation, the image
is acquired and the metric calculated. After going through all the modes, the
collection of image-metric values versus perturbation amplitude is examined for each mode.
The perturbation amplitude that maximizes the image metric is used to update the
best guess for that modal coefficient. Once all the coefficients have been updated, the
resulting shape becomes the new best guess and the process repeats. The process ends
when the improvement per full step becomes sufficiently small. This process is
summarized in the flowchart shown in figure 8.

Figure 8. Indirect algorithm flow for mode-by-mode amplitude iterations and updates at full steps. The
algorithm uses a basis function (modal) representation of the wave-front and iteratively determines weighting
coefficients. The metric uses a feature (or combination of features) in the scientific image.
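To make the flow of figure 8 concrete, here is a minimal sketch of the mode-by-mode search.
The hardware hooks apply_modes (converting modal coefficients into DM commands) and
acquire_image (grabbing a camera frame) are hypothetical placeholders, and the trial
amplitudes, tolerance, and step limit are arbitrary example values.

```python
import numpy as np

def optimize_modes(apply_modes, acquire_image, metric, n_modes,
                   trial_amps=(-0.2, -0.1, 0.1, 0.2), tol=1e-3, max_steps=20):
    """Mode-by-mode search: on each full step, perturb every modal coefficient in
    turn (holding the others at the current best guess), keep the perturbation that
    maximizes the image metric, then update all coefficients together."""
    coeffs = np.zeros(n_modes)            # Step 0: flat DM as the initial best guess
    apply_modes(coeffs)
    best_metric = metric(acquire_image())

    for step in range(max_steps):
        deltas = np.zeros(n_modes)
        for m in range(n_modes):
            best_delta, best_value = 0.0, best_metric   # keeping the mode unchanged is allowed
            for amp in trial_amps:
                trial = coeffs.copy()
                trial[m] += amp            # perturb one mode; hold the rest
                apply_modes(trial)
                value = metric(acquire_image())
                if value > best_value:
                    best_delta, best_value = amp, value
            deltas[m] = best_delta
        coeffs = coeffs + deltas           # new best guess at the end of the full step
        apply_modes(coeffs)
        new_metric = metric(acquire_image())
        if new_metric - best_metric < tol * abs(best_metric):
            best_metric = max(best_metric, new_metric)
            break                          # improvement is small: stop iterating
        best_metric = new_metric

    return coeffs, best_metric
```

A common refinement is to fit a parabola to the metric-versus-amplitude samples for each mode
and use its peak, rather than simply keeping the best sampled amplitude.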
A quantitative metric of image quality must be selected. For example, you could
choose to maximize the total pixel intensity over the whole image, or even a sub-
region in the image. This simple ‘light-in-a-bucket’ approach is a good starting
point. If there is some a priori knowledge of what the image should look like, a more
sophisticated metric can be applied. If there is structure within the sample that is
known and predictable, it can be used. For example, fluorescent beads are sometimes
added to samples, and maximizing the peak of the return signal from a bead
will provide a correction that is optimized for that region of the image. If other
features in the sample result in particularly bright lines or bands, maximizing the
contrast along a line or multiple lines crossing these features can also be used.
Whatever metric is used, the core idea is making an initial best-guess of a shape to
put on the DM, adjusting it slightly with a series of trial shapes, sampling the result,
and modifying the best guess. Like any iterative approach, the aim is to drive the
best guess towards the overall optimum, namely the best possible image. In many
practical cases, the challenge is to find an algorithm that is both robust and fast to
converge.

Example application: indirect wave-front sensing


A simple case of low-order correction is used to illustrate how such an algorithm
evolves over a few full steps of iteration. Figure 9 shows results from an optical test
bench and graphically depicts how the image, the metric, and the DM shape evolve
over five steps of iteration. The correction is provided by the DM shown in figure 4.
The top row in figure 9 is the evolution of the PSF image, including some
annotations (in blue) that show the cross-sectional shape. The second row is the
image metric, which is calculated by taking the integrated intensity squared of the
central core of the PSF. The third row is a series of bar charts representing the modal
weighting coefficients obtained as the best guess shape for the DM at the conclusion
of each of the five steps of iteration. The ordering of the coefficients is astigmatism,
focus, and oblique astigmatism. The bottom row is the corresponding shape applied
to the DM.
The aberration to be cancelled in figure 9 is predominantly cylindrical in shape,
generated using an optician’s trial lens. Cylindrical aberrations consist mainly of
focus and astigmatism modes. The initial conditions correspond to the column
denoted as Step 0. Note that the DM is flat (meaning that modal coefficients are all
zero), the PSF is smeared out to a faint diagonal cloud, and the bar representing the
image metric is relatively small. After proceeding through a set of trial shapes, the
best guess shown in the column denoted as Step 1 is generated. Notice that the DM
is taking on a cylindrical shape and the PSF has tightened into a fuzzy spot. The
image metric has increased sharply. Over the next three steps the DM undergoes
subtle adjustments that further tighten the spot and drive the image metric higher.
The improvement in the image metric between Step 3 and Step 4 is relatively small,
showing that the algorithm is converging.

Figure 9. Example sensorless adaptive-optics algorithm progression for low-order correction. The image
metric is the integrated intensity squared of a PSF. The PSF image is shown in the top row and the successive
increases in the image metric value are shown in the second row. The third row shows the evolution of modal
coefficient weightings and the bottom row shows the corresponding DM shape. Note that the majority of the
adaptation occurs within the first two full steps.
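Purely as an illustration of how the pieces fit together, a low-order run of this kind can be
mimicked on a computer by feeding the optimize_modes sketch above a toy 'camera' whose PSF
sharpens as the residual aberration shrinks. Every name and number below is invented for the
example; a real system would command the DM and read the camera in these two hooks instead.

```python
import numpy as np

def psf_core_metric(image, half_width=10):
    """Integrated intensity squared over a window centered on the brightest pixel."""
    img = image.astype(np.float64)
    r, c = np.unravel_index(np.argmax(img), img.shape)
    core = img[max(r - half_width, 0):r + half_width + 1,
               max(c - half_width, 0):c + half_width + 1]
    return float(np.sum(core ** 2))

# Toy stand-ins for the hardware: three modes (ordered, as in figure 9, as astigmatism,
# focus, oblique astigmatism) and an energy-conserving Gaussian 'PSF' that tightens as
# the residual aberration shrinks.
target = np.array([0.3, -0.5, 0.1])   # made-up aberration coefficients
current = np.zeros(3)

def apply_modes(coeffs):
    current[:] = coeffs                # a real system would command the DM here

def acquire_image():
    blur = 1.5 + 6.0 * np.linalg.norm(target - current)
    y, x = np.mgrid[-32:32, -32:32]
    return np.exp(-(x ** 2 + y ** 2) / (2.0 * blur ** 2)) / blur ** 2

coeffs, final_metric = optimize_modes(apply_modes, acquire_image,
                                      psf_core_metric, n_modes=3)
print(coeffs, final_metric)
```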

Example application: retinal imaging


One extremely active area for adaptive optics has been to image the retina of both
human and animal subjects. The most common approaches are confocal scanning
systems, OCT systems, and traditional flood illumination. Regardless of the
modality, light enters the eye through the pupil in the
ordinary fashion, a tiny fraction reflects off the retina and travels back out of the eye
and into the instrument. Aberrations are common in the eye, as demonstrated by the
number of people who need their eyesight corrected by wearing spectacles.
Imaging of animal subjects is common and often presents special challenges due to
variations in the size and shape of their eyes. As mentioned at the outset, mice are
particularly interesting because of their use as model organisms in drug development.
Figure 10 shows images taken of a mouse retina using an OCT
system equipped with adaptive optics, which in this case used an iterative indirect
controller. The top row shows a layer of nerve fibers. The lower two rows show
capillary layers. These layers occur at different axial depths within the sample. The
improvement between the case when the adaptive optics is off (left column) and
when it is on (right column) is dramatic. Notice that the total amount of light is
enhanced and the image sharpness is increased. Structures that are otherwise only
faintly visible become clear when the adaptive optics is switched on.

Figure 10. Images of a mouse retina taken with an optical coherence tomography system equipped with
adaptive optics. The left column shows the images with the adaptive optics off and the right column shows
them with it on. The top images are of the retinal nerve fiber layer, the middle of the inner blood capillary
layer, and the lower of the outer blood capillary layer. Note that turning the adaptive-optics system on raises
the overall signal return level, increases image sharpness, and reveals details that are otherwise not visible.
(Images courtesy of M Sarunic and Y Jian, Simon Fraser University.)

4 Outlook
Looking ahead, I foresee a continuing demand for high-resolution in vivo imaging
deep into tissue, which will place several requirements on the companies and
individuals who manufacture adaptive-optical systems. The growing popularity of
iterative algorithms places speed demands on DMs and SLMs: update rates need to
be as fast as possible to reduce light exposure. Furthermore, tight hardware
integration into the system is needed. This means providing triggering and
synchronization signals that can coordinate the DM or SLM updates with camera
image acquisition or other events within the system.
Next, the overall ease of use of adaptive-optics systems must continue to improve.
Factory-calibrated DMs that have predictable open-loop operation and hold their
accuracy for years are needed. For research-grade systems, these qualities are
conveniences; for commercial systems, they are fundamental requirements.
What's more, flexible, extensible software interfaces for controlling the DMs or
SLMs are essential. Given these features, instrument designers can build imaging
systems that acquire high-quality images in a minimum amount of time.
Beyond the basics of speed and ease of use, further advances will be required to
support emerging applications such as laser surgery. These frequently use lasers with
moderate to high power (>0.5 W), so the ability of the DM or SLM to handle these
power levels is pivotal. Devices that support high-reflectance dielectric coatings and
come with features for thermal management, such as heat sinking and active cooling,
will be favored.
Finally, costs must come down. The steady progress in research applications is
helping refine the technology and lower the price. Similarly, growing applications in
industrial systems will increase the production volumes and help lower unit costs.
Adaptive optics is currently too expensive for most low-end clinical applications.
The good news is not only that the price trend for adaptive-optics technology is
downward, but also that there is growing acceptance of this fascinating technique as
a fundamental tool in the research market too.

Additional resources
Booth M J, Débarre D and Jesacher A 2012 Adaptive optics for biomedical microscopy Opt.
Photon. News 23 22
This is a highly readable paper that reviews accomplishments and challenges in microscopy.


Booth M J 2014 Adaptive optical microscopy: the ongoing quest for a perfect image Light: Sci.
Appl. 3 e165
Another good, readable review paper by Booth that covers material similar to the OSA paper
above. This one has an extensive reference list for those seeking to dig more deeply.
Born M and Wolf E 1999 Principles of Optics 7th edn (Cambridge: Cambridge University Press)
This is a definitive work on optics and related topics. It is a rigorous mathematical treatment.
Where you feel the book by Hecht may not give enough supporting theory, refer to this book.

Hardy J W 1998 Adaptive Optics for Astronomical Telescopes (New York: Oxford University
Press)
This book covers adaptive optics at a depth that most other books don't. Unfortunately for
the biological imaging researcher, much of the material is aimed at astronomers. In spite of
this, it can be a useful reference.
Hecht E 2002 Optics 4th edn (San Francisco, CA: Addison-Wesley)
This highly readable text sets the standard for covering essentials of optics. Where you feel the
book by Born and Wolf strays too far from the basic conceptual understanding, refer to this
book.
Jian Y et al 2014 Wavefront sensorless adaptive optics optical coherence tomography for in vivo
retinal imaging in mice Biomed. Opt. Express 5 547
One of the first demonstrations of indirect (sensorless) adaptive optics working at speeds that
make it an attractive alternative to sensor-based approaches.
Kubby J (ed) 2013 Adaptive Optics for Biological Imaging (Boca Raton, FL: CRC Press)
This collection of works includes application examples covering both indirect (sensorless) and
direct (sensor-based) techniques.
Porter J et al 2006 Adaptive Optics for Vision Science (Hoboken, NJ: John Wiley & Sons)
This is a very extensive collection of works that cover theory and practice in vision science. Several
example designs are described.
Roorda A et al 2002 Adaptive optics scanning laser ophthalmoscopy Opt. Express 10 405
This details one of the first systems to add adaptive optics to a scanning laser ophthalmoscope for
retinal imaging and demonstrates the benefits in terms of imaging resolution.
Wallace W, Schaefer L H and Swedlow J R 2001 A workingperson’s guide to deconvolution in
light microscopy BioTechniques 31 1076
This paper, despite being somewhat dated, still holds up well thanks to its readable and practical
coverage of the basics of deconvolution.
