Robert J. Vanderbei
Princeton University, Princeton, NJ 08544
E-mail address: rvdb@princeton.edu
THE AMATEUR ASTROPHOTOGRAPHER
Copyright © 2003 by Robert J. Vanderbei. All rights reserved. Printed in the United States of
America. Except as permitted under the United States Copyright Act of 1976, no part of this
publication may be reproduced or distributed in any form or by any means, or stored in a
database or retrieval system, without the prior written permission of the publisher.
ISBN 0-0000-0000-0
The text for this book was formatted in Times-Roman using AMS-LaTeX (a macro package
for Leslie Lamport's LaTeX, which is itself a macro package for Donald Knuth's TeX text
formatting system) and converted to PDF format using pdfLaTeX. All figures were incorporated
into the text with the macro package graphicx.
Many thanks to Krisadee, Marisa,
and Diana for putting up with many
weary days following long sleepless
nights.
Preface
Robert J. Vanderbei
January, 2003
Part 1
Introduction
Over the past ten to fifteen years, a revolution has been taking place
in imaging technology, driven by advances in computer hardware, image
processing software, and, most importantly, digital imaging devices called
charge-coupled devices, or CCDs. Everyone has seen the new breed of
cameras called digital cameras. At the heart of a digital camera is a CCD
chip (or, more recently, a CMOS chip, which is a different acronym but
a similar type of device). These digital cameras have been used for
astrophotography of fairly bright objects. For faint astronomical targets,
however, they are currently limited by the buildup of thermal noise, which
becomes significant after only a few seconds of exposure time; this is
why these cameras are generally limited to maximum exposures of only a
few seconds. But the good news is that this thermal noise decreases as
the temperature of the chip is lowered. For this reason, there has been
a parallel development of new imaging cameras for astronomical use in
which the CCD chip is cooled to as much as 40° Celsius below the ambient
temperature. At
such cold temperatures, modern CCD chips make excellent, extremely
sensitive, virtually linear, astronomical imaging devices. Furthermore,
these devices are very small, smaller than conventional film, and there-
fore make possible a trend toward miniaturization of photography. The
purpose of this book is to illustrate the amazing things that can be done
with a small telescope, a personal computer, and one of these new as-
tronomical CCD cameras.
In astronomical applications, conventional wisdom is that bigger is
better. This is true, but with the advent of CCD imaging, the definition
of big got a lot smaller. Today, one can take CCD pictures with a small
telescope and get quality that rivals the best one could do with film
technology and a much bigger telescope. I hope that this book convinces
the reader that these recent developments are little short of amazing.
to live in a great dark-sky site such as one finds in remote parts of Ari-
zona, New Mexico, and Hawaii. But most of us aren’t so lucky. We live
where the jobs are—in populous areas. My home’s location is shown
on the light-pollution map in Figure 1.1. As New Jersey goes, my loca-
tion isn’t bad. But, it doesn’t compare to a truly dark site. For example,
the Milky Way is just barely visible when it is high in the sky late on a
moonless night. But light pollution is getting worse every year. It won't
be long before I cannot see the Milky Way at all. With digital
imaging, the images are saved on a computer and it is utterly trivial to
subtract the average level of background light pollution from the image.
Of course, one can scan film-based pictures into a computer and apply
the same techniques but with CCD imaging the picture is already on the
computer and the subtraction is trivial so one can consider it essentially
automatic. With film, one needs to make a real effort to accomplish the
same thing.
2. Is it Worth the Effort?
Taking pictures, whether with film or with a CCD camera, takes
much more effort than one needs for simple visual observing. So, why
bother? Why not just enjoy the visual observing? On this question,
people hold differing views. Many prefer the visceral, dynamic feel of
real-time visual observing. Others, like myself, find imaging to be fun
and rewarding. The following little story gives the main reason I got
into CCD imaging. I bought my first telescope, a small 3.5” Questar,
a few years ago. I was, and am, very happy with the design, fit-and-
finish, and optical quality of this instrument. As it was my very first
telescope, I didn’t know much about what to expect. Having seen lots
of beautiful images on the internet and in magazines, I expected to see
some of those things myself. Indeed, Jupiter, Saturn, and the Moon
are certainly spectacular as are open clusters and double stars. But I
was hoping also to have great views of various nebulae and even some
galaxies—in this regard I was utterly disappointed. Visually, I can just
barely make out that M13, the Great Globular Cluster in Hercules, is
a globular cluster and M27, the Dumbbell Nebula, is a nebula. These
are two very different kinds of objects but they both just looked like
faint fuzzies in the eyepiece. True, M13 seemed to scintillate, suggesting
that it is a star cluster, but it was more of a vague impression than a
clear observation. Figure 1.2 shows a rough approximation of how M13
looks to me visually in the Questar. I’ve purposely made the image
appear rather faint to simulate the effect that light pollution has on one’s
eyes’ dark adaptation. And galaxies, being even fainter, are harder to
see. From my home, I am able to see four galaxies: M31, M81, M82,
and M104. That’s it. I’ve never seen M51 or M33. And, when I say
that I can see a galaxy, one should be aware that what I see is just a faint
glow. I do not see any dust lanes in M31, nor do I see any structure in
the other three except for their overall shape. Based on the pictures I’d
seen in magazines it was clear that one could expect to do a lot better by
Equipment
1. Choosing a Telescope
This book is about CCD imaging with a small telescope. There are
many excellent telescopes available today. Even inexpensive ones are
generally very good—at least for visual use. However, for astrophotog-
raphy there are certain issues and factors that must be considered and
that rule out many of the most popular and most inexpensive telescopes.
Nonetheless, there are many choices that will provide both excellent
visual enjoyment and superb imaging capabilities. In this section, I’ll
articulate the main issues to pay attention to.
Rigidity. With astrophotography, one often adds quite a bit of weight
to the optical system. In addition to the CCD camera, there might be a
filter-holder, a focal reducer, and various extension tubes. All together
this can result in a few pounds of stuff hanging off the back of the tele-
scope. It is important that this weight not create warpage in the optical
tube assembly (OTA). Although I haven’t tried astrophotography using
a telescope with a plastic OTA, I strongly suspect that such a telescope
will not be sufficiently solid. It is also important that the mount can
handle this extra weight.
Equatorial Mount. Telescopes with so-called GoTo mounts have
become extremely popular in recent years, especially with inexpensive
telescopes. It is amazing how well they work for finding objects and
then keeping them relatively well centered in the eyepiece over a long
period of observation. The inexpensive GoTo telescopes are generally
designed to be used in an Altitude/Azimuth (Alt-Az) setup. After all,
such a setup is less expensive, since one doesn't need an adjustable wedge
to achieve an accurately polar-aligned, equatorially mounted system. And,
the built-in guide computer can handle the tracking.
It turns out that, while these GoTo systems are great for visual ob-
serving, they generally aren’t accurate enough to take even 10 second
exposures without stars trailing. Suppose for the sake of argument that
you get a system that is sufficiently accurate to take 10, maybe even
30, second exposures. You might be thinking that you will stack these
exposures in the computer to get the effect of one long exposure. This
can be done, but it is important to bear in mind that, over a period of
time, the field of view rotates in an Alt-Az setup. So, one is faced with
either getting a field derotator, which is itself rather complicated, or liv-
ing with final images that must be cropped quite severely to eliminate the
ragged borders created by a stack of rotated images. In the end, it is just
simpler to use an equatorial mount.
when dry and at room temperature, it doesn’t mean they will work when
wet and cold.
2. Choosing a Camera
There are several criteria affecting the choice of camera. When
thinking about a small telescope, the most obvious issue is the size
and weight of the camera. Santa Barbara Instruments Group (SBIG)
makes very fine astronomical CCD cameras but they are too big and too
heavy (tipping the scales at more than a kilogram) for use on a small
telescope like the 3.5” Questar. The same is true for many other manu-
facturers. One camera maker, Starlight Express, makes very small light-
weight cameras using (wouldn’t you know it) Sony CCD chips. Their
cameras are small partly just because that is a design objective for them
but also because they have chosen to keep the workings simple. Most
SBIG cameras have a built-in physical shutter, color filter wheel, off-
axis smaller secondary CCD chip for guiding, and a regulated Peltier
cooling system with electronics to monitor the temperature and a fan
to help with the regulation. Starlight Express cameras, on the other
hand, do not have shutters, built-in filter wheels or off-axis guiding
chips (more on their unique approach to guiding later). Furthermore,
while Starlight Express cameras do have Peltier cooling, they do not
regulate the cooling and there is no cooling fan. All of this makes for
a very simple lightweight design. These cameras only weigh about 250
grams. Later we will discuss how one overcomes the apparent loss of
functionality caused by these design simplifications.
Three Fundamental Facts. There are several camera models sold
by Starlight Express and in the future there will likely be other man-
ufacturers of light-weight cameras. In order to decide which model is
best for a given telescope, it is important to consider three fundamental
facts.
First Fact. A telescope concentrates the light entering it as an image
in the focal plane. The actual physical scale of this image is proportional
to the focal length of the telescope; that is, if one doubles the focal
length then the image will be twice as big (in both directions and hence
have 4 times the area). Now, put a CCD chip in the focal plane. The
field of view is defined as how much of the sky fits onto the chip. It
is measured as a pair of angles (one for each direction—horizontal and
vertical). If the image scale is large, then the field of view is small. In
fact, the angular field of view is inversely proportional to focal length.
Of course, if one uses a different camera with a larger CCD chip, then
the field of view increases in accordance with the increased size of the
chip. We can summarize these comments by saying that the field of
view is proportional to the dimension of the CCD chip and inversely
proportional to the focal length of the instrument:

Angular field of view ∝ Dim of CCD chip / Focal length
For a specific reference point, the Starlight Express MX-916 camera has
a chip whose physical dimensions are: 8.5mm horizontally and 6.5mm
vertically. This chip is large by current CCD standards. But, its diagonal
dimension is 10.7mm, which is much smaller than standard 35mm film.
Using a focal length of 1350mm for the Questar telescope, we get that
the MX-916 camera has a 22 × 16.5 arcminute field-of-view. Many
objects are too large to fit in this field. For such objects a focal reducer
is required. We will discuss focal reducers in Chapter ??.
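As a quick check of these numbers, the field-of-view rule can be coded up in a few lines. This is only a sketch; the chip dimensions and focal length are the MX-916 and Questar figures quoted above:

```python
import math

def field_of_view_arcmin(chip_mm: float, focal_length_mm: float) -> float:
    # Small-angle approximation: angle (radians) = chip dimension / focal length.
    return math.degrees(chip_mm / focal_length_mm) * 60

h = field_of_view_arcmin(8.5, 1350)   # MX-916 horizontal dimension
v = field_of_view_arcmin(6.5, 1350)   # MX-916 vertical dimension
print(f"{h:.1f} x {v:.1f} arcminutes")  # 21.6 x 16.6, i.e. roughly 22 x 16.5
```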
Second Fact. There are two fundamental physical dimensions that
characterize a telescope: its aperture and its focal length. Aperture
refers to the diameter of the primary mirror or lens whereas focal length
refers to the distance from the primary mirror/lens to the focal plane,
which is where the CCD chip goes. As we saw above, knowing the
focal length allows one to determine the field of view. It is a common
misconception that the aperture determines the brightness of faint ob-
jects at the eyepiece or, equivalently, the exposure time when imaging.
Focal ratio = Focal length / Aperture.
Again, it is the area that is the relevant measure and we can say that the
exposure time is inversely proportional to the area of a pixel.
These considerations can be summarized as the following funda-
mental fact about exposure time:
Exposure time ∝ (Focal ratio)² / Area of pixel.
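A short sketch of how this rule plays out in practice. The two setups below are hypothetical; since they share one camera, the pixel area cancels in the ratio:

```python
def relative_exposure(focal_ratio: float, pixel_area: float) -> float:
    # Exposure time is proportional to (focal ratio)^2 / (area of pixel);
    # only ratios of this quantity are meaningful, so units cancel.
    return focal_ratio**2 / pixel_area

# Same camera on an f/16 and an f/8 telescope: the f/16 instrument
# needs four times the exposure for the same image brightness.
print(relative_exposure(16, 1.0) / relative_exposure(8, 1.0))  # 4.0
```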
Third Fact. Because of the wave nature of light, it is impossible for
a telescope to concentrate a star’s light at a single point in the image
plane. Instead, what one sees is a small diffuse disk of light, called the
Airy disk, surrounded by a series of progressively fainter rings called
diffraction rings—see Figure 2.2. If the focal length is kept constant but
the aperture of the telescope is doubled, then the physical distance, in
microns, from the center of the Airy disk to the first diffraction ring is
cut in half. On the other hand, if the aperture is kept constant and the
focal length is doubled, then this distance to the first diffraction ring is
doubled. Hence, if we double both the aperture and the focal length,
then these two effects cancel out and the distance to the first diffraction
ring remains unchanged. This means that the size of the Airy pattern is
proportional to the focal ratio. At f/15, for example, the first diffraction
ring is 10 microns from the center of the Airy disk.
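The 10-micron figure can be recovered from the standard Airy-pattern formula, in which the radius of the first dark ring is about 1.22 λ times the focal ratio. The 550 nm wavelength below is my assumption (mid-visual green light), not a number from the text:

```python
def first_ring_microns(focal_ratio: float, wavelength_nm: float = 550.0) -> float:
    # Radius of the first dark ring of the Airy pattern: 1.22 * wavelength * f-ratio.
    return 1.22 * (wavelength_nm / 1000.0) * focal_ratio  # nm -> microns

print(f"{first_ring_microns(15):.1f}")  # 10.1 microns at f/15, matching the text
```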
MORE TEXT HERE
3.5” Questar at f/16 vs. 7” Telescope at f/8
• Same Field-of-View.
• Quadrupled exposure times.
Conclusion: Look for a lightweight camera with large pixels.
The Starlight Express MX-916.
• Approximate equivalent ASA (ISO): 20,000
• Linear response—no reciprocity failure (prior to saturation)
• Quantum efficiency: ≈ 50%
Image Acquisition
• Image objects high in the sky—the higher, the better. The pho-
tons from an object at 45° altitude pass through 41% more
atmosphere than those from an object straight overhead. At 30°
there is 100% more atmosphere to pass through.
• Allow enough time for the telescope to equilibrate with ambi-
ent temperature.
• Polar align carefully.
• Focus carefully.
In the next few sections, I’ll elaborate a little on these last two points.
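The atmosphere figures in the first bullet above follow from the simple flat-atmosphere (secant) approximation, in which the path length through the air scales as 1/sin(altitude); a sketch:

```python
import math

def extra_atmosphere_percent(altitude_deg: float) -> float:
    # Flat-atmosphere approximation: airmass = 1 / sin(altitude),
    # with airmass 1 defined as the path straight up through the zenith.
    airmass = 1.0 / math.sin(math.radians(altitude_deg))
    return (airmass - 1.0) * 100.0

print(round(extra_atmosphere_percent(45)))  # 41 (percent more than overhead)
print(round(extra_atmosphere_percent(30)))  # 100
```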
1. Polar Alignment
Polar alignment is important. It is especially critical if doing un-
guided exposures. There are lots of methods for aligning. The goal is to
find a method that balances time and effort with the quality of the align-
ment. The highly accurate drift alignment method, in which one makes
small incremental adjustments to the alignment based on observed drift
of a star, yields essentially the best possible alignment. But, it can take
hours to accomplish. It is appropriate for use in permanent observatory
installations but is too time consuming for those of us who wish to set
up and tear down every night.
Most premium German equatorial mounts have a polar alignment
scope built into the polar axis. The alignment scope has a very wide
field and includes an illuminated reticle that shows some of the bright
stars near the north celestial pole. To use it, you rotate the reticle and
adjust the mount until the star pattern shown in the reticle matches the
observed star field. This method of aligning works very well and only
requires a minute or two to get a very good alignment.
A popular telescope mount is the so-called fork mount. My Ques-
tar has this type of mount as do most other Maksutov-Cassegrain and
Schmidt-Cassegrain telescopes. To set up such a telescope for astropho-
tography, it is necessary to place the mount on an equatorial wedge
which sits on a tripod or a pier. Polar alignment is achieved by adjust-
ing the wedge so that it is perpendicular to the axis of rotation of the
Earth. In other words, it must be adjusted so that the fork arms point
parallel to the pole axis. I’ll describe two quick and easy methods to
achieve this.
Method 1. First, do a rough polar alignment in which we assume
that Polaris is at the north celestial pole. To do this, set the declination
to 90° and adjust the mount so that Polaris is centered in the eyepiece.
Then aim at a star with known right ascension and use it to calibrate the
right ascension. Finally, point the telescope to the coordinates of Polaris
(right ascension 2h 32m, declination +89° 16′) and adjust the mount so
that Polaris is in the center of the field. This method is easy and works
well. But, how well it works depends on how accurate the setting circles
are. Just because the declination circle reads 90◦ does not necessarily
mean that the telescope is aligned with the mount’s polar axis. The
following method does not rely on the precision of the setting circles.
Method 2. First it is important to set the declination to 90°. As
mentioned already, just because the declination setting circle reads 90°
does not necessarily mean that the telescope is actually aligned with
the polar axis of the mount. It is important to check this. To check it,
look through the eyepiece and rotate the scope around the polar axis;
i.e., rotate in right-ascension. The center of rotation is supposed to be
in the center of the eyepiece. In practice it almost never is. Adjust the
declination to get the center of rotation as close to the center of the eye-
piece as possible. It is best to use a wide-field eyepiece for this—I use
a 32mm Brandon eyepiece. You will probably find that it is impossible
to get the center of rotation exactly to the center of the eyepiece. At its
best, it will be a few arcminutes either to the left or right of center. This
happens when the right-ascension and declination axes are not exactly
perpendicular to each other. On my telescope, I can only get to within
about 7 arcminutes of the center. If it is very far off, you might consider
making adjustments to your mount in order to bring the axes closer to
perpendicularity. Once you’ve got the declination set as close as possible
to true 90°, lock it.
The next step is to adjust the mount so that the north celestial pole
coincides with the mount’s polar axis, which we identified as the cen-
ter of rotation in the first step (and might not be at the center of the
eyepiece). Again, a wide-field eyepiece, hopefully providing a 1° field
of view or more, is desirable. Then you can star hop from Polaris to
the correct location using a little finder chart such as the one shown
in Figure 3.1. I recommend installing a planetarium program on your
computer and using it to make a customized finder chart. With such a
program, you can size the chart to any convenient size, you can flip the
chart east-west to match the view in the eyepiece, and you can adjust
the relative brightness of the stars and the magnitude threshold so that
the picture matches very closely what you see in the eyepiece. The chart
shown in Figure 3.1 was carefully prepared to match the view through
the Questar using a 32mm Brandon eyepiece.
2. Attachments
After getting a good polar alignment, the next step is to attach the
CCD camera. Most modern astronomical equipment is designed for T-
threaded attachments, which are 42mm in diameter and have 0.75mm
pitch. Questar telescopes and Starlight Express CCD cameras do not
use T-threads. They are both based instead on P-threads, which are
42mm in diameter but have 1.0mm pitch. These two threadings are
sufficiently similar that it is easy to cross thread a P-thread accessory
into a T-thread one. You get about half a turn and then the threads bind.
Forcing the threads can ruin both pieces, so it’s important to keep T-
threaded accessories distinct from P-threaded ones.
The Starlight Express cameras, like most CCD cameras, come with
nose pieces that allow the camera to be used like either a 1.25” or 2”
eyepiece. Using the 1.25” nose piece, a CCD camera can be placed in
the eyepiece port of the Questar telescope. The main disadvantage of
this configuration is that the camera cannot be locked down to prevent
rotation—the last thing one wants is for the camera to rotate during an exposure.
3. Focusing
Focusing is the most tedious step in preparing to take CCD images.
The basic process is as follows. First, fire up the image acquisition
software, which in my case is AstroArt, and point the telescope at a
fairly bright star. Next, take a short exposure image. After download-
ing the first image to the computer, an out-of-focus image of the star
appears. Draw a small rectangle around the out-of-focus star and then
enter focus mode. In focus mode, just the pixels inside the rectangle
are downloaded. Hence, downloads are much faster. Setting the fo-
cus exposure to, say, 0.3 seconds provides a few focus window updates
per second. In addition to an image of the star, the focus window also
shows the number of counts for the brightest pixel as well as the “width”
of the star image expressed as a “full-width-half-max” (FWHM) value
measured in pixels. Each of these pieces of information is helpful in
achieving critical focus. Each one also fluctuates from one image to the
next making it a challenge to know exactly when best focus has been
achieved. When changing the direction of approach to critical focus,
the star’s image shifts rather significantly. This image shift is a common
problem with focus systems in which the primary mirror is moved back
and forth. Changes of direction cause the mirror to tilt slightly differ-
ently and so the image moves. The amount of image shift varies from
one instrument to the next. On Questars, it varies from about 20 to 60
arcseconds. This is enough to move the star out of the field of view of
the focus window. The focus tool provides the capability of moving the
window this way and that but it is tedious to do so. It is best to make
the focus window big enough so that the star won’t shift out of the field-
of-view but not too big because then image download time becomes too
large.
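To give a feel for where a FWHM number like the ones above comes from, here is a crude sketch. Real acquisition software such as AstroArt fits a profile to the star rather than counting pixels, and the synthetic star below is a toy of my own, not real data:

```python
import numpy as np

def fwhm_pixels(star_image: np.ndarray, background: float) -> float:
    # Crude full-width-half-max estimate: count the pixels at or above
    # half the peak value along the star's brightest row.
    img = star_image - background
    peak_row_index = np.unravel_index(np.argmax(img), img.shape)[0]
    row = img[peak_row_index]
    return float(np.count_nonzero(row >= img.max() / 2))

# Toy star: a Gaussian with sigma = 1 pixel on a 100-count background.
y, x = np.mgrid[0:15, 0:15]
star = 1000.0 * np.exp(-((x - 7)**2 + (y - 7)**2) / 2.0) + 100.0
print(fwhm_pixels(star, background=100.0))  # 3.0 (true FWHM is about 2.35 pixels)
```

The whole-pixel count overestimates the true width; profile fitting is what gives sub-pixel values such as 2.860.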
In addition to the steps described above, one can employ various
focus aids such as a Hartmann mask or a diffraction-spike mask. I use a
Hartmann mask as shown in Figure 3.2. It consists of a 4” PVC endcap
with three holes drilled in it and painted black. With this mask placed over
the dew shield, an out-of-focus star appears as three smaller out-of-focus
stars. As critical focus is approached, the three star images converge on
each other to make just one in-focus star. The biggest danger of using
a Hartmann mask is forgetting to remove it once critical focus has been
achieved.
Image Processing
object. In fact, if this image were restretched to cover the entire range
from 2760 to 25987 with a gray scale, all we would see would be the
three brightest stars. And that is all one sees at the eyepiece of the 3.5”
Questar.
Okay, so we’ve managed to detect a faint fuzzy that one could not
hope to see visually through a telescope of modest aperture. But, the
images are not very aesthetically pleasing. They have a variety of obvi-
ous defects. Can we fix them? For the most part, the answer is yes. To
see how, it is important first to consider what is in the image. The im-
age consists of several “signals” overlaid on top of each other. Some
photons come from space—we like these. In the images in Figure 4.1,
we see that there are a number of stars and the galaxy NGC 3628 is
visible, barely, in the center of the frame. Another source of photons
is light scattered in the atmosphere. These photons come mostly from light
pollution and the moon—we usually don’t want them. Light pollution
normally gives a roughly uniform glow across the field. It is part of the
reason that the smallest pixel values in the images are more than 2750
(although only a small part of the reason as we shall see). Moon glow
is like sky glow but usually has a characteristic gradient to it. That is,
it will be brighter in the corner nearest the moon and darker in the op-
posite corner. The images in Figure 4.1 do appear to have a gradient
in them but these images were taken when the moon was not out. This
gradient is due to another source of photons—amplifier glow. There is
an amplifier circuit near the upper left corner of the chip in the Starlight
Express cameras. This amplifier generates a small amount of heat. All
warm bodies give off infra-red radiation and CCD cameras tend to be
quite sensitive in the infra-red. Hence, these photons too get recorded
as part of the signal that makes up the image. These infra-red photons
corrupt all parts of the image—there’s just more of it in the amplifier
corner. Altogether the signal due to infra-red photons is called dark
current. The colder the camera, the lower the dark current. In fact,
all astronomical CCD cameras are cooled, usually to 30 or 40 degrees
Celsius below ambient temperature, in an attempt to minimize the dark
current. Finally, there is a certain number of counts that are generated
during readout—that is, they are generated when the electrons on the
chip are counted and sent off to the computer. These aren’t due to pho-
tons, per se, but must be accounted for nonetheless. On the Starlight
Express cameras, this readout signal amounts to a count of about 2635
at each pixel plus or minus about 5. If every pixel had a readout signal
of 2635 all of the time, then it would be trivial to remove—just sub-
tract it. But, unfortunately, these numbers are random and, furthermore,
different pixels have slightly different average values.
2. Dark Frames
So, how do we remove these unwanted signals? First, one removes
the readout and dark current signals by taking another image of the same
duration and at the same temperature as the original image but with the
front of the telescope covered so that no space or sky photons reach the
chip. This is called a dark frame. The image on the left in Figure 4.2
shows a dark frame that was taken to match the frames in Figure 4.1.
It contains the readout signal plus the dark current—and nothing else.
Hence, it can be subtracted from the original image to remove these two
unwanted signals from the original. But, before doing that, we need
to bear in mind one thing: if we take two grainy images and subtract
them, the result is an image that is even grainier. The graininess in both
the original image and the dark frame is due to the fact that the number
of photons arriving at a pixel in a given time is random. Two images
taken under identical conditions but at different times will be similar to
each other but not identical. It is exactly the same as taking a Geiger
counter, putting it near some radioactive material, and recording how
many counts one gets in a fixed time interval. If one repeats the ex-
periment several times, the results will be slightly different each time.
Assuming that the half-life of the material is very long, the average
number of counts will not change over time. But each individual ex-
periment will vary around this average. If the average is about N, then
the variation is about √N. Whether doing experiments with radioactive
materials or doing CCD imaging, we are generally interested in the av-
erage value rather than the outcome of one particular experiment. The
easiest way to estimate the average is to repeat the experiment many
times and compute the numerical average. The image on the right in
Figure 4.2 shows an average of 30 dark frames. Note that the graininess
has largely disappeared. We now see a nice smooth amplifier glow in
the upper left corner. We also see a number of bright pixels. These are
called hot pixels. Every CCD chip has some hot pixels. The number
of dark counts in hot pixels grows with time just like all other pixels
but it grows more rapidly. If we subtract this averaged dark frame from
the original raw image frames, we will remove the effects of readout,
amplifier glow, and other dark current including hot pixels all at once.
The result is shown in Figure 4.3.
3. Bias and Flat Frames
most reliable method for obtaining flat frames is to build a light box that
fits over the front of the telescope (with dew shield in place!) and pro-
vides a flat, dim, very out-of-focus field to image. I made my light box
from a battery operated, dome-shaped, closet light glued onto the end of
a 4” PVC pipe coupler with a few sheets of milky plastic glued in. The
whole contraption cost nothing more than pocket change and took only
an hour or so to assemble. Using this light box, I obtained the flat field
frame on the right in Figure 4.4. This flat field needs to be calibrated
by subtracting an appropriate dark frame from it. As the exposure is
typically only a fraction of a second, there is almost no dark current to
subtract—the main thing to subtract is the readout signal. The readout
signal can be imaged by taking a 0-second black image. Such an image
is called a bias frame. As before it is best to average several of these.
The left-hand image in Figure 4.4 shows an average of 40 bias frames.
Using the averaged bias frame to calibrate the flat frame, the next step
is to rescale the numbers in the calibrated flat frame so that they average
about one and then divide the calibrated image frame by the normalized,
calibrated flat frame. In this way, areas that are too bright get dimmed
a little while areas that are too faint get brightened somewhat. The re-
sult is that the dust donuts, glow ring, and vignetting are all effectively
removed as Figure 4.5 shows.
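The whole recipe—subtract the master dark, then divide by the bias-subtracted, normalized flat—can be sketched in a few lines of NumPy. The synthetic frames below (a 2635-count readout signal, a 150-count sky, 20 percent corner vignetting) are stand-ins of my own, not real camera data:

```python
import numpy as np

def calibrate(raw, master_dark, flat, master_bias):
    flat_cal = flat - master_bias            # calibrate the flat with a bias frame
    flat_norm = flat_cal / flat_cal.mean()   # rescale so it averages about one
    return (raw - master_dark) / flat_norm   # dark-subtract, then flat-divide

# Synthetic frames: a uniform sky seen through corner vignetting.
yy, xx = np.mgrid[-1:1:290j, -1:1:390j]
vignette = 1.0 - 0.2 * np.hypot(yy, xx) / np.sqrt(2.0)
bias = np.full(vignette.shape, 2635.0)       # readout signal
dark = bias + 120.0                          # readout plus dark current
raw = 150.0 * vignette + dark                # vignetted sky on top of the dark signal
flat = 20000.0 * vignette + bias             # bright, well-exposed flat frame

cal = calibrate(raw, dark, flat, bias)
print(bool(cal.std() < 1e-6))  # True: the vignetting has been divided out
```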
4. Image Calibration
The steps described in the previous three sections are called image
calibration. The process is summarized in Figure 4.6. It might seem
tedious, but image processing software is carefully designed to make
this task very easy. One simply selects a set of raw images, a set of dark
frames, a set of bias frames, and a set of flat frames and then with the
click of the mouse all images will be calibrated in just a matter of sec-
onds. Actually acquiring the calibration images may also sound rather
onerous, especially if one must collect new images for every imaging
session in an effort to ensure that these images are obtained under ex-
actly the same conditions as the raw image frames. It’s not as bad as it
might seem.
First of all, averages of bias frames change very little over time. In
fact, one can collect a large number of bias frames, average them, save
the average, delete the originals, and use this one bias frame essentially
indefinitely. There are some slow changes as the chip ages but they can
often be ignored. If one is worried, one could remake the master bias frame
once a year or so. As the interexposure time is just the amount of time
5. Stacking
The calibrated image of NGC 3628 shown in Figure 4.5 is not the
work of art that we’d like it to be. But it only suffers from one deficiency—
not enough photons. The sky glow is contributing on the average 150
photons per pixel and the galaxy is contributing about that many again.
This means that two adjacent background pixels, which ought to have
the same value of about 150, are likely to vary between 150 − √150 ≈
138 and 150 + √150 ≈ 162. In fact, only two thirds of the background
pixels will fall into this range. We have to make the range twice as wide,
from 126 to 174, to include 95 percent of all background pixels. This is
why the image looks so grainy. The technical jargon here is that the
signal (150) to noise (√150) ratio is inadequate. The only solution is
to collect more photons. Here again we see the beauty of CCD imag-
ing and digital image processing. Rather than starting over and doing a
(much!) longer exposure, we can collect lots of images and add them
together in the computer. This is called stacking. Figure 4.7 shows the
result of stacking 5, 15, and 50 six-minute images.
The final image, while not perfect, does begin to look quite re-
spectable. Of course, the total image integration time is 5 hours. These
images were acquired using a focal reducer which brought the Questar
to about f/10. This is one of the main lessons. Imaging galaxies at f/10
requires hours of exposure to get nice results. I’m often asked what is
the correct exposure time. The answer is infinity. The longer the expo-
sure, the better the final result. The vast majority of mediocre images
one sees suffer from just one deficiency—not enough photons because
the total integration time was too short. Imaging faint objects at f/10
requires one important trait: patience. The final image of NGC 3628 is
shown on page ??.
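The square-root improvement from stacking can be illustrated with a small simulation. This is a sketch of the statistics, not code from the book; the 150-count background follows the NGC 3628 example, and the frame size is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_background(mean_counts=150.0, shape=(100, 100)):
    """One background patch; photon arrivals are Poisson distributed."""
    return rng.poisson(mean_counts, size=shape).astype(float)

single = simulate_background()
stack50 = np.mean([simulate_background() for _ in range(50)], axis=0)

# Pixel-to-pixel scatter (the noise) drops roughly as the square root
# of the number of stacked frames:
print(single.std())    # close to sqrt(150), about 12.2
print(stack50.std())   # close to sqrt(150/50), about 1.7
```

The signal stays at 150 counts either way; only the noise shrinks, so the signal-to-noise ratio grows as the square root of the number of frames stacked.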
used to measure it. For example, the other two bright stars in the image
have FWHM values of 2.860 and 2.831. Picking two other dimmer stars
at random, we find values of 2.539 and 2.703.
Measuring Image Scale and Focal Length. Using a computer plan-
etarium program and an image processing program, it is easy to com-
pute the image’s scale. Indeed, using the planetarium program, we find
that the distance from SAO 99572 to the bright star below the galaxy
(TIC 0861) is 20.308 arcminutes. Using an image processing program,
we find that the same pair of stars are 476 pixel units away from each
other, meaning that if the image were rotated so that these two stars were
on the same row of pixels they would be 476 pixels apart. Dividing these
two distances, we get the size of a pixel in arcseconds:
(20.308 arcminutes / 476 pixels) × (60 arcseconds / arcminute) = 2.56 arcseconds/pixel.
Using the additional fact that one pixel on the Starlight Express MX-
916 camera is 11.2 microns across, we can compute the precise effective
focal length of the instrument, as configured. First, we convert the pixel
distance to a physical distance at the image plane by multiplying by the
size of a pixel:
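The whole computation can be written out as a short script. This is my own sketch; the star separation and the 11.2 micron pixel size are from the text, while the focal length at the end is my continuation of the arithmetic, not a number quoted from the book:

```python
import math

separation_arcmin = 20.308   # SAO 99572 to TIC 0861, from the planetarium program
separation_pixels = 476      # the same distance measured on the image
pixel_microns = 11.2         # Starlight Express MX-916 pixel size

# Image scale in arcseconds per pixel:
scale = separation_arcmin * 60.0 / separation_pixels
print(round(scale, 2))       # 2.56

# Effective focal length: the physical distance on the chip divided by
# the angle it subtends, in radians.
chip_distance_m = separation_pixels * pixel_microns * 1e-6
angle_rad = math.radians(separation_arcmin / 60.0)
focal_length_mm = 1000.0 * chip_distance_m / angle_rad   # comes out near 900 mm
```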
m_A − m_B = −2.50 log(I_A / I_B).
Using the fact that the magnitude 9.85 star SAO 99572 has peak bright-
ness over background of 12953 − 248 = 12705 and galaxy NGC 3628
has peak brightness above background of 315.387 − 248 = 67.387, we
can compute an effective peak brightness for NGC 3628:
9.85 − m_galaxy = −2.50 log(12705 / 67.387),
which reduces to m_galaxy = 15.5. This means that the brightest part of
the galaxy is about as bright as a magnitude 15.5 star in our image. We
must emphasize that this is a computation that is relative to the image at
hand. If the focus, the seeing, or the tracking had been worse, then the
psf would have been broader and the peak value for the star would have
been smaller. The peak value for the galaxy on the other hand would not
change much since it is fairly constant over several neighboring pixels.
Small errors in focus, atmospheric seeing, or tracking cause faint stars
to disappear into the background sky glow.
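Plugging the numbers into the magnitude formula, as a sketch in code:

```python
import math

star_mag = 9.85              # catalog magnitude of SAO 99572
star_peak = 12953 - 248      # its peak counts above the 248-count background
galaxy_peak = 315.387 - 248  # peak counts above background for NGC 3628

# m_star - m_galaxy = -2.50 log10(I_star / I_galaxy), solved for m_galaxy:
galaxy_mag = star_mag + 2.5 * math.log10(star_peak / galaxy_peak)
print(round(galaxy_mag, 1))  # 15.5
```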
Using total values rather than peak values for stars avoids the sys-
tematic errors caused by focus errors, bad seeing, and tracking errors.
As we computed earlier, one pixel corresponds to 2.56 arcseconds both
horizontally and vertically. Hence a pixel covers 2.56 × 2.56 = 6.55
square arcseconds. Dividing the per-pixel background level of 248 by
the area of a pixel and using the total value for SAO 99572, we can
derive the true surface brightness of the sky glow:
m_skyglow = 9.85 + 2.50 log(95091 / (248/6.55)) = 18.3.
This gives a magnitude that can be compared to a star that covers ex-
actly one square arcsecond. Sometimes surface brightnesses are com-
puted based on smearing a star over a square arcminute. A star has to
be 3600 times brighter to smear over a square arcminute with the same
resulting brightness as if it had been smeared over only a square arc-
second. Hence, magnitudes based on square arcminute coverage are
brighter by 2.5 log 3600 = 8.9 magnitudes. Hence, we see that the sur-
face brightness of the sky glow calculated on a square arcminute basis
is 18.3 − 8.9 = 9.4.
Sky glow surface brightness is usually reported on a square arcsec-
ond basis whereas surface brightnesses of galaxies are usually reported
on a square arcminute basis. The peak surface brightness of NGC 3628
is calculated as follows:
m_galaxy = 9.85 + 2.50 log(95091 / (67.387/6.55)) − 8.9 = 10.9.
This value compares favorably with reported average surface bright-
nesses of from 12 to 14 depending on the source catalog. Average sur-
face brightnesses are hard to define precisely because it is not clear how
much of the galaxy to include in the calculation.
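Both surface-brightness figures can be reproduced with a few lines (a sketch using the numbers from the text):

```python
import math

pixel_area = 6.55            # 2.56 x 2.56 square arcseconds, as in the text
star_mag = 9.85              # catalog magnitude of SAO 99572
star_total = 95091           # total counts for SAO 99572
sky_per_pixel = 248          # background counts per pixel
galaxy_peak = 67.387         # galaxy peak counts above background

# Sky glow surface brightness, per square arcsecond:
sky_sb = star_mag + 2.5 * math.log10(star_total / (sky_per_pixel / pixel_area))
print(round(sky_sb, 1))      # 18.3

# Galaxy peak, per square arcminute (3600 times the area, 8.9 magnitudes):
offset = 2.5 * math.log10(3600.0)
galaxy_sb = star_mag + 2.5 * math.log10(star_total / (galaxy_peak / pixel_area)) - offset
print(round(galaxy_sb, 1))   # 10.9
```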
CHAPTER 5
Image Enhancement and Presentation
In some cases such as with NGC 3628, the image that results after
calibrating and stacking a set of raw images cannot be improved very
much by further tinkering. However, in most cases, further significant
enhancements can be obtained. There are a few reasons one might wish
to do more. The two most obvious are:
• To sharpen a blurred image.
• To adjust the dynamic range of the pixel values.
We will discuss these two issues in this chapter and illustrate the stan-
dard techniques for addressing them.
1. Unsharp Mask
Sometimes an image is blurred simply because of problems that oc-
curred during image acquisition: perhaps the telescope was not focused
properly or maybe there were tracking problems with the mount. In
such cases, the techniques described in this section can help but really
one ought to consider simply fixing the core problem and taking a new
set of raw images.
However, there are certain situations when an image will be inher-
ently blurry and it becomes desirable to sharpen it as much as possible.
The two most obvious reasons for inherent blurriness are:
• Imaging at high f-number.
• Dynamic distortions from atmospheric irregularities—so-called
seeing.
For these cases, the techniques presented here work amazingly well.
Planetary imaging provides an excellent example of a situation where
one typically works at a high effective f-number. Such images are nor-
mally taken using either a Barlow lens or a regular eyepiece for eyepiece
projection (see Chapter ??). In either case, the f-number is often around
50 or more. At such high f -numbers, a star becomes a large blob called
an Airy disk surrounded by a concentric sequence of successively fainter
diffraction rings. An image of a planet is similarly blurred. There is
nothing one can do up front about this blurring of images—it is simply
a consequence of the diffraction of light.
[Figure: original image − 0.8 × smoothed image = unsharp-masked image]
The Gaussian kernel mask used to produce the smoothed image is:
    1/16  2/16  1/16
    2/16  4/16  2/16
    1/16  2/16  1/16
Note that the numbers add up to one. Here’s how the mask is used. For
each pixel in the unsmoothed image, the current pixel value is multiplied
by the value at the center of the mask (4/16) and to this is added the
values of each of its eight neighbors weighted by the corresponding
values in the mask. This total then becomes the new pixel value. This
process is repeated for every pixel in the image (border pixels have to
be treated slightly differently). For example, suppose that an image has
the following pixel values in rows 87–90 and columns 123–126:
Then, using the Gaussian kernel mask we can compute replacement val-
ues for the pixel in row 88 and column 124 as follows:
1723 ⇒ (1/16)·1432 + (2/16)·1510 + (1/16)·1717
     + (2/16)·1654 + (4/16)·1723 + (2/16)·1925
     + (1/16)·1411 + (2/16)·1522 + (1/16)·1704 = 1648.6
1925 ⇒ 1786.1
1522 ⇒ 1645.4
1704 ⇒ 1782.9.
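The kernel and the worked example can be checked directly. A numpy sketch (the function is my own minimal implementation):

```python
import numpy as np

# The 3x3 Gaussian kernel mask from the text; the entries sum to one.
kernel = np.array([[1, 2, 1],
                   [2, 4, 2],
                   [1, 2, 1]]) / 16.0

def smooth(image, kernel):
    """Replace each interior pixel by the kernel-weighted sum of its neighborhood.

    Border pixels are left unchanged here; real software treats them
    slightly differently, as the text notes.
    """
    out = image.astype(float).copy()
    for i in range(1, image.shape[0] - 1):
        for j in range(1, image.shape[1] - 1):
            out[i, j] = np.sum(kernel * image[i - 1:i + 2, j - 1:j + 2])
    return out

# The 3x3 neighborhood around row 88, column 124 in the worked example:
patch = np.array([[1432, 1510, 1717],
                  [1654, 1723, 1925],
                  [1411, 1522, 1704]], dtype=float)
print(round(smooth(patch, kernel)[1, 1], 1))   # 1648.6
```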
2. Gamma Stretching
The unsharp-masked image in Figure 5.1 is a big improvement over
the original but it suffers from low contrast within the rings and the
disk of the planet. This is because these parts of the image all have
approximately the same bright values. Yet, there is added detail in these
parts of the image that we’d like to bring out. This can be done using a
nonlinear stretching called gamma stretching.
As we mentioned at the very beginning of this chapter, all images
when displayed on a computer screen or on paper are stretched in some
way. The simplest sort of stretching is to specify two levels, a min-
imum level below which pixels should appear black and a maximum
level above which pixels should appear white, and to then shade pix-
els with values between these two extremes using a gray level that is
proportional to how far the value lies between the black and the white
thresholds. This is called a linear scaling. Using 0 to represent black on
the computer screen and 255 for white, the linear stretch used to display
the unsharp mask image in Figure 5.1 is graphically illustrated in Figure
5.3.
To enhance the unsharp-masked Saturn image, we’d like to make a
stretch where the input values from say about 6000 to 7200 take up a
larger share of the output values. This can be done by pulling down on
the sloped line in Figure 5.3 to get the curve shown in Figure 5.4. The
stretching function shown in the figure corresponds to a gamma (γ) value
of 2. Larger values have the effect of pulling the curve down even more
and hence adding contrast to the brightest parts of the image. Values
of γ between zero and one actually push the curve above the original
straight line and have the effect of adding contrast to dim parts of the
image. Such values are often useful when trying to bring out the details
in faint galaxies.
Ideally, one might like to be able to draw any nonlinear stretching
curve to customize the appearance of a particular image. Most software
packages give the user such a tool but using it effectively can be tedious
if not downright difficult. The nice thing about gamma stretching is that
there is just one parameter that needs to be chosen and then the software
does the rest. Usually, one can find a good value by trial-and-error in
only a few tries.
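Linear and gamma stretching are one formula each. A minimal sketch (the function and its parameter names are mine):

```python
import numpy as np

def gamma_stretch(image, black, white, gamma=2.0):
    """Map pixel values in [black, white] onto display values 0-255.

    gamma > 1 adds contrast at the bright end; gamma < 1 at the dim end;
    gamma = 1 reduces to the ordinary linear stretch.
    """
    t = np.clip((image.astype(float) - black) / (white - black), 0.0, 1.0)
    return np.round(255.0 * t ** gamma).astype(int)

vals = np.array([0, 4000, 8000])
print(gamma_stretch(vals, 0, 8000, gamma=1.0).tolist())   # [0, 128, 255]
print(gamma_stretch(vals, 0, 8000, gamma=2.0).tolist())   # [0, 64, 255]
```

With γ = 2 the midtone drops from 128 to 64, which is exactly the "pulling down" of the curve described above: more of the 0-255 output range is reserved for the brightest inputs.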
The result of a γ = 2 stretch on the unsharp-masked Saturn image
is shown in the center image of Figure 5.5. The contrast is improved
but now there is one more thing to worry about. The video camera
used to take these images uses an “interlaced” CCD chip. This means
that the odd and even lines of the chip are read off separately. As a
consequence the precise exposure time for the two sets of lines might
not match perfectly. This has now become visible in our image. The
effect was always there but we have restretched the image in such a way
that it is now obvious and needs attention. The simplest way to fix this is
to apply the Gaussian kernel smoother one last time. The final image is
shown on the right in Figure 5.5. It now seems quite acceptable, given
the limitations of the telescope and the atmosphere we had to image
through.
3. Digital Development
As discussed earlier, images that cover a large range of brightness
levels are difficult to display effectively in a print image or on a com-
puter screen. The problem is that white is just white when we’d like it
to be dazzlingly brilliant, squint-your-eyes, white. Perhaps you’ve seen
old art deco paintings of a night scene in which various street lights and
stars have little light bulbs embedded in the painting to make these ob-
jects appropriately bright. Artists have known for a long time that white
in a painting isn’t bright enough. Perhaps someday computer screens
and maybe even the pages of books will be able to shine brightly at
the reader. In this case, we would not need to pursue special image
processing techniques to bring out the entire range of brightnesses. In
the meantime, we must be clever and figure out how to display images
nicely in print.
In this section and the next we confront this issue of how to display
images having a huge range of brightness levels. The best example of an
astronomical object with a large range is M42, the great nebula in Orion.
Figure 5.6 shows a calibrated image of M42 stretched two ways. The
raw image has pixels with as few counts as 6 and as many as 48721. If
we were to display the image using a linear stretch over this entire range,
all we would see are the four bright central stars, called the Trapezium,
and a few other bright stars. The nebulosity would be completely lost.
The two images shown in the figure are stretched 0–500 and 0–60. Even
here we see each of these stretchings bring out interesting details that
are lost in the other.
How can we restretch the image so that all the interesting detail
can be displayed at once? The first thought is to use a gamma stretch
with a gamma value less than 1. Figure 5.7 shows the result obtained
using γ = 0.2. It is much improved, but the interesting details in the
bright region, nearest the Trapezium, have become a little flat because
they must be depicted using gray-scale values over a reduced range.
Essentially, each of the two images in Figure 5.6 had values 0–255 in
which to display the result, but the stretched image gets about half the
range, say 0–127 for the dim part of the image and 128–255 for the
bright part.
The problem here is that given any two pixels in the image, say A
and B, if pixel A is brighter than pixel B in the original image, then it
must still be brighter in the gamma stretched image. And, this is true for
any value of gamma. This preservation of relative brightness over the
entire image puts a very strong constraint on what can be done with the
raw image. In fact, this property must be preserved locally but it need
not be preserved throughout the entire image. For example, it would
be better to have a range of say 0–191 for the dim parts of the image
and a range of 64–255 for the bright parts. Of course, these ranges have
a significant overlap which could make the final image look “wrong”
but if we make a smooth transition from the dim regions to the bright
regions, then the final result will still look right.
It might sound difficult to find a technique that will have this prop-
erty of preserving relative brightness only locally, but it turns out that
it is quite simple. The basic method is similar to unsharp masking but
instead of subtracting a blurred version of the image, we divide by it.
Well, that’s almost it. We just need to make two small changes.
The first change is that we need to add some positive constant, let’s
call it b, to the smoothed image before dividing. Without this constant,
every part of the image, both the bright parts and the dim parts, would
come out with the same general brightness level. This is not what we
want—it goes too far. We still want the dim parts to appear somewhat
dimmer overall than the bright parts. Adding a constant reduces the ef-
fect of the division and effectively gives us an overlap of ranges without
it being complete. That is, we get something like 0–191 for the dim
parts and 64–255 for the bright parts. Of course, the exact amount of
the overlap depends on the constant b. The software packages that im-
plement this method usually suggest a good value for b based on the
image data but at the same time allow the user to change it to a higher
or lower value as desired.
[Figure: a × image / (smoothed image + b) = the digitally developed result]
The second change addresses the fact that simple division of two
similar images will result in numbers that are fractions averaging about
one. Some fractions will be smaller than one and others larger than
one but the average value will be about one. We should multiply all
the values by some positive scale factor, call it a, to make the average
some reasonably large value. Some programs save images in such a
way that fractional pixel values are saved as fractions. But most don’t
do this. Instead they round each fraction to the nearest whole number.
If rounding is going to happen, it is crucial to scale up the fractions.
So, that is the method. It was invented by Kunihiko Okano. He
called it digital development processing or DDP since it imitates the
behavior of the photographic development process.
Figure 5.8 shows the result of this process on M42. As the method
works best when the background pixels are close to zero, we first sub-
tracted an appropriate background level from the raw image of M42.
Then, we applied the Gaussian smoother to make a blurred copy of the
image. Next, we added a constant (40) to the blurred image. Finally, we
divided the original image by the blurred image and scaled the result by
1000 to make the final numbers span a reasonable range of values.
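The whole recipe (blur, add b, divide, scale by a) fits in a few lines. A sketch, assuming the blurred copy and the background subtraction have already been done; a = 1000 and b = 40 follow the M42 example:

```python
import numpy as np

def ddp(image, blurred, a=1000.0, b=40.0):
    """Digital development: divide by the blurred copy offset by b, scale by a."""
    return a * image / (blurred + b)

# In a smooth region the image and its blur agree, so the output is
# roughly a * v / (v + b): dim and bright areas are pulled together.
region = np.array([[100.0, 100.0], [10000.0, 10000.0]])
out = ddp(region, region)
print(np.round(out).tolist())   # [[714.0, 714.0], [996.0, 996.0]]
```

The compression is visible in the toy numbers: a 100:1 ratio of input brightness becomes roughly 1.4:1 in the output, yet the dim region is still the dimmer one, which is exactly the partial-overlap behavior the constant b buys us.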
The resulting image looks much better but it has unnatural dark ha-
los around the bright stars. These halos can be minimized by a care-
ful choice of the smoother one uses to make the blurred image—the
Gaussian smoother we used was not the best choice. Implementations
of DDP in commercial image processing software tend to do a better
job picking an appropriate smoother but the halos are still often evident
in images having very bright stars embedded in background nebulosity.
In fact, DDP tends to work best on globular clusters (where there is no
background nebulosity) and on galaxies (where embedded stars usually
aren’t too bright). Star birth regions, such as the Orion nebula pose the
biggest challenge for DDP. In the next section, we'll give a minor vari-
ation on the method that works well even in the presence of embedded
bright stars.
There is a second benefit to using DDP. Not only does it accommodate
wide ranges of brightnesses, it also sharpens the image much in the
same way that unsharp mask does. (In fact, DDP is roughly equivalent
to applying a logarithmic stretch to the original image, unsharp masking
that image, and then undoing the logarithmic stretch.) The sharpening
effect is quite strong and is the main reason that DDP is very effective
on globular clusters. As an example, Figure 5.9 shows how it does on
globular cluster M5.
centered on the pixel. Figure 5.10 shows the result obtained when using
a 7 × 7 rectangle around each point. This method loses the sharpening
aspect of DDP but retains the rescaling of brightnesses so that the final
image shows details in both the bright and the dark parts of the original
image.
One last alternative is to use the median value in a rectangle centered
on each pixel rather than the minimum. The median of a set of num-
bers is that number for which half of the remaining numbers are larger
than it and the other half are smaller. Since the average pixel value
in a neighborhood of a star tends to be skewed toward the maximum
value, we find that the median gives a value between the minimum and
the average. Hence, using the median gives a result that is somewhat
in between DDP with average and DDP with minimum. It stretches
brightness nicely and it sharpens the image somewhat without making
dark rings around the bright stars. The result for M42 is shown on the
right in Figure 5.10.
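The median variant is easy to sketch. This implementation is mine; the 7 × 7 window size follows the rectangle example in the text:

```python
import numpy as np

def local_median(image, size=7):
    """Median over a size x size window centered on each interior pixel."""
    h = size // 2
    out = image.astype(float).copy()
    for i in range(h, image.shape[0] - h):
        for j in range(h, image.shape[1] - h):
            out[i, j] = np.median(image[i - h:i + h + 1, j - h:j + h + 1])
    return out

# A bright star on a flat background: the median ignores the outlier,
# so a DDP divide by (median + b) does not carve a dark ring around it.
field = np.full((15, 15), 100.0)
field[7, 7] = 5000.0
print(local_median(field)[7, 7])   # 100.0
```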
5. Deconvolution
As we saw earlier, convolving an image with a Gaussian kernel pro-
duces a blurred image. Convolution is a fundamental concept. In fact,
it is so fundamental that one can view the blurry imperfections in a
raw image as a convolution of a perfect image with a smoothing kernel
representing the blurring caused by the diffraction properties of light
combined with other sources of blur such as atmospheric turbulence, in-
accurate tracking/guiding, and even bad focusing. The kernel is called
the point spread function or psf. It is easy to get an estimate of the psf—
simply look at the image of any star. A perfect image of a star would
be just a point of light. The fact that it is spread out in all real images
is a consequence of all the blurring factors mentioned above. The star’s
image, after normalizing so that the total sum is one, is precisely the psf.
If we assume that the psf is known, it is natural to ask whether the
convolution process can be undone to recover the underlying perfect
image. Undoing convolution is called deconvolution. If the blur is ex-
actly the result of convolution with a known psf, then deconvolution
will recover the perfect, unblurred image. However, in practice, the
psf is never known exactly and in addition to blur in the image there
is also graininess resulting from the discrete nature of counting pho-
tons. The graininess gets amplified when deconvolving. Hence, it is
undesirable to completely deconvolve an image. Instead, one looks for
iterative procedures that slowly move from the original blurred image to
the completely deconvolved image. Then, one can stop the process after
just a few iterations—enough to improve the image but not so many as
to amplify the graininess and other imperfections to the point of annoy-
ance. There are three popular iterative procedures: Richardson–Lucy
deconvolution, VanCittert deconvolution, and maximum entropy decon-
volution.
We will only describe in detail the first method, Richardson–Lucy
deconvolution, as this is the particular procedure that achieved wide
acclaim from its success in restoring images from the Hubble
Space Telescope in its early years, when its optics were flawed. The first
step of the process is similar to but a little bit more complicated than
DDP. The first step proceeds as follows.
(1) Make three copies of the original image; call them numerator,
denominator, and result.
(2) Apply the psf-based kernel smoother to denominator.
(3) Divide each pixel in numerator by the corresponding pixel
value in the blurred image denominator.
(4) Apply the psf-based kernel smoother to the divided image numerator.
(5) Multiply each pixel in result by the smoothed ratio image
in numerator.
(6) Now, result contains the result of the first iteration of the
Richardson–Lucy deconvolution.
This first iteration is summarized in Figure 5.11.
Subsequent iterations are similar except that the image from the cur-
rent iteration is used for the left-hand image and for the denominator
image. For example, the second iteration is as shown in Figure 5.12.
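The six steps collapse into a short loop. A sketch of my own, assuming a symmetric psf (so the same blur serves for both smoothing steps) and adding a tiny constant to avoid division by zero:

```python
import numpy as np

def blur(image, psf):
    """Direct same-size convolution with a small psf kernel (zero-padded edges)."""
    h, w = psf.shape
    pad = np.pad(image, ((h // 2, h // 2), (w // 2, w // 2)))
    out = np.zeros_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(psf * pad[i:i + h, j:j + w])
    return out

def richardson_lucy(observed, psf, iterations=10):
    """The iteration described in the text, for a symmetric psf."""
    result = observed.astype(float).copy()
    for _ in range(iterations):
        denominator = blur(result, psf)            # smooth the current estimate
        ratio = observed / (denominator + 1e-12)   # divide, pixel by pixel
        result = result * blur(ratio, psf)         # smooth the ratio, multiply in
    return result

# Blur a single bright point with the 3x3 Gaussian kernel, then deconvolve.
psf = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 16.0
perfect = np.zeros((9, 9))
perfect[4, 4] = 1.0
blurred = blur(perfect, psf)
restored = richardson_lucy(blurred, psf)
# The restored image re-concentrates light toward the central pixel.
```

Stopping after a modest number of iterations, as the text recommends, leaves the point partially re-concentrated without amplifying the grain.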
As with the earlier sharpening techniques, there is a danger in over-
doing it. Dark rings will start to appear around stars, areas of medium
brightness will appear to have too much contrast, and background ar-
eas will become noisier. There are various techniques to limit these bad
behaviors. The simplest is to subtract as much of the background as
possible before applying the technique. If any real data was taken away
with the subtraction, it is always possible, even desirable, to add the
background back in at the end.
taken with a focal reducer in order to get the largest possible field of
view. But the price one pays is to see these imperfections. In fact, a
focal reducer that is not well matched to the optical system can, and
often does, contribute to the problem. In this section, we show how one
can address these imperfections using appropriate image-processing—
it’s cheaper than buying a new telescope.
Figure 5.14 shows the result of Richardson–Lucy deconvolution ap-
plied to the image from Figure 5.13. Note that the nebulosity is much
improved and that the central stars have been sharpened nicely, but at
the same time the stars in the corners have gotten much worse. The rea-
son is that deconvolving an oblong star using a round psf produces not
a pin-point star but rather a little line segment. This is not good. But, it
the image. So, I wrote my own. Figure 5.15 shows a model of the
psf for each of 100 stars spread out over the field of view. Note that
the stars’ shapes approximately match what we saw in the Veil nebula
(Figure 5.13).
Using a psf model like the one shown in Figure 5.15 in a Richardson–
Lucy deconvolution code, I was able to recover the deconvolved image
shown in Figure 5.16. I think it is safe to say that the final image is
much improved. As a final remark, we note that deconvolution only
works well on long-exposure images (i.e., images with very little grain-
iness).
Stories
1. Jupiter
I was interested in astronomy in junior high and high school. But at
that time, I had no money so my interest was mostly restricted to books
and visits to the local planetarium. After high-school, I devoted my en-
ergies to other activities and my interest in astronomy lay dormant for
many years. Then in October of 1998, a friend of mine, Kirk Alexan-
der, invited me to an evening of stargazing at the annual Stella-Della
star party. This was my first night ever of real observing. Kirk set up his
telescope and pointed it at Jupiter, which was just rising in the east. We
then walked around the large field and looked at various objects through
a variety of telescopes. At times during the evening we would return to
his telescope for another peek at Jupiter. At one of those times, I noticed
a round black spot on the face of Jupiter. I asked Kirk if maybe there
might be some dirt on the eyepiece. He took a look and immediately
identified the black spot as a shadow transit of one of Jupiter’s moons.
This was really exciting to me—to think that we were looking at a to-
tal solar eclipse taking place on another planet. Over the next hour or
more we watched as the region of totality, the black spot, moved slowly
across the face of the planet. I decided then and there that I would buy
myself a telescope.
Just like so many before me, I made my telescope decision too
hastily. Positively influenced by what I saw through Kirk’s 7” Questar,
I decided to buy the smaller 3.5” version. I also subscribed to Sky
and Telescope magazine. For the next few years, every month I would
check the page in Sky and Telescope that lists times of shadow transits.
Those times that seemed like reasonable observing times for someone in
New Jersey were then entered into my computer’s calendar program so
that I would be automatically reminded of the event a few hours ahead
of time. I must have set up my new telescope dozens of times over a
few years with the sole purpose of observing a shadow transit. I was
never successful. To this day I don’t know why. Was I converting from
Universal Time to Eastern Standard Time incorrectly? Was my 3.5”
2. Starquest 2002
Every June, the Amateur Astronomers Association of Princeton (AAAP)
hosts a two-night star party at a fairly dark site in northwestern New Jer-
sey. I first attended this weekend affair in June of 1999. It was great fun
to take my recently bought Questar to its first star party. Having myself
never been to Starquest, I didn’t know exactly what to expect. When I
got there I learned that AAAP member Bill Murray has a tradition of
running an observer’s challenge. It is different every year but generally
consists of a list of about 50 interesting things to observe. Attendees
are given the list—actually there are two lists, a beginners list and an
advanced list—and are instructed to record the time that they observe
the various items on the list. At the Sunday morning breakfast, those
observers who have logged 25 of the items are awarded an “AAAP Ob-
servers” pin. I attempted the beginner’s list and had fun observing some
things that I’d never before seen through my Questar—or any telescope
for that matter. The most memorable discoveries were the Double Clus-
ter and the Wild Duck Cluster. It also became apparent just how faint
galaxies appear in a 3.5” f/15 instrument. On Sunday, I got my first
observer’s pin.
I had such a good time at my first Starquest that I decided to make
it a tradition of my own to attend this party every year. June 7–9, 2002,
was my first Starquest party after getting into CCD imaging. I brought
my telescope and imaging equipment, i.e., camera and computer, to the
party, not really sure whether I’d devote myself to visual observing or
CCD imaging. I was pleasantly surprised to learn that Bill had selected a
list of about 50 globular clusters for the observer challenge. This was
perfect for me since globular clusters are objects that can be imaged
nicely with fairly short exposures. The rules were that an observer must
log 25 of the globulars and that no more than 15 of them can be Messier
objects. I figured that it might actually be possible to image 25 globular
clusters in two nights, so this is what I set out to do. My plan was to take
25 six-second unguided exposures of each globular that I could find.
That’s 2.5 minutes of exposure time. Since each image takes 6 seconds
to download, it would take 5 minutes to do one imaging sequence. I
allowed myself 5 minutes to find each target and another 5 minutes to
“compose” the shot. If I could keep to this schedule, I could do four
clusters per hour.
On Friday night I managed to image 16 of the easier clusters. On
Saturday night, I imaged 11 more clusters and revisited one cluster,
NGC 5053, from the previous night. Sunday morning I discovered that
my image of NGC 6517 didn’t actually contain the cluster. Looking
at star charts, I discovered that I’d missed the cluster by only 5 arcmin-
utes. Also, due to inaccurate counting in the middle of the night, I ended
up short on non-Messier objects. Even if I had correctly imaged NGC
6517, that would have only been number nine, not number ten like I had
thought. My final tally was 18 Messier clusters and 8 non-Messiers.
Nonetheless, Bill awarded me a pin for the effort. Thumbnail im-
ages of the 26 clusters are shown in Figure 6.2.