
GET 1042: Sky and Telescopes

Astrophotography

1 Photography
Photography is the process in which light is collected to record an image.
• Light enters a camera system through the objective element.
• Gets focused onto a focal plane where an image forms.
• A light sensitive image sensor records the image.
[Figure: rays from a distant object are focused by a lens (objective) onto the retina or image
sensor; labelled are the focal length (f), the objective diameter (D), and the focal ratio
(f/number) = f/D.]

The process of taking a photograph usually goes as follows:


1. Open the shutter or activate the sensor.
2. Leave the shutter open for a preset amount of time (the shutter speed or exposure time).
3. Close the shutter (if there is one) and read out the data from the sensor.
4. Clear the image from the sensor (if necessary) and prepare for the next exposure.
Photography allows more light to be collected!
• Your eye constantly “reads out” the accumulated information on the retina.
• Cameras allow photons to accumulate over a long time so fainter details can show up.
• Photography is sometimes the only way to observe fainter objects: you may have found
an object that is invisible to your eyes, but it will show up in a sufficiently long
exposure.
• A good finding chart that shows the expected field can help here!

Camera Controls
The simplest cameras just have a button to take a picture, with the internal settings all auto-
matic. For astrophotography, we will want more control. The important settings are as follows:
Shutter Speed Also known as exposure, this is how long you expose the sensor to light. The
longer the camera can stare at something, the more light you can collect.
Sensor Gain This controls how the camera converts light to numbers. For most consumer
cameras, this is calibrated to the film speed scale in which case it is known as the ISO
setting or exposure index. If it is not calibrated, it will just be known as gain (in the case
of webcams). More sensitive settings often give more image noise.
Aperture Otherwise known as the f-stop or f-number. For consumer level cameras, there is
an adjustable diaphragm inside the lens that allows you to control the effective objective
diameter and make it smaller to let in less light (useful in the day). For telescopes, we
want more collecting area so the f-number of a telescope is fixed.
A good interactive site to understand how these factors play together is at
http://photography-mapped.com/interact.html. To see it in action without a DSLR, some
phone cameras may also allow manual controls (sometimes through a third party app).
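
To get a feel for how these controls trade off against each other, here is a minimal sketch of a simplified exposure model. The numbers and the function are made up for illustration; real sensors are not perfectly linear and noise is ignored.

```python
# Relative brightness of a shot, up to a constant: the light collected scales with
# the exposure time and with 1/(f-number)^2, and the ISO/gain setting scales the
# recorded signal. A simplified model that ignores noise and nonlinearity.
def relative_exposure(shutter_s, f_number, iso):
    return shutter_s * (1.0 / f_number**2) * (iso / 100.0)

# Two different settings that give roughly the same image brightness:
print(relative_exposure(1/100, 4.0, 800))  # short exposure at f/4
print(relative_exposure(1/25, 8.0, 800))   # 4x longer, but stopped down two stops
```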

2 Image Sensors
Image sensors record the image formed on the focal plane by a variety of methods. The image
sensor may be a semiconductor device, chemical layer or biological material.
• Semiconductor detectors consist of light sensitive pixel sites that convert light into an
electrical signal.
• Photographic film (or plate) is coated with an emulsion of silver halide (often silver chlo-
ride or silver bromide) particles that are light sensitive.
• The retina consists of light sensitive cells that are connected to the optic nerve.
The recorded image on a sensor can then be stored or processed.
• In a semiconductor sensor, the image is read out pixel by pixel.
• In photographic film, the image remains on the film.
• In the retina, nerve endings transmit image data to the brain.
Most astrophotography now uses semiconductor detectors.
• Photographic plates were common in the past but now have been replaced by large for-
mat CCD detectors.
• Consumer level photographic film has been mostly replaced by digital sensors of varying
sizes.

Sensor and Pixel Size


Digital image sensors (and some film types) are usually described by a sensor format, which
specifies the sensor size, and a pixel count (megapixels). Common sensor formats are
“Full Frame” This is the full active area of a 135 format (35 mm) film frame, at 24 mm × 36 mm.
APS-C These formats try to approximate the size of an Advanced Photo System Classic (APS-
C) negative frame, at 25.1 mm × 16.7 mm. This is a common format in many lower end
DSLRs and mirrorless cameras.
Four Thirds system This sensor format has an active area of 17.3 mm × 13.0 mm which ap-
proximates the size and active area of a 4/3 inch TV camera tube sensor. This is found in
four-thirds and micro four-thirds (mirrorless) cameras.
“Inch system” Smaller sensors are specified with sizes in fractions of an inch. This size is
measured along the diagonal of the sensor and includes “packaging” so the actual sensor
is smaller. Webcams and phone cameras usually have 1/3 inch or 1/4 inch sensors.
The sensor size tells you the field of view of the camera, or the plate scale. This comes from the
sensor size and the focal length of the telescope:

Plate Scale = 2 × tan⁻¹( Sensor Size / (2 × Focal Length) ) ≈ 57.3° × Sensor Size / Focal Length
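
As a quick worked example, here is the formula above applied to an assumed setup (a 1000 mm focal length telescope with a “full frame” sensor); the numbers are only for illustration.

```python
import math

focal_length = 1000.0          # mm, assumed telescope focal length
for side in (36.0, 24.0):      # mm, the two sides of a "full frame" sensor
    exact = 2 * math.degrees(math.atan(side / (2 * focal_length)))
    approx = 57.3 * side / focal_length
    print(f"{side:4.0f} mm side -> {exact:.2f} deg (approx {approx:.2f} deg)")
```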


However, while larger sensors may seem better, sometimes the telescope is not big enough to
light up a large sensor!
• The telescope may not have enough field of view for the entire sensor (the edges of the
picture will be black). This is known as vignetting.

Colour Cameras
All consumer level cameras take pictures in colour.
• However, the pixels in CCDs and other sensors respond to most of the visible spectrum (and
some infrared) without distinguishing colours, so a bare sensor is inherently monochrome.
• To get a colour image, filters are placed over each pixel to make it sensitive to different
colours.
• The most common of these filters is the Bayer format filter, where each group of 4 pixels
(one red, two green and one blue) comes together to give 3 colours.
• To actually get 3 colours per pixel, interpolation is used to fill in the missing information.
This is where software can reconstruct the missing colour information from the adjacent
pixels.
[Figure: a Bayer format colour sensor compared with a monochrome sensor.]

The use of interpolation for colour information means that the spatial resolution for colour
information is half that of the spatial resolution of monochrome information.
• The human eye has specific sensors for red, green, blue (cones) and brightness (rods).
People will see changes in brightness better than changes in colour.
• The loss of colour detail from interpolation is therefore largely imperceptible to the human eye.
• Interpolating colour information is a very common practice in consumer level imaging
and is used for colour TV broadcasts: The colours you see on TV are all interpolated.
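
To see how the missing colour information is filled in, here is a minimal sketch of a simple demosaic, assuming an RGGB Bayer layout and plain neighbour averaging (real cameras use more sophisticated algorithms).

```python
import numpy as np
from scipy.ndimage import convolve

def demosaic_bilinear(raw):
    """Fill in the missing colours of an RGGB Bayer mosaic by averaging the
    nearest pixels of the same colour (a very simple demosaic)."""
    h, w = raw.shape
    masks = {c: np.zeros((h, w), dtype=bool) for c in "RGB"}
    masks["R"][0::2, 0::2] = True   # red on even rows, even columns
    masks["G"][0::2, 1::2] = True   # two greens per 2x2 cell
    masks["G"][1::2, 0::2] = True
    masks["B"][1::2, 1::2] = True   # blue on odd rows, odd columns

    kernel = np.ones((3, 3))
    rgb = np.zeros((h, w, 3))
    for i, c in enumerate("RGB"):
        known = np.where(masks[c], raw, 0.0)
        # Average of the known same-colour neighbours within a 3x3 window.
        interpolated = (convolve(known, kernel, mode="mirror") /
                        convolve(masks[c].astype(float), kernel, mode="mirror"))
        # Keep the measured value where the pixel really has this colour filter.
        rgb[..., i] = np.where(masks[c], raw, interpolated)
    return rgb
```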

3 The Camera and Telescope as a System


Astrophotography often involves putting a camera on the telescope. To get the most out of the
camera-telescope system, the camera and telescope should complement each other.

Resolution
Resolution is defined as the smallest feature an optical system can resolve.
• This is measured in angular units (most commonly arcseconds for astrophotography).
• The “resolution” that you are familiar with (number of pixels) comes from the idea that
if you have more pixels in a picture, you can resolve finer details. Note that this is not
always so!


Pixel Scale and Resolution


The telescope has a native angular resolution that specifies the smallest angular feature it can
resolve. For visible light (green), this is the larger of 1 arcsec or
θres = 138.4 / Objective Diameter (mm)  arcsec
• The formula gives the resolution limit of the telescope
• The 1 arcsec limit gives the resolution limit of the atmosphere, and may depend on the
weather. This is known as seeing.
However, on a camera, each pixel can see an angular patch of sky given by the pixel scale:
θpix = 206265 × Pixel Size (mm) / Focal Length (mm)  arcsec
From this we can compare the two competing constraints (θres from the telescope and θpix from
the camera) on resolution:
θres < θpix : The pixel scale is larger than the angular resolution of the telescope. Even though
the optics have very good resolution, the camera cannot see the fine details because the
pixels are too large. This is known as undersampling. The images could be sharper with a
suitable Barlow lens. (but that may have other side effects...)
θres > θpix : The pixel scale is smaller than the angular resolution of the telescope. The optics
in the telescope are not good enough to resolve finer details. The very fine details will be
blurred and will be spread out over multiple pixels. Even though the camera has very
good resolution, the final images will not be as sharp as the sensor can go. This is known
as oversampling. However, some image processing methods can still extract information...
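
As a sketch, the comparison for a hypothetical setup (200 mm aperture, 1000 mm focal length, 5 micron pixels) might look like this, with the 1 arcsec seeing floor taken from the notes above.

```python
aperture_mm = 200.0        # assumed objective diameter
focal_length_mm = 1000.0   # assumed focal length
pixel_mm = 0.005           # assumed pixel size (5 microns)

theta_res = max(1.0, 138.4 / aperture_mm)         # arcsec: telescope or seeing limit
theta_pix = 206265 * pixel_mm / focal_length_mm   # arcsec per pixel

print(f"theta_res = {theta_res:.2f} arcsec, theta_pix = {theta_pix:.2f} arcsec")
if theta_res < theta_pix:
    print("Undersampled: the pixels are too coarse for the optics.")
else:
    print("Oversampled: fine details are spread over several pixels.")
```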

Étendue
The Étendue of the system describes the illumination of the sensor-telescope system.
• This describes how much light is collected and sent to the focal plane.
– A large field of view means that light is collected over a large patch of sky.
– A large collecting area means more area for starlight to fall on.
• A larger Étendue means that more light is concentrated onto the focal plane (and expo-
sures are shorter).
Étendue for a camera-telescope system depends on the pixel scale (for the field of view) and
collecting area (for the amount of light collected). This is proportional to

(Pixel Size / f/#)²
This means slower optics (larger f /#), which have a longer focal length or smaller objective,
will not concentrate light onto a sensor as effectively as faster optics (smaller f /#).
• Faster telescopes are often better when used to image extended objects. However, faster
telescopes also tend to have shorter focal lengths and thus a wider field of view.
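
A minimal sketch comparing the relative étendue (up to a constant) of two assumed setups that share the same pixel size:

```python
# Relative etendue per pixel, up to a constant factor.
def relative_etendue(pixel_size_um, f_number):
    return (pixel_size_um / f_number) ** 2

print(relative_etendue(5.0, 5.0))    # an f/5 system
print(relative_etendue(5.0, 10.0))   # same pixels at f/10: a quarter of the light per pixel
```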

4 Photons to Pictures
There are many steps before starlight becomes a photograph. We will focus on semiconductor
detectors because that’s what most people now use. In general, the simplest way to take a
picture is as follows:
1. Open the shutter and let light onto the sensor.


2. Pixels are made of a light-sensitive semiconductor that ideally sets aside an electron (pho-
toelectron) for each photon that falls on it.
3. Close the shutter after the preset exposure time.
4. Transfer the image (in the form of accumulated photoelectrons) out of the sensor.
5. Amplify and measure the electrical signal, and convert it into digital numbers.
6. Organise the numbers and write them into a file on the memory card.
The pixel values in each captured image represent the amount of light falling on the pixel
(plus some noise). The captured image is not always perfect, and at each step of the way,
imperfections and tradeoffs can affect the final image.
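
As a sketch of why longer exposures help, assume a faint star delivers a handful of photons per second to one pixel and that photon arrivals are random (modelled here as a Poisson process); the photon rate and quantum efficiency below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

photon_rate = 5.0         # photons per second reaching one pixel (assumed)
quantum_efficiency = 0.8  # fraction of photons converted to photoelectrons (assumed)

for exposure_s in (1, 10, 100):
    expected = photon_rate * quantum_efficiency * exposure_s
    collected = rng.poisson(expected)   # actual counts scatter about the mean
    print(f"{exposure_s:4d} s exposure -> about {collected} photoelectrons")
```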

Reading Out
Once the exposure is over, the image needs to be transferred off the sensor and digitised.
• In this step, the photoelectrons are counted and converted to a digital number that represents
the pixel brightness. The number is then written to a file.
• To count the photoelectrons, the signal needs to be processed and amplified.
This results in a readout gain and bias level:
Readout Gain This is the amplification factor, expressed as how much light corresponds to a
digital unit. This is controlled by (and hidden behind) the camera’s ISO setting.
Bias Level In order to reliably measure and record the signal, the camera will add a number
to the pixel value. This is known as the bias level and represents black in the image.
The readout process is not completely perfect and will introduce a small amount of noise (the
read noise) into the image.
• This noise is not noticeable in ordinary photography (lots of light), but can make a big
difference in astrophotography (not enough light).
• In practice, this means the bias level will fluctuate slightly about an average value.
To measure the bias level, we take a picture with zero (or as short as possible) exposure time
and the shutter closed (or lens cap on). This is known as a bias frame (Some authors use offset
frame or zero frame). The bias frame defines what is a black pixel and sets the black point of the
image.

When reading out the image, the camera will add a fixed amount to the pixel value so that it
can reliably record the signal. This is known as the bias level and means that black in the image
does not correspond to a 0 pixel value. We measure this by taking a picture of negligible
duration with the shutter closed or lens cap on. This is known as a bias frame and tells us
what pixel value represents black.
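
A minimal sketch of the readout step, using made-up camera constants for the gain, bias level and read noise:

```python
import numpy as np

rng = np.random.default_rng(1)

gain = 2.0         # photoelectrons per digital unit (ADU), assumed
bias_level = 500   # ADU added so the signal never dips below zero, assumed
read_noise = 5.0   # ADU of random scatter from the readout electronics, assumed

photoelectrons = np.array([0.0, 100.0, 1000.0, 10000.0])
adu = photoelectrons / gain + bias_level + rng.normal(0.0, read_noise, photoelectrons.shape)
print(np.round(adu))   # even a pixel with no light reads out near the bias level
```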

Dynamic Range
During an exposure, photoelectrons are stored on pixels on the chip.
• The number of photoelectrons that a pixel can hold is known as the full well capacity.
• When the pixels are full (too much light → too many photoelectrons), the pixel becomes
saturated.


The full well capacity, read noise and largest value possible for a pixel give the dynamic range
of the sensor. This is the ratio between the brightest object it can record (without saturating) to
the faintest object it can reliably record.
• The faintest object it can reliably record depends on the read noise: Too faint (not enough
electrons) and the signal gets confused with the read noise. This is also known as under-
exposed and the pixel will appear black.
• The brightest object it can record depends on the full well capacity: Too bright and the
pixel is full (all white or saturated). This is also known as overexposed.
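
A quick sketch of the dynamic range calculation for a hypothetical sensor (50,000 electron full well, 5 electrons of read noise):

```python
import math

full_well = 50_000   # electrons before the pixel saturates (assumed)
read_noise = 5.0     # electrons; sets the faintest reliably recorded signal (assumed)

dynamic_range = full_well / read_noise
print(f"Dynamic range ~ {dynamic_range:.0f}:1, "
      f"or {math.log2(dynamic_range):.1f} stops "
      f"({20 * math.log10(dynamic_range):.0f} dB)")
```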

To avoid underexposed images we use long exposures.


To avoid overexposed images we limit the exposure time of each picture. To collect more light
than is possible in a single picture, we can take more pictures instead and combine them later
in software in a process known as stacking.
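
A minimal sketch of stacking, assuming the frames are already aligned and calibrated:

```python
import numpy as np

def stack(frames, method="mean"):
    """Combine aligned, calibrated frames. Averaging N frames keeps the signal
    while the random noise drops by roughly sqrt(N); a median combine also
    rejects outliers such as satellite trails or cosmic-ray hits."""
    cube = np.asarray(frames, dtype=float)
    return np.median(cube, axis=0) if method == "median" else np.mean(cube, axis=0)
```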

5 Imperfections
There are always imperfections in the image:
• Manufacturing processes cannot make precisely perfect sensor chips so each pixel may
respond to light differently.
• Dust on the optics can also degrade the sensitivity of specific areas.
• Optical aberrations may mean that the sensor is not evenly illuminated. In many systems,
the centre of the frame is brighter than the edges (known as vignetting).
To work around these irregularities, we have to characterise each pixel of the camera-optics
system.
• To do so, we measure the response of each pixel to a uniform light source. This involves
taking flat frames.
• Flat frames can be taken by pointing the telescope to the twilight sky and taking a picture.
Stars are not out yet, and the sky (in the small field of view) is mostly evenly illuminated.
• A flat frame measures the response of the sensor-optics combination at each pixel.
• We can correct for vignetting and other uneven illumination problems with a good flat
frame.
We will save the flat field images for later post-processing. These flat field images allow us to
correct for slight differences in pixel sensitivity.

For various reasons, each pixel may respond to light slightly differently. To account for these
differences, we take a picture of a uniformly illuminated patch of sky. This helps to charac-
terise each pixel and is known as a flat frame.
A flat frame contains the light response of each pixel, dark noise and the bias level.
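
A sketch of one common recipe for turning a set of flat frames into a master flat (the exact steps vary between processing packages):

```python
import numpy as np

def master_flat(flat_frames, master_bias):
    """Bias-subtract, median-combine and normalise a stack of flat frames."""
    flats = np.asarray(flat_frames, dtype=float) - master_bias
    combined = np.median(flats, axis=0)      # median rejects stars or outliers
    return combined / np.mean(combined)      # normalise so a typical pixel reads 1.0
```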

6 Long Exposures
The stars are faint, so long exposures are needed before enough photoelectrons build up in the
sensor. This can range from a few seconds to tens of minutes. Most DSLRs have a maximum
exposure of 30 minutes.


During the exposure, the sensor is connected to the power supply and powered up.
• Electrons may move about because they are warm (room temperature).
• These electrons may find their way into the sensor sites and join the photoelectrons. This
is known as a dark current because it will appear even in the dark, with the lens cap on.
• Dark current will occur at random. However, we can measure the average dark current
(in electrons per hour).
The dark current depends on the temperature of the sensor chip.
• A colder chip will have less dark current: Colder electrons have less thermal energy to
move about.
• To reduce dark current, sensors can be cooled.
• A DSLR can be cooled by blowing air over it, or letting it cool down for a while after
taking a long exposure.
• Most commercially available cooled sensors are thermoelectrically cooled.
– This uses the thermoelectric effect to remove heat from a cold pad and transfer it to
a warm pad.
– A heatsink on the warm pad then dissipates the heat out into the air.
Although we can try to reduce dark current by cooling the sensor, we cannot completely re-
move it: The sensor has to be at absolute zero for the dark current to be zero.
• We will have to try and subtract out an averaged dark current. This dark current shows
up in the image in the form of noise known as dark noise.
• In a DSLR image, the noise shows up as bright coloured pixels.
• To get an averaged dark current, we take a number of exposures with the shutter closed
and the sensor at the same temperature. This gets us a dark frame.
• Each exposure should have the same duration as the sky exposure (or we will have to
scale the dark noise based on the exposure time).
• The dark frames are then averaged to get a master dark frame.
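
A sketch of building the master dark, with an optional rescaling for the case (mentioned above) where the dark exposure time does not match the sky exposure; the rescaling assumes the dark frame also contains the bias level, so only the dark-current part is scaled.

```python
import numpy as np

def master_dark(dark_frames):
    """Average dark frames taken at the same exposure time and temperature."""
    return np.mean(np.asarray(dark_frames, dtype=float), axis=0)

def scale_dark(master_dark_frame, master_bias, dark_exposure_s, sky_exposure_s):
    """If the dark exposure differs from the sky exposure, scale only the dark
    current (not the bias level) by the ratio of exposure times."""
    dark_current = master_dark_frame - master_bias
    return master_bias + dark_current * (sky_exposure_s / dark_exposure_s)
```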

Dark frames are not always necessary:


• This is usually when the exposure time is sufficiently short that the dark noise is negligi-
ble.
• A suitable exposure time will depend on the sensor, and the temperature of the sensor
during the exposure.
• Some DSLRs attempt to suppress dark current internally.

During the exposure, thermal movements of electrons in the system contribute to the dark
noise. We can measure and subtract the average amount of dark current by taking long
exposures with the same duration and temperature, and with the shutter closed. This is
known as a dark frame.
A dark frame contains dark noise, dark current and the bias level: The dark current photo-
electrons need to pass through the same readout electronics as the image.


7 Deep Sky Astrophotography


For deep sky astrophotography, we want to take long exposures. However, there are many
constraints that limit the effective exposure time of a picture:
• The camera may have a limit on the maximum exposure time
• Skyglow may saturate the sensor very quickly
• Parts of the object may be too bright and saturate the sensor, while fainter regions are still
dark
• The mount may have difficulties tracking for so long
To get around these constraints, we take multiple exposures and combine them in software. At
the same time, since we are doing software processing, we can also apply the bias, dark and flat
frames to get better single images:
• We subtract the bias or dark frames from the image frame. This removes the average bias
level and the average dark level from the images (and removes the unwanted contribution
from the camera’s electronics).
• We do the same for the flat frame because we are interested in how the sensor actually
responds to light.
• We then correct for differences in pixel response by dividing the subtracted image by the
subtracted flat frame. This will also remove dust shadows on the image.
The end result is
Processed Image = [Unprocessed Frame − (Bias Frame OR Dark Frame)] / [Flat Frame − (Bias Frame OR Dark Flat)]

The processing software can do this automatically.
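
A minimal sketch of the formula above, assuming the same master bias (or dark) frame is appropriate for both the light frame and the flat:

```python
import numpy as np

def calibrate(raw, flat, bias_or_dark):
    """Processed = (raw - bias/dark) / (flat - bias/dark), with the corrected
    flat normalised so the calibrated image keeps its original brightness scale."""
    raw = np.asarray(raw, dtype=float)
    corrected_flat = np.asarray(flat, dtype=float) - bias_or_dark
    corrected_flat /= np.mean(corrected_flat)
    return (raw - bias_or_dark) / corrected_flat
```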


After stacking, there may still be other steps. Sensor chips are not always perfect and there may
be bad pixels. These will show up as bright or dark spots.
• This is when the bad pixels (even after processing) really start to show up.
• The best way to avoid bad pixels is to use a different part of the sensor. In astrophotogra-
phy, this means shifting the telescope pointing slightly between exposures so the object
falls on different pixels each time.
• To remove bad pixels from a final image, we can fill in the blank data from another image.
• Sometimes we can interpolate across the bad pixel: This is like making a guess on what
should be on a bad pixel.
• In practice, if the bad pixel is on a blank patch of sky, paint it over in Photoshop.
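
A sketch of the interpolation approach, assuming a map of the bad pixels is already known:

```python
import numpy as np
from scipy.ndimage import median_filter

def patch_bad_pixels(image, bad_pixel_mask):
    """Replace flagged pixels with the median of their 3x3 neighbourhood,
    a simple guess at what the bad pixel should have recorded."""
    neighbours = median_filter(image, size=3)
    return np.where(bad_pixel_mask, neighbours, image)
```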

8 Lucky Shot Imaging


While much of astrophotography involves opening the shutter for a long time, lucky shot imag-
ing does the opposite and takes a lot of quick images of a bright object (such as a planet).
• This is to catch the moments when seeing is good: Seeing is caused by turbulence in the
upper atmosphere, and depends on high altitude winds.
• Exposures are short so that an exposure completes before seeing changes.
• Software then evaluates the resulting images and chooses the best to stack.
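
A sketch of that selection step, using a crude gradient-based sharpness score to pick the best frames; real software also aligns the frames before stacking, which is skipped here.

```python
import numpy as np

def sharpness(frame):
    """Crude sharpness score: blurred (bad-seeing) frames have weaker gradients."""
    gy, gx = np.gradient(np.asarray(frame, dtype=float))
    return np.mean(gx**2 + gy**2)

def lucky_stack(frames, keep_fraction=0.10):
    """Keep the sharpest fraction of the frames and average them."""
    cube = np.asarray(frames, dtype=float)
    scores = np.array([sharpness(f) for f in cube])
    n_keep = max(1, int(len(cube) * keep_fraction))
    best = np.argsort(scores)[-n_keep:]
    return cube[best].mean(axis=0)
```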
The effect of seeing in general is to cause blurring:
• The atmosphere refracts light, and causes the image to jump about.


• On a long exposure, the jumping leads to blurring


• On a video, the jumping is very visible.
• In each frame of a video, the planet may still be slightly blurred, but that is more easily
corrected.
Stacking for lucky shot imaging is not the same as stacking for deep sky objects!
Lucky shot imaging involves very different equipment and requirements from deep sky imag-
ing:
Lucky Shot Imaging                          Deep Sky Imaging
• Bright objects (planets)                  • Faint objects (nebulae)
• Very short (≲ 1/100 s) exposures          • Long (> 5 s) exposures
• Quick succession of many images           • Long sequence of fewer images
• High resolution (diffraction limited)     • Lower resolution (seeing limited)
Cameras used for lucky shot imaging require faster frame rates and smaller file sizes than
cameras used for deep sky imaging.
• Read noise and gain are not so important here: There is much more light available.
• Some cameras have onboard cropping: Set a region of interest (ROI) in the picture and
download only that region for smaller file sizes and faster frame rates.
• Webcams are the more suitable option for lucky shot imaging!
Mount requirements for lucky shot imaging are also different:
• Polar alignment is not so important, and autoguiding is unnecessary.
• Exposures are so short that any trailing will not be seen at all.
