
Choosing a High Definition Image Sensor for Small Gauge Film Transfers

Rumble House Media Group Inc.


May 1st, 2009

Introduction
The age of small gauge film (8mm and Super 8mm) transfers to DVD using 4:3 DV (Digital Video)
technology may very well be drawing to a close as a mainstream service by the latter part of
2010. In light of what high definition (HD) technology promises to deliver, it's not a
stretch to make that prediction. Just as the off-the-wall/VHS film transfer methods used not a
decade ago have mercifully faded away, the much improved Standard Definition (SD) DV
technology will in turn give way to the new kid on the block, High Definition (HD) video, whether or not it's worthwhile. It will just be the expected thing to do.
Regardless of where HD technology leads the film transfer business, it should be noted that
the standard DV resolution of 720 x 480 pixels used in film transfer is marginal at best. This is
particularly true for Super 8 and 16mm film gauges, where the film images can be under-sampled.
Adequate results could be had provided the image sensor was physically big enough,
good post processing occurred, the imaging system had good horizontal resolution (better
than 520 lines) and the attached optics met 1 MP requirements; even so, many DV-based
imaging sensors were rated below 720 pixels, and conversion often resulted in loss of detail.
Elegant pre and post image processing techniques within the camera, or specialized software in
the post production phase in an NLE, did a very good job of masking many sampling-based
errors, but at the expense of hiding the finer nature of the actual image content; an acceptable
tradeoff given the current state of SD-based transfer technology.
Is high definition video, with its greatly increased spatial resolution and dynamic range, set to
carry on where SD left off then? It looks like it. The technical means to faithfully render the
high resolution nature of small gauge film does now appear to exist, provided certain
caveats are met. The availability of larger image sensors, higher pixel densities, better
noise figures and lower cost, higher precision optics is certainly moving in the right direction,
providing the basis for what can be impressive amateur film to digital results.
It does leave a lingering question though: why have a superior imaging system capture and
process amateur film that is in many cases of questionable quality? In other words, why bring out
in finer detail any warts the film may have? The answer, simply, is that regardless of the warts
(which will be there anyway), the higher image resolution and deeper pixel depths that
HD technology can offer can now be used to approach the resolution and contrast ratios of
film itself; a notion not possible in the SD telecine model, where imaging errors were inherently
masked very well and thus of little or no concern.

Making the Case


Assessing what is required for a true HD-based transfer system begins with the image sensor
itself. It must possess enough horizontal and vertical resolution to
produce a faithful digital equivalent of a film frame when the frame is exposed to its light
gathering surface. This means its pixel density must exceed what is needed to resolve the
smallest photo element within the film emulsion.
What limits the resolution of an image sensor is primarily its individual pixel size and density
(there are other properties, but let's leave those out for now); for film, it is the size and quality
of the micro-crystalline structure of its photographic base. So, is there any common ground
between these two very different technologies so that the math on both sides of the equal sign
works? Yes there is, and that is how each defines its resolution. If each can be made to share a
common unit of measurement, we can move forward.
Small gauge amateur film by nature has a high resolution, as defined by the micro-granular
structure and grain size of the silver halide crystals within its emulsion layers; the very
photographic nature of film. Similarly, the resolution of a digital image sensor is defined by its
pixel size and spacing on its semiconductor substrate. Being discrete in nature, a sensor's
physical horizontal and vertical resolution is quite apparent. A sensor's effective resolution is
another issue and will be looked at later. Both film and sensor quality can be judged by the unit
of line pairs per mm, or lp/mm, as defined by MTF (Modulation Transfer Function; see sidebar).
View line pairs as alternating white and black sets of vertical (and horizontal) bars (Figure 1.0).
In an MTF measurement test pattern, these line pairs progressively get narrower until the
imaging system cannot distinguish white from black. When that occurs, a calibrated set of
MTF markings defines the system's resolution capability.

The MTF characteristics published for photographic bases like 8mm film are determined by
imprinting a series of test patterns (alternating black bars at maximum opaqueness and white
bars at maximum transparency) on the film's emulsion layer, with ever decreasing space
between the bars (Figure 1.0).

Figure 1.0

Film Resolution Test Pattern

The film's resolution is ultimately determined at the bar spacings that produce a 50% and a 10%
loss in contrast. This measurement is done in both horizontal and vertical directions; the
photographic grain is random in both size and spacing in both directions. Looking up some old
Kodak data will provide the typical lp/mm figures for the many grades of 8mm film, which we
need in order to move forward.

Sidebar - MTF (Modulation Transfer Function)


All imaging devices have a measurable MTF. The MTF standard provides the means to
measure how well an imaging system can resolve detail. A higher MTF (or higher
frequency response), quoted in percent, means finer detail is resolved in a photo image.

Figure 2.0

MTF Test pattern

Lower MTF numbers indicate a loss in contrast and detail. High contrast occurs when a
white and a black line are well defined with no blurring between them (as the bars get
closer together, as shown towards the right, they trend towards merging into a grey
monotone, indicating the maximum resolution for that imaging component has been
reached). These spatial frequency responses are typically quoted in cycles or line pairs per
millimeter. MTF is measured as a percentage of maximum contrast transfer, where 100%
modulation means switching between white and black bars that are perfectly defined,
with no frequency-related attenuation of light. An MTF of 50%, for example, means
that half of the contrast is missing at a specified spatial frequency; the white and
black bars are beginning to blur together. A response with deep or extended MTF
numbers indicates finer detail, and thus sharper images are expected. Typically MTF
numbers are quoted as the resolutions attained at the 50% and 10% response points on
the MTF curve. The 50% point will always yield a lower line pair density than the 10%
point. Higher spatial frequencies typically roll off the MTF response, resulting in loss of
detail and the perception of lost contrast. We will be using line pairs resulting from
50% MTF measurements.
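To make the arithmetic concrete, here is a minimal sketch in Python (the function names are ours, purely illustrative) of how contrast modulation is computed from a captured bar pattern; an MTF curve is simply this percentage plotted against bar spacing.

    import numpy as np

    def modulation(bars):
        # Michelson contrast of a bar pattern: (Imax - Imin) / (Imax + Imin).
        # 1.0 means fully resolved black/white bars; 0.0 means a grey monotone.
        i_max, i_min = float(np.max(bars)), float(np.min(bars))
        return (i_max - i_min) / (i_max + i_min)

    def mtf_percent(captured, reference):
        # MTF at one bar spacing: the fraction of the test pattern's contrast
        # that survives the imaging chain, expressed in percent.
        return 100.0 * modulation(captured) / modulation(reference)

    # A perfect 0/255 pattern captured as 40/200 yields about 67% MTF:
    print(mtf_percent(np.array([40, 200, 40, 200]), np.array([0, 255, 0, 255])))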

Sampling - Joining the Two Worlds


To capture an 8mm film frame faithfully, anomalies and all, the imager must have better
resolving power than the film's highest resolution, which is determined by the quality
and density of the film's silver halide crystals within its gelatin base.

Note: For purposes of keeping things simple, only the sensor's pixel size and density parameters
are used to determine sensor performance. Properties that contribute to degradation, like
demosaic processes, anti-aliasing applications, the resolving power and aberration
properties of the optics, and image motion blur, are not considered at this point. We are just
looking for trends in the results.
From sampling theory, and to keep Nyquist happy, at least two samples of data must be
captured at the sensor to reconstruct the original film information without aliasing, usually
seen as moirés and other image distortions. These annoying effects are removed by post
processing techniques (like deliberate defocusing), usually at the cost of lower resolution, which
in turn yields softer looking images. To see it another way, the imager must have more
pixels (light wells) in both the x and y directions (over-sampling) to capture the smallest details
within the film frame, thus keeping the scanned output image as sharp and true to the original
as possible.
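A one-dimensional sketch of the Nyquist requirement, in Python with illustrative numbers only: a 40 lp/mm "film" pattern is sampled at 200 samples/mm (well above 2 x 40) and at 60 samples/mm (below it), and the under-sampled case comes back as a lower frequency alias, the 1-D cousin of a moiré.

    import numpy as np

    film_lp = 40.0                        # "film" detail, line pairs per mm
    span_mm = 0.25                        # strip of film examined

    for rate in (200.0, 60.0):            # samples per mm: over- vs under-sampled
        x = np.arange(0.0, span_mm, 1.0 / rate)
        samples = np.sin(2.0 * np.pi * film_lp * x + 0.3)
        # Crude frequency estimate from zero crossings of the sampled signal.
        crossings = np.count_nonzero(np.diff(np.sign(samples)))
        print(f"{rate:5.0f} samples/mm -> ~{crossings / 2.0 / span_mm:.0f} lp/mm seen")
        # 200/mm recovers ~40 lp/mm; 60/mm aliases down to ~20 lp/mm.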
Treat the pixels, or light wells, of the sensor as sampling points when the film frame is
exposed to its light capturing surface. The more light wells per unit area on the sensor with
respect to the size and density of the photo components embedded in the film's gelatin, the
more valid the captured film samples will be. This is achieved by ensuring there are more
than enough light wells available in relation to the size and density of the photo elements (the
colorful silver halide chunks) of the film, as shown in Figure 3.0.

There are lots of light wells, or photo-sites, sampling the photo chunks of the film frame
(represented by multiple colours and random odd shapes in the figure).

Figure 3.0

A highly sampled film frame on a dense photo site sensor

In Figure 4.0, the opposite is true: there are not enough sensor light wells, or samples, to capture
the various sizes of film photo elements, which will result in a poorly reconstructed image.
Many aspects of the granular structure of the film (the photo chunks) are missed by the
sensor photo sites simply because the photo chunks of the film frame are too small in relation
to the photo site density. This is a case where a low resolution imager is being used.

Figure 4.0

An under sampled film frame on a low photo site density imager

So what does this mean in the real world? What effect do sensor size and pixel
density have on the final version of a reconstructed image?

Figure 5.0

Our reference image

Figure 5.0 for all intents and purposes shows an image whose resolution is infinite. This would
be a case for a perfect film frame. There are no jaggies in angled lines, fine details are present,
highlights shine and there is no blockiness on contrasting edges. Just smooth clean lines in all
directions, with high contrast and life.

Figure 6.0

A discretely sampled image

In Figure 6.0 above, the same image has now been discretely sampled. Whether or not this
particular image has been under-sampled is not the point here; what is important is to
demonstrate what effect sampling has on image output quality. Certainly, lowering the number
of sample points will at some point amount to under-sampling the image. In any case,
insufficient sampling will create larger looking blocks, increase the jaggies, diminish detail in
the finer elements of the picture and degrade any existing highlights.
In the side blow-up of Figure 6.0 above, it is quite apparent what happens when discrete
sampling takes place: see the jaggies. In the context of the full image as we pull back, it is not
so obvious at first glance. But take your time and look closer at the finer aspects of the image.
You will begin to see the effects sampling may have when an image is reconstructed.

Figure 7.0

An anti-aliased image

In order to minimize or hide the effects of sampling, an anti-aliasing filter is typically used
(usually Gaussian in nature). The fewer the sample points, and the coarser looking the image,
the more filtering is required. It is clear what happens, as shown in the side blow-up of Figure
7.0: primarily, the image gets softer. This impacts image detail, the degree of potential contrast,
and highlight and shadow retention, which in turn affects dynamic range and with it
the perception of image depth.
An anti-aliasing post filter is designed to remove as many of the artifacts induced by
inadequate sample spacing as possible. A simple low pass filter for low cost systems would
produce passable results, but digitally controlled filters using proven DSP techniques are the
best way to go.
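As a minimal sketch of such a filter (Python with SciPy; the sigma mapping is our own illustrative choice, not a calibrated design), a Gaussian low-pass can be applied to a scanned frame, with stronger smoothing for coarser sampling:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def anti_alias(frame, coarseness):
        # Gaussian low-pass to hide sampling artifacts. Coarser sampling
        # (fewer sample points) needs a wider kernel, at the cost of a
        # softer image - exactly the tradeoff described above.
        sigma = 0.5 * coarseness          # illustrative mapping only
        return gaussian_filter(frame, sigma=sigma)

    frame = np.random.rand(1080, 1920)    # stand-in for a scanned film frame
    softened = anti_alias(frame, coarseness=2.0)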

Sidebar - The Bayer topology in a CMOS Sensor


Any image sensor evaluation should at least mention the technology
types available. In this instance, this paper will limit itself to a brief outline of CMOS
technology, namely the use of the Bayer Color Filter Array or CFA, as it is very pertinent
to the HD sensor selection under review here. As there is already a lot written on CCD
and CMOS-based Foveon technology, they are not covered here.
The Bayer-implemented CMOS sensor is the most common imager found in a great
many consumer camcorders and professional digital cameras.

Figure 8.0

Bayer Color Filter Array (CFA)

As can be seen from the simplified array (or mosaic) above (Figure 8.0), each pixel, or
light well, has a defined R, G or B color assigned to it. The broad spectrum of photons
presented to the array's photo-sites is separated by a specific R, G or B color filter
covering each light well or photo-site (Figure 9.0). A red-assigned photo site will only let
in red light, rejecting blue and green light; similarly, green- and blue-assigned photo
sites reject the light not of their color.

Figure 9.0

Photo site structure

The Bayer color pattern is tailored to the model of the human eye when it comes
to light spectrum sensitivity. Green pixels make up 50% of the array, twice the share of
the blue and red pixels at 25% each. Green is the color to which the human eye is most
sensitive, and as such it offers more contrast and detail in a Bayer-derived image. Bayer
based sensors do offer sharp images as a direct result of the increased green pixel
count.

As it turns out, the luminosity and chroma weighting of the Bayer pattern is exactly the
same as how film bases are constructed. Film has separate color layers that make up its
emulsion, with a weighting of 50% green, 25% blue and 25% red. One could argue that
the Bayer pattern is a good match for how a film image can be interpreted and
reconstructed with all its nuances and its gradient nature intact.
Because each photo-site has a slight border around it that does not gather photons, full
optical coverage is not possible. Some compensation is made by the lens bubble
(microlens) covering each photo-site, designed to redirect light from the sides into the
photon gathering well. The net effect is that some sampling errors will occur, which does
require filtering.

Figure 10.0

Chroma Sub samples

Though each Bayer pixel has a specific color assigned to it, the remaining R, G or B values
for that pixel are missing and must be derived. Missing R, G or B pixel values are
calculated by interpolating (sometimes called demosaicing) the adjacent pixel colors of
the pixel in question. As an example, look at the red pixel in the upper left corner
of Figure 8.0: the G value for this pixel is taken from the green pixels to
the right of and below the red pixel. The B value is taken from the adjacent blue pixel
diagonal to the red pixel in question. If the red pixel were deeper in the array, its B value
would be derived from all 4 diagonal co-ordinates, just as an interior blue pixel gets its R
value from all 4 of its diagonals, and so on. The final interpolated
result is commonly mapped into the sRGB color space.
Once all of the interpolation is done (which can be computed on or off chip), the output
of the array is a raw 24-bit (8 bits per channel) or 30-bit (10 bits per channel) RGB
value per pixel that can be further processed into any standard graphics file format or
video file.
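A minimal sketch of the interpolation step in Python, assuming the RGGB layout of Figure 8.0 and plain bilinear averaging (real cameras use more elaborate, edge-aware demosaic algorithms):

    import numpy as np
    from scipy.ndimage import convolve

    def demosaic_bilinear(raw):
        # Bilinear demosaic of an RGGB Bayer mosaic: every missing R, G or B
        # value is the average of the matching neighbours, as described above.
        h, w = raw.shape
        r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1.0
        b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1.0
        g_mask = 1.0 - r_mask - b_mask

        k_green = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
        k_rb    = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0

        rgb = np.empty((h, w, 3))
        rgb[..., 0] = convolve(raw * r_mask, k_rb)     # red plane
        rgb[..., 1] = convolve(raw * g_mask, k_green)  # green plane
        rgb[..., 2] = convolve(raw * b_mask, k_rb)     # blue plane
        return rgb

    mosaic = np.random.rand(1080, 1920)   # stand-in for raw sensor output
    image = demosaic_bilinear(mosaic)

The kernels encode the neighbour averaging described in the text: at a green site the green value passes through untouched, while at a red or blue site it becomes the mean of the four green cross-neighbours, and likewise for the red and blue planes.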

The act of interpolating the RGB values per pixel can be viewed as reducing the actual
horizontal and vertical resolution, because each pixel carries a bit of its neighbors' values.
Each Bayer pixel does not stand on its own in terms of supporting truly discrete RGB
values. The demosaic process itself requires an anti-alias filter to reduce the color moirés
that arise from the very nature of the Bayer layout. This is in addition to the slight blurring
effect contributed by the many microlenses that cover the photo-sites. All in all, a Bayer
image sensor may have an effective resolution that is up to 12% less than the discrete
pixel count might suggest.
To offset the lost sharpness, Bayer-based designs have made improvements by
using pixel shift technology, which effectively increases sensor resolution, and by using
fluorite-based optics to reduce color aberrations.

Sidebar - Pixel Shift


A pixel-shifted Bayer sensor incorporates a unique layout of how the green pixels
are distributed. Because the green pixels provide the luminance component to which
human vision is most sensitive, they are positioned on the sensor array surface in a
special way to enhance this optical property.
Green pixels in effect carry 60% of the picture detail, while the red and
blue pixels support the remaining 40% of the image content. The green pixels are
shifted the equivalent distance of 1/2 pixel from the red and blue pixels. The
green component is then sampled more frequently, resulting in improved picture
detail in the scanned-out image.
The overall net result is resolution equal to that of sensors with nearly twice as
many pixels.

More Film and Sensor Properties


Let's now look at 8mm film as a Hi-Def digital transfer contender. Kodak was the main supplier
of small gauge film from the 1930s to at least the mid-eighties; such film is now in very limited
availability. From the late 50s to the mid-1970s, other manufacturers like Fuji came along with
lines such as Velvia 100 (80 lp/mm), which was much superior to many of Kodak's offerings in
many ways, but which remained limited in market share, attracting only independent film makers
and the more seasoned amateurs who wanted better than what Kodak offered.

Figure 11.0

Film frame composite

This is an 8 MPixel image of an 8mm film frame, with two blow-ups demonstrating how
the film grain is random in size and distribution within the emulsion layers. It is quite
visible. The effects of age, or of just plain bad lab processing around the time of
exposure, bring out the grain artifacts.
The film stock offered by Kodak throughout that period was not the best, to say the least.
Mainly driven by cost factors at the time, Kodak produced film bases that were not only very
inconsistent but were usually compounded by poor lab processing. The result was quality loss
due to premature emulsion erosion, and hence grain artifacts. Kodak's color Ektachrome film,
for example, had an average of 40-50 lp/mm, with their Kodachrome film improving to
about 50-53 lp/mm*. For purposes of this paper we will be using a mean of 40 lp/mm
at 50% MTF for the film. The line pair value has been reduced to take into account
image degeneration due to poor movie camera optics at the time of film exposure, plus an
aging/long term environmental effects factor.
What are the desirable properties of a semiconductor-based imager? It must have enough
pixels in both the x and y directions to resolve the highest detail expected in a film frame, have
large well sizes, have reasonably fast refresh times and capture a high level of light per
unit time, yielding improved signal to noise ratios, which in turn yield highly desirable
increases in dynamic range.

Sidebar - Dynamic Range and Signal to Noise


Dynamic range is calculated as a ratio and quoted in dB (decibels) in digital
systems that quantize information; in the image sensor case, it is the ratio of the
lightest to darkest ranges the pixel array can handle. Most imagers quantize
the charge build-up in the light wells to a 10 bit or 12 bit level using an internal
A/D converter, but this is down-sampled to 8 bits to make any image processing
tasks downstream, in or outside the camera, easier to undertake. Most consumer
and some pro-sumer camcorders have this dumbing-down feature. An eight bit
system will theoretically offer 48 dB of dynamic range (20 log 256) in grey scale:
one part in 256 possible values. Of course, in the real world, after taking into
account noise sources and quantizing errors, it's really more like 40-43 dB of
dynamic range. That represents a loss of more than a factor of two in contrast
depth (more than a one bit loss), since each bit corresponds to about 6 dB.
Signal to Noise Ratio (SNR) is also calculated as a ratio and quoted in dB, as in the
dynamic range case. Sometimes SNR and dynamic range are mixed in the same
pot, but they are very different even though they share some common ground.
Signal to noise in an imager is determined by the quantity of photons collected in
the light wells (proportional to pixel width and exposure time) versus the
amount of noise (photon noise, shot noise and read-out noise) in that photon
well (while it is filling and being read out). This is the analog signal and noise
value that gets digitized to deliver the dynamic range depth noted above. Good
SNR figures depend on high quantum efficiency, large pixels and low noise.
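The 20 log 256 figure quoted above generalizes to any bit depth; a quick Python check (one bit of depth is worth about 6 dB):

    import math

    def quantized_dynamic_range_db(bits):
        # Theoretical dynamic range of an ideal n-bit quantizer: 20*log10(2^n).
        return 20.0 * math.log10(2 ** bits)

    for bits in (8, 10, 12):
        print(f"{bits:>2} bits: {quantized_dynamic_range_db(bits):.1f} dB")
    # 8 bits -> 48.2 dB, 10 bits -> 60.2 dB, 12 bits -> 72.2 dB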
Image sensors come in a number of physical sizes and pixel densities. For high definition video,
an imager must provide at least a 2 MPixel size (native 1920 x 1080 pixels). Lower pixel
counts, like the secondary HD resolution of 1280 x 720, will not work for a film transfer
system. Higher pixel counts, like 4 MP (2500 x 1600), will work well, but must eventually be
down-sampled to 1920 x 1080 if the video is to be distributed on Blu-ray disc as HD video or to
be played back on most HD displays.
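A hypothetical sketch of that down-sampling step in Python (SciPy's zoom is used for brevity; a production pipeline would use a higher quality resampler and handle color):

    import numpy as np
    from scipy.ndimage import zoom

    capture = np.random.rand(1600, 2500)      # stand-in for a 4 MP capture
    cropped = capture[:1406, :]               # crop 2500 x 1600 to 16:9 first
    hd = zoom(cropped, 1080.0 / 1406.0)       # resample to about 1920 x 1080
    print(hd.shape)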

Next to consider is the imager's physical size. Larger imagers (1/2" and up) offer much better SNR
than smaller ones (1/3" or smaller); though improvements in small imagers are narrowing the
gap on SNR, they may be reaching a brick wall simply because of the small geometry. Larger pixel
light wells can capture more light per unit time than smaller ones: more
signal, less noise, faster refresh, deeper dynamic range. Quarter inch sensors, for example,
suffer mainly on four fronts: one, they will typically have lower SNR (small pixels gather less light
per unit time); two, they require high resolution, wider angle lens assemblies that incur more
visible optical aberrations than larger lenses do; three, they have a hard time refreshing to high
performance levels, notably affecting dynamic range; and four, they suffer from diffraction effects
(more on this later). It follows that they become good candidates for additional image
processing at the camera level to make up for their shortcomings. Small imagers are
found mainly in consumer camcorders because they are cheap to produce. Imagers in
the 1/2" to 2/3" size range are found in some pro-sumer and in many professional cameras. As
an alternative, larger imagers are available as standalone assemblies, less the baggage of a
camera that for the most part would not be used.
Note, before we continue, I'll reiterate: this is not an in-depth, math-based dissertation on the
many aspects of imaging sensors. It is primarily a look at trends. I won't take into account
the more technical aspects of the lens, anti-aliasing filters, pixel sub-sampling, AA Bayer filters,
etc. Yes, these have effects, and built-in fixes exist for the errors produced in the gathering and
output of pixel information, but these details are out of scope for this paper.
Let's look at a hypothetical 2/3" Bayer-based sensor imaging system. The sensor will have a
5 µm square pixel size and a 1920 x 1080 pixel array.
The 8mm film frame dimensions (4.5mm x 3.3mm) are noted in the graphic below, Figure 12.0.

Figure 12.0 An 8mm and Super8 mm film frame physical dimensions


The equivalent line pair rating for the HD-sized 2/3" sensor is 100 lp/mm:
5 µm per pixel times 2 is 10 µm per line pair. To get lp/mm, divide the 10 µm per line
pair into 1 mm (1000 µm / 10 µm = 100 lp/mm).
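In code form (Python; a direct transcription of the arithmetic above):

    def sensor_lp_per_mm(pixel_pitch_um):
        # One line pair needs two pixels (one light, one dark), so a 5 um
        # pitch gives 1000 / (2 * 5) = 100 lp/mm.
        return 1000.0 / (2.0 * pixel_pitch_um)

    print(sensor_lp_per_mm(5.0))   # the 2/3" sensor here      -> 100.0
    print(sensor_lp_per_mm(2.0))   # the small sensor later on -> 250.0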
So, the specs so far:

                    Film                 Sensor
    Resolution      40 lp/mm             100 lp/mm
    Frame size      4.5 mm x 3.3 mm      9.6 mm x 5.4 mm (16:9 size)
                                         8.8 mm x 6.6 mm (traditional 4:3 size)
    Surface area    14.85 sq mm          51.84 sq mm (at 16:9 size)

Figure 13.0

Ratio of film size versus sensor size

Here we see in Figure 13.0 that the film frame is physically smaller than the sensor
surface area, by about 5.4 mm / 3.3 mm = 1.63 times in the vertical dimension. To obtain HD
resolution we must cover the sensor surface as fully as possible in the vertical dimension (the
horizontal dimension will not max out), so the image must be magnified by that same 1.63
times. Of course, we end up with a 4:3 aspect ratio image occupying a 16:9 space, as shown in
Figure 14.0, with black bars to the left and right of the magnified image.

Figure 14.0

8mm Film frame magnified and projected onto a 16:9 image sensor (simplified)

If we use a magnification factor of 1.63 we should fill the sensor surface in the y direction
(1080 pixels) but only partially in the x direction (about 1470 pixels; 4.5 mm x 1.63 = 7.3 mm at
5 µm per pixel). Black bars, or vertical pillars, will result (about 225 pixels each, on either side).
Under these conditions, when the image gets scanned out, all of the light wells in the 16:9
array will be read out, including the space occupied by the black pillars: certainly a waste of
bandwidth and storage space.

Selection of a 2 MP lens that meets the image circle requirements for an HD aspect ratio
sensor may be unnecessary, as the black bars have no information in them. In addition, any
lens-induced aberrations that could be found at the corners of this optical arrangement will
never be visible.

Figure 15.0

Image Circle

Note:
What may be more beneficial is a lens that just covers the 4:3 aspect of the 16:9 space
(Figure 15.0, image B). Ultimately, any lens that is chosen must have very good MTF line
pair numbers, on the order of 2 to 3 times the lp/mm rating of the film image, to keep the
resolving aspect of the system high (not likely). Not all of the 1920 x 1080 pixels in the
array need be read out in this case, as there may be control available to isolate just an ROI
(region of interest) within the sensor array. However, this type of adjustment will require
external post processing to re-create the 16:9 aspect look to ensure proper display in a
Blu-ray environment.
If we leave out the effects of the lens and imager aperture settings for a moment, the sensor
will yield a maximum resolution of 100 lp/mm. The 8mm film frame has been magnified
1.63* times, for a virtual resolution reduction from the original 40 lp/mm down to 24.54 lp/mm
as seen at the sensor. We have therefore over-sampled the film frame by just over 4 times
(100 lp/mm / 24.54 lp/mm). This is a good condition to have, but a poor lens may reduce the
sampling effectiveness by at least 2 to 4 lp/mm. So let's assume we have a lens with at least
40 lp/mm optical resolution at this magnification, so that no effective sampling loss is
introduced. The film frame contents will be nicely reproduced under these conditions.
*Note: this would be done by lengthening the focal length of the lens (Figure 16.0).

Figure 16.0

Optical Lens Setup

As long as the number of effective samples is greater than 2, the finer details of a film frame
can be captured with reasonable accuracy. A film frame will be sub-sampled (i.e., there are not
enough light wells to capture its finest details) in a setup that combines a poor lens with low
sensor line pair resolution.

Let's now look at the other film gauges and see how they fare under the same conditions as the
8mm film case.

2/3" 2 MP image sensor (1920 x 1080 pixels, 51.84 sq mm)

                       8mm          Super 8mm      16mm
    Frame size (mm)    4.5 x 3.3    5.46 x 4.01    9.6 x 7.01
    Magnification      1.63x        1.35x          0.77x
    Eff. line pairs    24.54 lp     29.62 lp       52 lp
    Sampling rate      4.07x        3.38x          1.92x
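The table can be reproduced with a few lines of Python (small differences from the figures above come from intermediate rounding in the text):

    FILM_LP = 40.0             # assumed mean film resolution at 50% MTF
    SENSOR_LP = 100.0          # 2/3" sensor: 5 um pixels
    SENSOR_H_MM = 5.4          # 1080 pixels x 5 um

    for name, frame_h in (("8mm", 3.3), ("Super 8mm", 4.01), ("16mm", 7.01)):
        mag = SENSOR_H_MM / frame_h          # fill the sensor vertically
        eff_lp = FILM_LP / mag               # film detail as seen at the sensor
        rate = SENSOR_LP / eff_lp            # effective sampling factor
        verdict = "OK" if rate >= 2.0 else "sub-sampled (fails Nyquist)"
        print(f"{name:>9}: {mag:.2f}x, {eff_lp:.2f} lp, {rate:.2f}x  {verdict}")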

The 16mm case is shown to be sub-sampled; this film gauge will not faithfully reproduce a
true rendition of its frame contents. One would need a higher pixel density rating, like a 3
or 4 MPixel array of 2/3" size, in order for 16mm film to be captured and rendered with all of its
image details intact without incurring sub-sampling aliases. This is an example where the
16mm film frame has to be squeezed to fit the sensor area, which in effect increases its lp/mm
property. As long as the higher density imager being used increases its lp/mm in a
corresponding fashion, this will not degrade sampling performance.
Another example of how sensor size affects the final results is the concept of field of view, or
FOV. When a snapshot is taken, the image in the viewfinder is limited to the information the
lens and sensor system allows. Bigger sensors record more image information.

Figure 17.0

Impact of Field of View changes per sensor size

As an example, in Figure 17.0, given a set focal length, the inner image is what is recorded on a
small image sensor, while the outer image records more of the view due to the
increased surface area of a larger sensor.

This leads to the case of the FOV of a film frame. Unlike the example of Figure 17.0,
where a change in focal length yields more or less of the world being viewed, a film frame
is finite in size, so its field of view is fixed. The change in focal length here is to ensure the film
frame itself fills the available image sensor surface area, not that more of the film frame
becomes viewable. In Figure 18.0, image A shows the film frame at its maximum on the image
sensor (black pillars and all), and image B shows the film frame filling all of the sensor surface
area but at the penalty of missing about 16% of the picture content (the top and bottom of the
picture are truncated).

Figure 18.0

Impacts of FOV changes for a film frame

Looking at the case of a 1/4" HD image sensor using the same criteria as the 2/3" model, the
numbers are quite revealing, in that they are not what one might expect.
A typical true 2 MP sensor of this class would have a pixel size of about 2 µm square or smaller.
Sensor size is then estimated to be about 3.84 mm by 2.16 mm. Note: some sensors this small
are sometimes quoted as 2 MP but really only support 1600 x 1200 resolutions, so either 1440 x
1080 or 1280 x 720 pixel sizes can be supported. Onboard hardware scalers can be
used to achieve a synthesized 1920 x 1080 resolution from these shortened resolutions.

2 MP 1/4" sensor, 2 µm square pixels
(1920 x 1080 pixels, 3.84 mm x 2.16 mm, 8.29 sq mm, 250 lp/mm)

    Source             8mm          Super 8        16mm
    Frame size (mm)    4.5 x 3.3    5.46 x 4.01    9.6 x 7.01
    Magnification      -1.52x       -1.86x         -3.24x
    Eff. line pairs    60.8         74.4           129.6
    Eff. sample rate   4.11*        3.36*          1.92*

Note *: assuming a perfect lens is attached


Where:
Magnification factor = film frame vertical length divided by the sensor vertical length,
assuming no lens. Example, 8mm case: 3.3 mm / 2.16 mm = 1.52 times. The result is shown
as negative because the film frame is bigger than the sensor surface area (the image must
be reduced rather than magnified).
Equivalent line pairs = film resolution in lp x magnification factor.
Example, 8mm case: 40 lp x 1.52 = 60.8 lp.
Effective sensor samples = sensor resolution in line pairs / film resolution in line pairs.
Example, 8mm case: 250 lp / 60.8 lp = 4.11.
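Note that both tables reduce to the same relation: projecting the frame onto the sensor scales the film's lp/mm by (frame height / sensor height). A short Python helper covering the small-sensor case (again, minor differences from the table come from rounding there):

    def effective_sampling(frame_h_mm, sensor_h_mm, film_lp, sensor_lp):
        # Film detail as seen at the sensor, and the resulting sampling factor.
        # Shrinking a large frame onto a small sensor (the "negative
        # magnification" case above) raises the effective lp/mm.
        eff_lp = film_lp * frame_h_mm / sensor_h_mm
        return eff_lp, sensor_lp / eff_lp

    for name, h in (("8mm", 3.3), ("Super 8", 4.01), ("16mm", 7.01)):
        eff_lp, rate = effective_sampling(h, 2.16, 40.0, 250.0)
        print(f"{name:>8}: {eff_lp:6.1f} lp -> {rate:.2f}x")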
The results shown here are very similar to those for the larger image sensor, in terms of effective
sampling points per film gauge. Assuming the camcorder has been modified with a proper wide
angle lens, a consumer grade telecine system would offer similar sampling results to the larger
sensor array. But the differences really show up when a real world lens is attached. The
biggest performance show-stopper for small sensors is an effect called the diffraction limit. All
lenses, whether attached to small sensors or large ones, will introduce some degree of
diffraction, but at different f-stop settings. Diffraction affects image quality at higher f-stops,
resulting in image blurring, which in turn reduces the effective pixel density rating of the sensor.
What is the diffraction limit? Light left unhindered in open space travels in a straight line as
waves. When channeled through a variable aperture (a circular hole) as in many lenses, the
light diffracts, or widens, as the aperture narrows with respect to the image sensor surface
(Figure 19.0).

Figure 19.0

Light diffraction through a lens aperture

At a fixed focal length, the degree of diffraction boils down to a function of the f-stop setting and
the pixel spacing of the sensor. The smaller the sensor (and thus the smaller the pixel size), the
narrower the range of well-performing aperture openings, limiting operation to the lower f-stop
settings. The net effect is that when a film scene is very bright, the lens f-stop must be increased
to a higher number; if that higher f-stop closes the iris to the point of diffraction, the resulting
image will have lost a degree of resolution. To compound the problem, dynamic
range and contrast are diminished at higher f-stops. A sweet spot can be found where
sharpness is still maintained, but the range of scene brightness and dynamic range will run
short. Small gauge amateur film by its nature has a very wide dynamic range and high contrast
ratios. Small sensors will work, but will have a narrow operational range. The chances of softer
images and limited dynamic range are greater with smaller image sensors, not due to sampling
errors but due to light wave interference at the sensor and lens level. As a result, small sensors
require system level settings that keep the lens aperture in an optimal f-stop range, and
specially engineered post processing functions must be applied to produce the perception of
high quality images (spatial adjustments in contrast and edge enhancement), with no way of
determining how much real image detail has actually been recovered.
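A rough feel for where the limit bites can be had from the standard Airy disk estimate (first-zero diameter 2.44 x lambda x N; this formula and the two-pixel rule of thumb are textbook values, not taken from this paper):

    def airy_disk_um(f_number, wavelength_nm=550.0):
        # Diameter of the Airy diffraction pattern's first zero, in microns.
        return 2.44 * (wavelength_nm / 1000.0) * f_number

    def diffraction_visible(f_number, pixel_pitch_um):
        # Rule of thumb: blurring shows once the Airy disk spans more than
        # about two pixels (one line pair).
        return airy_disk_um(f_number) > 2.0 * pixel_pitch_um

    for pitch in (5.0, 2.0):               # the 2/3" and small-sensor pitches
        stop = next(n for n in (2, 2.8, 4, 5.6, 8, 11, 16, 22)
                    if diffraction_visible(n, pitch))
        print(f"{pitch} um pixels: diffraction visible by about f/{stop}")
    # 5 um pixels hold out to about f/8; 2 um pixels give way near f/4,
    # matching the narrower usable aperture range described above.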
Conclusions
Small gauge film at one time had an aspect ratio of 5:3, but it was scaled back to 4:3 to fit
with the academy film size of the time. Too bad it didn't hang around; things might have been
easier. So, there are issues like how to fit a 4:3 image into a 16:9 space and what to do with the
roughly 23% of black pillar space that represents data that has to be carried around.
It is more desirable to have the entire image present, in spite of the pillars, than to have it
truncated at the top and bottom just to fill the entire sensor array surface.

Figure 20.0

Image Placement in a 16:9 Space

To avoid the limiting effects of small sensor arrays, choosing a larger sensor is the better choice.
The diffraction problem is very much diminished, as the f-stop where the aberration begins to
occur is much higher than in the small sensor condition. Larger sensors also benefit greatly from
better SNR figures, deeper dynamic range, faster refresh (higher ISO rating, speed and low
light performance) and larger FOV operation. Larger sensors do, however, produce moirés due
to the larger pixel size and do require filtering, but not to the extent of the other performance
killers inherent in smaller sensor systems.
Coupled with the limiting f-stop range, a wide angle lens must be used with small image sensors,
and with it comes poorer performance from a lens resolution point of view. Generally, when a
lens is in telephoto mode (as in the large sensor case), its effective lp/mm resolving number
increases. In the small sensor case, the magnification factor is negative, resulting in an lp/mm
number that could be inherently better, but it must be supported by a more complex lens
structure, with more chances of imperfections, particularly spherical ones.
Given that in a practical high performance HD optical system a degree of image degrading
elements will always be present in the optical chain, they can be controlled, and outstanding
results can be had in spite of them. If it's not the construction and type of the imaging system,
the photo-site efficiency, the lens rating and system adopted, prime or otherwise, or the
workflows instituted, it will eventually boil down to cost: the cost of R&D, or figuring out
the cost for the customer. True film to HD video solutions will include costly hardware
components, professional software tools and a good degree of technical expertise to ensure
the best outcome for the customer. Time will tell, as the HD film transfer market grows and
evolves, what solutions will eventually be available. Caveat emptor.
