
Bilinear and bicubic interpolation methods for division of focal plane polarimeters

Shengkui Gao* and Viktor Gruev
Dept. of Computer Science and Engineering, Washington University in St. Louis, One Brookings Drive,
St. Louis, Missouri 63130, USA
*gaoshengkui@wustl.edu

Abstract: This paper presents bilinear and bicubic interpolation methods
tailored for division of focal plane polarization imaging sensors. The
interpolation methods are targeted for a 1-megapixel polarization imaging
sensor operating in the visible spectrum. The five interpolation methods
considered in this paper are: bilinear, weighted bilinear, bicubic spline,
approximated bicubic spline and bicubic interpolation. Modulation transfer
function analysis is applied to the different interpolation methods, and
test images as well as numerical error analyses are also presented. Based
on the comparison results, the full frame bicubic spline interpolation
achieves the best performance for polarization images.
©2011 Optical Society of America
OCIS codes: (110.5405) Polarimetric imaging; (120.5410) Polarimetry; (230.5440)
Polarization-selective devices; (260.5430) Polarization.

References and links


1. R. Walraven, “Polarization imagery,” Opt. Eng. 20, 14–18 (1981).
2. J. E. Solomon, “Polarization imaging,” Appl. Opt. 20(9), 1537–1544 (1981).
3. R. M. A. Azzam, “Arrangement of four photodetectors for measuring the state of polarization of light,” Opt. Lett.
10(7), 309–311 (1985).
4. C. A. Farlow, D. B. Chenault, K. D. Spradley, M. G. Gulley, M. W. Jones, and C. M. Persons, “Imaging
polarimeter development and application,” Proc. SPIE 4819, 118–125 (2001).
5. J. D. Barter, P. H. Y. Lee, and H. R. Thompson, “Stokes parameter imaging of scattering surfaces,” Proc. SPIE
3121, 314–320 (1997).
6. M. W. Kudenov, L. J. Pezzaniti, and G. R. Gerhart, “Microbolometer-infrared imaging Stokes polarimeter,” Opt.
Eng. 48(6), 063201 (2009).
7. M. E. Roche, D. B. Chenault, J. P. Vaden, A. Lompado, D. Voelz, T. J. Schulz, R. N. Givens, and V. L. Gamiz,
“Synthetic aperture imaging polarimeter,” Proc. SPIE 7672, 767206 (2010).
8. J. L. Pezzaniti and D. B. Chenault, “A division of aperture MWIR imaging polarimeter,” Proc. SPIE 5888,
58880V (2005).
9. C. K. Harnett and H. G. Craighead, “Liquid-crystal micropolarizer array for polarization-difference imaging,”
Appl. Opt. 41(7), 1291–1296 (2002).
10. G. P. Nordin, J. T. Meier, P. C. Deguzman, and M. Jones, “Diffractive optical element for Stokes vector
measurement with a focal plane array,” Proc. SPIE 3754, 169–177 (1999).
11. M. Sarkar, D. San Segundo Bello, C. van Hoof, and A. Theuwissen, “Integrated polarization analyzing CMOS
image sensor for material classification,” IEEE Sens. J. 11(8), 1692–1703 (2011).
12. J. S. Tyo, “Hybrid division of aperture/division of a focal-plane polarimeter for real-time polarization imagery
without an instantaneous field-of-view error,” Opt. Lett. 31(20), 2984–2986 (2006).
13. M. Momeni and A. H. Titus, “An analog VLSI chip emulating polarization vision of Octopus retina,” IEEE
Trans. Neural Netw. 17(1), 222–232 (2006).
14. V. Gruev, J. Van der Spiegel, and N. Engheta, “Dual-tier thin film polymer polarization imaging sensor,” Opt.
Express 18(18), 19292–19303 (2010).
15. V. Gruev, R. Perkins, and T. York, “CCD polarization imaging sensor with aluminum nanowire optical filters,”
Opt. Express 18(18), 19087–19094 (2010).
16. R. Perkins and V. Gruev, “Signal-to-noise analysis of Stokes parameters in division of focal plane polarimeters,”
Opt. Express 18(25), 25815–25824 (2010).
17. J. S. Tyo, D. L. Goldstein, D. B. Chenault, and J. A. Shaw, “Review of passive imaging polarimetry for remote
sensing applications,” Appl. Opt. 45(22), 5453–5469 (2006).
18. D. Goldstein, Polarized Light (Marcel Dekker, 2003).

19. J. Wang, F. Walters, X. Liu, P. Sciortino, and X. Deng, “High-performance, large area, deep ultraviolet to
infrared polarizers based on 40 nm line/78 nm space nanowire grids,” Appl. Phys. Lett. 90(6), 061104 (2007).
20. A. Goldberg, T. Fischer, S. Kennerly, S. Wang, M. Sundaram, P. Uppal, M. Winn, G. Milne, and M. Stevens,
“Dual band QWIP MWIR/LWIR focal plane array test results,” Proc. SPIE 4028, 276–287 (2000).
21. T. Weber, T. Käsebier, E. B. Kley, and A. Tünnermann, “Broadband iridium wire grid polarizer for UV
applications,” Opt. Lett. 36(4), 445–447 (2011).
22. J. G. Ok, H. J. Park, M. K. Kwak, C. A. Pina-Hernandez, S. H. Ahn, and L. J. Guo, “Continuous patterning of
nanogratings by nanochannel-guided lithography on liquid resists,” Adv. Mater. 23(38), 4444–4448 (2011).
23. V. Gruev, J. Van der Spiegel, and N. Engheta, “Dual-tier thin film polymer polarization imaging sensor,” Opt.
Express 18(18), 19292–19303 (2010).
24. B. E. Bayer, “Color imaging array,” U.S. Patent 3,971,065 (1976).
25. R. Kimmel, “Demosaicing: image reconstruction from color CCD samples,” IEEE Trans. Image Process. 8(9),
1221–1228 (1999).
26. B. K. Gunturk, Y. Altunbasak, and R. M. Mersereau, “Color plane interpolation using alternating projections,”
IEEE Trans. Image Process. 11(9), 997–1013 (2002).
27. B. K. Gunturk, J. Glotzbach, Y. Altunbasak, R. W. Schafer, and R. M. Mersereau, “Demosaicking: color filter
array interpolation,” IEEE Signal Process. Mag. 22(1), 44–54 (2005).
28. R. C. Gonzalez and R. E. Woods, Digital Image Processing (Prentice Hall, 2002).
29. B. M. Ratliff, C. F. LaCasse, and J. S. Tyo, “Interpolation strategies for reducing IFOV artifacts in microgrid
polarimeter imagery,” Opt. Express 17(11), 9112–9125 (2009).
30. B. M. Ratliff, J. K. Boger, M. P. Fetrow, J. S. Tyo, and W. T. Black, “Image processing methods to compensate
for IFOV errors in microgrid imaging polarimeters,” Proc. SPIE 6240, 62400E (2006).
31. H. Hou and H. Andrews, “Cubic splines for image interpolation and digital filtering,” IEEE Trans. Acoust.
Speech Signal Process. 26(6), 508–517 (1978).
32. R. L. Burden and J. D. Faires, Numerical Analysis (Brooks Cole, 2010).
33. T. York, S. Powell, and V. Gruev, “A comparison of polarization image processing across different platforms,”
Proc. SPIE 8160, 816004 (2011).
34. W. S. Russell, “Polynomial interpolation schemes for internal derivative distributions on structured grids,” Appl.
Numer. Math. 17(2), 129–171 (1995).
35. G. D. Boreman, Modulation Transfer Function in Optical and Electro-Optical Systems (SPIE, 2001).
36. http://www.kodak.com.

1. Introduction
Polarization imaging sensors can be divided into division of time [1, 2], division of amplitude
[3–6], division of aperture [7, 8] and division of focal plane (DoFP) polarimeters [9–16].
DoFP polarimeters include imaging elements, i.e. photo detectors, and micropolarization filter
arrays on the same substrate, and can record the first three or four Stokes parameters at every
frame [17, 18]. The micropolarization filter array is composed of pixelated polarization filters
oriented at several different polarization angles in order to filter the incident optical field. The
individual pixelated polarization filters are typically composed of periodic metallic wires
whose width is less than or equal to approximately 1/5th of the incident wavelength [19].
Rigorous coupled-wave analysis has been used to show that extinction ratios above 100 and
maximum transmission of ~80% can be achieved with aluminum metallic wires whose width
is 1/5th of the incident wavelength [19]. For example, the micropolarization filter array for
infrared DoFP sensors imaging at 3 μm wavelength is realized with 600 nm wide metallic
wires [20]. These wires can be fabricated via a standard photolithography process and
integrated with an array of infrared sensitive photo detectors. The width of the wires in the
micropolarization filter array for the visible spectrum has to be ~90 nm in order to filter
optical frequencies in the range of 450 nm up to 700 nm [15]. Fabricating feature sizes below
200 nm is not possible via standard UV lithography, which poses challenges for realizing
visible spectrum polarization filters.
Recent advancements in nanofabrication techniques and technology have allowed for
reliable fabrication of metallic nanowires that can act as linear polarization filters in the
visible spectrum [20–23]. Furthermore, the challenge of monolithic integration of metallic
nanowires with an array of photo detectors implemented in either CMOS or CCD technology
has been recently overcome [15]. A high resolution DoFP polarimeter has been reported
where aluminum nanowires have been monolithically integrated with an array of 1000 by
1000 CCD pixels. This sensor can compute the first three Stokes parameters, i.e. S0, S1 and
S2, at every frame with maximum signal-to-noise ratio of 45 dB and dynamic range of 60 dB
[15]. The pixel pitch of 7.4 μm allows for imaging a scene with high spatial fidelity.
A block diagram of a typical DoFP sensor for infrared or visible spectrum is presented in
Fig. 1(a) [15]. In a neighborhood of 2 by 2 pixels, four different polarization filters, offset by
45 degrees, are deposited on individual photo receptors. This neighborhood of 4 pixels is
typically referred to as a “super pixel”. An analogous concept was realized in the mid-1970s
for capturing color information via direct deposition of a color filter array, known as a Bayer
pattern [24], on top of the CCD’s imaging elements. A sample of the Bayer color filter pattern
is presented in Fig. 1(b). The integration of color filters with an array of imaging elements has
created robust and compact color imaging sensors. However, the color information from the
imaged scene is sub-sampled and can lead to inaccurate color reconstruction. For instance,
blue and red color information is sub-sampled by a factor of 4 and green color is sub-sampled
by a factor of 2. In order to recreate high resolution color images from low resolution primary
color images, various interpolation techniques have been introduced [24–28]. Since the
inception of the Bayer pattern, research in the color interpolation area has created a
tremendous body of literature with the main purpose of recovering as much missing color
information as possible in the imaging array with high accuracy.

Fig. 1. Block diagram of division of focal plane (a) polarization sensor and (b) color sensor
with Bayer pattern filters.

A similar problem exists in division of focal plane polarization imaging sensors [29, 30].
In these sensors, the imaged scene is sampled with four different polarization filters and four
sub-sampled images are produced. Due to the different instantaneous field of view of the four
sub-sampled images, large errors are introduced in the computed polarization information. In
order to recreate accurate high resolution polarization images, interpolation techniques have
to be introduced on the four primary low resolution polarization images. Although some
interpolation algorithms from the color domain can be easily ported to the polarization
imaging sensors, the inherent difference of the filter arrays between the two imaging sensors
mandates new research directions in polarization interpolation algorithms.
In this paper, we present three interpolation methods for images produced by DoFP
sensors: bicubic interpolation, bicubic spline interpolation and an approximated bicubic
spline interpolation algorithm. For reference purposes, in Section 2 we describe two
previously published interpolation methods for polarization images: bilinear interpolation and
weighted bilinear interpolation [29, 30]. In Section 3, the performance of the different
interpolation methods is evaluated in terms of spatial resolution and accuracy of the
reconstructed images for both synthetic and real polarization images. In Section 4, we
introduce a method for evaluating polarization interpolation algorithms on a set of polarization
images. In this section, we first present a qualitative comparison between the different
interpolation methods by presenting a set of reconstructed polarization images. Second, we
present a quantitative evaluation by introducing a mean square error comparison for the
accuracy of the interpolated images. Concluding remarks are presented in Section 5.
2. Interpolation methods
2.1 Bilinear interpolation methods
The bilinear interpolation methods are among the most common techniques used in image
processing due to their computational simplicity. In DoFP polarization sensors, bilinear
interpolation is computed separately on the four low resolution images recorded with different
polarization filters offset by 45°. For example, a 1 megapixel (Mp) DoFP polarization sensor
captures 0.25 Mp for each of the four linear polarization filters (0°, 45°, 90°, 135°). The bilinear
interpolation method is then used to up-sample these 0.25 Mp components back to the full
resolution (1 Mp) of the camera.
Figure 2 presents a neighborhood of 4 by 4 pixels in a typical DoFP imager composed of
four different pixelated polarization filters. For this pixel neighborhood, the pixel at location
(2,2) records the intensity of the light wave after filtering it with a linear polarization filter
oriented at 90°. At this pixel location, the intensity values of the filtered light wave with 0°,
45° and 135° polarization filters are estimated via bilinear interpolation from the surrounding
pixels, and Eqs. (1) through (3) present the formulas used to estimate these pixel values.
$I_0^p(2,2) = \frac{1}{4}\left[ I_0(1,1) + I_0(1,3) + I_0(3,1) + I_0(3,3) \right]$   (1)

$I_{45}^p(2,2) = \frac{1}{2}\left[ I_{45}(1,2) + I_{45}(3,2) \right]$   (2)

$I_{135}^p(2,2) = \frac{1}{2}\left[ I_{135}(2,1) + I_{135}(2,3) \right]$   (3)
In Eqs. (1) through (3), the 0° interpolated intensity value at pixel location (2,2) is an
average intensity value of the four adjacent diagonal pixels; the 45° interpolated intensity
value is the average intensity of the two adjacent vertical pixels, and the 135° interpolated
intensity value is the average intensity of the two adjacent horizontal pixels. Equations (1)
through (3) are slightly modified when computing the interpolated pixel values at locations
(2,3), (3,2) and (3,3) in the imaging array due to the differences in the linear polarization
filters in the immediate pixel neighborhood.

Fig. 2. A 4 by 4 interpolated block in DoFP polarimeter.
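
As an illustration of this scheme, the following is a minimal NumPy/SciPy sketch of bilinear interpolation over a DoFP mosaic; the sparse-channel-plus-convolution formulation reproduces Eqs. (1) through (3) at the missing pixel locations. The 2 by 2 layout encoded in OFFSETS and the function name bilinear_demosaic are our illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.ndimage import convolve

# Hypothetical 2x2 mosaic layout (an assumption for illustration; the actual
# layout is sensor-specific): angle -> (row offset, column offset).
OFFSETS = {0: (0, 0), 45: (0, 1), 135: (1, 0), 90: (1, 1)}

# Bilinear kernel: with each channel sampled on a stride-2 grid, convolution
# returns the sample itself at measured pixels, the average of two samples at
# horizontal/vertical gaps and of four samples at diagonal gaps (Eqs. (1)-(3)).
K = np.array([[0.25, 0.5, 0.25],
              [0.50, 1.0, 0.50],
              [0.25, 0.5, 0.25]])

def bilinear_demosaic(mosaic):
    """Interpolate a DoFP mosaic into four full-resolution images."""
    out = {}
    for angle, (r0, c0) in OFFSETS.items():
        sparse = np.zeros_like(mosaic, dtype=float)
        sparse[r0::2, c0::2] = mosaic[r0::2, c0::2]  # keep only measured samples
        out[angle] = convolve(sparse, K, mode='mirror')
    return out
```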

2.2 Weighted bilinear interpolation method


Weighted bilinear interpolation for polarization images has been introduced in [29] and has
higher reconstruction accuracy when compared to the traditional bilinear interpolation
method. The interpolation method can be visualized as a convolution process using the four
kernels presented in Fig. 3(a) through Fig. 3(d). In this method, all four pixel orientations are
interpolated and the newly computed pixels are located at the center of the convolution kernel,
i.e. the location marked with a white dot in the convolution kernel presented in Fig. 3(e). The
coefficients in the kernels are determined by the distance from the location of the pixel being
interpolated. For example, pixels convolved with the coefficient A are closer to the estimated
pixel and receive higher weighting than the pixels convolved with the coefficient B.

Fig. 3. Interpolation kernels for weighted bilinear interpolation for: (a) 0° pixel, (b) 45° pixel,
(c) 90° pixel, (d) 135° pixel. (e) A 12-pixel neighborhood used for weighted bilinear
interpolation.

Equations (4) through (7) are used to compute the pixel values for the four different
polarization angles. Weighted bilinear interpolation uses a larger interpolation kernel than the
bilinear interpolation method. Furthermore, all intensity values at the reconstructed pixel are
interpolated using the same number of adjacent pixels of a given angle orientation, i.e. three
pixels per convolution kernel.
$I_0^p(1,1) = A \cdot I_0(3,3) + B \cdot I_0(1,3) + B \cdot I_0(3,1)$   (4)

$I_{45}^p(1,1) = A \cdot I_{45}(3,2) + B \cdot I_{45}(1,2) + B \cdot I_{45}(3,4)$   (5)

$I_{90}^p(1,1) = A \cdot I_{90}(2,2) + B \cdot I_{90}(2,4) + B \cdot I_{90}(4,2)$   (6)

$I_{135}^p(1,1) = A \cdot I_{135}(2,3) + B \cdot I_{135}(2,1) + B \cdot I_{135}(4,3)$   (7)
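
The following is a direct transcription of Eqs. (4) through (7), with A and B left as parameters; the normalization A + 2B = 1 used in the usage example is our assumption (the actual values follow from the distance-based kernels of Fig. 3, which are not reproduced numerically here).

```python
import numpy as np

def weighted_bilinear_estimates(I0, I45, I90, I135, A, B):
    """Direct transcription of Eqs. (4)-(7) for the pixel labeled (1,1) in
    Fig. 3(e). Indices follow the paper's 1-based convention; A and B are
    the distance-based kernel weights."""
    at = lambda img, r, c: img[r - 1, c - 1]  # 1-based -> 0-based indexing
    I0p = A * at(I0, 3, 3) + B * at(I0, 1, 3) + B * at(I0, 3, 1)          # Eq. (4)
    I45p = A * at(I45, 3, 2) + B * at(I45, 1, 2) + B * at(I45, 3, 4)      # Eq. (5)
    I90p = A * at(I90, 2, 2) + B * at(I90, 2, 4) + B * at(I90, 4, 2)      # Eq. (6)
    I135p = A * at(I135, 2, 3) + B * at(I135, 2, 1) + B * at(I135, 4, 3)  # Eq. (7)
    return I0p, I45p, I90p, I135p

# Example usage with assumed weights A = 0.5, B = 0.25 (so that A + 2B = 1):
# I0p, I45p, I90p, I135p = weighted_bilinear_estimates(I0, I45, I90, I135, 0.5, 0.25)
```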

2.3 Bicubic spline interpolation method


The bicubic spline interpolation method is a common algorithm used for color images, with
substantial performance improvements over bilinear interpolation; the cost is increased
computational complexity [31]. The bicubic spline interpolation method can be
easily applied to the four low resolution polarization images produced by a DoFP polarimeter.
This interpolation method is first applied in the horizontal direction, followed by interpolation
in the vertical direction. The bicubic spline interpolation method computes new values from a
low resolution image by fitting a third order polynomial, fi(x), between two neighboring pixels
(xi and xi+1) with polarization filters oriented at the same angle. The third order polynomial,
fi(x), is described by Eq. (8):

$f_i(x) = a_i + b_i(x - x_i) + c_i(x - x_i)^2 + d_i(x - x_i)^3$   (8)

where x is the spatial location of the pixel value being interpolated and is typically halfway
between xi and xi+1. Coefficients ai, bi, ci and di are defined as follows:
$a_i = I(i, j)$   (9)

$b_i = I(i+1, j) - I(i, j) - \frac{M_{i+1} + 2M_i}{6}$   (10)

$c_i = \frac{M_i}{2}$   (11)

$d_i = \frac{M_{i+1} - M_i}{6}$   (12)
In Eqs. (9) through (12), I(i,j) is the intensity value of a pixel at location (i,j) and Mi is the
second derivative of the function fi(x) at xi, i.e. Mi = fi''(xi). The coefficients described by Eqs. (9)
through (12) are derived by placing the following constraints on the interpolated curve fi(x):
(a) the function fi(x) should be continuous between pixels x1 and xn;
(b) the first derivative of the function fi(x) should be continuous between pixels x1 and xn;
(c) the second derivative of the function fi(x) should be continuous between pixels x1 and xn.
In order to solve for the coefficients presented in Eqs. (9) through (12), one additional
equation is derived from the requirement that the second derivative of the function fi(x) be
continuous, presented by Eq. (13):

$M_i + 4M_{i+1} + M_{i+2} = 6\left( I(i, j) - 2I(i+1, j) + I(i+2, j) \right)$   (13)
Equations (8) through (13) lead to a system of N−2 linear equations with N
unknowns, where N is the number of pixels in a row. In order to solve the system of linear
equations presented in Fig. 4, two more edge constraints must be introduced. In natural
splines, the edge constraints are set such that $M_1 = M_n = 0$. Once the system of equations is
solved, the values of ai, bi, ci and di are uniquely determined. Finally, the interpolated value
fi(x) is computed using Eq. (8).
With the edge constraints M1 = MN = 0, a system of N−2 linear equations needs to be
solved, which can incur a computational workload on the order of (N−2)². However, since the
coefficient matrix is tri-diagonal, the Crout or Doolittle method can be applied to solve the
system [32]. This method decreases the computational workload to 5N−14 multiplications
and divisions and 3N−9 additions and subtractions. This reduction of computational cycles
can enable real-time execution of the interpolation algorithm.

Fig. 4. A system of n linear equations is constructed for solving the values of Mi. The value of
the newly interpolated curve is computed using Eq. (8).
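
A minimal sketch of this procedure for one row of same-orientation pixels is shown below, assuming unit spacing; it solves the tri-diagonal system of Fig. 4 with the Thomas algorithm (the O(N) elimination referred to above) and evaluates Eq. (8) at the midpoints.

```python
import numpy as np

def natural_spline_midpoints(y):
    """Natural cubic spline interpolation halfway between samples.

    A minimal sketch assuming unit spacing between same-orientation pixels
    along one row of a DoFP image. The tri-diagonal system for the second
    derivatives M (natural edge constraints M_1 = M_N = 0) is solved with
    the Thomas algorithm, then Eq. (8) is evaluated at x = x_i + 1/2.
    """
    y = np.asarray(y, dtype=float)
    n = len(y)
    M = np.zeros(n)
    if n > 2:
        # Interior equations: M[i] + 4 M[i+1] + M[i+2] = 6 (y[i] - 2 y[i+1] + y[i+2])
        rhs = 6.0 * (y[:-2] - 2.0 * y[1:-1] + y[2:])
        diag = np.full(n - 2, 4.0)
        # Forward elimination (sub- and super-diagonal entries are all ones).
        for i in range(1, n - 2):
            w = 1.0 / diag[i - 1]
            diag[i] -= w
            rhs[i] -= w * rhs[i - 1]
        # Back substitution into the interior unknowns M[1..n-2].
        M[n - 2] = rhs[-1] / diag[-1]
        for i in range(n - 4, -1, -1):
            M[i + 1] = (rhs[i] - M[i + 2]) / diag[i]
    # Spline coefficients per Eqs. (9)-(12) with unit spacing.
    a = y[:-1]
    b = y[1:] - y[:-1] - (M[1:] + 2.0 * M[:-1]) / 6.0
    c = M[:-1] / 2.0
    d = (M[1:] - M[:-1]) / 6.0
    t = 0.5  # midpoint between neighboring same-orientation pixels
    return a + b * t + c * t**2 + d * t**3
```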

The drawbacks of the above implementation are the storage of the entire frame
information in memory and the computational complexity necessary to estimate new pixel
values. This limitation may constrain real-time execution of the algorithm. Thus, we
developed an approximated bicubic spline method which is more suitable for computational
systems with limited memory, such as FPGAs and DSPs [33]. This method computes the
bicubic spline interpolation on a sub-window of 10 by 10 pixels from the imaging array. The
edge constraints of the computational window remain the same as described above, i.e. M0 =
M10 = 0. This method can also be executed in parallel on multiple independent windows in the
imaging array, enabling real-time execution of the algorithm.
2.4 Bicubic interpolation method
The bicubic interpolation method fits a surface between four corner points using a
third order polynomial function [34]. In order to compute a bicubic interpolation, the intensity
values and the horizontal, vertical, and diagonal derivatives at the four corner points must be
specified. The interpolated surface, f(x,y), described by a third order polynomial, is presented
in Eq. (14):

$f(x, y) = \sum_{i=0}^{3} \sum_{j=0}^{3} a_{ij}\, x^i y^j$   (14)

There are 16 coefficients aij that must be determined in order to compute the function in
Eq. (14). Four of the coefficients are determined directly from the intensity values at the four
corners; eight of the coefficients are determined from the spatial derivatives in the horizontal
and vertical directions, and four of the coefficients are determined from the diagonal derivatives.
Figure 5(a) presents the 0° pixels in the neighborhood of 5-by-5 pixels, where the
interpolation area is in the lower right region, i.e. pixels (3,4), (4,3), (4,4), (4,5), and (5,4) are
estimated. The estimation of these pixel values is based on both the intensity values of the
known pixels and the three spatial derivatives in the 5 by 5 pixel region. A system of 16
equations is composed in order to solve the interpolation coefficients. Once the interpolation
coefficients are computed from these 16 equations, the 5 interpolated pixel values are
computed using Eq. (14). Since the other angle elements have the same pattern as the 0° pixels,
the same algorithm is applied to compute the other three pixel orientations with different linear
polarization filters.

Fig. 5. (a) Known 0° intensity value pixels, (b) three directional derivatives.
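
For illustration, a common way to obtain the 16 coefficients of Eq. (14) on a unit square is the matrix formulation below; it assumes the corner values and derivatives are already available. In practice the derivatives in Fig. 5(b) can be estimated with central differences between same-orientation pixels, which is our assumption here rather than a prescription from the paper.

```python
import numpy as np

# Basis matrix of the standard bicubic formulation: solving the 16 equations
# for corner values f, derivatives fx, fy and cross derivative fxy on a unit
# square reduces to the matrix product a = B . F . B^T.
B = np.array([[ 1.0,  0.0,  0.0,  0.0],
              [ 0.0,  0.0,  1.0,  0.0],
              [-3.0,  3.0, -2.0, -1.0],
              [ 2.0, -2.0,  1.0,  1.0]])

def bicubic_coeffs(f, fx, fy, fxy):
    """16 coefficients a[i, j] of Eq. (14) from 2x2 corner arrays of values
    and derivatives, each indexed [x corner, y corner] on the unit square."""
    F = np.block([[f, fy], [fx, fxy]])
    return B @ F @ B.T

def bicubic_eval(a, x, y):
    """Evaluate f(x, y) = sum_ij a[i, j] x^i y^j for 0 <= x, y <= 1."""
    xv = np.array([1.0, x, x**2, x**3])
    yv = np.array([1.0, y, y**2, y**3])
    return xv @ a @ yv
```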

3. Modulation transfer function evaluation


The modulation transfer function (MTF) is commonly used for evaluating the optical
characteristics and performance of an imaging system. The MTF measures the contrast of
sinusoids as a function of spatial frequency [35]. In order to evaluate the MTF of an imaging
system, a spatially varying sinusoidal pattern at various frequencies is used as a target image.
The output from the imaging system is recorded and the contrast for a given input frequency
is computed, i.e. the difference between the maximum and the minimum of the output
sinusoid. The MTF is computed as the ratio of the contrast of the output sinusoidal pattern
over the contrast of the input sinusoidal pattern for various frequencies.

In order to evaluate the MTF response of a typical CMOS or CCD grayscale or color
imaging sensor, a single target composed of spatially varying intensity patterns is imaged.
This is achieved by imaging a set of calibrated sinusoidal targets provided by Kodak [36]. The
MTF of an imaging sensor depends on the MTFs of the individual optical elements used in the
system, such as lenses, filters, pixel design and others.
Division of focal plane polarization imaging sensors record both intensity and polarization
information of the light wave at every frame. Hence, the standard procedure for evaluating the
MTF of a CMOS/CCD imaging sensor should be modified accordingly in order to encompass
changes in the polarization properties of an input target. Since polarization sensors record the
photo response at four different pixels with linear polarization filters offset by 45 degrees,
an input target can be defined such that any of the four pixel photo responses is spatially
varying at a particular spatial frequency. For example, an input target image can be defined as
spatially varying sinusoids where the responses of the four pixels are defined as follows:
$I_0(x, y) = \cos(2\pi f_x x + 2\pi f_y y) + 1$   (15)

$I_{45}(x, y) = 2\cos(2\pi f_x x + 2\pi f_y y) + 2$   (16)

$I_{90}(x, y) = \cos(2\pi f_x x + 2\pi f_y y) + 1$   (17)

$I_{135}(x, y) = 0$   (18)
In Eqs. (15) through (18), fx and fy are the spatial frequencies in the horizontal and
vertical directions in cycles per pixel. The spatial patterns for I0, I45, and I90 are modulated in
both the x and y directions, resulting in a target image that has variable intensity, i.e. a variable
S0 parameter, a constant S1 parameter and variable S2 parameter. Hence, the target image is
composed of spatially varying intensity, angle and degree of polarization patterns. The spatial
patterns for I0, I45, I90 and I135 are projected across the entire imaging array of a division of
focal plane polarimeter. Due to the pixelated polarization pattern at the focal plane, all four
spatial patterns described by Eqs. (15) through (18) are decimated in order to emulate an
image recorded by a DoFP imager. Using the decimated images as inputs to the different
interpolation algorithms described in Section 2, four high resolution images are generated.
The original and interpolated images are used to compute the MTF of the system. First,
the first Stokes parameter, S0, is computed from the data recorded by the four pixels. The
Fourier transform is computed on the intensity images from the original and interpolated data
set. Since the target image is a cosine function with horizontal and vertical frequency (fx,fy),
the Fourier image has two main frequency peaks at positions (fx,fy) and (-fx,-fy). Since the
accuracy of the interpolated images is a strong function of the input frequency of the target
image, the amplitude of the main frequency peaks of the S0 image will be a function of the
input frequency as well. The MTF of the imaging sensor is computed as the ratio of the
magnitude of the main frequency peak in the interpolated result, Minterpolated(fx,fy), over the
magnitude of the main frequency peak in the original image, i.e. Moriginal(fx,fy).
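
The following sketch ties these steps together for the S0 channel: it synthesizes the target of Eqs. (15) through (18), decimates it onto an assumed DoFP mosaic layout, interpolates with a user-supplied method (e.g. the bilinear sketch in Section 2.1) and takes the ratio of FFT peak magnitudes. The layout and helper names are again our illustrative assumptions.

```python
import numpy as np

# Mosaic layout assumed in the bilinear sketch of Section 2.1 (hypothetical):
OFFSETS = {0: (0, 0), 45: (0, 1), 135: (1, 0), 90: (1, 1)}

def s0_mtf_at_frequency(fx, fy, interpolate, size=512):
    """Estimate the S0 MTF at one spatial frequency (cycles per pixel).

    `interpolate` maps a DoFP mosaic to a dict of full-resolution images keyed
    by polarization angle. The target follows Eqs. (15) through (18).
    """
    yy, xx = np.mgrid[0:size, 0:size]
    cos = np.cos(2 * np.pi * (fx * xx + fy * yy))
    truth = {0: cos + 1, 45: 2 * cos + 2, 90: cos + 1, 135: np.zeros_like(cos)}

    # Decimate: each mosaic pixel sees only one polarization channel.
    mosaic = np.empty((size, size))
    for angle, (r0, c0) in OFFSETS.items():
        mosaic[r0::2, c0::2] = truth[angle][r0::2, c0::2]

    interp = interpolate(mosaic)
    s0_true = truth[0] + truth[90]      # S0 = I0 + I90
    s0_interp = interp[0] + interp[90]

    # Ratio of FFT peak magnitudes at the target frequency (fx, fy).
    u, v = int(round(fy * size)), int(round(fx * size))
    peak = lambda img: np.abs(np.fft.fft2(img - img.mean())[u, v])
    return peak(s0_interp) / peak(s0_true)
```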
Figure 6 presents the MTF of the intensity image for four different interpolation
algorithms. In Fig. 6(a) through Fig. 6(d), the vertical and horizontal frequencies, fx and fy, are
swept from 0 up to 0.5 cycles per pixel and the MTF response is computed at each frequency.
Figure 6(e) presents the MTF response for all four interpolation algorithms along equal
horizontal and vertical frequencies, i.e. fx = fy. In Fig. 6(e), the ideal MTF response of a full
resolution sensor is plotted for comparison. Due to the Nyquist sampling theorem, the
ideal MTF has a response of one up to 0.5 cycles per pixel and drops to zero for higher
frequencies. Because the scene is sampled with four different polarization filters in DoFP
polarimeters, the MTF for this class of sensors drops to zero for frequencies higher than 0.25
cycles per pixel.

Two important observations can be made regarding bilinear and weighted bilinear
interpolation. Specifically, the MTF decreases relatively quickly at low spatial
frequencies and goes to zero at fx = fy = 0.25 cycles per pixel. Bilinear interpolation does not
perform well when the interpolated signal contains high spatial frequencies; by the Nyquist
theorem, the signal cannot be reconstructed beyond a spatial frequency of 0.25 cycles per pixel.
Meanwhile, the two bicubic interpolation methods reconstruct high spatial frequency
information better than the bilinear methods. The bicubic spline interpolation
method preserves low frequency features best, while the bicubic method recovers the
highest frequency components better than any other method (up to a spatial frequency of 0.375
cycles per pixel).

Fig. 6. The MTF of S0 for interpolation methods (a) Bilinear, (b) Weighted bilinear, (c)
Bicubic, (d) Bicubic spline, (e) The MTF results of S0 along fx = fy.

4. Performance evaluation
In order to evaluate the accuracy of the different interpolation methods, we acquired a set of
four images using a grayscale CCD camera and a rotating linear polarization filter, i.e. a
division of time polarimeter. The four images are recorded at 0°, 45°, 90° and 135°
orientations of the linear polarization filter. All four images have a spatial resolution of 1024 by
1024 pixels and are co-registered with respect to each other due to the fixed position of
the imaging sensor. In other words, at each pixel location, the optical field has been filtered
and recorded with four different polarization filters offset by 45°. Using the intensity
information of the four images, true polarization information, such as angle and degree of
linear polarization, is computed at every pixel location.
Four low resolution (512 by 512 pixels) images for I0, I45, I90 and I135 are computed by
decimating the respective high resolution images. The starting pixel differs among the four low
resolution images, offset by one pixel to the right and/or down in the imaging array, in order
to resemble the sampling pattern of a typical division of focal plane polarization imaging
sensor [15]. For example, the first pixel in the I0 low resolution image corresponds to pixel
(1,1) in the high resolution image; the first pixel in the I45 low resolution image corresponds
to pixel (1,2) in the high resolution image; the first pixel in the I135 low resolution image
corresponds to pixel (2,1) in the high resolution image and the first pixel in the I90 low
resolution image corresponds to pixel (2,2) in the high resolution image. The different
interpolation methods compute a high resolution image for I0, I45, I90 and I135 from the
respective low resolution image, and the resulting images are compared against the
“true” high resolution images. Note that the accuracy of the recorded polarization information is
dependent on the accuracy of the rotational stage with the linear polarization filters and the
alignment of the linear polarization filter with respect to the imaging sensor. These optical
misalignments will introduce errors in the recorded polarization information by the four
original high resolution images. These errors are not critical for our set of experiments since
the interpolation algorithms are implemented on the decimated images computed from the
original high resolution images. The interpolated images are compared against the original
high resolution images in order to estimate the accuracy of the method. Hence, any optical
misalignment errors would equally affect the original high resolution images as well as the
interpolated images.
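
The decimation described above amounts to strided slicing of the four co-registered images; a short sketch (1-based pixel coordinates from the text mapped to 0-based array indices) is given below.

```python
import numpy as np

def decimate_to_lowres(I0, I45, I90, I135):
    """Sub-sample four co-registered 1024x1024 images into 512x512 channels,
    using the per-channel offsets described in the text."""
    return {
        0: I0[0::2, 0::2],      # first sample at high resolution pixel (1,1)
        45: I45[0::2, 1::2],    # first sample at high resolution pixel (1,2)
        135: I135[1::2, 0::2],  # first sample at high resolution pixel (2,1)
        90: I90[1::2, 1::2],    # first sample at high resolution pixel (2,2)
    }
```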
The linear polarizing filter used in this experiment is an Edmund Optics M71-939 with a
polarizing extinction ratio greater than 1,000. The linear polarization filter is mounted on a
manual rotation stage from Newport (RSP-2T). Figure 7 presents the high resolution intensity,
degree of linear polarization and angle of linear polarization images of a toy horse
figurine. The degree of linear polarization is presented as a grayscale image, where dark levels
represent low degrees of linear polarization and bright levels represent high degrees of
linear polarization. The angle of polarization is presented in false color
representation, where 0° of linear polarization is presented in red and 90° of linear
polarization is presented in blue. The images presented in Fig. 7 are a true measurement
of the linear polarization properties of the scene and will be used for comparison against the
interpolated images.

Fig. 7. High resolution image representation of the true (a) intensity, (b) DoLP and (c) AoP of
an imaged scene.

4.1 Image visual comparison


In Fig. 8, three columns of images are presented. In the first column the intensity image is
presented; in the second column the degree of linear polarization is presented and in the third
column the angle of linear polarization is presented.

Fig. 8. Comparison of the images of different interpolation methods on the intensity, DoLP and
AoP. (a) true polarization, (b) bilinear interpolation, (c) weighted bilinear interpolation, (d)
bicubic interpolation, (e) bicubic-spline interpolation with 10 x 10 window, (f) bicubic-spline
interpolation.

The first row of images, i.e. Fig. 8(a), presents the true polarization images; the second
row, i.e. Fig. 8(b), presents the results from bilinear interpolation; the third row, i.e. Fig. 8(c),
presents the results from weighted bilinear interpolation; the fourth row, i.e. Fig. 8(d),
presents the results from bicubic interpolation; and the fifth and sixth rows, i.e. Figs. 8(e) and
8(f), present the results from bicubic spline interpolation. From this set of images, it can be
observed that both the degree and angle of linear polarization images for bilinear and weighted
bilinear interpolation show stronger pixelation effects compared to the true polarization
images (see region B in the AoP image). Details of the horsehair are lost, and the boundary
between the horsehair and the background appears jagged due to the large error introduced by
the bilinear interpolation method (see region A in the AoP image). For the bicubic and bicubic
spline interpolation images, the details of the horsehair in both degree and angle of linear
polarization are recovered with similar results and closely resemble the true polarization
images. Also, smoothly varying regions of angle and degree of polarization are recovered
with visually higher accuracy than in the bilinear interpolated images.
4.2 MSE comparison
To quantitatively compare the different interpolation methods, we utilized the concept of
mean square error (MSE). The MSE is computed as shown by Eq. (19):
$\mathrm{MSE} = \frac{1}{MN} \sum_{1 \le i \le M} \sum_{1 \le j \le N} \left( I_{\mathrm{true}}(i, j) - I_{\mathrm{interpolated}}(i, j) \right)^2$   (19)

where Itrue(i,j) is the true value of the target pixel (i.e. Fig. 8(a)), Iinterpolated(i,j) is the
interpolated intensity value of the target pixel (i.e. Figs. 8(b) through 8(f)), M is the number of
rows in the image array and N is the number of columns in the image array. For the toy horse
test images, the MSE comparison results are shown in Table 1. Based on the evaluation
results presented in Table 1, the minimum interpolation error for all images, i.e. I0, I45, I90,
I135, S1, S2, intensity, degree and angle of linear polarization images, is obtained via the
bicubic spline interpolation method. Similar results are obtained for the approximated 10 by
10 bicubic spline method, which might be more suitable for real-time implementation. The
bilinear and weighted bilinear interpolation methods introduce larger errors in the computed
higher resolution images when compared to the three bicubic interpolation methods.
Table 1. The MSE Performance Comparison for the Toy Horse

            Bilinear    Weighted Bilinear   Bicubic     Bicubic-spline 10x10   Bicubic-spline
Intensity   4.736E+02   4.498E+03           4.497E+02   2.780E+02              2.637E+02
DoLP        2.476E-04   4.081E-04           2.252E-04   1.818E-04              1.734E-04
AoP         2.972E+01   3.665E+01           2.759E+01   2.723E+01              2.699E+01
I0          2.112E+02   1.258E+03           2.229E+02   1.422E+02              1.359E+02
I45         2.720E+02   1.421E+03           2.683E+02   1.791E+02              1.712E+02
I90         3.560E+02   1.519E+03           3.293E+02   2.532E+02              2.427E+02
I135        2.004E+02   1.163E+03           1.840E+02   1.351E+02              1.304E+02
S1          5.074E+02   8.457E+02           4.972E+02   3.566E+02              3.401E+02
S2          4.034E+02   5.814E+02           3.878E+02   2.707E+02              2.586E+02

*The lowest (best) value in each row is achieved by the full frame bicubic spline method.
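
For completeness, a small helper sketch for these comparisons is shown below; it uses the standard definitions of the Stokes parameters, DoLP and AoP from the polarimetry literature [17, 18] together with the MSE of Eq. (19).

```python
import numpy as np

def stokes(I0, I45, I90, I135):
    """Stokes parameters, DoLP and AoP (in degrees) from four co-registered
    polarization images, using the standard definitions [17, 18]."""
    S0 = I0 + I90
    S1 = I0 - I90
    S2 = I45 - I135
    dolp = np.sqrt(S1**2 + S2**2) / np.maximum(S0, 1e-12)  # guard against S0 = 0
    aop = 0.5 * np.degrees(np.arctan2(S2, S1))
    return S0, S1, S2, dolp, aop

def mse(true, interpolated):
    """Mean square error of Eq. (19)."""
    return np.mean((true - interpolated) ** 2)
```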

4.3 Results analysis


Based on both the visual comparison and the numerical error analysis, three main observations
can be made. First, bicubic spline interpolation shows the best results in the visual
comparison, and the numerical error comparison via MSE confirms this observation. The
algorithm's complexity, as well as the large window area used to compute the interpolated
images, allows for higher accuracy in the recovered images.
The added complexity introduces a higher computational workload compared to the two
bilinear interpolation methods. However, the bicubic algorithms are very modular and can be
executed in parallel if implemented in an FPGA, GPU or a multi-core processor.
Second, the approximated bicubic spline interpolation on a 10 by 10 pixel window
computes images with accuracy similar to that of the full frame bicubic spline method, which
uses entire rows of pixels. The differences in the MSE between the full frame bicubic spline
interpolation and the approximated bicubic spline interpolation are less than 4.8% for DoLP
and 0.9% for AoP. Therefore, the approximated bicubic spline interpolation could be an
efficient way to achieve high precision in a portable hardware implementation of a DoFP
sensor.
Third, the bilinear and weighted bilinear interpolation methods show relatively poor
performance. A notable gap exists between the linear-based and the cubic-based interpolation
methods in both the visual results and the MSE comparison. This is because the linear-based
interpolation methods use relatively simple interpolation kernels and involve fewer original
pixels when interpolating an unknown pixel, which prevents them from handling edges and
abrupt variations well. The weighted bilinear interpolation method produces images that look
smoother than those of the bilinear method but exhibit larger error in the MSE comparison.
The cubic-based algorithms outperform the linear-based methods regardless of the extinction
ratios of the polarization filters in division of focal plane polarimeters.
5. Conclusion
In this paper, we present five interpolation methods for four-angle division of focal plane
polarization imagers: bilinear, weighted bilinear, bicubic, bicubic spline and the approximated
bicubic spline method. A theoretical framework for all five interpolation methods is presented
in the paper. The interpolation methods are first analyzed using the MTF for their frequency
performance, followed by numerical analysis on a set of images obtained via a CCD camera
and a linear polarization filter rotated in front of the sensor. This evaluation method provides
true polarization images and constructs lower resolution images by appropriately decimating
the high resolution images.
The interpolation methods are applied to the low resolution images and compared
numerically against the true high resolution polarization images. According to the results,
although the bicubic based interpolation methods have higher computational complexity, they
show a higher MTF response and a wider usable frequency bandwidth than the bilinear based
interpolation methods, and also outperform them both visually and numerically. Furthermore,
the approximated bicubic spline interpolation on 10 by 10 pixel windows could largely ease a
portable hardware implementation while maintaining high performance.
Acknowledgments
The work described in this paper was supported by National Science Foundation grant
number OCE-1130897, and Air Force Office of Scientific Research grant number FA9550-
10-1-0121.
