
Passive depth estimation using chromatic aberration and a depth from defocus approach

Pauline Trouvé,1,* Frédéric Champagnat,1 Guy Le Besnerais,1 Jacques Sabater,2 Thierry Avignon,2 and Jérôme Idier3

1ONERA-The French Aerospace Lab, F-91761 Palaiseau, France
2Institut d’Optique Graduate School, 2 Avenue Augustin Fresnel, RD128, 91127 Palaiseau, France
3LUNAM Université, IRCCyN (UMR CNRS 6597), BP 92101, 1 rue de la Noë, 44321 Nantes Cedex 3, France
*Corresponding author: pauline.trouve@onera.fr

Received 19 June 2013; revised 30 August 2013; accepted 3 September 2013; posted 10 September 2013 (Doc. ID 192491); published 9 October 2013

In this paper, we propose a new method for passive depth estimation based on the combination of a camera with longitudinal chromatic aberration and an original depth from defocus (DFD) algorithm. Indeed, a chromatic lens, combined with an RGB sensor, produces three images with spectrally variable in-focus planes, which eases the task of depth extraction with DFD. We first propose an original DFD algorithm dedicated to color images having spectrally varying defocus blurs. Then we describe the design of a prototype chromatic camera so as to evaluate experimentally the effectiveness of the proposed approach for depth estimation. We provide comparisons with results of an active ranging sensor and real indoor/outdoor scene reconstructions. © 2013 Optical Society of America

OCIS codes: (110.0110) Imaging systems; (110.1758) Computational imaging; (100.0100) Image processing; (100.3190) Inverse problems.
http://dx.doi.org/10.1364/AO.52.007152

1. Introduction

Imaging devices with depth estimation ability, also referred to as RGB-D cameras, have a large field of applications, including robot guidance for civilian and military applications, man–machine interface for game consoles or smartphones, and 3D recording. These systems are usually based either on stereoscopy, using two cameras with different viewpoints, or on infrared (IR) active ranging systems, using laser pulses as in LIDAR scanners and time-of-flight (TOF) cameras, or projected light patterns such as the Kinect, developed by PrimeSense. Here we focus on a passive depth estimation method using a single “chromatic camera,” i.e., a camera with a lens having an accentuated longitudinal chromatic aberration.

We first describe an original chromatic depth from defocus (DFD) algorithm and show its efficiency on simulated data. Then we present the design and the realization of a real prototype of chromatic camera and combine it with our algorithm so as to produce estimated depth maps. We evaluate the depth estimation performance on real textured scenes and demonstrate an accuracy better than 10 cm in the range of 1–3 m for a camera with a focal length of 25 mm and an f-number of 4.

A. Depth from Defocus

DFD is a passive depth estimation method based on the relation between defocus blur and depth [1]. Indeed, as illustrated in Fig. 1, if a point source is placed out of the in-focus plane of an imaging system, its image, which corresponds to the point spread function (PSF), has a size given by the geometrical relation

$$\varepsilon = Ds\left(\frac{1}{f} - \frac{1}{d} - \frac{1}{s}\right), \qquad (1)$$

where f is the focal length, D is the lens diameter, and d and s are, respectively, the distances of the point source and of the sensor with respect to the lens. Knowing f and s, the depth d can be inferred from a local estimation of the PSF size, or in other words, of the local amount of blur.

Fig. 1. Illustration of the DFD principle.

Fig. 2. Theoretical blur variation ε given by Eq. (1) with respect to depth, for a conventional imaging system with a focal length of 25 mm, an f-number of 3, and a sensor pixel size of 5 μm, and with the in-focus plane put at 2 m. (a) Illustration of the depth estimation ambiguity. (b) Illustration of the dead zone in the DoF region.
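For illustration purposes only, the behavior plotted in Fig. 2 can be reproduced numerically from Eq. (1). The following Python sketch is ours, not code from the paper; it uses the parameters of the caption of Fig. 2 (magnitude of Eq. (1), expressed in pixels) and prints the approximate extent of the dead zone:

import numpy as np

f = 25e-3                        # focal length (m)
D = f / 3.0                      # lens diameter for an f-number of 3 (m)
pix = 5e-6                       # sensor pixel size (m)
s = 1.0 / (1.0 / f - 1.0 / 2.0)  # sensor position focusing the lens at 2 m

d = np.linspace(1.0, 5.0, 401)                           # candidate depths (m)
eps = D * s * np.abs(1.0 / f - 1.0 / d - 1.0 / s) / pix  # |Eq. (1)|, in pixels

dead = d[eps < 1.0]              # depths whose blur stays below one pixel
print("dead zone: %.2f m to %.2f m" % (dead[0], dead[-1]))

With these values the blur stays below one pixel roughly between 1.9 and 2.1 m, which is the dead zone visible in Fig. 2(b), and every blur size outside the in-focus plane is reached at two distinct depths, which is the ambiguity of Fig. 2(a).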
Compared to the traditional stereoscopic ranging system, a DFD approach requires only a single camera, and thus leads to a more compact and simple experimental setting. Moreover, compared to a recent single lens stereoscopic approach such as in [2], where the aperture is divided within three RGB color filters to create a parallax effect between the RGB channels, it does not reduce the signal-to-noise ratio (SNR) with an aperture division. Finally, compared to a light-field camera [3], DFD requires a simpler optical design and has no issue of angular resolution versus spatial resolution. Besides, one advantage of a 3D camera with DFD compared to active ranging systems is that it can be used in outdoor as well as in indoor situations. Indeed, an active ranging system, such as the Kinect or a TOF camera, projects an IR signal that can be disturbed by the IR illumination of the sun.

On the other hand, DFD has two fundamental drawbacks, illustrated in Fig. 2. First, as shown in Fig. 2(a), there is a depth ambiguity on both sides of the in-focus plane because two different depths lead to the same PSF size. In addition, Fig. 2(b) illustrates that near the in-focus plane region, no blur variation can be observed, due to a PSF size below one pixel. That region corresponds to the depth of field (DoF) and can be considered as a dead zone for depth estimation by DFD.

Like all passive depth estimation techniques, DFD requires the scene to be sufficiently textured. However, this scene is generally unknown. A first family of DFD techniques then uses several images of the same scene to estimate the relative local blur. In [1,4–6] the images are obtained with different lens settings, an approach that requires the scene to be static between the successive acquisitions. To avoid this constraint, a first solution is to separate the input beam. For example, a mirror beam splitter is used in [7]. In [8] a programmable aperture involving an LCoS is used to acquire two images with different aperture shapes at a high rate. In [9] a system is proposed that uses a spatial light modulator to produce two images successively, the first one with a constant blur from which the scene is estimated, and the second one with a depth-dependent defocus blur from which depth is estimated. However, all the latter solutions imply sophisticated optical systems, which increase the size of the camera or the design and manufacturing difficulty.

Another family of DFD techniques uses a single image. Acquisition is then easier, but the processing is more difficult due to the ambiguity between scene and blur. Various scene models have been used [10–15], and some of them are based on a supervised learning step [12,13]. Note that an important topic in single image DFD is the use of a coded aperture, whose shape has an influence on depth estimation accuracy and can thus be optimized [11–13]. However, in most coded aperture approaches developed in the literature [11–13], depth ambiguity remains, because there is still a unique in-focus plane. Note that a nonsymmetric aperture could avoid this ambiguity, but such apertures are not considered as the most accurate [12]. Besides, due to the DoF of the lens, there is still a dead zone near the in-focus plane. Another interesting approach, proposed in [16], consists of using a color circular spectral filter inside the aperture in order to capture simultaneously three RGB images with different aperture radii: one for the green channel and another for the two red and blue channels. As shown in Fig. 3(a), each depth is then characterized by two blur radii, which increases the accuracy of depth estimation [16]. However, once again, this approach is subject to ambiguity and a dead zone around the single in-focus plane.

To summarize, there is still a need for a single acquisition DFD system that could provide depth estimation with good accuracy over a large range, without dead zone or ambiguity. In this paper we describe such a system, based on the use of a lens with longitudinal chromatic aberration and an original chromatic DFD algorithm.


B. Depth from Defocus with a Lens with Chromatic Aberration

Longitudinal chromatic aberration leads to spectrally varying in-focus planes of the lens. Thus a chromatic lens combined with a color sensor produces, in a single snapshot, three images with varying defocus blur. The benefit that can be gained from this kind of system can be seen in Fig. 3(b), which shows the variation of the PSF size for the three RGB channels of a chromatic imaging system. First, for each depth there is a unique triplet of PSFs. Therefore we avoid the depth ambiguity that exists in single in-focus plane DFD. In addition, each channel has a different in-focus plane, so the dead zone of a channel can be compensated by using the blur variation in the other two channels. Hence, with such a system, depth estimation can potentially have good and homogeneous accuracy over a large range. Finally, as chromatic aberration is often corrected at the expense of additional lenses, such as diffractive lenses, removing the achromatic constraint makes the optical design simpler, as there are fewer elements to optimize, and can lead to a more compact system.

Fig. 3. Example of theoretical blur variation ε given by Eq. (1) with respect to depth for the RGB channels: (a) in the case of the chromatic aperture of [16], and (b) in the case of a chromatic lens. In (a) the focal length is 25 mm, and the f-number of the green channel is 4.5, while it is 3 for the red and blue channels. For the chromatic lens (b), the green channel focal length is 25 mm, and the RGB channels are respectively focused at 1.5, 2, and 3 m, with an f-number of 3. In both cases the sensor pixel size is 5 μm.

C. Related Works

Depth estimation from chromatic aberration has already been proposed by Garcia et al. in [17], but their algorithm for local blur identification is restricted to step edges; hence they do not obtain depth images and provide no experimental evaluation of the depth estimation performance.

On the other hand, chromatic aberration has been proposed for DoF extension, for grayscale scenes in [18,19] and for color scenes in [20–22]. In [18,21,22] a relative sharpness measure is used to locally identify the sharpest channel and to add the higher frequencies of the sharpest channel onto the blurred channels, a technique that is referred to as “high frequency transfer.” The relative sharpness measure can lead to a classification of the scene into three regions: near, medium range, and far. However, none of these references proposes a solution for depth imaging nor studies the performance of depth estimation. Note that chromatic aberration is also exploited in microscopy to estimate a 3D map of the surface of the observed object. For example, in [23], a chromatic lens and an RGB sensor are used for phase estimation with the transport of intensity equation. Another example is the use of chromatic aberration in spectral confocal microscopy [24].

Regarding algorithm issues, let us first note that classical multi-image DFD techniques [1,4–6] cannot be used for DFD with a chromatic lens because they do not account for the partial correlation between the color channels: this point is detailed in Section 2. In [16] the proposed DFD algorithm is dedicated to the processing of color images having spectrally varying defocus blur. However, the processing in [16] relies on the assumption that the same channel always has the lowest blur level, whatever the depth. Such an approach cannot be applied to the case of DFD with a chromatic lens, because the channel having the lowest blur level varies with depth, as shown in Fig. 3(b).

D. Contributions and Paper Organization

Our first contribution is an original depth estimation algorithm dedicated to chromatic systems, presented in Section 2. Section 3 presents validations of the proposed algorithm carried out on simulated images from a set of natural scenes. Our second contribution is the design and the realization of a chromatic lens, from which a prototype of RGB-D camera has been built. Using this prototype and the proposed chromatic DFD algorithm, we present an experimental evaluation of depth estimation accuracy in Section 4. (Some of the depth estimation results were presented in [25].) Depth maps obtained on real 3D scenes are shown and compared to the results of an active ranging system. Discussion and concluding remarks are given in Sections 5 and 6.

2. DFD Algorithm Dedicated to Chromatic Camera

A. Algorithm Overview

DFD can be related to blind deconvolution, as both the scene and the PSF are unknown. Moreover, as depth varies in the image, one has to deal with spatially varying blur, which means that identification has to be done locally with a very limited number of data. Such a severely underdetermined problem requires additional assumptions on the scene and on the PSF.


In [14] we have presented an unsupervised single image DFD method based on a very simple one-parameter Gaussian prior distribution of the scene, which assumes that the PSF belongs to some finite set of known candidate PSFs. Each candidate PSF accounts for the defocus associated with a particular depth d_k, according to a preliminary step of calibration or simulation of the imaging system at hand.

The flowchart of this algorithm is presented in Fig. 4. The image is decomposed into patches where depth is assumed constant. For each patch the problem reduces to the optimization of a cost function over two parameters:

$$(\hat{d}_k, \hat{\alpha}) = \arg\min_{k,\alpha} GL(d_k, \alpha). \qquad (2)$$

The first one is a regularization parameter α that accounts for the local SNR, and the second is the depth d_k of the patch. The cost function GL is a generalized likelihood obtained by marginalizing out the (unknown) scene. Note that, as shown in the flowchart, homogeneous patches are rejected because they are insensitive to depth.

Fig. 4. Generic flowchart of the DFD algorithm for one image patch.

We adopt the same general approach and flowchart for the case of the chromatic camera, but derive a new GL cost function that requires two nontrivial modifications with respect to [14]. First, a depth value is now related to a triplet of PSFs (one per RGB channel) instead of a unique one as in the single image DFD case of [14]. Second, a scene prior distribution has to be specified for the three channels, accounting for interchannel RGB correlations.

In this section we introduce these modifications incrementally. We first consider, in Section 2.C, the case of color images affected by a spectrally varying blur, but assuming a total correlation between the RGB recorded scenes. The resulting algorithm is referred to as grayscale chromatic DFD (GC-DFD) since it amounts to considering grayscale scenes. The scope for such an algorithm is, for instance, barcode and QR-code reading [19], but wider applicability requires the development of a truly RGB scene prior that models the partial correlation between the RGB channels. The related criterion, which leads to the color chromatic DFD (CC-DFD) algorithm, is derived in Section 2.D.

B. Observation Model

The relation between the scene and the recorded image is usually modeled as a convolution with the PSF. In the general case, defocus blur varies spatially in the image and this model is only valid on image patches, where the PSF is supposed to be constant. In the case of a lens having spectrally varying defocus blur combined with a color sensor, each RGB channel has a different PSF. Using the matrix formalism on image and scene patches, this case can be modeled as

$$Y = \begin{bmatrix} y_R \\ y_G \\ y_B \end{bmatrix} = \begin{bmatrix} H_R(d) & 0 & 0 \\ 0 & H_G(d) & 0 \\ 0 & 0 & H_B(d) \end{bmatrix} \begin{bmatrix} x_R \\ x_G \\ x_B \end{bmatrix} + N = H_C X + N, \qquad (3)$$

where (y_R, y_G, y_B) and (x_R, x_G, x_B) represent the concatenation of the pixels of the three RGB image patches and scene patches, respectively. N stands for the noise that affects the three channels. Each H_c(d) is a convolution matrix that depends on the PSF of the channel c and on the depth d. As we consider small patches, care has to be taken concerning boundary hypotheses: the usual periodic boundary assumption is not suited here. In the following, we use “valid” convolutions in which the support of x_c is enlarged with respect to the one of y_c according to the PSF support [26, Section 4.3.2]. Thus, if N is the length of the vector y_c, and M is the length of x_c, with M > N, each H_c(d) is an N × M convolution matrix.

Assuming that the noise is a zero-mean white Gaussian process with variance σ_n², the data likelihood is

$$p(Y \mid X, \sigma_n^2) = (2\pi)^{-3N/2} (\sigma_n^2)^{-3N/2} \exp\left(-\frac{\|Y - H_C X\|^2}{2\sigma_n^2}\right). \qquad (4)$$

C. Grayscale Scene

In the case of a grayscale scene acquired with a chromatic camera, each RGB image actually originates from the same scene, so x_R = x_G = x_B = x. Thus the observation model reduces to

$$Y = \begin{bmatrix} y_R \\ y_G \\ y_B \end{bmatrix} = \begin{bmatrix} H_R(d) \\ H_G(d) \\ H_B(d) \end{bmatrix} x + N = H_C^g x + N. \qquad (5)$$
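To make the patch formalism concrete, here is a small numpy sketch of the “valid” convolution model of Eqs. (3)–(5). It is our illustration, not code from the paper; the Gaussian PSFs, the common 7 × 7 PSF support (which keeps the same output length N for the three channels), the patch sizes, and the noise level are toy assumptions:

import numpy as np
from scipy.signal import convolve2d

def gaussian_psf(sigma, size=7):
    r = np.arange(size) - size // 2
    g = np.exp(-0.5 * (r[:, None]**2 + r[None, :]**2) / sigma**2)
    return g / g.sum()

def valid_conv_matrix(psf, scene_shape):
    """N x M matrix H such that H @ x.ravel() is the valid convolution of x."""
    M = scene_shape[0] * scene_shape[1]
    cols = []
    for j in range(M):
        e = np.zeros(M)
        e[j] = 1.0
        cols.append(convolve2d(e.reshape(scene_shape), psf, mode='valid').ravel())
    return np.stack(cols, axis=1)

# Grayscale chromatic model of Eq. (5): one scene x, three stacked PSFs.
rng = np.random.default_rng(0)
scene_shape = (26, 26)                       # scene support, M = 676 pixels
x = rng.standard_normal(scene_shape).ravel()
H_g = np.vstack([valid_conv_matrix(gaussian_psf(s), scene_shape)
                 for s in (0.8, 1.5, 2.5)])  # H_C^g of Eq. (5), size 3N x M
Y = H_g @ x + 0.05 * rng.standard_normal(H_g.shape[0])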


1. Scene Model

We propose to use, as in [14], a Gaussian prior written as

$$p(x, \sigma_x^2) \propto \exp\left(-\frac{\|Dx\|^2}{2\sigma_x^2}\right), \qquad (6)$$

with D a matrix composed of the concatenation of the convolution matrices corresponding to the vertical and horizontal first-order derivatives, i.e., the convolution matrices relative to the filters [−1 1] and [−1 1]^T. This model, which can be physically interpreted as a 1/f² decrease of the scene spectrum, has previously shown good results in single image blur identification [14,27]. Note that matrix D is singular, as D1 = 0. In such a case, the scene prior is said to be improper. Improper models are not uncommon in statistics, the most famous example being the Brownian motion; moreover, as discussed in the next section, depth inference can still be derived from such a prior [28–30].

2. Generalized Likelihood Derivation

In the general case of proper prior distributions, the probability of the observed data Y is obtained by integrating out, or marginalizing, the prior p(x, σ_x²):

$$p(Y \mid H_C^g(d), \sigma_n^2, \sigma_x^2) = \int p(Y \mid x, H_C^g(d), \sigma_n^2)\, p(x, \sigma_x^2)\, dx. \qquad (7)$$

Because p(x, σ_x²) is improper, the distribution p(Y | H_C^g(d), σ_n², σ_x²) defined by Eq. (7) is also improper. It is shown in [30] that an extended definition of the marginal probability can be derived, which yields the following marginal likelihood:

$$p(Y \mid H_C^g(d), \sigma_n^2, \alpha) \propto |Q(\alpha, d, \sigma_n^2)|^{1/2}\, e^{-\frac{1}{2} Y^T Q(\alpha, d, \sigma_n^2) Y}, \qquad (8)$$

with |A| corresponding to the product of the nonzero eigenvalues of a matrix A, and

$$Q(\alpha, d, \sigma_n^2) = \frac{1}{\sigma_n^2}\left[I_{N,N} - H_C^g(d)\big(H_C^g(d)^T H_C^g(d) + \alpha D^T D\big)^{-1} H_C^g(d)^T\right], \qquad (9)$$

where α = σ_n²/σ_x² is a regularization parameter that accounts for the various SNRs in different image patches, and I_{N,N} is the N × N identity matrix. In statistics, Q is referred to as the precision matrix, and when Q is regular, it corresponds to the inverse of the covariance matrix.

To reduce the number of parameters, the likelihood is maximized with respect to σ_n² in order to deal with a generalized likelihood (GL) that depends only on H_C^g(d) and α [26, Section 3.8.2]. This maximization leads to σ̂_n² = Y^T P(α, d) Y / (3N − n), where n is the number of zero eigenvalues of the matrix P(α, d) defined as

$$P(\alpha, d) = I_{N,N} - H_C^g(d)\big(H_C^g(d)^T H_C^g(d) + \alpha D^T D\big)^{-1} H_C^g(d)^T. \qquad (10)$$

As shown in [30], the number of zero eigenvalues in P is equal to that of D. As discussed in the previous section, D has a single zero eigenvalue, associated with the eigenvector 1. Thus n = 1, and reporting the expression of σ̂_n² into Eq. (8) we obtain

$$p(Y \mid H_C^g(d), \alpha) \propto |P(\alpha, d)|^{1/2}\, \big(Y^T P(\alpha, d) Y\big)^{-\frac{3N-1}{2}}. \qquad (11)$$

Maximizing Eq. (11) is equivalent to minimizing the GL function:

$$GL_G(d, \alpha) = \frac{Y^T P(\alpha, d) Y}{|P(\alpha, d)|^{1/(3N-1)}}. \qquad (12)$$

Note that this expression is formally identical to the one proposed by Wahba in the context of spline smoothing [28]. We borrow the term GL from this reference.
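For concreteness, the criterion of Eq. (12) can be sketched directly from its definition. This is our toy illustration, reusing valid_conv_matrix, H_g, and Y from the earlier sketch; the eigenvalue tolerance is an assumption, and a practical implementation would use the GSVD acceleration discussed in Section 2.E:

import numpy as np

def gl_grayscale(Y, H_g, D, alpha, tol=1e-10):
    """Generalized likelihood of Eqs. (10) and (12); |P| is the product of
    the nonzero eigenvalues, accumulated in the log domain."""
    A = H_g @ np.linalg.solve(H_g.T @ H_g + alpha * (D.T @ D), H_g.T)
    P = np.eye(H_g.shape[0]) - A                 # Eq. (10)
    w = np.linalg.eigvalsh(P)                    # P is symmetric
    log_det = np.sum(np.log(w[w > tol]))         # drop the n zero eigenvalues
    threeN = H_g.shape[0]                        # three stacked channels: 3N
    return (Y @ P @ Y) / np.exp(log_det / (threeN - 1))   # Eq. (12)

# D: stacked vertical and horizontal first-order derivatives (Section 2.C.1).
scene_shape = (26, 26)
D = np.vstack([valid_conv_matrix(np.array([[-1.0], [1.0]]), scene_shape),
               valid_conv_matrix(np.array([[-1.0, 1.0]]), scene_shape)])

# Depth selection as in Eq. (2): scan the candidate PSF triplets (through
# H_g) and a small grid of alpha values, and keep the minimizing pair.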


 
D. Color Scene

Actually, the RGB image originates from three distinct scenes, x_R, x_G, and x_B, which are partially correlated. We propose to use the luminance and the red–green and blue–yellow chrominance decomposition (x_l, x_c1, x_c2) defined as

$$\begin{bmatrix} x_R \\ x_G \\ x_B \end{bmatrix} = (T \otimes I_{M,M})\, X_{LC} = (T \otimes I_{M,M}) \begin{bmatrix} x_l \\ x_{c1} \\ x_{c2} \end{bmatrix}, \qquad (13)$$

where

$$T = \begin{bmatrix} \frac{1}{\sqrt{3}} & \frac{-1}{\sqrt{2}} & \frac{-1}{\sqrt{6}} \\ \frac{1}{\sqrt{3}} & \frac{1}{\sqrt{2}} & \frac{-1}{\sqrt{6}} \\ \frac{1}{\sqrt{3}} & 0 & \frac{2}{\sqrt{6}} \end{bmatrix}, \qquad (14)$$

and ⊗ stands for the Kronecker product. The observation model then writes

$$Y = H_C^c(d)\, X_{LC} + N, \qquad (15)$$

with

$$H_C^c(d) = H_C(d)\,(T \otimes I_{M,M}). \qquad (16)$$

1. Scene Model

According to [31], the three components of the luminance/chrominance decomposition can be assumed independent. We then use the Gaussian prior (6) on each luminance and chrominance component, which leads to

$$p(X_{LC}, \sigma_x^2, \mu) \propto \exp\left(-\frac{\|D_C X_{LC}\|^2}{2\sigma_x^2}\right) \quad \text{with} \quad D_C = \begin{bmatrix} \sqrt{\mu}\, D & 0 & 0 \\ 0 & D & 0 \\ 0 & 0 & D \end{bmatrix}. \qquad (17)$$

Akin to [31], our prior incorporates a parameter μ that models the ratio between the gradient variances of the luminance and the chrominance components.

2. Generalized Likelihood Derivation

Following the derivation of Section 2.C, maximum likelihood estimation of (d, α) is equivalent to minimizing the GL function

$$GL_C(d, \alpha) = \frac{Y^T P(\alpha, d) Y}{|P(\alpha, d)|^{1/(3N-3)}}, \qquad (18)$$

with

$$P(\alpha, d) = I_{N,N} - H_C^c(d)\big(H_C^c(d)^T H_C^c(d) + \alpha D_C^T D_C\big)^{-1} H_C^c(d)^T. \qquad (19)$$

In this case the number of zero eigenvalues of P(α, d) is n = 3, one for each luminance and chrominance component.

E. Algorithm Implementation

The two GC-DFD and CC-DFD algorithms have the same flowchart as the one illustrated in Fig. 4. For each RGB image patch, the proposed algorithm selects a depth among a discrete set of potential depths {d_1, …, d_K}. Each depth d_k is associated with a triplet of PSFs, i.e., {H_R(d_k), H_G(d_k), H_B(d_k)}, obtained by calibration or by simulation, for example using an optical design software. The proposed selection criterion for both algorithms is

$$(\hat{d}_k, \hat{\alpha}) = \arg\min_{k,\alpha} GL_X(d_k, \alpha), \qquad (20)$$

where X refers to either G or C for the GC-DFD and CC-DFD algorithms. To limit the computational burden, we use an efficient computation of the criterion using a generalized singular value decomposition (GSVD), as in [14]. We use square patches; however, other shapes or even adaptation of the patch shape to the spatial features of the scene could be considered, but would lead to a higher computational cost. Note that to reject homogeneous patches, we use a Canny edge detector with an adaptive threshold. A patch is processed only if the Canny filter detects an edge in it; otherwise the patch is rejected. Finally, for both GC-DFD and CC-DFD the only parameters that have to be set by the user are the patch size and, if needed, the patch overlapping. All other parameters are either fixed (such as the parameter μ of CC-DFD, which is discussed in Section 3.B) or automatically set by the algorithm.

F. Variation of Color Sensor Type

The proposed formalism allows us to deal with the case of a 3CCD sensor, a layered photodiode sensor, or a sensor with a color filter array (CFA) [32]. For a 3CCD sensor, the input beam is separated, according to wavelength, onto three different sensors of identical resolution. The layered photodiode sensor uses the wavelength-dependent absorption length of a semiconductor. Both systems provide three full-resolution RGB images that can be directly superposed. However, most color cameras use a unique sensor with a CFA. For instance, the classical Bayer CFA is made up of 2 × 2 basic cells with one red, one blue, and two green filters. In this case, the recorded image can be decomposed into three undersampled red, green, and blue images. Such undersampling is easily taken into account in our model by removing the lines corresponding to missing pixels from the convolution matrices H_R, H_G, and H_B in Eq. (3).

3. Simulation Study

To simulate a chromatic lens, a set of triplets of PSFs is built, each triplet being related to a depth. The PSFs are supposed to be Gaussian with a standard deviation defined by σ = ρε, where ε is given by Eq. (1) and ρ = 0.65. The focal lengths of the RGB channels are, respectively, [25.06, 25.00, 24.81] mm, the lens diameter D is 6.3 mm, the sensor position s is 25.22 mm, and the pixel size is 7.4 μm. The resulting in-focus planes of the RGB channels are respectively at 4, 2.9, and 1.5 m. The set of potential PSF triplets is built for depths varying from 1.2 to 3.8 m with a step of 5 cm.

In the following, we simulate images acquired by this ideal chromatic lens, using the computed PSF triplet set applied to a set of natural color scenes. For each scene, composed of three RGB components, we build three noisy blurred images using a triplet from the PSF triplet set associated with the true depth. We assume a fronto-parallel scene, i.e., a constant depth over the whole image support, and successively choose several depths ranging from 1.3 to 3.5 m. The acquisition noise is modeled by a zero-mean white Gaussian pseudo-random noise of standard deviation 0.05, given that the scenes have a normalized intensity. Depth is then estimated on image patches of size 20 × 20 pixels extracted from the three RGB images. In this simulation, without loss of genericity, a 3CCD sensor is considered.
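The simulation protocol above can be sketched as follows (illustrative Python, ours; the 11 × 11 truncation of the Gaussian PSFs and the small floor on σ are assumptions):

import numpy as np
from scipy.signal import convolve2d

rho, D_lens, s_pos, pix = 0.65, 6.3e-3, 25.22e-3, 7.4e-6
f_rgb = (25.06e-3, 25.00e-3, 24.81e-3)     # R, G, B focal lengths (m)

def eps_pixels(d, f):
    """Defocus blur size of Eq. (1) at depth d for focal length f, in pixels."""
    return D_lens * s_pos * abs(1.0 / f - 1.0 / d - 1.0 / s_pos) / pix

def gaussian_psf(sigma, size=11):
    r = np.arange(size) - size // 2
    g = np.exp(-0.5 * (r[:, None]**2 + r[None, :]**2) / max(sigma, 1e-3)**2)
    return g / g.sum()

def simulate_rgb(scene_rgb, d, rng, noise_std=0.05):
    """Blur each normalized scene channel with its channel PSF at the true
    depth d, then add white Gaussian noise (Section 3 protocol)."""
    out = []
    for x, f in zip(scene_rgb, f_rgb):
        y = convolve2d(x, gaussian_psf(rho * eps_pixels(d, f)), mode='valid')
        out.append(y + noise_std * rng.standard_normal(y.shape))
    return out

# e.g., y_r, y_g, y_b = simulate_rgb(channels, d=2.0, rng=np.random.default_rng(0))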


A. Comparison of GC-DFD and CC-DFD on Simulated Images

Both GC-DFD and CC-DFD algorithms are tested on two sets of scenes presented in Fig. 5. The first one is made of grayscale scenes, and the second one is made of color scenes. To simulate images from the chromatic lens and the grayscale scenes, we convert the color scene into a grayscale scene and use it to produce each channel image. For the CC-DFD algorithm the value of μ is fixed at 0.04, a value empirically determined in [31]. The depth estimation results for both algorithms and both sets are shown in Fig. 5.

Fig. 5. Comparison of CC-DFD and GC-DFD algorithms using the error metric and standard deviation (std) computed over a collection of (a) grayscale scenes and (b) color scenes.

Figure 5(a) shows that for the set of grayscale scenes either the GC-DFD or the CC-DFD algorithm leads to an error metric and a standard deviation under 10 cm, a value that is close to the depth sampling step of 5 cm used in the PSF set construction. Despite the fact that the scene is grayscale, the CC-DFD algorithm gives results very close to those of GC-DFD. Moreover, Fig. 5(b) shows that the GC-DFD algorithm on color scenes leads to unsatisfactory results, with a significant error metric at small depths and a large standard deviation. In contrast, the CC-DFD algorithm shows far better results for these color scenes. These simulation results illustrate that classical DFD techniques, which rely, as the GC-DFD algorithm does, on the assumption that the blurred images originate from the same scene, are prone to bad estimation results when used with images of a color scene from a chromatic lens. This justifies the need for an algorithm based on the modeling of the correlation between the RGB channels, such as the proposed CC-DFD.

B. Setting of the Parameter μ

The depth estimation tests conducted in Section 3.A on the CC-DFD algorithm with color scenes are reproduced for different values of μ. Table 1 shows the mean error metric and mean standard deviation for depths varying from 1.3 to 3.5 m for five values of μ. The value of μ that gives the lowest error metric and standard deviation is 0.04. This result is consistent with the value empirically chosen in [31]. This value is fixed for all the other experiments of the paper.

Table 1. Mean Error Metric (Err) and Standard Deviation (Std) of Depth Estimation Results over a Set of True Depths Varying from 1.3 to 3.5 m with a Step of 20 cm, for Various Values of μ

    μ                  1      0.4    0.04   0.004   0.0004
    Mean err (cm)      9.1    7.0    5.5    7.2     9.1
    Mean std (cm)      14     11     8.3    12      16

C. Chromatic Versus Achromatic Lens Depth Estimation Performance Study

In this section, we use the CC-DFD algorithm on simulated images for two imaging systems having either a chromatic or an achromatic lens. The chromatic lens is identical to the one simulated in Section 3.A. The achromatic lens has the same focal length and f-number, i.e., respectively, 25 mm and 4, but has a single in-focus plane that is put at 1.5 m in order to have neither depth estimation ambiguity nor a dead zone in the range of 2 to 3 m. The set of potential PSF triplets for each lens is built for depths varying from 2 to 4 m with a step of 5 cm. For each imaging system, we generate 120 image patches of size 20 × 20 pixels using scene patches extracted from the natural color scenes presented in Fig. 5(b). For each depth and each imaging system, the RGB images are obtained by convolution of the scene patches with the corresponding PSF triplet. White Gaussian noise is added to the result with a standard deviation of 0.05, given that the scenes have a normalized intensity. Figure 6 shows the error metric and the standard deviation of the depth estimation obtained with the CC-DFD algorithm. The error metrics of both imaging systems are low; however, the standard deviation with the chromatic lens is much lower than with the achromatic one. This study illustrates the gain in accuracy brought by the use of chromatic aberration for depth estimation.

Fig. 6. Comparison of the CC-DFD algorithm using either an achromatic lens (AL) or a chromatic lens (CL). Lens parameters for both systems are given in Section 3.C.


4. Experimental Validation on a Real Chromatic Camera

To experimentally demonstrate the efficiency of the proposed approach, we have built a prototype camera with a chromatic lens. The design of the lens and the chosen calibration procedure are first presented. Then we demonstrate the depth estimation capability of our chromatic approach using this prototype with the CC-DFD algorithm.

A. Lens Design

1. Settings

We have designed a customized optical system using the optical design software Zemax. We use a commercially available 5 megapixel CFA Stingray color sensor with a pixel size of 3.45 μm. The lens focal length is fixed at 25 mm, and the depth estimation range is chosen from 1 to 5 m, to conduct both indoor and outdoor tests. These settings lead to an optic with a diagonal field of view of around 25°.

2. Design Process

Given the lens settings, our aim was to build a lens with sufficient longitudinal chromatic aberration to have separated RGB DoFs in the required depth range, and thus avoid dead zones, as explained in Section 1.A and Fig. 3(b). On the other hand, all other aberrations should be reduced as much as possible, in order to maintain good image quality, in particular lateral chromatic aberration, because it causes misalignment of the RGB images.

The design of a lens with reduced field aberrations for such a field of view requires us to use at least a lens triplet. We choose to start with a classical 25 mm f/4 triplet made of two convergent lenses separated by a divergent lens, which is the triplet aperture stop, because this configuration naturally helps to reduce lateral chromatic aberration. In a classical optical design, the lens glasses would usually be chosen so that their constringence difference reduces longitudinal chromatic aberration, for instance, respectively, crown, flint, and crown glasses. Hence, the amount of longitudinal chromatic aberration can be tuned by changing the triplet glasses.

We have compared several choices of glass for the triplet, and we found that a focal shift of 200 μm, obtained with the glasses N-BK7/LLF1/N-BK7, was an amount of chromatic aberration that correctly separates the RGB DoFs in the depth range of 1 to 5 m. Indeed, as illustrated in Fig. 7, with this triplet, when the green channel is focused around 2.8 m, the in-focus planes of the blue and red channels are approximately at 1.9 and 4.5 m, and the camera always has at least one channel whose blur size is above one pixel in the range of 1 to 5 m. Note that the pixel unit in this figure is twice the detector pitch, in order to account for the Bayer matrix undersampling. Figure 8 shows the focal shift between the red and the blue wavelengths and the lateral chromatic aberration of the obtained triplet. As expected, the maximal lateral chromatic aberration is 1 μm, which is less than the sensor pixel size.

Fig. 7. Theoretical blur variation with respect to depth for the designed chromatic camera.

Fig. 8. Focal shift and lateral chromatic shift between the red and the blue wavelengths of the proposed chromatic lens, estimated by the optical design software Zemax.
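These design values can be cross-checked with the thin-lens law (a back-of-the-envelope sketch, ours; the per-channel focal shifts below are assumptions chosen to be consistent with the reported total shift of roughly 200 μm, not design data):

# Thin-lens check: with the sensor placed so that the green channel focuses
# at 2.8 m, focal shifts of about +85 um (red) and -105 um (blue) move the
# in-focus planes near the reported 4.5 m and 1.9 m.
f_g = 25.0e-3
s = 1.0 / (1.0 / f_g - 1.0 / 2.8)        # sensor position (m)
for name, f in (("R", f_g + 0.85e-4), ("G", f_g), ("B", f_g - 1.05e-4)):
    d = 1.0 / (1.0 / f - 1.0 / s)        # in-focus plane of the channel
    print(name, round(d, 2), "m")        # -> R 4.51 m, G 2.8 m, B 1.9 m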


3. Lens Realization

Figure 9 shows the obtained triplet and the color rays corresponding to different point sources at various field angles. The specifications of the triplet were sent to an optical lens manufacturer who built the lenses. We have mounted the obtained lens on a customized mechanical structure, and the optical system was fixed in front of the Stingray color sensor. A picture of the obtained camera is presented in Fig. 10.

Fig. 9. Chromatic lens scheme given by Zemax, according to the optical design of Section 4.A. Color rays correspond to point sources at different field angles.

Fig. 10. The customized chromatic lens (left) has been mounted on a Stingray color sensor to obtain a chromatic camera (right) of dimensions 4.5 × 4.5 × 8 cm.

B. Lens Calibration

The CC-DFD algorithm requires a set of potential defocus blur triplets. To build such a set, one could use several techniques, such as modeling the PSF with a pill-box function or with the Fourier optics formula [33], or using simulated PSFs generated by an optical design software with the theoretical lens settings. However, these methods do not take into account aberrations, misalignment, or mechanical construction errors. A solution is to calibrate the PSFs of the actual camera prototype using either a point source or a known high frequency pattern [12,34,35]. We use the method proposed in [35], with the black-and-white pattern shown in Fig. 11(a), for depths varying from 1 to 5 m with a step of 5 cm. As the triplet still suffers from field aberrations that imply a PSF variation with the field angle, we build an on-axis and off-axis potential PSF set. Figure 11 shows on-axis calibrated PSFs at three different depths for the three RGB channels. Note that the comparison of depth estimation results using different calibration techniques remains an interesting subject for further studies. Indeed, recent works propose to use only a few calibrated PSFs to estimate the PSF at any range [36,37]. Implementation of these methods could also be interesting here in order to limit the number of snapshots required for PSF calibration.

Fig. 11. (a) Random pattern of [35] used for the RGB PSF calibration. (b)–(d) Examples of calibrated PSFs at (b) 4.7 m, (c) 2.7 m, and (d) 2 m.

C. Depth Estimation Accuracy Evaluation

Acquisitions are made of color textured fronto-parallel plane scenes put at different distances from the camera. For each scene and at each distance, depth is estimated with the CC-DFD algorithm on image patches of size 21 × 21 inside a centered region of size 120 × 120, where the PSF is supposed to be constant, with a patch overlapping of 50%. Figure 12 shows depth estimation results for the four scenes presented in Fig. 12. We plot the mean and the standard deviation of the estimated depth results for each scene position with respect to the ground truth given by the Kinect.

Fig. 12. Evaluation of depth estimation accuracy on real fronto-parallel color scenes. Axes are in m.
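In terms of processing, each point plotted in Fig. 12 then reduces to a bias and a spread over the patchwise estimates, e.g. (trivial helper, ours):

import numpy as np

def bias_and_std(d_hat_patches, d_kinect):
    """Bias and standard deviation of the patchwise depth estimates for one
    target position, the Kinect measurement being taken as ground truth."""
    d = np.asarray(d_hat_patches, dtype=float)
    return d.mean() - d_kinect, d.std()

# e.g., bias, std = bias_and_std([2.45, 2.55, 2.50, 2.60], d_kinect=2.50)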


Indeed, the Kinect accuracy ranges from a few millimeters at 1 m to 3 cm at 3 m [38], which is lower than the sampling step of the depths in our potential set. For each scene, the bias is comparable to the PSF calibration step (5 cm), and the standard deviation varies from 3 to 10 cm between 1 and 3.5 m. The same experiment was repeated with the same targets printed in grayscale, leading to very similar results, presented in Fig. 13. These results demonstrate that the CC-DFD algorithm combined with a chromatic camera can provide accurate depth estimation and is robust to various scene textures.

Fig. 13. Evaluation of depth estimation accuracy on real fronto-parallel grayscale scenes. Axes are in m.

Note, however, that both the bias and the standard deviation degrade after 3.5 m. One can explain this degradation by looking at the theoretical PSF size variations of the prototype lens presented in Fig. 7. First, the variation of the PSF size reduces with depth, so it is not surprising that the accuracy of depth estimation reduces as well. Besides, the PSF of the red channel is below one pixel after 3.5 m, so this channel no longer gives depth information, which reduces the performance.

D. Depth Maps

1. Comparison with an Active Ranging System for Indoor Scenes

Figure 14 shows three examples of depth maps obtained from images acquired with our prototype of chromatic camera using the CC-DFD algorithm with 21 × 21 square patches and 50% overlapping. As they are indoor scenes, the depth maps obtained with the CC-DFD algorithm can be compared to the depth maps given by the Kinect. Homogeneous regions, where depth cannot be estimated, appear in black in the depth map.

Fig. 14. Results of the prototype chromatic camera on indoor scenes. (a) Raw image acquired with the chromatic lens. (b) Kinect’s depth map. (c) Raw depth map with CC-DFD with a patch size of 21 × 21 and 50% overlapping. The depth labels are in m. The black label corresponds to homogeneous regions rejected by the algorithm.

In the first two examples, the estimated depth maps are noisier than, but consistent with, the Kinect depth map. The third example is a case in which the Kinect depth map locally shows outliers. This is due to occultations and thin surfaces on which the Kinect pattern does not reflect properly. In contrast, the proposed chromatic DFD camera correctly estimates a depth map for such a complex scene.

2. Outdoor Situations

In contrast with the Kinect, which is sensitive to the IR illumination of the sun, the proposed chromatic camera can also be used for outdoor depth estimation. Figure 15 shows examples of raw depth maps obtained in various outdoor situations with the CC-DFD algorithm with 9 × 9 square patches and 50% overlapping. In the estimated depth maps, various objects can be distinguished, including thin ones like the grid in the second example. Note that this kind of repetitive object misleads correlation techniques, so it would be difficult to identify with a stereoscopic device.


5. Discussion

A. Robustness of Depth Estimation with Respect to Variation of the Scene Spectrum

In the proposed DFD approach, we take advantage of the interchannel chromatic aberration in order to estimate depth from a set of potential calibrated PSF triplets. However, the object spectrum can have an influence on the actual PSF size due to intrachannel chromatic aberration. One option to reduce the PSF variability within a color channel would be to use a sensor with narrowband color filters. However, this approach would increase the complexity of the device realization and reduce the SNR. Here we opt for an off-the-shelf RGB sensor having broadband color filters. Then we assume that the observed scenes have a sufficiently large spectrum to be consistent with PSFs calibrated with a black-and-white pattern, as described in Section 4.B. The experimental results obtained with various color or grayscale scenes show that this assumption is valid enough to give an accurate depth estimation. Finally, in the limit case of an object emitting only in the spectrum of one channel, two channels out of three will receive no signal, which will probably reduce the CC-DFD efficiency. However, in our opinion this case is unlikely to happen in natural scenes, and even so, it could be detected with a preprocessing step so as to use a single image DFD approach [14] on the valid channel.

Fig. 15. Results of the prototype chromatic camera on outdoor scenes. (Left) Raw image acquired with the chromatic lens. (Right) Raw depth map with CC-DFD with a patch size of 9 × 9 and 50% overlapping. The depth labels are in m. The black label corresponds to homogeneous regions rejected by the algorithm.

B. Image Restoration

Chromatic aberration induces an inhomogeneous resolution among the RGB channels. Thus, as illustrated in Fig. 16(a), the raw RGB image requires restoration. The issue with a chromatic lens is that the blur is both spatially and spectrally variant. As the CC-DFD algorithm provides a depth value, hence a triplet of PSFs, for each local RGB patch, one could consider restoration by local multichannel deconvolution, although such approaches would probably be computationally demanding. On the other hand, the prototype camera has been designed so that the blur is mainly caused by defocus and so that, for a large depth range, there is at least one sharp channel out of three (see Fig. 7). Thus we propose a simple and fast restoration technique that transfers the high frequencies of the sharpest channel to the others. Such an approach has been proposed in [21] and is also related to pansharpening in satellite imaging. Formally, the restored channel y_c is the sum of the original image and a weighted sum of the high frequencies of all channels:

$$y_{c,\text{out}}(p) = y_{c,\text{in}}(p) + a_{d_p,R}\, HP_R(p) + a_{d_p,G}\, HP_G(p) + a_{d_p,B}\, HP_B(p), \qquad (21)$$

where HP_c(p) is the output of a high-pass filter (difference of Gaussians) at pixel p applied to the image of channel c.

The weight of a channel c depends on the distance of the object with respect to the in-focus plane of that channel. If the object is at a depth d close to the depth d_{0,c} of the in-focus plane of a channel, its image in this channel is likely to have a high frequency content. The weights are then decreasing functions of |d_{0,c} − d|, as presented in Fig. 17. In practice, for each pixel, the weights are computed using the estimated depth provided by the CC-DFD algorithm. In homogeneous regions, the weights are set to zero. Note that the restoration equation (21) is formally similar to the one used in [21] but, in that reference, the weights are computed on the basis of a local contrast of each channel through a learned function. Figure 16 presents an example of the restoration of a real image acquired with the prototype chromatic camera. The resolution gain appears clearly on the details of the red channel presented in Figs. 16(c)–16(f). A quantitative evaluation of the restoration gain could be the subject of further studies.

Fig. 16. Example of restoration of a real image acquired with the prototype chromatic camera using the high frequency transfer approach of Eq. (21). (a) Raw image. (b) Image after restoration. (c) and (e) Zoomed detail patches from the raw red channel. (d) and (f) Restored details of the red channel.

Fig. 17. Weights for the high frequency transfer equation (21) as a function of depth.
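A possible implementation of this high frequency transfer is sketched below (ours; the clipped linear weight profile and its width are assumptions, the text above only specifying that the weights decrease with |d_{0,c} − d| as in Fig. 17):

import numpy as np
from scipy.ndimage import gaussian_filter

D0 = {"R": 4.5, "G": 2.8, "B": 1.9}     # in-focus planes of the prototype (m)

def highpass(img, s1=1.0, s2=2.0):
    """Difference-of-Gaussians high-pass filter HP_c of Eq. (21)."""
    return gaussian_filter(img, s1) - gaussian_filter(img, s2)

def restore_channel(channels, depth_map, c, width=1.5):
    """Eq. (21): add the depth-weighted high frequencies of all channels to
    channel c; channels maps "R"/"G"/"B" to arrays of the same shape."""
    out = channels[c].astype(float).copy()
    for k, img in channels.items():
        a = np.maximum(0.0, 1.0 - np.abs(D0[k] - depth_map) / width)  # a_{d,k}
        out += a * highpass(img)
    return out

In homogeneous regions, where the CC-DFD algorithm returns no depth, the weights would simply be set to zero, as stated above.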


C. Depth Map Regularization

In Fig. 14, we have presented raw depth maps obtained with the CC-DFD algorithm. These raw results present a few outliers and show no depth information on homogeneous regions, which are excluded from the processing. A regularization approach can be used to overcome these two issues. Here we present preliminary results of two regularization approaches. The first one, presented in Fig. 18(a), relies on a simple and fast median filter. The median filter has a size of three times the patch size. This filter helps to eliminate the outliers in the background but does not propagate depth information over homogeneous regions. The second regularization approach consists in optimizing a criterion based on a Markov random field model of the depth map:

$$E(d) = \sum_p GL_C(d_p) + \lambda \sum_{(p,q) \in N_p} \exp\left(-\frac{\|y_g(p) - y_g(q)\|^2}{2\sigma^2}\right)\big(1 - \delta_{d_p, d_q}\big), \qquad (22)$$

where N_p is a first-order neighborhood of the pixel p, d_p is the estimated depth value at pixel p, and y_g is the result of the conversion of the color image into grayscale. This energy favors depth jumps located on image edges. We minimize this criterion using a graph-cut algorithm [12,15,39]. Figure 18(b) presents the results obtained with λ = 1.1 and σ = 4 × 10⁻⁴. The parameters have been chosen to propagate information over homogeneous regions, and they lead to a result that is close to a depth segmentation.

Fig. 18. Result of depth map regularization. From top to bottom: acquired image, raw depth map, regularized depth map: (a) with a median filter of size 3 × 3, (b) after the minimization of Eq. (22). The depth labels are in m. The black label corresponds to homogeneous regions rejected by the algorithm.
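For illustration, the energy of Eq. (22) can also be minimized approximately with a few sweeps of iterated conditional modes (a toy substitute, ours, for the graph-cut solver used above; unary[k, i, j] stands for GL_C evaluated at pixel (i, j) for the k-th candidate depth):

import numpy as np

def regularize(unary, y_gray, lam=1.1, sigma=4e-4, n_sweeps=5):
    """Approximate minimization of Eq. (22) by ICM over a 4-neighborhood."""
    K, H, W = unary.shape
    labels = unary.argmin(axis=0)                  # data-only initialization
    for _ in range(n_sweeps):
        for i in range(H):
            for j in range(W):
                cost = unary[:, i, j].copy()
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < H and 0 <= nj < W:
                        w = np.exp(-(y_gray[i, j] - y_gray[ni, nj])**2
                                   / (2 * sigma**2))
                        cost += lam * w * (np.arange(K) != labels[ni, nj])
                labels[i, j] = cost.argmin()
    return labels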


6. Conclusion

In this paper we have proposed a new passive method for depth estimation based on the use of chromatic aberration and a DFD approach. We have presented an algorithm that estimates depth locally using a PSF triplet selection criterion derived from a maximum likelihood calculation. This algorithm is based on a model of the interchannel scene correlation, which allows the estimation of depth on color scene patches. Simulated and experimental tests have illustrated the effectiveness of the proposed algorithm and provide a demonstration of the concept of chromatic DFD.

There are several perspectives for this work. The calibration process could be improved so as to reduce the number of calibration images required. Regarding the processing, we are currently working on a parallel implementation of the CC-DFD algorithm. A more detailed study of image restoration and depth map regularization should also be conducted. Finally, the design of the chromatic lens could be optimized to improve the overall performance of the system using the codesign approach presented in [40].

The authors would like to thank F. Bernard and L. Jacubowiez for fruitful discussions.

References
1. A. Pentland, “A new sense for depth of field,” IEEE Trans. Pattern Anal. Mach. Intell. PAMI-9, 523–531 (1987).
2. Y. Bando, B. Y. Chen, and T. Nishita, “Extracting depth and matte using a color-filtered aperture,” ACM Trans. Graph. 27, 1–9 (2008).
3. A. Lumsdaine and T. Georgiev, “The focused plenoptic camera,” in Proceedings of IEEE International Conference on Computational Photography (IEEE, 2009), pp. 1–8.
4. M. Subbarao, “Parallel depth recovery by changing camera parameters,” in Proceedings of the IEEE International Conference on Computer Vision (IEEE, 1988), pp. 149–155.
5. C. Zhou and S. Nayar, “Coded aperture pairs for depth from defocus,” in Proceedings of the IEEE International Conference on Computer Vision (IEEE, 2009), pp. 325–332.
6. P. Favaro and S. Soatto, 3D Shape Estimation and Image Restoration (Springer, 2007).
7. P. Green, W. Sun, W. Matusik, and F. Durand, “Multi-aperture photography,” ACM Trans. Graph. 26, 1–7 (2007).
8. H. Nagahara, C. Zhou, C. T. Watanabe, H. Ishiguro, and S. Nayar, “Programmable aperture camera using LCoS,” in Proceedings of the IEEE European Conference on Computer Vision (IEEE, 2010), pp. 337–350.
9. S. Quirin and R. Piestun, “Depth estimation and image recovery using broadband, incoherent illumination with engineered point spread functions,” Appl. Opt. 52, A367–A376 (2013).
10. S. Zhuo and T. Sim, “On the recovery of depth from a single defocused image,” in International Conference on Computer Analysis of Images and Patterns (Springer, 2009), pp. 889–897.
11. A. Veeraraghavan, R. Raskar, A. Agrawal, A. Mohan, and J. Tumblin, “Dappled photography: mask enhanced cameras for heterodyned light fields and coded aperture refocusing,” ACM Trans. Graph. 26, 1–12 (2007).
12. A. Levin, R. Fergus, F. Durand, and W. T. Freeman, “Image and depth from a conventional camera with a coded aperture,” ACM Trans. Graph. 26, 1–9 (2007).
13. M. Martinello and P. Favaro, “Single image blind deconvolution with higher-order texture statistics,” Lect. Notes Comput. Sci. 7082, 124–151 (2011).
14. P. Trouvé, F. Champagnat, G. Le Besnerais, and J. Idier, “Single image local blur identification,” in Proceedings of IEEE Conference on Image Processing (IEEE, 2011), pp. 613–616.
15. M. Martinello, T. Bishop, and P. Favaro, “A Bayesian approach to shape from coded aperture,” in Proceedings of IEEE Conference on Image Processing (IEEE, 2010), pp. 3521–3524.
16. A. Chakrabarti and T. Zickler, “Depth and deblurring from a spectrally varying depth of field,” in Proceedings of IEEE European Conference on Computer Vision (IEEE, 2012), pp. 648–661.
17. J. Garcia, J. Sánchez, X. Orriols, and X. Binefa, “Chromatic aberration and depth extraction,” in Proceedings of IEEE International Conference on Pattern Recognition (IEEE, 2000), pp. 762–765.
18. M. Robinson and D. Stork, “Joint digital-optical design of imaging systems for grayscale objects,” Proc. SPIE 7100, 710011 (2008).
19. B. Milgrom, N. Konforti, M. Golub, and E. Marom, “Novel approach for extending the depth of field of Barcode decoders by using RGB channels of information,” Opt. Express 18, 17027–17039 (2010).
20. O. Cossairt and S. Nayar, “Spectral focal sweep: extended depth of field from chromatic aberrations,” in Proceedings of IEEE Conference on Computational Photography (IEEE, 2010), pp. 1–8.
21. F. Guichard, H. P. Nguyen, R. Tessières, M. Pyanet, I. Tarchouna, and F. C. Cao, “Extended depth-of-field using sharpness transport across color channels,” Proc. SPIE 7250, 72500N (2009).
22. J. Lim, J. Kang, and H. Ok, “Robust local restoration of space-variant blur image,” Proc. SPIE 6817, 68170S (2008).
23. L. Waller, S. S. Kou, C. J. R. Sheppard, and G. Barbastathis, “Phase from chromatic aberrations,” Opt. Express 18, 22817–22825 (2010).
24. S. Kebin, L. Peng, Y. Shizhuo, and L. Zhiwen, “Chromatic confocal microscopy using supercontinuum light,” Opt. Express 12, 2096–2101 (2004).
25. P. Trouvé, F. Champagnat, G. Le Besnerais, and J. Idier, “Chromatic depth from defocus, a theoretical and experimental study,” in Computational Optical Sensing and Imaging Conference, Imaging and Applied Optics Technical Papers (2012), paper CM3B.3.
26. J. Idier, Bayesian Approach to Inverse Problems (Wiley, 2008).
27. A. Levin, Y. Weiss, F. Durand, and W. Freeman, “Understanding and evaluating blind deconvolution algorithms,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2009), pp. 88–101.
28. G. Wahba, “A comparison of GCV and GML for choosing the smoothing parameter in the generalized spline smoothing problem,” Ann. Stat. 13, 1378–1402 (1985).
29. A. Neumaier, “Solving ill-conditioned and singular linear systems: a tutorial on regularization,” SIAM Rev. 40, 636–666 (1998).
30. F. Champagnat, “Inference with Gaussian improper distributions,” Internal Onera Report No. RT 5/14983 DTIM (2012).
31. L. Condat, “Color filter array design using random patterns with blue noise chromatic spectra,” Image Vis. Comput. 28, 1196–1202 (2010).
32. D. L. Gilblom, K. Sang, and P. Ventura, “Operation and performance of a color image sensor with layered photodiodes,” Proc. SPIE 5074, 318–331 (2003).
33. J. W. Goodman, Introduction to Fourier Optics (McGraw-Hill, 1996).
34. N. Joshi, R. Szeliski, and D. J. Kriegman, “PSF estimation using sharp edge prediction,” in Proceedings of IEEE International Conference on Computer Vision and Pattern Recognition (IEEE, 2008), pp. 1–8.
35. M. Delbracio, P. Musé, A. Almansa, and J. Morel, “The non-parametric sub-pixel local point spread function estimation is a well posed problem,” Int. J. Comput. Vis. 96, 175–194 (2012).
36. Y. Shih, B. Guenter, and N. Joshi, “Image enhancement using calibrated lens simulations,” in Proceedings of IEEE European Conference on Computer Vision (IEEE, 2012), p. 42.
37. H. Tang and K. N. Kutulakos, “What does an aberrated photo tell us about the lens and the scene?” in Proceedings of IEEE International Conference on Computational Photography (IEEE, 2013), p. 86.
38. J. Chow, K. Ang, D. Lichti, and W. Teskey, “Performance analysis of low cost triangulation-based 3D camera: Microsoft Kinect system,” Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 39, 175–180 (2012).
39. J. Z. Wang, J. Li, R. M. Gray, and G. Wiederhold, “Unsupervised multiresolution segmentation for images with low depth of field,” IEEE Trans. Pattern Anal. Mach. Intell. 23, 85–90 (2001).
40. P. Trouvé, F. Champagnat, G. Le Besnerais, G. Druart, and J. Idier, “Design of a chromatic 3D camera with an end-to-end performance model approach,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (IEEE, 2013), pp. 953–960.