Resolution-enhanced three-dimension/two-dimension convertible display based on integral imaging
Jae-Hyeung Park, Joohwan Kim, Yunhee Kim and Byoungho Lee
Optical Engineering and Quantum Electronics Laboratory
School of Electrical Engineering, Seoul National University, Kwanak-Gu Shinlim-Dong,
Seoul 151-744, Korea
byoungho@snu.ac.kr

http://oeqelab.snu.ac.kr

Abstract: A scheme for the resolution enhancement of a three-dimension/two-dimension convertible display based on integral imaging is proposed. The proposed method uses an additional lens array located between the conventional lens array and a collimating lens. With the additional lens array, the number of point light sources is increased far beyond the number of the elemental lenses constituting the lens array, and, consequently, the resolution of the generated 3D image is enhanced. The principle of the proposed method is described and verified experimentally.
©2005 Optical Society of America
OCIS codes: (110.2990) Image formation theory, (100.6890) Three-dimensional image
processing, (220.2740) Geometrical optics, optical design

References and links


1. G. Lippmann, “La photographie integrale,” Comptes-Rendus Acad. Sci. 146, 446-451 (1908).
2. F. Okano, H. Hoshino, J. Arai, and I. Yuyama, “Real-time pickup method for a three-dimensional image
based on integral photography,” Appl. Opt. 36, 1598-1603 (1997).
3. S. Manolache, A. Aggoun, M. McCormick, N. Davies, and S. Y. Kung, “Analytical model of a three-
dimensional integral image recording system that uses circular and hexagonal-based spherical surface
microlenses,” J. Opt. Soc. Am. A 18, 1814-1821 (2001).
4. T. Naemura, T. Yoshida, and H. Harashima, “3-D computer graphics based on integral photography,” Opt.
Express 8, 255-262 (2001), http://www.opticsexpress.org/abstract.cfm?URI=OPEX-8-4-255.
5. J.-H. Park, Y. Kim, J. Kim, S.-W. Min, and B. Lee, "Three-dimensional display scheme based on integral
imaging with three-dimensional information processing," Opt. Express 12, 6020-6032 (2004),
http://www.opticsexpress.org/abstract.cfm?URI=OPEX-12-24-6020.
6. S.-H. Shin and B. Javidi, “Speckle reduced three-dimensional volume holographic display using integral
imaging,” Appl. Opt. 41, 2644–2649 (2002).
7. S.-H. Hong, J.-S. Jang, and B. Javidi, "Three-dimensional volumetric object reconstruction using
computational integral imaging," Opt. Express 12, 483-491 (2004),
http://www.opticsexpress.org/abstract.cfm?URI=OPEX-12-3-483.
8. L. Erdmann and K. J. Gabriel, “High-resolution digital integral photography by use of a scanning microlens
array,” Appl. Opt. 40, 5592-5599 (2001).
9. H. Liao, M. Iwahara, N. Hata, and T. Dohi, "High-quality integral videography using a multiprojector,"
Opt. Express 12, 1067-1076 (2004), http://www.opticsexpress.org/abstract.cfm?URI=OPEX-12-6-1067
10. B. Lee, S. Jung, and J. -H. Park, “Viewing-angle-enhanced integral imaging using lens switching,” Opt.
Lett. 27, 818-820 (2002).
11. J. S. Jang, Y.-S. Oh, and B. Javidi, “Spatiotemporally multiplexed integral imaging projector for large-scale
high resolution three-dimensional display,” Opt. Express 12, 557-563 (2004),
http://www.opticsexpress.org/abstract.cfm?URI=OPEX-12-4-557
12. Y. Kim, J.-H. Park, S.-W. Min, S. Jung, H. Choi, and B. Lee, "Wide-viewing-angle integral three-
dimensional imaging system by curving a screen and a lens array," Appl. Opt. 44, 546-552 (2005).
13. J. S. Jang and B. Javidi, “Three dimensional synthetic aperture integral imaging,” Opt. Lett. 27, 1144-1146
(2002).

14. J. S. Jang and B. Javidi, “Improved viewing resolution of 3-D integral imaging with nonstationary micro-
optics,” Opt. Lett. 27, 324-326 (2002).
15. J. Hong, J.-H. Park, S. Jung and B. Lee, "A depth-enhanced integral imaging by use of optical path
control," Opt. Lett. 29, 1790-1792 (2004).
16. J. S. Jang and B. Javidi, “Large depth-of-focus time-multiplexed three-dimensional integral imaging by use
of lenslets with nonuniform focal lengths and aperture sizes,” Opt. Lett. 28, 1924-1926 (2003).
17. J.-H. Park, H.-R. Kim, Y. Kim, J. Kim, J. Hong, S.-D. Lee, and B. Lee, "Depth-enhanced three-
dimensional-two-dimensional convertible display based on modified integral imaging," Opt. Lett. 29, 2734-
2736 (2004).

1. Introduction
Integral imaging is a technique for displaying three-dimensional (3D) images using a lens array. The lens array spatially samples the light field from the objects, and this property is exploited in integral imaging to capture and display 3D images [1]. The full-color, full-parallax, real-time 3D images provided by integral imaging make it attractive [2-7], and considerable effort has been made to enhance its viewing parameters [8-16]. A novel integral imaging scheme with enhanced depth and 3D/two-dimension (2D) convertibility was recently reported [17]. In this approach, a lens array located behind the transmission-type spatial light modulator (SLM), together with a polymer-dispersed liquid crystal (PDLC) attached to the lens array, substantially enhances the expressible depth range and enables 3D/2D conversion. With these features, integral imaging gains a much wider range of applications and moves closer to commercialization, since only 3D/2D convertible display techniques are likely to penetrate the commercial market for 3D TV.
Our previous work [17] using a PDLC was the first demonstration of a 3D/2D convertible integral imaging technique (one that, unlike lenticular or parallax barrier techniques, provides both horizontal and vertical parallax). The remaining problem is resolution: depth enhancement and 3D/2D convertibility can be achieved, but at the expense of resolution. It is therefore necessary to develop a method that compensates for this resolution degradation while retaining the useful features, i.e., depth enhancement and 3D/2D convertibility.
In this paper, we report on a resolution enhancement scheme for a depth-enhanced 3D/2D convertible display based on integral imaging. The proposed method utilizes an additional lens array to generate a much larger number of point light sources, thus enhancing the resolution of the generated 3D images.

[Fig. 1. Schematic diagram of 3D/2D convertible integral imaging: (a) 2D mode, with the PDLC diffusive, showing the light source, collimating lens, PDLC, lens array, SLM, and the displayed 2D image; (b) 3D mode, with the PDLC transparent, showing the same components, the array of point light sources, and the integrated 3D image.]

2. 3D/2D convertible integral imaging and resolution limitation
Figure 1 shows the concept of 3D/2D convertible integral imaging. The system consists of a collimated incoherent light source, a PDLC, a lens array, and an SLM. Conversion between the 3D and 2D modes is achieved by controlling the diffusing state of the PDLC. In the 2D mode, the PDLC is set to be diffusive. The collimated light is scattered by the PDLC, and the scattered field is relayed to the SLM. Each pixel of the SLM is thus illuminated effectively from all directions, and consequently 2D images are observed on the SLM plane with full resolution and full viewing angle. In the 3D mode, the PDLC is set to be transparent. In this case, the collimated light rays pass through the PDLC without scattering and are imaged into an array of point light sources at the focal plane of the lens array. The light rays from each point light source are modulated by the SLM according to their propagation direction and are finally integrated into 3D images.
In the 3D mode, the resolution is limited by the number of point light sources. Figure 2 illustrates this point. As shown in Fig. 2, the light rays from each point light source are modulated by the SLM. Hence a unit comprising one elemental lens, a point light source, and the corresponding region on the SLM effectively acts as one pixel that emits light rays of different colors or intensities according to the observation direction, i.e., as one pixel of the generated 3D image. Since the role of the elemental lens and the SLM is merely the formation and modulation of the point light sources, the key parameter determining the resolution of the generated 3D images is the number of point light sources; that is, the resolution of the generated 3D images equals the number of point light sources. To enhance the resolution of the generated 3D images, it is therefore necessary to generate more point light sources with a smaller spacing between them. One straightforward way to achieve this is to use a lens array with more elemental lenses and a smaller elemental lens pitch. Such a lens array, however, would have to cover the desired size of the display panel (e.g., 12 inch or 15 inch) with uniform elemental lenses whose pitch is comparable to the pixel pitch of conventional 2D displays (e.g., 200 um) and whose f-number is small enough to preserve a reasonable viewing angle, which is difficult to realize at this time (a rough estimate of the required lens count is sketched below). This demanding requirement motivates an alternative resolution compensation method that uses lens arrays that can be fabricated more readily.
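
To put this requirement in perspective, the following back-of-the-envelope sketch (not from the paper; the 15-inch 4:3 panel dimensions are assumed round numbers) estimates how many uniform elemental lenses such a dense lens array would need.

```python
# Rough, illustrative estimate of the number of elemental lenses a
# "straightforward" dense lens array would require. The panel dimensions
# below are assumed values for a 15-inch 4:3 display, not figures from the paper.

panel_w_mm, panel_h_mm = 305.0, 229.0   # assumed 15-inch 4:3 panel size (mm)
lens_pitch_mm = 0.2                     # pitch comparable to a 2D pixel pitch (200 um)

n_x = int(panel_w_mm / lens_pitch_mm)
n_y = int(panel_h_mm / lens_pitch_mm)
print(f"{n_x} x {n_y} elemental lenses (~{n_x * n_y / 1e6:.1f} million)")
# -> roughly 1525 x 1145 uniform lenslets, i.e. about 1.7 million, each also
#    needing a small f-number: hard to fabricate, as noted in the text.
```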

[Fig. 2. Limitation in the resolution of the 3D/2D convertible integral imaging: each point light source on the point light source plane (at the focal distance f of the lens array), together with its elemental lens and the corresponding elemental image region on the SLM, acts as one pixel of the integrated 3D image.]

3. Configuration of the proposed method
The proposed method enhances the resolution of the generated 3D images by using an additional lens array. A schematic diagram of the proposed method is shown in Fig. 3. In addition to the lens array between the PDLC and the SLM, an additional lens array is inserted between the collimating lens and the PDLC. In the 3D operation mode, collimated light rays from the light source are imaged by the first (additional) lens array into a first array of point light sources at the focal plane of the first lens array. Each elemental lens of the second (original) lens array then images this first point light source array again, and consequently an array containing far more point light sources than the number of elemental lenses in the second lens array is generated at the image plane of the second lens array. Each of these second point light sources serves as one pixel of the generated 3D images, as explained above, and hence the resolution is dramatically enhanced compared with the previous configuration shown in Fig. 1. One significant difference between the proposed method and our previous method [17] is that each elemental lens produces multiple point light sources in the proposed method, whereas only one point light source per elemental lens is produced in the previous method. Therefore, with lens arrays that can be readily fabricated, a high density of point light sources is easily obtained over a large display area, resulting in high-resolution 3D images.

[Fig. 3. Schematic diagram of the proposed method, showing the light source, collimating lens, additional lens array, PDLC, lens array, SLM, the first and second point light source arrays, and the integrated image.]

In the proposed method, the number of elemental images should be the same as the number of second point light sources, and each elemental image should be generated with reference to the position of the corresponding point light source. Since many more point light sources are generated in the proposed method than in the previous method, the number of elemental images must be increased while the size of each one is decreased, which may result in a coarser modulation of the light rays from each point light source than in the previous method if the pixel pitch of the SLM is fixed; a rough estimate of this trade-off is sketched below.
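
As a rough illustration of this trade-off (a sketch using the 0.036 mm SLM pixel pitch and the 1 mm and 0.5 mm point light source spacings that appear later in the experimental section, not values given in this paragraph), the number of SLM pixels available per point light source can be compared for the two cases.

```python
# Illustrative estimate of directional sampling per point light source.
# The SLM pixel pitch and source spacings are taken from the experimental
# configuration described later in the paper; the comparison itself is only a sketch.

slm_pixel_pitch = 0.036   # mm
spacings = {"previous method": 1.0,   # mm, one source per 1 mm elemental lens
            "proposed method": 0.5}   # mm, doubled source density

for name, spacing in spacings.items():
    pixels = spacing / slm_pixel_pitch   # SLM pixels per elemental image, per direction
    print(f"{name}: ~{pixels:.0f} pixels per point light source (1-D)")
# -> roughly 28 vs. 14 pixels: each 3D-image pixel is addressed with about half
#    as many ray directions, the coarser modulation noted above.
```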
The number of second point light sources, the spacing between them, and the diverging
angle of the light rays from each point light source are determined by the specifications and
locations of the two lens arrays in the proposed method. The geometry for this configuration is
shown in Fig. 4. The first lens array with focal length f1 and elemental lens pitch ϕ1 forms the
first point light source array at its focal plane. Each elemental lens with pitch ϕ2 and focal
length f2 in the second lens array images these first point light sources at its image plane. Note

that not all of the first point light sources are imaged by each elemental lens in the second lens
array because the diverging angle of the light rays from each first point light source is limited.
The diverging angle of each first point light source, ψ1, is given by

\[ \psi_1 = 2\tan^{-1}\!\left(\frac{\varphi_1}{2 f_1}\right). \tag{1} \]

Since the position of the first point light source generated by the k-th elemental lens in the first lens array, y1,k, can be represented by y1,k = kϕ1, it illuminates those elemental lenses of the second lens array that satisfy

\[ k\varphi_1 - l\tan\!\left(\frac{\psi_1}{2}\right) < q\varphi_2 < k\varphi_1 + l\tan\!\left(\frac{\psi_1}{2}\right), \tag{2} \]

where l is the distance between the second lens array and the focal plane of the first lens array, and q is the index of the elemental lenses in the second lens array. Equivalently, the q-th elemental lens in the second lens array is illuminated by the first point light sources whose index k satisfies k_l ≤ k ≤ k_h, where k_l and k_h are given by

\[ k_l\varphi_1 + l\tan\!\left(\frac{\psi_1}{2}\right) = q\varphi_2 = k_h\varphi_1 - l\tan\!\left(\frac{\psi_1}{2}\right). \tag{3} \]
From Eqs. (1) and (3), k_l and k_h can also be represented by
\[ k_l = \frac{q\varphi_2}{\varphi_1} - \frac{l}{2 f_1}, \tag{4} \]

\[ k_h = \frac{q\varphi_2}{\varphi_1} + \frac{l}{2 f_1}. \tag{5} \]
Therefore the first point light sources satisfying k_l ≤ k ≤ k_h are imaged by the q-th elemental lens in the second lens array into second point light sources at

\[ y_{2,k,q} = q\varphi_2\left(1 + \frac{g}{l}\right) - \frac{k\varphi_1 g}{l}, \qquad k_l \le k \le k_h, \tag{6} \]
where g is the distance from the second lens array to its image plane, given by
\[ g = \frac{l f_2}{l - f_2}. \tag{7} \]
The diverging angle of each second point light source, ψ2, which mainly determines the viewing angle of the generated 3D images, is given by

\[ \psi_2 = 2\tan^{-1}\!\left(\frac{\varphi_2}{2 g}\right). \tag{8} \]
Since g is slightly larger than f2, as indicated by Eq. (7), the viewing angle of the proposed method can be somewhat narrower than that of the conventional method, in which the gap equals the focal length of the lens array. Note also that the diverging directions of the second point light sources are, in general, not parallel but vary
depending on the relative position between the optic axis of the corresponding elemental lens in the second lens array and each second point light source, as shown in Fig. 4 (shaded region). This non-parallel diverging direction is another factor that reduces the viewing angle of the proposed method.
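
To make the geometry concrete, the following minimal Python sketch (not part of the paper) evaluates Eqs. (4)-(8) for a single elemental lens of the second lens array; the parameter values in the example call are arbitrary placeholders chosen only for illustration.

```python
import math

def second_point_sources(phi1, f1, phi2, f2, l, q):
    """Second point light sources produced by the q-th elemental lens of the
    second lens array, following Eqs. (4)-(8).
    phi1, f1 : elemental lens pitch and focal length of the first (additional) lens array
    phi2, f2 : elemental lens pitch and focal length of the second lens array
    l        : distance from the first point light source plane to the second lens array
    q        : index of the elemental lens in the second lens array
    """
    g = l * f2 / (l - f2)                       # Eq. (7): distance to the image plane
    k_low = q * phi2 / phi1 - l / (2 * f1)      # Eq. (4)
    k_high = q * phi2 / phi1 + l / (2 * f1)     # Eq. (5)
    ks = list(range(math.ceil(k_low), math.floor(k_high) + 1))
    y2 = [q * phi2 * (1 + g / l) - k * phi1 * g / l for k in ks]   # Eq. (6)
    psi2_deg = math.degrees(2 * math.atan(phi2 / (2 * g)))         # Eq. (8)
    return g, ks, y2, psi2_deg

# Arbitrary placeholder values (mm), chosen only to illustrate the calculation:
g, ks, y2, psi2 = second_point_sources(phi1=8.0, f1=20.0, phi2=1.0, f2=3.0, l=60.0, q=0)
print(f"g = {g:.2f} mm, imaged first sources k = {ks}")
print(f"second source positions = {[round(y, 2) for y in y2]} mm, psi2 = {psi2:.1f} deg")
```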

Fig. 4. Geometry of the generation of a point light source array using two lens arrays

The final array of point light sources is the collection of all the point light sources imaged by all of the elemental lenses in the second lens array. Through careful design using Eqs. (4)-(8), a uniformly dense array of point light sources with parallel diverging directions can be obtained. For example, 2N×2N uniform point light sources can be generated using a second lens array consisting of N×N elemental lenses. Figure 5 shows an example of such a configuration. The parameters and locations of the two lens arrays are adjusted so that each elemental lens in the second lens array images three point light sources, at y2,k,q = (q+0.5)ϕ2, qϕ2, and (q−0.5)ϕ2. As a result, at every position (q±0.5)ϕ2, the point light sources imaged by two neighboring elemental lenses in the second lens array overlap, so a uniform point light source array that is twice as dense is obtained. Note that for the overlapping point light sources the diverging regions (shaded regions in Fig. 5) do not overlap but are patched together constructively, so the reduction in viewing angle due to non-parallel diverging directions is effectively alleviated.

Fig. 5. Example of the generation of a uniformly dense point light source array

4. Experimental results
An experiment was performed to verify the feasibility of the proposed method. A schematic diagram of the experimental setup is shown in Fig. 6. An incoherent white light source and a collimating lens were used to illuminate the system. The PDLC was fabricated using NOA65 polymer and an E7 liquid crystal and was operated as a transparent plate at 30 V (1 kHz rectangular wave) and as a diffusing plate at 0 V. The SLM used in the experiment had a 0.036 mm pixel pitch. In the experiment, we realized the example shown in Fig. 5. The first lens array consisted of 13×13 elemental lenses with a 10 mm elemental lens pitch and a 22 mm focal length. The second lens array consisted of 70×70 elemental lenses with a 1 mm elemental lens pitch and a 3.3 mm focal length. The second lens array was located at a distance of 66 mm from the first lens array. With these specifications, a uniform point light source array with doubled density was obtained; a numerical check of this configuration is sketched below.
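
As a consistency check (a sketch based on the parameters above, not an analysis given in the paper; it assumes the 66 mm separation corresponds to the distance l used in Eqs. (2)-(7), i.e., from the first point light source plane to the second lens array), Eqs. (4)-(8) reproduce the geometry of Fig. 6(b): each elemental lens of the second lens array images three first point light sources, the image plane lies about 3.5 mm behind the second lens array, and the second point light sources are spaced by roughly half of the 1 mm elemental lens pitch.

```python
import math

# Experimental parameters from the text (lengths in mm). The 66 mm separation is
# assumed here to equal l, the gap between the first point light source plane and
# the second lens array, which is the quantity used in Eqs. (2)-(7).
phi1, f1 = 10.0, 22.0    # first (additional) lens array: pitch, focal length
phi2, f2 = 1.0, 3.3      # second lens array: pitch, focal length
l = 66.0

g = l * f2 / (l - f2)                                    # Eq. (7)
k_low, k_high = -l / (2 * f1), l / (2 * f1)              # Eqs. (4)-(5) with q = 0
ks = list(range(math.ceil(k_low), math.floor(k_high) + 1))
y2 = [-k * phi1 * g / l for k in ks]                     # Eq. (6) with q = 0
psi2 = math.degrees(2 * math.atan(phi2 / (2 * g)))       # Eq. (8)

print(f"g = {g:.2f} mm")                            # ~3.47 mm, close to the 3.5 mm of Fig. 6(b)
print(f"first sources imaged per lens: {len(ks)}")  # 3, as in the Fig. 5 example
print(f"second source positions: {[round(y, 2) for y in y2]} mm")  # ~ [0.53, 0.0, -0.53]
print(f"psi2 = {psi2:.1f} deg")                     # ~16 deg, of the order of the measured angles
```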
Figure 7 shows the point light sources generated by the previous method [17] and the
proposed method. It can clearly be seen that the density of the point light sources is uniformly
doubled in the proposed method. The non-uniform brightness of the point light sources in the
proposed method is the result of the overlapping of the point light sources generated by
neighboring elemental lenses in the second lens array, as explained above. Note that this non-uniformity does not introduce any non-uniformity into the generated 3D images, because the overlapping point light sources emit light in different diverging directions, as shown in Fig. 5.

[Fig. 6. Schematic diagram of the experimental setup. (a) Previous method: PDLC; lens array with 1 mm elemental lens pitch, 3.3 mm focal length, and 70×70 lenses; SLM with 0.036 mm pixel pitch; marked distances 3.3 mm, 20 mm, and 30 mm to the integrated images. (b) Proposed method: additional lens array with 10 mm elemental lens pitch, 22 mm focal length, and 13×13 lenses; PDLC; lens array with 1 mm elemental lens pitch, 3.3 mm focal length, and 70×70 lenses; SLM with 0.036 mm pixel pitch; marked distances 66 mm, 1.8 mm, 3.5 mm, 20 mm, and 30 mm.]

[Fig. 7. Part of the generated point light sources: (a) previous method, (b) proposed method.]

[Fig. 8. 3D images observed from different directions (up, down, left, center, right): (a) previous method, (b) proposed method.]

Figure 8 shows the generated 3D images. Two letter images, ‘3’ and ‘D’, are displayed: at
20 mm in front of the second point light source plane for ‘3’, and 30 mm behind the second
point light source plane for ‘D’. The different perspectives according to the different
observing directions provide convincing support for the 3D nature of the generated images.
From Figs. 8(a) and (b), the resolution enhancement of the proposed method can be readily
seen. The viewing angle was determined to be about 18º (H) × 19º (V) in the previous method
and 18º (H) × 15º (V) in the proposed method. No significant reduction in viewing angle is observed in the experiment, since the gap between the second lens array and the second point light sources, which determines the viewing angle as indicated by Eq. (8), is nearly the same in the previous and proposed methods (3.3 mm for the previous method and 3.5 mm for the proposed method; see Fig. 6).
The resolution enhancement of the proposed method is revealed more clearly in Fig. 9.
Figure 9 shows the result of displaying flat images 30 mm in front of the SLM. The much
higher fidelity of the flat images of Fig. 9(c) compared with those of Fig. 9(b) provides
convincing evidence for the resolution enhancement of the proposed method.

[Fig. 9. Flat images displayed in the 3D mode: (a) original images, (b) previous method, (c) proposed method.]

5. Conclusion
A resolution enhancement method for a 3D/2D convertible display based on integral imaging has been described. By inserting an additional lens array between the collimating lens and the PDLC, a large number of point light sources is generated, far more than the number of elemental lenses in the lens array, and thus the resolution of the 3D images is substantially enhanced. The experimental results support the feasibility of the proposed method.
Acknowledgment
This work was supported by the Information Display R&D Center, one of the 21st Century
Frontier R&D Programs funded by the Ministry of Commerce, Industry and Energy of Korea.
