Published by the IEEE Computer Society 0272-1716/15/$31.00 © 2015 IEEE IEEE Computer Graphics and Applications 33
Figure 1. A synthetic, to-scale view of the immersive gigapixel Reality Deck facility displaying a geometric model of future New
York City (approximately 40 million triangles with hundreds of materials).
Figure 2. An interior view of the Reality Deck, showing the front wall and partial side walls. A user is exploring a fused
geographic-information-system dataset of downstate New York, which incorporates elevation information, road networks, and
detailed 3D geometry for select areas. Insets illustrate the detail that can be resolved by approaching the facility walls, with
building facades and architectural details easily distinguishable.
■ a view-centered perspective (head tracking),
■ panorama (surrounding the viewer with visuals),
■ body and physical representation (users' awareness of the interactive workspace's physical constraints),
■ intrusion (restricting the user's senses), and
■ the FOV (the display portion users can observe without rotating their head).

The CAVE's popularity is a testament to its ability to optimally combine these factors. Whereas maximizing VA allows for natural zooming through the data, maximizing immersion enables a wider range of panning by looking around.

CAVEs are routinely used for data visualization in a number of scenarios, but the concept arguably isn't keeping up with growing datasets. For example, the CORNEA (kvl.kaust.edu.sa/Pages/CORNEA.aspx), a high-end CAVE installation, offers approximately 100 Mpixels per eye. But modest examples of panoramic images from www.gigapan.com can exceed 1 Gpixel.

Another consideration for current immersive facilities is their ability to deliver quality visuals as a factor of the observer's position in the visualization space. As Figure 4a shows, the CORNEA provides VA of approximately 20/34, but only at a sweet spot in its center. For a single user, moving
Figure 4. Visual acuity (VA) heatmaps (floor plan views) for the CORNEA CAVE (Cave Automatic Virtual
Environment) and Reality Deck. (a) The CORNEA offers 20/34 VA in a 10 × 10 ft. workspace. However, the
maximum visual quality is achievable only at the facility’s center. (b) In contrast, the Reality Deck offers a large
workspace of 33 × 19 ft. Given its monitor configuration, 20/20 VA can be achieved approximately 31 in. from
the displays. This translates to a total workspace area with 20/20 VA of approximately 384 sq. ft.
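The 20/20 distance quoted in the caption can be sanity-checked with a short calculation, using the conventional approximation that 20/20 acuity corresponds to one arcminute per pixel and the S27A850D panel specs given later in the article (2,560 × 1,440 pixels, 27 in. diagonal). This is an illustrative sketch, not code from the article:

```python
import math

def ppi(width_px, height_px, diagonal_in):
    """Pixels per inch of a display, from resolution and diagonal."""
    return math.hypot(width_px, height_px) / diagonal_in

def distance_for_2020_va(ppi_value):
    """Distance (inches) at which one pixel subtends 1 arcminute,
    the conventional threshold for 20/20 visual acuity."""
    pixel_size_in = 1.0 / ppi_value
    one_arcmin_rad = math.pi / (180 * 60)
    # Small-angle approximation: distance = size / angle.
    return pixel_size_in / one_arcmin_rad

density = ppi(2560, 1440, 27)                   # ~108.8 PPI for the S27A850D
print(round(distance_for_2020_va(density), 1))  # ~31.6 in., matching the ~31 in. figure
```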
away from this sweet spot can occur naturally in a head-tracked 3D application. This movement results in the user approaching display surfaces that don't saturate his or her visual system, creating a suboptimal visual experience. Even worse, in multiuser scenarios, only one user can occupy the location with maximal VA, forcing lower-quality visuals on the other collaborators.

Conversely, powerwalls are targeted at visualizing high-resolution data. High-end systems, such as the Stallion powerwall (www.tacc.utexas.edu/resources/visualization), can reach 300 Mpixels of aggregate resolution. However, these facilities materialize as a single planar surface, resulting in two issues. First, they offer only a small degree of physical immersion to users (whose FOV is saturated by the large display, but without any panorama). Second, planar designs can be unwieldy to scale owing to spatial constraints and potential ergonomic issues of traversing large distances during visual exploration.

Hybrid designs contemporary with the Reality Deck, such as the CAVE2,4 bridge the gap between CAVEs and powerwalls. These designs place more emphasis on CAVE-like characteristics such as stereoscopic 3D.

"Immersifying" a Tiled Display Wall
As a higher-level goal, we felt that the Reality Deck should provide the high pixel density of tiled displays but also the full FOV of CAVEs. As a next-generation facility, it should offer a significant leap in aggregate resolution (with the gigapixel milestone being an obvious choice). Also, 20/20 VA should be available for most of a large visualization space, to promote physical navigation. Additionally, the facility had to fit in the available 40 × 30 ft. lab space and cost no more than US$1,000,000.

The Reality Deck defines an enclosed space surrounded by high-pixel-density displays. The displays' arrangement presented an open design problem. After considering different placements of display surfaces, we opted for a rectangular arrangement with four walls. This layout enables interesting usage scenarios, depending on the nature of the data and collaborative situation. It's different from most CAVE systems, which use a cube-like arrangement of displays. Our rectangular layout allows for more operational flexibility and maximizes the use of the available lab space.

The straightforward mapping lets the Reality Deck's four walls serve as configurable viewports into the virtual world, akin to a CAVE. Or, we can interpret the display as a single continuous planar viewport. This configuration is useful for large-scale 2D data (for example, geographical-information-system parallel coordinates). A visual discontinuity exists where the two extremes of the logical frustum meet on the physical display surface (typically in the middle of the rear wall). To naturally ameliorate this shortcoming, we employ our Infinite Canvas technology.5 This continuous-viewport display mapping addresses traditional
Figure 5. We evaluated the perceptual effects of the screen and bezel size for different LCD monitors in the Immersive Cabin. In
both images, the left wall simulates a large-format planar monitor (60 in. diagonal) and the right wall is the Samsung S27A850D
monitor with modified bezels (27 in. diagonal).
planar powerwalls' scalability problems. (We describe the Infinite Canvas in more detail later.)

In multiuser scenarios, we can subdivide the display space in various ways. For example, we can split it into four planar tiled displays, one per wall. Here, each of the two long walls offers approximately 471 Mpixels of resolution; each narrow side wall has 295 Mpixels. For additional immersion, we can create two CAVE-like systems, with three walls each, operating independently. Each of these CAVEs offers roughly 0.76 Gpixel resolution, depending on the border space that separates the CAVEs' viewports. This pixel count is several times larger than that offered by state-of-the-art CAVE systems, at the expense of bezels and anaglyph-only stereo (owing to the selected panel technology).

Building an Immersive Gigapixel Display
Hardware component selection is a critical aspect of constructing a large-scale visualization system. From the display system to the visualization cluster, every component affects the resulting system's overall capabilities. Here, we discuss several engineering choices we made when constructing the Reality Deck.

Display Selection and Customization
Arguably the most critical component of a visualization environment is the display subsystem. CAVEs are usually based on projectors, whereas tiled display arrays use both projectors and LCD monitors.

Projectors can create a nearly seamless image but require regular maintenance. For our five-sided, 10-projector CAVE-like Immersive Cabin,6 we always manually recalibrate the system after maintenance. This can be time-consuming for the Immersive Cabin and would be unmanageable for a system employing hundreds of projectors. Additionally, projectors produce significant heat and noise, and the need to accommodate the throw distance (as much as 1.5 ft. for short-throw lenses) affects the space requirements. Finally, projectors are generally much more expensive than LCDs to acquire and maintain. Because of these drawbacks, we eliminated projectors early during design.

We then considered different types of LCD monitors on the basis of these criteria:

■ Resolution target. On the basis of the available space and supergigapixel-resolution target, the monitors should provide approximately 100 pixels per inch (PPI).
■ Bezel size. Ideally, the bezel should be smaller than 5 mm; it shouldn't exceed 8 mm for a 23 in. display and 15 mm for a 30 in. display. We based these metrics on the bezel dimensions of commercially available monitors with potential structural modifications.
■ Display size. Larger monitors are preferable as long as they can deliver the required pixel density.
■ Image quality. The monitors should use high-quality panels with good contrast, backlight uniformity, and viewing angles.
■ Stereo support. Stereo is very desirable, but not at the cost of image quality or significantly reduced pixel density.

We evaluated a number of displays with panel sizes from 23 to 60 in.; IPS (in-plane switching), PVA (patterned vertical alignment), and PLS (plane-to-line switching) panel technologies; and
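The per-wall pixel counts quoted above follow directly from the monitor grid and the panel resolution. A quick arithmetic check (the 2,560 × 1,440 panel resolution and the 16/10 × 8 wall layout are stated elsewhere in the article):

```python
PANEL = 2560 * 1440          # pixels per S27A850D monitor

long_wall = 16 * 8 * PANEL   # front or back wall: 16 x 8 monitors
side_wall = 10 * 8 * PANEL   # left or right wall: 10 x 8 monitors
total     = 2 * long_wall + 2 * side_wall

print(long_wall / 1e6)       # ~471.9, i.e. the ~471 Mpixels per long wall
print(side_wall / 1e6)       # ~294.9, i.e. the ~295 Mpixels per side wall
print(total / 1e9)           # ~1.53 Gpixels across all 416 monitors
```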
Figure 6. The Reality Deck’s door. (a) A 3 × 5 grid of displays is mounted on an aluminum subframe and rotates around a spring-
loaded hinge. The mechanized door can be operated from within the Reality Deck. (b) When closed, the door blends into the rest
of the structure, in this case showing a futuristic view of New York City.
various bezel sizes. We also considered secondary factors, such as power consumption and weight, which affect the requirements for the mounting, power, and cooling infrastructure. We simulated the different tiled-display designs in our Immersive Cabin (see Figure 5) to analyze the bezels' perceptual effects on different visualization tasks (for example, medical and architectural visualization) in an informal user study. We took this study's results into account when selecting the displays.

We considered several offerings from various vendors, including ultranarrow-bezel LCDs and monitors with stereoscopic 3D support. We rejected these offerings, for reasons we explain later. At the time of construction, no commercially available display satisfied all five criteria; however, the Samsung S27A850D provided a good balance. It's a professional 27 in. PLS panel with 2,560 × 1,440 resolution and excellent contrast, color saturation, and viewing angles. Unlike CCFL (cold-cathode fluorescent lamp) monitors, the S27A850D uses LED backlighting, which significantly reduces the weight and power requirements (46 W for the S27A850D versus 134 W for a Dell U2711). Finally, although the original bezel is relatively large, we easily modified the monitors with a custom mount that reduced the bezel to 14 mm.

Given the available physical space, we arranged the monitors in four orthogonal surfaces. The front and back walls are 16 displays wide; the left and right span 10 displays. All four walls are eight displays tall, for a total of 416 tiled monitors.

The mass-produced nature of commercial monitors entails certain variation in image quality, even for products from the same batch. We evaluated every monitor before modification, looking primarily at image uniformity when the monitor displayed a full white and a full black signal. We also identified color reproduction issues. We took three photographs of each monitor from a fixed camera position, with a standardized set of camera and display settings. On the basis of the stacks of images, we replaced 98 of the first batch of 441 monitors. After testing the second batch, we selected the best 416 displays, and a set of spares, for modification and use.

Using lightweight monitors let us design custom mounting brackets and a simple aluminum frame so that individual monitors can be aligned with submillimeter accuracy (confirmed via laser leveling) and can be replaced by a single person. The plastic cover of the S27A850D houses the circuit board, user controls, and power supply. We moved these components to the rear bracket, creating a uniform black frame around the display with no visual distractions. The door to the facility is a section of the frame that's mounted on a hinge and holds a 3 × 5 grid of monitors. It is power operated but can be opened manually in an emergency. When closed, the door is completely flush with the rest of the wall and visually indistinguishable from the other displays (see Figure 6). The displays are offset from the floor by approximately 7 in. to allow for the installation of tracking cameras and sound speakers.

Figure 4b shows a heatmap of the VA in our facility, illustrating the 384 sq. ft. space in which the system achieves 20/20 or better VA.
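The LED-versus-CCFL power difference compounds across the full array. A rough estimate from the per-monitor figures quoted above (46 W for the S27A850D, 134 W for a Dell U2711); the aggregate numbers are my extrapolation, not from the article:

```python
MONITORS = 416

led_w  = 46    # S27A850D (LED backlight), watts
ccfl_w = 134   # Dell U2711 (CCFL backlight), watts

print(MONITORS * led_w / 1000)             # ~19.1 kW for the full LED array
print(MONITORS * (ccfl_w - led_w) / 1000)  # ~36.6 kW avoided vs. a CCFL design
```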
Figure 7. A synthetic view of the Reality Deck illustrating the use of the Infinite Canvas to explore the GLIMPSE/
MIPSGAL Milky Way survey. The user first looks at the front wall then rotates clockwise, while the system uses
the aggregate rotation s to calculate the new data mapping. After an aggregate rotation greater than 360
degrees (see the bottom-left image), the user sees a new view of the data. All wall images are captured directly
from the Reality Deck and show that the remapping discontinuity is always behind the user.
also use our research on acuity-driven gigapixel visualization (which we describe later) to further optimize data transfers, on the basis of the user's location in the Reality Deck.

Supporting Techniques
The Reality Deck is the only display to offer more than a billion pixels in a horizontally immersive setting and a large workspace that encourages physical navigation. The continuous display surface enables unique data exploration techniques, such as the Infinite Canvas. Meanwhile, the gigapixel resolution presents unique challenges in rendering different data types. Our acuity-driven gigapixel-visualization framework accounts for the distance between the user and the display when making streaming choices. We've also developed a novel frameless visualization to enable high-resolution volume rendering at interactive frame rates.

The Infinite Canvas
The Infinite Canvas is a navigation technique for physically exploring high-resolution data that extends arbitrarily along a single dimension. We first place the virtual camera in a closed, curvilinear surface with dimensions that approximate the aspect ratio of the facility's floor. We then map imagery onto this geometric surface. The Infinite Canvas interactively manipulates the mapping on the basis of the user's position and orientation in the Reality Deck. Through this manipulation, users see the illusion of a surface that extends arbitrarily along one dimension because the wraparound discontinuity is kept outside their FOV.

More formally, using the tracking system, we obtain the user's position p and 2D orientation d, as well as the aggregate rotation s with respect to a fixed reference vector. Additionally, we intersect the vector p – d with the canvas's geometry to obtain pback, the point directly behind the user on the canvas surface. We then calculate an angular offset sstart:

sstart = ⌊s/2π⌋ · 2π − atan2(pback,x − px, pback,y − py).

If the data to be visualized spans a [0, 1] range on the horizontal axis, we can extract a region a · [sstart/2π, sstart/2π + 1), where a is a scale factor equal to the ratio between the mapped geometry's circumference and the canvas's total width. (For example, if the data is three times larger than the immersive display, a = 1/3.) We then map this section onto the geometry, starting and ending at pback, placing the mapping discontinuity behind the user (see Figure 7). This mapping is computed
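The offset and region computation above can be sketched in a few lines. This is an illustrative reconstruction, not the production GLSL: reducing the aggregate rotation s to whole turns via a floor is my reading of the equation, and the function names, coordinate conventions, and worked numbers are assumptions.

```python
import math

def s_start(p, p_back, s):
    """Angular offset for the Infinite Canvas mapping.

    p      -- user position (x, y) on the floor plan
    p_back -- intersection of the backward view vector with the canvas
    s      -- aggregate (unwrapped) rotation in radians
    """
    turns = math.floor(s / (2 * math.pi)) * 2 * math.pi
    return turns - math.atan2(p_back[0] - p[0], p_back[1] - p[1])

def mapped_region(start, a):
    """Data interval a * [start/2pi, start/2pi + 1) shown on the canvas.

    a is the scale factor from the text (a = 1/3 when the data is three
    times wider than the display).
    """
    lo = a * (start / (2 * math.pi))
    return (lo, lo + a)

# After 1.5 clockwise turns, user at the center, canvas point behind at (0, -1):
off = s_start((0.0, 0.0), (0.0, -1.0), 3 * math.pi)
lo, hi = mapped_region(off, 1 / 3)  # a window one-third of the data wide
```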
Figure 8. Using our acuity-driven gigapixel-visualization technique to explore a panorama of Dubai from Gigapan. (a) The user
starts in the middle of the Reality Deck (the renderings on the left show her position). On the basis of her distance from the
displays, our system selects the appropriate level of detail (LOD) for each pixel. The current LOD is color mapped and visible in
the wide-angle photographs from the Reality Deck’s interior. (b) As the user approaches the front wall, the system adapts the
LOD, delivering the full detail of the data to the displays closest to the user. (c) The LOD updates as the user moves to the facility’s
front-left corner. The insets on the right show zoomed-in views of a small section of the front wall, illustrating the change in
resolution as the user changes position.
inside a GLSL shader at runtime and is used to sample an out-of-core texture. For additional implementation information, see "Visual Exploration of the Infinite Canvas."5

We found that the Infinite Canvas complements well the traditional approach to data exploration in the Reality Deck (walking along the displays and looking for points of interest). Indeed, without the Infinite Canvas, users eventually encounter the 2D frustum discontinuity at the facility's rear wall and then use a controller to further scroll the data along its major dimension. The Infinite Canvas does let users skip through sections of the data with an external input device. However, it allows them a greater degree of mental immersion by obviating the need for a context switch between manipulating the visualization virtually and navigating physically by walking.

Acuity-Driven Gigapixel Visualization
During the exploration of gigapixel imagery, data transfer overhead to the visualization nodes can be substantial. To texture a 1.5-Gpixel display at full detail using 256² tiles, more than 13 Gbytes per second of bandwidth are required for a 30 fps refresh rate. In the previous example, all the Reality Deck's displays were textured at full detail (assuming a one-to-one mapping between display resolution and texels). However, users can perceive this detail only when standing at an optimal distance from the displays (in our case, roughly 31 in.). At larger distances, adjacent pixels become increasingly indistinguishable as their projections in the user's visual system begin overlapping. So, it makes sense to select the image level of detail (LOD) dynamically, on the basis of the user's distance from each display.

Most commodity rendering pipelines determine the image LOD on the basis of the projection of a particular pixel into texture space. In our visualization framework,9 we scale this projection on the basis of the user's distance D′ from a particular display. Specifically, if we assume an original texture space projection A, we calculate

A′ = A · (D′/Dopt)

(the farther away the user, the larger the texture space projection, resulting in a higher selected level in the LOD pyramid). On the basis of this notion, we calculate an acuity-based LOD offset macu:

1/2^macu = Dopt/D′ ⇒ 2^macu = D′/Dopt ⇒ macu = log2(D′/Dopt).
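The offset derivation above translates almost directly into code. A minimal sketch of the per-pixel selection (function and variable names are mine; in the real system this happens in a shader, and the clamp at zero keeps the offset from sharpening beyond the pipeline's own choice):

```python
import math

def acuity_lod(m_mip, d_user, d_opt):
    """Acuity-driven LOD level for one display pixel.

    m_mip  -- LOD the rendering pipeline would normally select
    d_user -- user's distance from this display
    d_opt  -- optimal viewing distance (~31 in. in the Reality Deck)

    The offset m_acu = log2(d_user / d_opt) coarsens the LOD as the
    user moves away; it is clamped so a close-up user never requests
    more detail than m_mip already provides.
    """
    m_acu = math.log2(d_user / d_opt)
    return m_mip + max(0.0, m_acu)

# Twice the optimal distance -> one level coarser in the LOD pyramid.
print(acuity_lod(0.0, 62.0, 31.0))  # 1.0
```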
Figure 9. Exploring the Visible Human dataset using our frameless-visualization technique on a 471-Mpixel
section of the Reality Deck. Despite the massive rendering resolution, the system remains interactive,
achieving a stable 5 fps. Traditional volume rendering requires approximately 10 s to generate one frame,
resulting in a noninteractive experience. The inset shows the image quality (including high-quality filtering,
ambient occlusion, and global illumination) that can be achieved when the frameless visualization algorithm
converges.
We bias this offset for quality; the resulting LOD level is

m′ = mMIP + max(0, log2(D′/Dopt)),

where mMIP is the normal LOD selected by the rendering system.

Figure 8 illustrates the operation of our technique (which happens per pixel on the GPU, with minimal performance overhead). Because this acuity-driven texturing delivers less than maximum detail to the displays, we evaluated the potential impact on user performance. A user study had subjects search for different-sized targets in a gigapixel-resolution image. With our technique, users didn't have to make special accommodations in their distance from the displays when searching for targets.9 Finally, we benchmarked our technique on synthetic usage sessions stemming from real tracking data and observed a substantial 70 percent reduction in data transfer overhead.

Frameless Visualization
Most applications use the GPU to produce a set of pixels over a regular 2D grid, which is then displayed using double buffering. As the resolution increases, so does the latency associated with generating a single frame. As an alternative, researchers have proposed frameless rendering, which decouples pixel computation from the display system by approximating full-resolution images from sparse sample sets.10 This comes at the expense of temporal coherency.

We developed a system for reconstructing high-resolution, high-frame-rate images from a multitiered collection of samples that are rendered framelessly. Unlike traditional frameless-rendering techniques, we generate the lowest-latency samples locally. These initial points also guide the sampling of more complex and computationally expensive effects (for example, global illumination).

Our system is based on a distributed single-pass ray-casting volume renderer. At the level of a single GPU and its attached displays, our system uses ray casting to generate samples at subnative resolution. It asynchronously reconstructs these samples into a low-resolution, temporally upsampled preview image. This image is also used during the creation of the priority map that guides the remote rendering of higher-quality unstructured samples. The system can modify this map on the basis of the user's position in the Reality Deck.

An external source can provide a stream of rendered samples, which the system combines with the ones stored locally to progressively refine the final image. For Figure 9, a GPU cluster was the sample source. However, depending on the display configuration, the source can be an auxiliary GPU in the same system or even a cloud-based service. The resulting sample stream is broadcast across the network, and each visualization node stores samples belonging to the local viewport in a buffer that matches the display's full resolution.

During compositing, the system uses various
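The local-plus-remote sample flow can be illustrated with a toy compositor. This is only a schematic of the idea (per-pixel most-recent-sample selection in a full-resolution buffer); the real system performs temporal upsampling and priority-guided sampling on the GPU, and all names here are invented:

```python
class FramelessBuffer:
    """Toy per-pixel sample buffer for frameless compositing.

    Fast local preview samples and slower remote high-quality samples
    share one buffer sized to the display; compositing keeps, per pixel,
    the most recent sample, so the image refines progressively instead
    of waiting for a complete frame.
    """
    def __init__(self, width, height):
        self.size = (width, height)
        # pixel -> (timestamp, value); starts empty and fills in over time
        self.samples = {}

    def deposit(self, x, y, value, timestamp):
        old = self.samples.get((x, y))
        if old is None or timestamp > old[0]:
            self.samples[(x, y)] = (timestamp, value)

    def composite(self, background=0):
        w, h = self.size
        return [[self.samples.get((x, y), (None, background))[1]
                 for x in range(w)] for y in range(h)]

buf = FramelessBuffer(2, 1)
buf.deposit(0, 0, "local-preview", timestamp=1)
buf.deposit(0, 0, "remote-GI", timestamp=2)   # newer remote sample wins
print(buf.composite())  # [['remote-GI', 0]]
```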
various visualization tasks when the display spans upward of 1 Gpixel in aggregate.

The Reality Deck (and other facilities such as the CAVE2) differs fundamentally from prior visualization systems in that it defines not only a display surface but also a clearly outlined space large enough to promote user movement. The techniques we've described arose organically from patterns in this movement. The acuity-driven LOD scheme leverages a fundamental property of large displays (users might be looking at portions of the visualization beyond their VA threshold), which is amplified by the Reality Deck's sheer size.

The Infinite Canvas was inspired by users' tendency to scan visualizations by physically navigating from one end of the display to the other. Our techniques are motivated by relatively basic observations, but they can still enable seamless interaction and substantially improve system performance. Although physical navigation has been an active research area for several years, we feel that its evaluation in the context of specific applications in extremely large immersive systems presents fertile ground for innovation.

Acknowledgments
This research has been supported by US National Science Foundation grants CNS-0959979, IIP-1069147, IIS-1117132, and IIS-0916235. The datasets used to generate the article figures are courtesy of Gigapan, NASA, the US Geological Survey, New York City OpenData, the Jet Propulsion Lab, the University of Arizona, the US National Library of Medicine, and the Research Collaboratory for Structural Bioinformatics Protein Data Bank.

References
Reality 2013, Proc. SPIE, vol. 8469, 2013.
5. K. Petkov, C. Papadopoulos, and A.E. Kaufman, "Visual Exploration of the Infinite Canvas," Proc. 2013 IEEE Conf. Virtual Reality, 2013, pp. 11–14.
6. F. Qiu et al., "Enclosed Five-Wall Immersive Cabin," Proc. 4th Int'l Symp. Advances in Visual Computing, 2008, pp. 891–900.
7. S. Eilemann, "Equalizer: A Scalable Parallel Rendering Framework," IEEE Trans. Visualization and Computer Graphics, vol. 15, no. 3, 2009, pp. 436–452.
8. C. Hollemeersch et al., "Accelerating Virtual Texturing Using CUDA," GPU Pro: Advanced Rendering Techniques, W. Engel, ed., A K Peters, 2010, pp. 623–641.
9. C. Papadopoulos and A.E. Kaufman, "Acuity-Driven Gigapixel Visualization," IEEE Trans. Visualization and Computer Graphics, vol. 19, no. 12, 2013, pp. 2886–2895.
10. G. Bishop et al., "Frameless Rendering: Double Buffering Considered Harmful," Proc. Siggraph, 1994, pp. 175–176.
11. A. Kulik et al., "C1x6: A Stereoscopic Six-User Display for Co-located Collaboration in Shared Virtual Environments," ACM Trans. Graphics, vol. 30, no. 6, 2011, p. 188.
12. J. Marbach, "Image Blending and View Clustering for Multi-viewer Immersive Projection Environments," Proc. 2009 IEEE Conf. Virtual Reality, 2009, pp. 51–54.

Charilaos "Harris" Papadopoulos is pursuing a PhD in computer science at Stony Brook University and is a research assistant at the Center for Visual Computing. His research focuses on user interactions with room-sized, gigapixel-resolution displays, such as the Reality Deck. Papadopoulos received a BSc in informatics from the Athens University of Economics and Business. He's a student member of IEEE. Contact him at cpapadopoulo@cs.stonybrook.edu; www.papado.org.