
Augmented Reality, Art and Technology

01
April 2012

Introducing Added Worlds
Yolande Kolstee

The Technology Behind Augmented Reality
Pieter Jonker

Re-introducing Mosquitos
Maarten Lamers

How Did We Do It
Wim van Eck
AR[t]
Magazine about Augmented
Reality, art and technology
April 2012
ISSN number
2213-2481
Contact
The Augmented Reality Lab (AR Lab)
Royal Academy of Art, The Hague
(Koninklijke Academie van Beeldende Kunsten)
Prinsessegracht 4
2514 AN The Hague
The Netherlands
+31 (0)70 3154795
www.arlab.nl
info@arlab.nl
Editorial team
Yolande Kolstee, Hanna Schraffenberger,
Esmé Vahrmeijer (graphic design)
and Jouke Verlinden.
Contributors
Wim van Eck, Jeroen van Erp, Pieter Jonker,
Maarten Lamers, Stephan Lukosch, Ferenc Molnár
(photography) and Robert Prevel.
Cover
George, an augmented reality headset designed
by Niels Mulder during his Post Graduate Course
Industrial Design (KABK), 2008

COLOPHON
TABLE OF CONTENTS

07  Welcome to AR[t]
08  Introducing Added Worlds, Yolande Kolstee
12  Interview with Helen Papagiannis, Hanna Schraffenberger
20  The Technology Behind AR, Pieter Jonker
28  Re-introducing Mosquitos, Maarten Lamers
30  Lieven van Velthoven, the Racing Star, Hanna Schraffenberger
36  How Did We Do It, Wim van Eck
42  Pixels Want to Be Freed! Introducing Augmented Reality Enabling Hardware Technologies, Jouke Verlinden
60  Artist in Residence Portrait: Marina de Haas, Hanna Schraffenberger
66  A Magical Leverage: In Search of the Killer Application, Jeroen van Erp
70  The Positioning of Virtual Objects, Robert Prevel
72  Mediated Reality for Crime Scene Investigation, Stephan Lukosch
76  Die Walküre, Wim van Eck, AR Lab Student Project
WELCOME...
to the first issue of AR[t], the magazine about Augmented Reality, art and technology!
Starting with this issue, AR[t] is an aspiring magazine series for the emerging AR community inside and outside the Netherlands. The magazine is run by a small and dedicated team of researchers, artists and lecturers of the AR Lab (based at the Royal Academy of Art, The Hague), Delft University of Technology (TU Delft), Leiden University and several SMEs. In AR[t], we share our interest in Augmented Reality (AR), discuss its applications in the arts and provide insight into the underlying technology.

At the AR Lab, we aim to understand, develop, refine and improve the amalgamation of the physical world with the virtual. We do this through a project-based approach and with the help of research funding from RAAK-Pro. In the magazine series, we invite writers from the industry, interview artists working with Augmented Reality and discuss the latest technological developments.

It is our belief that AR and its associated technologies are important to the field of new media: media artists experiment with the intersection of the physical and the virtual and probe the limits of our sensory perception in order to create new experiences. Managers of cultural heritage are seeking new possibilities for worldwide access to their collections. Designers, developers, architects and urban planners are looking for new ways to better communicate
their designs to clients. Designers of games and
theme parks want to create immersive experi-
ences that integrate both the physical and the
virtual world. Marketing specialists are working
with new interactive forms of communication.
For all of them, AR can serve as a powerful tool
to realize their visions.
Media artists and designers who want to acquire an interesting position within the domain of new media have to gain knowledge about and experience with AR. This magazine series is intended to provide both theoretical knowledge and a guide towards first practical experiences with AR. Our special focus lies on the diversity of contributions. Consequently, everybody who wants to know more about AR should be able to find something of interest in this magazine, be they art and design students, students from technical backgrounds, engineers, developers, inventors, philosophers or readers who just happened to hear about AR and got curious.

We hope you enjoy the first issue and invite you to check out the website www.arlab.nl to learn more about Augmented Reality in the arts and the work of the AR Lab.
INTRODUCING ADDED WORLDS: AUGMENTED REALITY IS HERE!

Augmented Reality is a relatively recent computer-based technology that differs from the earlier known concept of Virtual Reality. Virtual Reality is a computer-based reality of which the actual, outer world is not directly a part, whereas Augmented Reality can be characterized by a combination of the real and the virtual.

Augmented Reality is part of the broader concept of Mixed Reality: environments that consist of the real and the virtual. To make these differences and relations clearer, industrial engineer Paul Milgram and Fumio Kishino introduced the Mixed Reality Continuum diagram in 1994, in which the real world is placed on the one end and the virtual world is placed on the other end.
By Yolande Kolstee
[Figure: the Virtuality Continuum by Paul Milgram and Fumio Kishino (1994): from the real environment, via Augmented Reality (AR) and Augmented Virtuality (AV), to the virtual environment; everything between the two extremes is Mixed Reality (MR).]
A SHORT OVERVIEW OF AR
We define Augmented Reality as integrating 3-D
virtual objects or scenes into a 3-D environment
in real time (cf. Azuma, 1997).
WHERE 3D VIRTUAL OBJECTS OR SCENES COME FROM

What is shown in the virtual world is created first. There are three ways of creating virtual objects:
1. By hand: using 3D computer graphics

Designers create 3D drawings of objects, game developers create 3D drawings of (human) figures, (urban) architects create 3D drawings of buildings and cities. This 3D modeling by (product) designers, architects, and visual artists is done using specific software. Numerous software programs have been developed. While some software packages can be downloaded for free, others are pretty expensive. Well-known examples are Maya, Cinema 4D, 3ds Max, Blender, SketchUp, Rhinoceros, SolidWorks, Revit, ZBrush, AutoCAD and Autodesk. By now at least 170 different software programs are available.
2. By computer-controlled imaging equipment / 3D scanners

We can distinguish different types of three-dimensional scanners: the ones used in the bio-medical world and the ones used for other purposes, although there is some overlap. Inspecting a piece of medieval art or inspecting a living human being is different but somehow also alike. In recent years we have seen a vigorous expansion of the use of image-producing bio-medical equipment. We owe these developments to the work of engineer Sir Godfrey Hounsfield and physicist Allan Cormack, among others, who were jointly awarded the Nobel Prize in 1979 for their pioneering work on X-ray computed tomography (CT). Another couple of Nobel Prize winners are Paul C. Lauterbur and Peter Mansfield, who won the prize in 2003 for their discoveries concerning magnetic resonance imaging (MRI). Although their original goals were different, in the field of Augmented Reality one might use the 3D virtual models that are produced by such systems. However, they have to be processed prior to use in AR because they might be too heavy for real-time display. A 3D laser scanner is a device that analyses a real-world object or environment to collect data on its shape and its appearance (i.e. colour). The collected data can then be used to construct digital, three-dimensional models. These scanners are sometimes called 3D digitizers. The difference is that the above medical scanners look inside to create a 3D model, while the laser scanners create a virtual image from the reflection of the outside of an object.
3. Photo and/or film images

It is possible to use a (moving) 2D image like a picture as a skin on a virtual 3D model. In this way the 2D image gives a three-dimensional impression.
INTEGRATING 3-D VIRTUAL OBJECTS IN THE REAL WORLD IN REAL TIME

There are different ways of integrating the virtual objects or scenes into the real world. For all three we need a means of display. This might be a screen or monitor, small screens in AR glasses, or an object on which the 3D images are projected. We distinguish three types of (visual) Augmented Reality:
Display type I: screen-based

AR on a monitor, for example on a flatscreen or on a smartphone (using e.g. Layar). With this technology we see the real world on a computer screen, monitor, smartphone or tablet, with the virtual object added to it at the same time. In that way, we can, for example, add information to a book, by looking at the book and the screen at the same time.

Display type II: AR glasses (off-screen)

A far more sophisticated but not yet consumer-friendly method uses AR glasses or a head-mounted display (HMD), also called a head-up display. With this device the extra information is mixed with one's own perception of the world. The virtual images appear in the air, in the real world, around you, and are not projected on a screen. In type II there are two ways of mixing the real world with the virtual world:

Video see-through: a camera captures the real world. The virtual images are mixed with the captures (video images) of the real world and this mix creates an Augmented Reality.
Optical see-through: the real world is perceived directly with one's own eyes in real time. Via small translucent mirrors in goggles, virtual images are displayed on top of the perceived reality.
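To make the video see-through mixing concrete, here is a minimal sketch in Python with OpenCV. It captures camera frames and blends a pre-rendered virtual layer on top of them per pixel; the RGBA file name is a hypothetical stand-in for the output of a 3D engine, and no tracking is done, so the overlay simply stays put.

```python
import cv2
import numpy as np

# Hypothetical virtual layer: an RGBA image rendered elsewhere (e.g. by a 3D engine).
virtual = cv2.imread("virtual_layer.png", cv2.IMREAD_UNCHANGED)  # 4 channels: B, G, R, alpha

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    overlay = cv2.resize(virtual, (frame.shape[1], frame.shape[0]))
    alpha = overlay[:, :, 3:4].astype(np.float32) / 255.0
    # Per-pixel mix: the virtual object where alpha is 1, the camera image where alpha is 0.
    mixed = (alpha * overlay[:, :, :3] + (1.0 - alpha) * frame).astype(np.uint8)
    cv2.imshow("video see-through", mixed)
    if cv2.waitKey(1) == 27:  # Esc quits
        break
cap.release()
```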
Display type III: projection-based Augmented Reality

With projection-based AR we project virtual 3D scenes or objects onto the surface of a building or an object (or a person). To do this, we need to know exactly the dimensions of the object we project AR info onto. The projection is seen on the object or building with remarkable precision. This can generate very sophisticated or wild projections on buildings. The Augmented Matter in Context group, led by Jouke Verlinden at the Faculty of Industrial Design Engineering, TU Delft, uses projection-based AR for manipulating the appearance of products.
CONNECTING ART AND TECHNOLOGY

The 2011 IEEE International Symposium on Mixed and Augmented Reality (ISMAR) was held in Basel, Switzerland. In the track Arts, Media, and Humanities, 40 articles were offered discussing the connection of hard physics and soft art.

Artist: Karolina Sobecka | http://www.gravitytrap.com
There are several ways in which art and Augmented Reality technology can be connected: we can, for example, make art with Augmented Reality technology, create Augmented Reality artworks or use Augmented Reality technology to show and explain existing art (such as a monument like the Greek Parthenon or paintings from the grottos of Lascaux). Most of the contributions of the conference concerned Augmented Reality as a tool to present, explain or augment existing art. However, some visual artists use AR as a medium to create art.
The role of the artist in working with the emerging technology of Augmented Reality has been discussed by Helen Papagiannis in her ISMAR paper "The Role of the Artist in Evolving AR as a New Medium" (2011). In her paper, Helen Papagiannis reviews how the use of technology as a creative medium has been discussed in recent years. She points out that in 1988 John Pearson wrote about how the computer offers artists new means for expressing their ideas (p. 73, cited in Papagiannis, 2011, p. 61). According to Pearson, "Technology has always been the handmaiden of the visual arts; as is obvious, a technical means is always necessary for the visual communication of ideas, of expression or the development of works of art: tools and materials are required" (p. 73). However, he points out that new technologies were not developed by the artistic community for artistic purposes, but by science and industry "to serve the pragmatic or utilitarian needs of society" (p. 73, cited in Papagiannis, 2011, p. 61). As Helen Papagiannis concludes, it is then up to the artist to act as a pioneer, "pushing forward a new aesthetic that exploits the unique materials of the novel technology" (2011, p. 61). Like Helen, we believe this also holds for the emerging field of AR technologies, and we hope artists will set out to create exciting new Augmented Reality art and thereby contribute to the interplay between art and technology. An interview with Helen Papagiannis can be found on page 12 of this magazine. A portrait of the artist Marina de Haas, who did a residency at the AR Lab, can be found on page 60.
References

Milgram, P. and Kishino, F., "A Taxonomy of Mixed Reality Visual Displays", IEICE Transactions on Information Systems, vol. E77-D, no. 12, 1994, pp. 1321-1329.

Azuma, R. T., "A Survey of Augmented Reality", Presence: Teleoperators and Virtual Environments 6(4), August 1997, pp. 355-385.

Papagiannis, H., "The Role of the Artist in Evolving AR as a New Medium", 2011 IEEE International Symposium on Mixed and Augmented Reality, Arts, Media, and Humanities (ISMAR-AMH), Basel, Switzerland, pp. 61-65.

Pearson, J., "The Computer: Liberator or Jailer of the Creative Spirit", Leonardo, Supplemental Issue: Electronic Art, 1 (1988), pp. 73-80.
BIOGRAPHY - HELEN PAPAGIANNIS

Helen Papagiannis is a designer, artist, and PhD researcher specializing in Augmented Reality (AR) in Toronto, Canada. Helen has been working with AR since 2005, exploring the creative possibilities for AR with a focus on content development and storytelling. She is a Senior Research Associate at the Augmented Reality Lab at York University, in the Department of Film, Faculty of Fine Arts. Helen has presented her interactive artwork and research at global juried conferences and events including TEDx (Technology, Entertainment, Design), ISMAR (International Symposium on Mixed and Augmented Reality) and ISEA (International Symposium on Electronic Art). Prior to her augmented life, Helen was a member of the internationally renowned Bruce Mau Design studio, where she was project lead on "Massive Change: The Future of Global Design." Read more about Helen's work on her blog and follow her on Twitter: @ARstories.

www.augmentedstories.com
INTERVIEW WITH
HELEN PAPAGIANNIS
What is Augmented Reality?

Augmented Reality (AR) is a real-time layering of virtual digital elements, including text, images, video and 3D animations, on top of our existing reality, made visible through AR-enabled devices such as smartphones or tablets equipped with a camera. I often compare AR to cinema when it was first new, for we are at a similar moment in AR's evolution where there are currently no conventions or set aesthetics; this is a time ripe with possibilities for AR's creative advancement. Like cinema when it first emerged, AR has commenced with a focus on the technology with little consideration given to content. AR content needs to catch up with AR technology. As a community of designers, artists, researchers and commercial industry, we need to advance content in AR and not stop with the technology, but look at what unique stories and utility AR can present.
So far, AR technologies are still new to many people and AR works often create a magical experience. Do you think AR will lose its magic once people get used to the technology and have developed an understanding of how AR works? How have you worked with this magical element in your work The Amazing Cinemagician?
I wholeheartedly agree that AR can create a magical experience. In my TEDx 2010 talk, "How Does Wonderment Guide the Creative Process" (http://youtu.be/ScLgtkVTHDc), I discuss how AR enables a sense of wonder, allowing us to see our environments anew. I often feel like a magician when presenting demos of my AR work live; astonishment fills the eyes of the beholder questioning, "How did you do that?" So what happens when the magic trick is revealed, as you ask, when the illusion loses its novelty and becomes habitual? In Virtual Art: From Illusion to Immersion (2004), new media art historian Oliver Grau discusses how audiences are first overwhelmed by new and unaccustomed visual experiences, but later, once habituation chips away at the illusion, the new medium no longer possesses the power to captivate (p. 152). Grau writes that at this stage the medium becomes stale and the audience is hardened to its attempts at illusion; however, he notes that it is at this stage that the observers are receptive to content and media competence (p. 152).

When the initial wonder and novelty of the technology wear off, will it be then that AR is explored as a possible media format for various content and receives a wider public reception as a mass medium? Or is there an element of wonder that needs to exist in the technology for it to be effective and flourish?
BY HANNA SCHRAFFENBERGER
Pick a card. Place it here.
Prepare to be amazed and
entertained.
Picture: Pippin Lee
I believe AR is currently entering the stage of content development and storytelling; however, I don't feel AR has lost its power to captivate or become stale, and that as artists, designers, researchers and storytellers, we continue to maintain wonderment in AR and allow it to guide and inspire story and content. Let's not forget the enchantment and magic of the medium. I often reference the work of French filmmaker and magician Georges Méliès (1861-1938) as a great inspiration, and recently named him the Patron Saint of AR in an article for The Creators Project (http://www.thecreatorsproject.com/blog/celebrating-georges-melies-patron-saint-of-augmented-reality) on what would have been Méliès' 150th birthday. Méliès was first a stage magician before being introduced to cinema at a preview of the Lumière brothers' invention, where he is said to have exclaimed, "That's for me, what a great trick." Méliès became famous for the trick film, which employed a stop-motion and substitution technique. Méliès applied the newfound medium of cinema to extend magic into novel, seemingly impossible visualities on the screen.

I consider AR, too, to be very much about creating impossible visualities. We can think of AR as a real-time stop-substitution, which layers content dynamically atop the physical environment and creates virtual actualities with shapeshifting objects, magically appearing and disappearing as Méliès first did in cinema.
In tribute to Méliès, my Mixed Reality exhibit The Amazing Cinemagician integrates Radio Frequency Identification (RFID) technology with the FogScreen, a translucent projection screen consisting of a thin curtain of dry fog. The Amazing Cinemagician speaks to technology as magic, linking the emerging technology of the FogScreen with the pre-cinematic magic lantern and phantasmagoria spectacles of the Victorian era. The installation is based on a card trick, using physical playing cards as an interface to interact with the FogScreen. RFID tags are hidden within each physical playing card. Part of the magic and illusion of this project was to disguise the RFID tag as a normal object, out of the viewer's sight. Each of these tags corresponds to a short film clip by Méliès, which is projected onto the FogScreen once a selected card is placed atop the RFID tag reader. The RFID card reader is hidden within an antique wooden podium (adding to the aura of the magic performance and historical time period).

The following instructions were provided to the participant: "Pick a card. Place it here. Prepare to be amazed and entertained." Once the participant placed a selected card atop the designated area on the podium (atop the concealed RFID reader), an image of the corresponding card was revealed on the FogScreen, which was then followed by one of Méliès' films. The decision was made to provide visual feedback of the participant's selected card to add to the magic of the experience and to generate a sense of wonder, similar to the witnessing and questioning of a magic trick, with participants asking, "How did you know that was my card? How did you do that?" This curiosity inspired further exploration of each of the cards (and in turn, Méliès' films) to determine if each of the participants' cards could be properly identified.
You are an artist and researcher. Your scientific work as well as your artistic work explores how AR can be used as a creative medium. What's the difference between your work as an artist/designer and your work as a researcher?
Excellent question! I believe that artists and designers are researchers. They propose novel paths for innovation, introducing detours into the usual processes. In my most recent TEDx 2011 talk in Dubai, "Augmented Reality and the Power of Imagination" (http://youtu.be/7QrB4cYxjmk), I discuss how as a designer/artist/PhD researcher I am both a practitioner and a researcher, a maker and a believer. As a practitioner, I do, create, design; as a researcher I dream, aspire, hope. I am a make-believer working with a technology that is about make-believe, about imagining possibilities atop actualities. Now, more than ever, we need more creative adventurers and make-believers to help AR continue to evolve and become a wondrous new medium, unlike anything we've ever seen before! I spoke to the importance and power of imagination and make-believe, and how they pertain to AR at this critical junction in the medium's evolution. When we make-believe and when we imagine, we are in two places simultaneously; make-believe is about projecting or layering our imagination on top of a current situation or circumstance. In many ways, this is what AR is too: layering imagined worlds on top of our existing reality.
You've had quite a success with your AR pop-up book Who's Afraid of Bugs? In your blog you talk about your inspiration for the story behind the book: it was inspired by AR psychotherapy studies for the treatment of phobias such as arachnophobia. Can you tell us more?
Who's Afraid of Bugs? was the world's first Augmented Reality (AR) pop-up book, designed for iPad 2 and iPhone 4. The book combines hand-crafted paper engineering and AR on mobile devices to create a tactile and hands-on storybook that explores the fear of bugs through narrative and play. Integrating image tracking in the design, as opposed to the black and white glyphs commonly seen in AR, the book can hence be enjoyed alone as a regular pop-up book, or supplemented with augmented digital content when viewed through a mobile device equipped with a camera. The book is a playful exploration of fears using AR in a meaningful and fun way.
Picture: Helen Papagiannis
Rhyming text takes the reader through the storybook where various creepy crawlies (spider, ant, and butterfly) are waiting to be discovered, appearing virtually as 3D models you can interact with. A tarantula attacks when you touch it, an ant hyperlinks to educational content with images and diagrams, and a butterfly appears flapping its wings atop a flower in a meadow. Hands are integrated throughout the book design, whether it's placing one's hand down to have the tarantula crawl over you virtually, the hand holding the magnifying lens that sees the ant, or the hands that pop up holding the flower upon which the butterfly appears. It's a method to involve the reader in the narrative, but it also comments on the unique tactility AR presents, bridging the digital with the physical. Further, the story for the AR pop-up book was inspired by AR psychotherapy studies for the treatment of phobias such as arachnophobia. AR provides a safe, controlled environment to conduct exposure therapy within a patient's physical surroundings, creating a more believable scenario with heightened presence (defined as the sense of really being in an imagined or perceived place or scenario) and provides greater immediacy than Virtual Reality (VR). A video of the book may be watched at http://vimeo.com/25608606.
In your work, technology serves as an inspiration. For example, rather than starting with a story which is then adapted to a certain technology, you start out with AR technology, investigate its strengths and weaknesses, and so the story evolves. However, this does not limit you to using only the strengths of a medium. On the contrary, weaknesses such as accidents and glitches have, for example, influenced your work Hallucinatory AR. Can you tell us a bit more about this work?
Hallucinatory Augmented Reality (AR), 2007, was an experiment which investigated the possibility of images which were not glyphs/AR trackables generating AR imagery. The project evolved out of accidents: incidents in earlier experiments in which the AR software mistook non-marker imagery for AR glyphs and attempted to generate AR imagery. This confusion by the software resulted in unexpected and random flickering AR imagery. I decided to explore the creative and artistic possibilities of this effect further and conduct experiments with non-traditional marker-based tracking. The process entailed a study of what types of non-marker images might generate such hallucinations and a search for imagery that would evoke or call upon multiple AR imagery/videos from a single image/non-marker.
Upon multiple image searches, one image emerged which proved to be quite extraordinary. A cathedral stained glass window was able to evoke four different AR videos, the only instance, from among many other images, in which multiple AR imagery appeared. Upon close examination of the image, focusing in and out with a web camera, a face began to emerge in the black and white pattern. A fantastical image of a man was encountered. Interestingly, it was when the image was blurred into this face using the web camera that the AR hallucinatory imagery worked best, rapidly multiplying and appearing more prominently. Although numerous attempts were made with similar images, no other such instances occurred; this image appeared to be unique.
The challenge now rested in the choice of what types of imagery to curate into this hallucinatory viewing: what imagery would be best suited to this phantasmagoric and dream-like form? My criteria for imagery/videos were like form and shape, in an attempt to create a collage-like set of visuals. As the sequence or duration of the imagery in Hallucinatory AR could not be predetermined, the goal was to identify imagery that possessed similarities, through which the possibility for visual synchronicities existed.
Themes of intrusions and chance encounters are at play in Hallucinatory AR, inspired in part by Surrealist artist Max Ernst. In "What is the Mechanism of Collage?" (1936), Ernst writes:

"One rainy day in 1919, finding myself in a village on the Rhine, I was struck by the obsession which held under my gaze the pages of an illustrated catalogue showing objects designed for anthropologic, microscopic, psychologic, mineralogic, and paleontologic demonstration. There I found brought together elements of figuration so remote that the sheer absurdity of that collection provoked a sudden intensification of the visionary faculties in me and brought forth an illusive succession of contradictory images, double, triple, and multiple images, piling up on each other with the persistence and rapidity which are particular to love memories and visions of half-sleep" (p. 427).
Of particular interest to my work in exploring and experimenting with Hallucinatory AR was Ernst's description of an illusive succession of contradictory images that were brought forth (as though independent of the artist), rapidly multiplying and piling up in a state of half-sleep. Similarities can be drawn to the process of the seemingly disparate AR images jarringly coming in and out of view, layered atop one another.
One wonders if these visual accidents are what the future of AR might hold: of unwelcome glitches in software systems, as Bruce Sterling describes on Beyond the Beyond in 2009; or perhaps we might come to delight in the visual poetry of these augmented hallucinations that are "as beautiful as the chance encounter of a sewing machine and an umbrella on an operating table." [1]
To a computer scientist, these glitches, as
applied in Hallucinatory AR, could potentially
be viewed or interpreted as a disaster, as an
example of the technology failing. To the artist,
however, there is poetry in these glitches, with
new possibilities of expression and new visual
forms emerging.
On the topic of glitches and accidents, I'd like to return to Méliès. Méliès became famous for the stop trick, or double exposure special effect, a technique which evolved from an accident: Méliès' camera jammed while filming the streets of Paris; upon playing back the film, he observed an omnibus transforming into a hearse. Rather than discounting this as a technical failure, or glitch, he utilized it as a technique in his films. Hallucinatory AR also evolved from an accident, which was embraced and applied in an attempt to evolve a potentially new visual mode in the medium of AR. Méliès introduced new formal styles, conventions and techniques that were specific to the medium of film; novel styles and new conventions will also emerge from AR artists and creative adventurers who fully embrace the medium.
[1] Comte de Lautréamont's often-quoted allegory, famous for inspiring both Max Ernst and André Breton, qtd. in: Williams, Robert. Art Theory: An Historical Introduction. Malden, MA: Blackwell Publishing, 2004: 197.
"As beautiful as the chance encounter of a sewing machine and an umbrella on an operating table."
Comte de Lautréamont

Picture: Pippin Lee
THE TECHNOLOGY BEHIND
AUGMENTED REALITY
Augmented Reality (AR) is a field that is primarily concerned with realistically adding computer-generated images to the image one perceives from the real world.
AR comes in several flavors. Best known is the practice of using flatscreens or projectors, but nowadays AR can be experienced even on smartphones and tablet PCs. The crux is that 3D digital data from another source is added to the ordinary physical world, which is for example seen through a camera. We can create this additional data ourselves, e.g. using 3D drawing programs such as 3D Studio Max, but we can also add CT and MRI data or even live TV images to the real world. Likewise, animated three-dimensional objects (avatars), which then can be displayed in the real world, can be made using a visualization program like Cinema 4D. Instead of displaying information on conventional monitors, the data can also be added to the vision of the user by means of a head-mounted display (HMD) or head-up display. This is a second, less known form of Augmented Reality. It is already known to fighter pilots, among others. We distinguish two types of HMDs, namely: Optical See-Through (OST) headsets and Video See-Through (VST) headsets. OST headsets use semi-transparent mirrors or prisms, through which one can keep seeing the real world. At the same time, virtual objects can be added to this view using small displays that are placed on top of the prisms. VSTs are in essence Virtual Reality goggles, so the displays are placed directly in front of your eyes. In order to see the real world, there are two cameras attached on the other side of the little displays. You can then see the Augmented Reality by mixing the video signal coming from the camera with the video signal containing the virtual objects.

UNDERLYING TECHNOLOGY

Screens and glasses

Unlike screen-based AR, HMDs provide depth perception, as both eyes receive an image. When objects are projected on a 2D screen, one can convey an experience of depth by letting the objects move. Recent 3D screens allow you to view stationary objects in depth. 3D televisions that work with glasses quickly alternate the right and left image; in sync with this, the glasses use active shutters which let the image in turn reach the left or the right eye. This happens so fast that it looks like you view both the left and right images simultaneously. 3D television displays that work without glasses make use of little lenses which are placed directly on the screen. These refract the left and right image, so that each eye can only see the corresponding image. See for example www.dimenco.eu/display-technology. This is essentially the same method as used on the well-known 3D postcards on which a beautiful lady winks when the card is slightly turned. 3D film makes use of two projectors that show the left and right images simultaneously; however, each of them is polarized in a different way. The left and right lenses of the glasses have matching polarizations and only let through the light of the corresponding projector. The important point with screens is that you are always bound to the physical location of the display, while headset-based techniques allow you to roam freely. This is called immersive visualization: you are immersed in a virtual world. You can walk around in the 3D world, and move around and even enter virtual 3D objects.
Video see-through AR will become popular within a very short time and ultimately become an extension of the smartphone. This is because both display technology and camera technology have made great strides with the advent of smartphones. What currently still might stand in the way of smartphone models is computing power and energy consumption. Companies such as Microsoft, Google, Sony and Zeiss will soon enter the consumer market with AR technology.
Tracking technology

A current obstacle for major applications, which will soon be resolved, is the tracking technology. The problem with AR is embedding the virtual objects in the real world. You can compare this with color printing: the colors, e.g. cyan, magenta, yellow and black, have to be printed properly aligned to each other. What you often see in prints which are not cut yet are so-called fiducial markers on the edge of the printing plates that serve as a reference for the alignment of the colors. These are also necessary in AR. Often, you see that markers are used onto which a 3D virtual object is projected. Moving and rotating the marker lets you move and rotate the virtual object. Such a marker is comparable to the fiducial marker in color printing. With the help of computer vision technology, the camera of the headset can identify the marker and, based on its size, shape and position, conclude the relative position of the camera. If you move your head relative to the marker (with the virtual object), the computer knows how the image on the display must be transformed so that the virtual object remains stationary. And conversely, if your head is stationary and you rotate the marker, it knows how the virtual object should rotate so that it remains on top of the marker.
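As a small illustration of how a marker yields the camera pose, here is a hedged sketch in Python using OpenCV's ArUco module, a freely available marker system comparable to the fiducial markers described above (not the AR Lab's own software). The camera matrix values are placeholder assumptions; a real system obtains them through calibration, and the exact ArUco function names vary a little between OpenCV versions.

```python
import cv2
import numpy as np

# Placeholder intrinsics; in practice these come from camera calibration.
camera_matrix = np.array([[800.0, 0.0, 320.0],
                          [0.0, 800.0, 240.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    corners, ids, _ = cv2.aruco.detectMarkers(frame, dictionary)
    if ids is not None:
        # Pose of each detected 5 cm marker relative to the camera;
        # rvec/tvec tell us how to transform the virtual object so
        # that it appears to stay on top of the marker.
        rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
            corners, 0.05, camera_matrix, dist_coeffs)
        for rvec, tvec in zip(rvecs, tvecs):
            cv2.drawFrameAxes(frame, camera_matrix, dist_coeffs, rvec, tvec, 0.05)
    cv2.imshow("marker tracking", frame)
    if cv2.waitKey(1) == 27:
        break
cap.release()
```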
AR smartphone applications such as Layar use the built-in GPS and compass for the tracking. This has an accuracy of meters and measures angles with an error of 5-10 degrees. Camera-based tracking, however, is accurate to the centimetre and can measure angles to within a few degrees. Nowadays, using markers for the tracking is already out of date and we use so-called natural feature tracking, also called keypoint tracking. Here, the computer searches for conspicuous (salient) key points in the left and right camera image. If, for example, you twist your head, this shift is determined on the basis of those key points at more than 30 frames per second. This way, a 3D map of these keypoints can be built and the computer knows the relationship (distance and angle) between the keypoints and the stereo camera. This method is more robust than marker-based tracking, because you have many keypoints widely spread in the scene and not just the four corners of the marker close together in the scene. If someone walks in front of the camera and blocks some of the keypoints, there will still be enough keypoints left and the tracking is not lost. Moreover, you do not have to stick markers all over the world.
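The keypoint idea can be tried with a single webcam. The sketch below (Python with OpenCV; a simplified monocular stand-in for the stereo tracker described above, with a hypothetical reference image) detects salient ORB keypoints in a reference view, matches them against each live frame and estimates a homography, which is how the software knows how the view has shifted even when some keypoints are blocked.

```python
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

# Hypothetical reference view of the scene, captured once beforehand.
reference = cv2.imread("scene_reference.png", cv2.IMREAD_GRAYSCALE)
kp_ref, des_ref = orb.detectAndCompute(reference, None)

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    kp, des = orb.detectAndCompute(gray, None)
    if des is not None:
        matches = matcher.match(des_ref, des)
        if len(matches) >= 10:
            src = np.float32([kp_ref[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
            dst = np.float32([kp[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
            # RANSAC ignores blocked or mismatched keypoints, which is what
            # makes this more robust than tracking four marker corners.
            H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
            # H maps reference coordinates into the live image; a virtual
            # object anchored in the reference view can be warped with it.
    cv2.imshow("keypoint tracking", cv2.drawKeypoints(frame, kp, None))
    if cv2.waitKey(1) == 27:
        break
cap.release()
```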
TU Delft collaborates with the Royal Academy of Art (KABK) in The Hague in the AR Lab (Royal Academy, TU Delft, Leiden University, various SMEs) on the realization of applications. TU Delft has done research on AR since 1999. Since 2006, the university has worked with the art academy in The Hague. The idea is that AR is a new technology with its own merits. Artists are very good at finding out what is possible with the new technology. Here are some pictures of realized projects.
Fig 1. The current technology replaces the markers with natural feature tracking, or so-called keypoint tracking. Instead of the four corners of the marker, the computer itself determines which points in the left and right images can be used as anchor points for calculating the 3D pose of the camera in 3D space. From top:
1: You can use all points in the left and right images to slowly build a complete 3D map. Such a map can, for example, be used to relive a past experience, because you can again walk in the now virtual space.
2: The 3D keypoint space and the trace of the camera position within it.
3: Keypoints (the color indicates the suitability).
4: You can place virtual objects (eyes) on an existing surface.
Fig 2. Virtual furniture exhibition at the Salone del Mobile in Milan (2008); students of the Royal Academy of Art, The Hague show their furniture by means of AR headsets. This saves transportation costs.
Fig 3. Virtual sculpture exhibition at the Kröller-Müller Museum (2009). From left: 1) visitors on an adventure with laptops on walkers, 2) inside with an optical see-through headset, 3) large pivotable screen on a field of grass, 4) virtual image.
Fig 4. Exhibition in Museum Boijmans van Beuningen (2008-2009). From left: 1) Sgraffito in 3D; 2) the 3D print version may be picked up by the spectator; 3) animated shards: the table covered in ancient pottery can be seen via the headset; 4) scanning antique pottery with the CT scanner delivers a 3D digital image.
Fig 5. The TUD, partially in collaboration with the Royal Academy (with the oldest industrial design course in the Netherlands), has designed a number of headsets. This design of headsets is an ongoing activity. From left: 1) first optical see-through headset with Sony headset and self-made inertia tracker (2000), 2) on a construction helmet (2006), 3) SmartCam and tracker taped on a Cybermind Visette headset (2007), 4) headset design with engines by Niels Mulder, a student at the Royal Academy of Art, The Hague (2007), based on Cybermind technology, 5) low-cost prototype based on the Carl Zeiss Cinemizer headset, 6) future AR visor?, 7) future AR lens?
There are many applications that can be realized using AR; they will find their way in the coming decades:

1. Head-up displays have already been used for many years in the air force for fighter pilots; this can be extended to other vehicles and civil applications.
2. The billboards during the broadcast of a football game are essentially also AR; more can be done by also involving the game itself and allowing interaction by the user, such as off-side line projection.
3. In the professional sphere, you can, for example, visualize where pipes under the street lie or should lie. Ditto for designing ships, houses, planes, trucks and cars. What's outlined in a CAD drawing could be drawn in the real world, allowing you to see in 3D if and where there is a mismatch.
4. You can easily find books you are looking for in the library.
5. You can find out where restaurants are in a city...
6. You can pimp theater/musical/opera/pop concerts with (immersive) AR decor.
7. You can arrange virtual furniture or curtains from the IKEA catalog and see how they look in your home.
8. Maintenance of complex devices will become easier; e.g. you can virtually see where the paper in the copier is jammed.
9. If you enter a restaurant or the hardware store, a virtual avatar can show you the place to find that special bolt or table.
Showing the Serra room in Museum Boijmans van Beuningen during the exhibition Sgraffito in 3D.
Picture: Joachim Rotteveel
AROUND 2004, MY YOUNGER BROTHER VALENTIJN INTRODUCED ME TO THE FASCINATING WORLD OF AUGMENTED REALITY. HE WAS A MOBILE PHONE SALESMAN AT THE TIME, AND SIEMENS HAD JUST LAUNCHED THEIR FIRST SMARTPHONE, THE BULKY SIEMENS SX1. THIS PHONE WAS QUITE MARVELOUS, WE THOUGHT: IT RAN THE SYMBIAN OPERATING SYSTEM, HAD A BUILT-IN CAMERA, AND CAME WITH THREE GAMES.
RE-INTRODUCING MOSQUITOS
MAARTEN LAMERS
One of these games was Mozzies, a.k.a. Virtual Mosquito Hunt, which apparently won some 2003 Best Mobile Game Award, and my brother was eager to show it to me in the store where he worked at that time. I was immediately hooked: Mozzies lets you kill virtual mosquitos that fly around superimposed over the live camera feed. By physically moving the phone you could chase after the mosquitos when they attempted to fly off the phone's display. Those are all the ingredients for Augmented Reality, in my personal opinion: something that interacts with my perception and manipulation of the world around me, at that location, at that time. And Mozzies did exactly that.

Now, almost eight years later, not much has changed. Whenever people around me speak of AR, because they got tired of saying Augmented Reality, they still refer to bulky equipment (even bulkier than the Siemens SX1!) that projects stuff over a live camera feed and lets you interact with whatever that stuff is. In Mozzies it was pesky little mosquitos; nowadays it is anything from restaurant information to crime scene data. But nothing really changed, right?

Right! Technology became more advanced, so we no longer need to hold the phone in our hand, but get to wear it strapped to our skull in the form of goggles. But the idea is unchanged: you look at fake stuff in the real world and physically move around to deal with it. You still don't get the tactile sensation of swatting a mosquito or collecting virtually heavy information. You still don't even hear the mosquito flying around you. It's time to focus on those matters also, in my opinion. Let's take up the challenge and make AR more than visual, exploring interaction models for other senses. Let's enjoy the full experience of seeing, hearing, and particularly swatting mosquitos, but without the itchy bites.
LIEVEN VAN VELTHOVEN
THE RACING STAR
BY HANNA SCHRAFFENBERGER
IT AIN'T FUN IF IT AIN'T REAL-TIME
WHEN I ENTER LIEVEN VAN VELTHOVEN'S ROOM, THE PEOPLE FROM THE EFTELING HAVE JUST LEFT. THEY ARE INTERESTED IN HIS VIRTUAL GROWTH INSTALLATION. AND THEY ARE NOT THE ONLY ONES INTERESTED IN LIEVEN'S WORK. IN THE LAST YEAR, HE HAS WON THE JURY AWARD FOR BEST NEW MEDIA PRODUCTION 2011 OF THE INTERNATIONAL CINEKID YOUTH MEDIA FESTIVAL AS WELL AS THE DUTCH GAME AWARD 2011 FOR THE BEST STUDENT GAME. THE WINNING MIXED REALITY GAME ROOM RACERS HAS BEEN SHOWN AT THE DISCOVERY FESTIVAL, MEDIAMATIC, THE STRP FESTIVAL AND THE ZKM IN KARLSRUHE. HIS VIRTUAL GROWTH INSTALLATION HAS EMBELLISHED THE STREETS OF AMSTERDAM AT NIGHT. NOW, HE IS GOING TO SHOW ROOM RACERS TO ME, IN HIS LIVING ROOM WHERE IT ALL STARTED.
The room is packed with stuff, and at first sight it seems rather chaotic, with a lot of random things lying on the floor. There are a few plants, which probably don't get enough light, because Lieven likes the dark (that's when his projections look best). It is only when he turns on the beamer that I realize that his room is actually not chaotic at all. The shoe, magnifying glass, video games, tape and stapler which cover the floor are all part of the game.
You create your own race game tracks by placing real stuff on the floor,
Lieven tells me. He hands me a controller and soon we are racing the little projected cars around the chocolate spread, marbles, a remote control and a flashlight. Trying not to crash the car into a belt, I tell him what I remember about when I first met him a few years ago at a Media Technology course at Leiden University. Back then, he was programming a virtual bird, which would fly from one room to another, preferring the room in which it was quiet. Loud and sudden sounds would scare the bird away into another room. The course for which he developed it was called Sound Space Interaction, and his installation was solely based on sound. I ask him whether the virtual bird was his first contact with Augmented Reality. Lieven laughs.
It's interesting that you call it AR, as it only uses sound!
Indeed, most of Lieven's work is based on interactive projections and plays with visual augmentations of our real environment. But like the bird, all of them are interactive and work in real-time. Looking back, the bird was not his first AR work.

My first encounter with AR was during our first Media Technology course: a visit to the Ars Electronica festival in 2007, where I saw Pablo Valbuena's Augmented Sculpture. It was amazing. I was asking myself, can I do something like this, but interactive instead?

Armed with a bachelor in technical computer science from TU Delft and the newfound possibility to bring in his own curiosity and ideas at the Media Technology Master program at Leiden University, he set out to build his own interactive projection-based works.
ROOM RACERS

Up to four players race their virtual cars around real objects which are lying on the floor. Players can drop in or out of the game at any time. Everything you can find can be placed on the floor to change the route.

Room Racers makes use of projection-based mixed reality. The structure of the floor is analysed in real-time using a modified camera and self-written software. Virtual cars are projected onto the real environment and interact with the detected objects that are lying on the floor.

The game has won the Jury Award for Best New Media Production 2011 of the international Cinekid Youth Media Festival, and the Dutch Game Award 2011 for Best Student Game. Room Racers has been shown at several international media festivals. You can play Room Racers at the 'Car Culture' exposition at the Lentos Kunstmuseum in Linz, Austria, until the 4th of July 2012.
Picture: Lieven van Velthoven, Room Racers at ZKM | Center for Art and Media in Karlsruhe, Germany, on June 19th, 2011
The first time I experimented with the combination of the real and the virtual myself was in a piece called Shadow Creatures, which I made with Lisa Dalhuijsen during our first semester in 2007.
More interactive projections followed in the next semester, and in 2008 the idea for Room Racers was born. A first prototype was built in a week: a projected car bumping into real-world things. After that followed months and months of optimizations. Everything is done by Lieven himself, mostly at night in front of the computer.
My projects are never really finished, they are always work in progress, but if something works fine in my room, it's time to take it out into the world.
After having friends over and playing with the cars until six o'clock in the morning, Lieven knows it's time to steer the cars out of his room and show them to the outside world.
I wanted to present Room Racers but I didn't know anyone, and no one knew me. There was no network I was part of.
Uninhibited by this, Lieven took the initiative and asked the Discovery Festival if they were interested in his work. Luckily, they were, and they showed two of his interactive games at the Discovery Festival 2010. After the festival, requests started coming and the cars kept rolling. When I ask him about this continuing success he is divided:

It's fun, but it takes a lot of time; I have not been able to program as much as I used to.
His success does surprise him, and he especially did not expect the attention it gets in an art context.

I knew it was fun. That became clear when I had friends over and we played with it all night. But I did not expect the awards. And I did not expect it to be relevant in the art scene. I do not think it's art, it's just a game. I don't consider myself an artist. I am a developer and I like to do interactive projections. Room Racers is my least arty project; nevertheless it got a lot of response in the art context.
A piece which he actually considers more of an artwork is Virtual Growth: a mobile installation which projects autonomously growing structures onto any environment you place it in, be it buildings, people or nature.
For me AR has to take place in the real world. I don't like screens. I want to get away from them. I have always been interested in other ways of interacting with computers, without mice, without screens. There is a lot of screen-based AR, but for me AR is really about projecting into the real world. Put it in the real world, identify real-world objects, do it in real-time, that's my philosophy. It ain't fun if it ain't real-time. One day, I want to go through a city with a van and do projections on buildings, trees, people and whatever else I pass.
For now, he is bound to a bike, but that does not stop him. Virtual Growth works fast and stable, even on a bike. That has been witnessed in Amsterdam, where the audiovisual bicycle project Volle Band put beamers on bikes and invited Lieven to augment the city with his mobile installation. People who experienced Virtual Growth on his journeys around Amsterdam, at festivals and parties, are enthusiastic about his (smashing!) entertainment-art. As the virtual structure grows, the audience members not only start to interact with the piece but also with each other.
They put themselves in front of the projector, have it projecting onto themselves and pass on the projection to other people by touching them. I don't explain anything. I believe in simple ideas, not complicated concepts. The piece has to speak for itself. If people try it, immediately get it, enjoy it and tell other people about it, it works!
Virtual Growth works; that becomes clear from the many happy smiling faces the projection grows upon. And that's also what counts for Lieven.

At first it was hard, I didn't get paid for doing these projects. But when people see them and are enthusiastic, that makes me happy. If I see people enjoying my work, and playing with it, that's what really counts.
I wonder where he gets the energy to work that much alongside being a student. He tells me that what drives him is that he enjoys it. He likes to spend the evenings with the programming language C#. But the fact that he enjoys working on his ideas does not only keep him motivated but has also caused him to postpone a few courses at university. While talking, he smokes his cigarette and takes the ashtray from the floor. With the road no longer blocked by it, the cars take a different route now. Lieven might take a different route soon as well. I ask him if he will still be working from his living room, realizing his own ideas, once he has graduated.
It's actually funny. It all started to fill my portfolio in order to get a cool job. I wanted to have some things to show besides a diploma. That's why I started realizing my ideas. It got out of control and soon I was realizing one idea after the other. And maybe, I'll just continue doing it. But also, there are quite some companies and jobs I'd enjoy working for. First I have to graduate anyway.
If I have learned anything about Lieven and his work, I am sure his graduation project will be placed in the real world and work in real-time. More than that, it will be fun. It ain't Lieven, if it ain't fun.
Name: Lieven van Velthoven
Born: 1984
Study: Media Technology MSc, Leiden University
Background: Computer Science, TU Delft
Selected AR works: Room Racers, Virtual Growth
Watch: http://www.youtube.com/user/lievenvv
HOW DID WE DO IT:
ADDING VIRTUAL SCULPTURES
AT THE KRÖLLER-MÜLLER MUSEUM
By Wim van Eck
ALWAYS WANTED TO CREATE YOUR OWN AUGMENTED REALITY PROJECTS BUT NEVER KNEW HOW? DON'T WORRY, AR[T] IS GOING TO HELP YOU! HOWEVER, THERE ARE MANY HURDLES TO TAKE WHEN REALIZING AN AUGMENTED REALITY PROJECT. IDEALLY YOU SHOULD BE A SKILLFUL 3D ANIMATOR TO CREATE YOUR OWN VIRTUAL OBJECTS, AND A GREAT PROGRAMMER TO MAKE THE PROJECT TECHNICALLY WORK. PROVIDED YOU DON'T JUST WANT TO MAKE A FANCY TECH DEMO, YOU ALSO NEED TO COME UP WITH A GREAT CONCEPT!
My name is Wim van Eck and I work at the AR Lab, based at the Royal Academy of Art. One of my tasks is to help art students realize their Augmented Reality projects. These students have great concepts, but often lack experience in 3D animation and programming. Logically, I should tell them to follow animation and programming courses, but since the average deadline for their projects is counted in weeks instead of months or years, there is seldom time for that... In the coming issues of AR[t] I will explain how the AR Lab helps students to realize their projects and how we try to overcome technical boundaries, showing actual projects we worked on by example. Since this is the first issue of our magazine, I will give a short overview of recommended programs for Augmented Reality development.
We will start with 3D animation programs, which we need to create our 3D models. There are many 3D animation packages; the more well-known ones include 3ds Max, Maya, Cinema 4D, Softimage, LightWave, Modo and the open source Blender (www.blender.org). These are all great programs; however, at the AR Lab we mostly use Cinema 4D (Image 1), since it is very user-friendly and because of that easier to learn. It is a shame that the free Blender still has a steep learning curve, since it is otherwise an excellent program. You can download a demo of Cinema 4D at http://www.maxon.net/downloads/demo-version.html; these are some good tutorial sites to get you started:

http://www.cineversity.com
http://www.c4dcafe.com
http://greyscalegorilla.com

Image 1
In case you don't want to create your own 3D models, you can also download them from various websites. TurboSquid (http://www.turbosquid.com), for example, offers good quality but often at a high price, while free sites such as Artist-3D (http://artist-3d.com) have a more varied quality. When a 3D model is not constructed properly, it might give problems when you import or visualize it. In coming issues of AR[t] we will talk more about optimizing 3D models for Augmented Reality usage. To actually add these 3D models to the real world, you need Augmented Reality software. Again there are many options, with new software being added continuously. Probably the easiest-to-use software is BuildAR (http://www.buildar.co.nz), which is available for Windows and OS X. It is easy to import 3D models, video and sound, and there is a demo available. There are excellent tutorials on their site to get you started. In case you want to develop for iOS or Android, the free Junaio (http://www.junaio.com) is a good option. Their online GLUE application is easy to use, though their preferred MD2 format for 3D models is not the most common. In my opinion the most powerful Augmented Reality software right now is Vuforia (https://developer.qualcomm.com/develop/mobile-technologies/Augmented-reality) in combination with the excellent game engine Unity (www.unity3d.com). This combination offers high-quality visuals with easy-to-script interaction on iOS and Android devices.
Sweet summer nights at the Kröller-Müller Museum.
As mentioned in the introduction, we will show the workflow of AR Lab projects with these 'How did we do it' articles. In 2009 the AR Lab was invited by the Kröller-Müller Museum to present during the Sweet Summer Nights, an evening full of cultural activities in the famous sculpture garden of the museum. We were asked to develop an Augmented Reality installation aimed at the whole family, and found a diverse group of students to work on the project. Now the most important part of the project started: brainstorming!

Our location in the sculpture garden was in between two sculptures: Man and Woman, a stone sculpture of a couple by Eugène Dodeigne (Image 2), and Igloo di pietra, a dome-shaped sculpture by Mario Merz (Image 3). We decided to read more about these works, and learned that Dodeigne had originally intended to create two couples instead of one, placed together in a wild natural environment. We decided to virtually add the second couple and also add a more wild environment, just as Dodeigne initially had in mind. To be able to see these additions we placed a screen which can rotate 360 degrees between the two sculptures (Image 4).

Image 2 | Image 3 (picture by Klaas A. Mulder) | Image 4
A webcam was placed on top of the screen, and a laptop running ARToolkit (http://www.hitl.washington.edu/artoolkit) was mounted on the back of the screen. A large marker was placed near the sculpture as a reference point for ARToolkit.

Now it was time to create the 3D models of the extra couple and environment. The students working on this part of the project didn't have much experience with 3D animation, and there wasn't much time to teach them, so manually modeling the sculptures would be a difficult task. Soon options such as 3D scanning the sculpture were suggested, but it still needs quite some skill to actually prepare a 3D scan for Augmented Reality usage. We will talk more about that in a coming issue of this magazine.
A webcam was placed on top of the screen,
and a laptop running ARToolkit (http://www.
hitl.washington.edu/artoolkit) was mounted
on the back of the screen. A large marker was
placed near the sculpture as a reference point
for ARToolkit.
Now it was time to create the 3d models of the
extra couple and environment. The students
working on this part of the project didnt have
much experience with 3d animation, and there
wasnt much time to teach them, so manually
modeling the sculptures would be a diffcult task.
Soon options such as 3d scanning the sculpture
were opted, but it still needs quite some skill
to actually prepare a 3d scan for Augmented
Reality usage. We will talk more about that in
a coming issue of this magazine.
But when we look carefully at our setup (image
5) we can draw some interesting conclusions.
our screen is immobile, we will always see our
added 3d model from the same angle. So since
we will never be able to see the back of the 3d
model there is no need to actually model this
part. This is a common practice while making 3d
models, you can compare it with set construc-
tion for Hollywood movies where they also only
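To get a feeling for how quickly this perspective change dies off with distance, here is a tiny sketch in Python; the half-metre side-step is an arbitrary assumption for illustration:

```python
import math

# Angular change of viewing direction after stepping 0.5 m sideways,
# for a nearby and a distant object (values chosen for illustration).
step_m = 0.5
for distance_m in (1.0, 100.0):
    shift = math.degrees(math.atan2(step_m, distance_m))
    print(f"object at {distance_m:6.1f} m: view direction shifts {shift:5.2f} degrees")
```

At one metre the viewing direction changes by roughly 27 degrees; at a hundred metres by less than a third of a degree, which is why a flat photograph holds up at that distance.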
image 5
image 6
image 7
image 8
image 9
image 10
image 11
To be able to place the photograph of the sculpture in our 3d scene we have to assign it to a placeholder, a single polygon; image 7 shows how this could look.
This actually looks quite awful: we see the statue but also all the white around it from the image. To solve this we need to make use of something called an alpha channel, an option you can find in every 3d animation package (image 8 shows where it is located in the material editor of Cinema 4d). An alpha channel is a grayscale image which declares which parts of an image are visible: white is opaque, black is transparent. Detailed tutorials about alpha channels are easily found on the internet.
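The same masking idea can be tried outside a 3d package. Below is a minimal sketch, assuming Python with the Pillow imaging library; the file names are placeholders:

```python
from PIL import Image

# Photograph of the statue and a grayscale mask of identical size:
# white areas stay opaque, black areas become fully transparent.
photo = Image.open("statue.png").convert("RGB")
mask = Image.open("statue_mask.png").convert("L")

cutout = photo.copy()
cutout.putalpha(mask)          # attach the mask as the alpha channel
cutout.save("statue_cutout.png")
```

A 3d package applies the same rule when rendering the textured polygon: white areas of the mask show the photograph, black areas show whatever is behind it.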
As you can see, this looks much better (image 9). We followed the same procedure for the second statue and the grass (image 10), using many separate polygons to create enough randomness for the grass. As long as you see these models from the right angle they look quite realistic (image 11). In this case the 2.5d approach probably gives even better results than a normal 3d model, and it is much easier to create. Another advantage is that the 2.5d approach is very easy to compute since it uses few polygons, so you don't need a very powerful computer to run it, or you can have many models on screen at the same time. Image 12 shows the final setup.
image 12 | original photograph by Klaas A. Mulder
For the igloo sculpture by Mario Merz we used a similar approach. A graphic design student imagined what could be living inside the igloo, and started drawing a variety of plants and creatures. Using the same 2.5d approach as described before, we placed these drawings around the igloo, and an animation was shown of a plant growing out of the igloo (image 12).
We can conclude that it is good practice to analyze your scene before you start making your 3d models. You don't always need to model all the detail, and using photographs or drawings can be a very good alternative. The next issue of AR[t] will feature a new 'How did we do it'. In case you have any questions, you can contact me at w.vaneck@kabk.nl.
The Lab collaborated in this project with students from different departments of the KABK: Ferenc Molnár, Mit Koevoets, Jing Foon Yu, Marcel Kerkmans and Alrik Stelling. The AR Lab team consisted of: Yolande Kolstee, Wim van Eck, Melissa Coleman and Pawel Pokutycki, supported by Martin Sjardijn and Joachim Rotteveel.
PIXELS WANT TO BE FREED!
INTRODUCING AUGMENTED REALITY ENABLING HARDWARE TECHNOLOGIES
BY JOUKE VERLINDEN
1. introduction
From the early head-up display in the movie RoboCop to the present, Augmented Reality (AR) has evolved into a manageable ICT environment that must be considered by product designers of the 21st century.
Instead of focusing on a variety of applications and software solutions, this article will discuss the essential hardware of Augmented Reality (AR): display techniques and tracking techniques. We argue that these two fields differentiate AR from regular human-user interfaces and that tuning them is essential in realizing an AR experience. As is often the case, there is a vast body of knowledge behind each of the principles discussed below, hence a large variety of literature references is given.
Furthermore, the first author of this article found it important to include his own preferences and experiences throughout this discussion. We hope that this material strikes a chord and makes you consider employing AR in your designs. After all, why should digital information always be confined to a dull, rectangular screen?
2. display technologies
To categorise AR display technologies, two important characteristics should be identified: the imaging generation principle and the physical layout.
Generic AR technology surveys describe a large variety of display technologies that support imaging generation (Azuma, 1997; Azuma et al., 2001); these principles can be categorised into:
1. Video-mixing. A camera is mounted somewhere on the product; computer graphics are combined with captured video frames in real time. The result is displayed on an opaque surface, for example, an immersive head-mounted display (HMD).
2. See-through: augmentation by this principle typically employs half-silvered mirrors to superimpose computer graphics onto the user's view, as found in head-up displays of modern fighter jets.
3. Projector-based systems: one or more projectors cast digital imagery directly on the physical environment.
As Raskar and Bimber (2004, p.72) argued, an important consideration in deploying an Augmented system is the physical layout of the image generation. For each imaging generation principle mentioned above, the imaging display can be arranged between user and physical object in three distinct ways:
a) head-attached, which presents digital images directly in front of the viewer's eyes, establishing a personal information display;
b) hand-held, which is carried by the user and does not cover the whole field of view;
c) spatial, which is fixed to the environment.
The resulting imaging and arrangement combinations are summarised in Table 1.
Table 1. Image generation principles for Augmented Reality

                    1. Video-mixing     2. See-through        3. Projection-based
A. Head-attached    head-mounted display (HMD)
B. Hand-held        handheld devices    see-through boards    spatial projection-based
C. Spatial          embedded display    see-through boards    spatial projection-based

When the AR image generation and layout principles are combined, the following collection of display technologies is identified: HMDs, handheld devices, embedded screens, see-through boards and spatial projection-based AR. These are briefly discussed in the following sections.
2.1 head-mounted display
Head-attached systems refer to HMD solutions, which can employ any of the three image generation technologies. Even the first head-mounted display, developed in the early days of Virtual Reality, already constituted a see-through system with half-silvered mirrors to merge virtual line drawings with the physical environment (Sutherland, 1968). Since then, the variety of head-attached imaging systems has expanded and encompasses all three principles for AR: video-mixing, see-through and direct projection on the physical world (Azuma et al., 2001). A benefit of this approach is its hands-free nature. Secondly, it offers personalised content, enabling each user to have a private view of the scene with customised and sensitive data that does not have to be shared. For most applications, HMDs have been considered inadequate, both in the case of see-through and video-mixing imaging. According to Klinker et al. (2002), HMDs introduce a large barrier between the user and the object, and their resolution is insufficient for interactive augmented prototyping (IAP): typically 800 × 600 pixels for the complete field of view (rendering the user 'legally blind' by American standards). Similar reasoning was found in Bochenek et al. (2001), in which both the objective and subjective assessments of HMDs were less favourable than those of hand-held or spatial imaging devices. However, new developments (specifically high-resolution OLED displays) show promising new devices, specifically for the professional market (Carl Zeiss) and entertainment (Sony); see Figure 1.
Figure 1. Recent head-mounted displays (above: KABK The Hague; under: Carl Zeiss).
2.2 handheld display
Hand-held video-mixing solutions are based on smartphones, PDAs or other mobile devices equipped with a screen and camera. With the advent of powerful mobile electronics, handheld Augmented Reality technologies are emerging. By employing the built-in cameras of smartphones or PDAs, video mixing is enabled, while concurrent use is supported by communication
through wireless networks (Schmalstieg and Wagner, 2008). The resulting device acts as a hand-held window on a mixed reality. An example of such a solution is shown in Figure 2, which combines an Ultra Mobile Personal Computer (UMPC), a Global Positioning System (GPS) antenna for global position tracking, and a camera for local position and orientation sensing along with video mixing. As of today, such systems are found in every modern smartphone, and apps such as Layar (www.layar.com) and Junaio (www.junaio.com) offer such functions for free, allowing the user to view different layers of content (often social-media based).
The advantage of using a video-mixing approach is that lag times in processing are less influential than with see-through or projector-based systems: the live video feed is also delayed and thus establishes a consistent combined image. This hand-held solution works well for occasional, mobile use. Long-term use can cause strain in the arms. The challenges in employing this principle are the limited screen coverage/resolution (typically a 4-inch diameter and a resolution of 320 × 240 pixels). Furthermore, memory, processing power and graphics processing are limited to rendering relatively simple 3D scenes, although these capabilities are rapidly improving with the upcoming dual-core and quad-core mobile CPUs.
Figure 2. The Vesp'R device for underground infrastructure visualization (Schall et al., 2008). Figure labels: GPS antenna, camera + IMU, joystick handles, UMPC.
2.3 embedded display
Another AR display option is to include a number of small LCD screens in the observed object in order to display the virtual elements directly on the physical object. Although only arguably an augmentation solution, embedded screens do add digital information on product surfaces.
This practice is found in the later stages of prototyping mobile phones and similar information appliances. Such screens typically have a resolution similar to that of PDAs and mobile phones, which is QVGA: 320 × 240 pixels. Such devices are connected to a workstation by a specialised cable, which can be omitted if autonomous components are used, such as a smartphone. Regular embedded screens can only be used on planar surfaces and their size is limited, while their weight impedes larger use. With the advent of novel, flexible e-Paper and organic Light-Emitting Diode (OLED) technologies, it might be possible to cover a part of a physical model
with such screens. To our knowledge, no such systems have been developed or commercialised so far. Although it does not support changing light effects, the Luminex material approximates this by using an LED/fibreglass-based fabric (see Figure 3). A Dutch company recently presented a fully interactive light-emitting fabric based on integrated RGB LEDs, labelled Lumalive. These initiatives can manifest as new ways to support prototyping scenarios that require a high local resolution and complete unobstructedness. However, the fit to the underlying geometry remains a challenge, as does embedding the associated control electronics/wiring. An elegant solution to the second challenge was given by Saakes et al. (2010), entitled the 'slow display': temporarily changing the color of photochromic paint by UV laser projection. This effect lasts for a couple of minutes and demonstrates how fashion and AR could meet.
Figure 3. Impression of the Luminex material.
2.4 see-through board
See-through boards vary in size between desktop and hand-held versions. The Augmented Engineering system (Bimber et al., 2001) and the AR extension of the haptic sculpting project (Bordegoni and Covarrubias, 2007) are examples of the use of see-through technologies, which typically employ a half-silvered mirror to mix virtual models with a physical object (Figure 4). Similar to the Pepper's ghost phenomenon, standard stereoscopic Virtual Reality (VR) workbench systems such as the Barco Baron are used to project the virtual information. In addition to the need to wear shutter glasses to view stereoscopic graphics, head tracking is required to align the virtual image between the object and the viewer. An advantage of this approach is that digital images are not occluded by the user's hand or environment and that graphics can be displayed outside the physical object (i.e., to display the environment or annotations and tools). Furthermore, the user does not have to wear heavy equipment, and the resolution of the projection can be extremely high, enabling a compelling display system for exhibits and trade fairs. However, see-through boards obstruct user interaction with the physical object. Multiple viewers cannot share the same device, although a limited solution is offered by the Virtual Showcase, which establishes a faceted and curved mirroring surface (Bimber, 2002).
Figure 4. The Augmented Engineering see-through display (Bimber et al., 2001).
2.5 spatial projection-based displays
This technique is also known as 'Shader Lamps' (Raskar et al., 2001) and was extended in (Raskar and Bimber, 2004) to a variety of imaging solutions, including projections on irregular surface textures and combinations of projections with (static) holograms. In the field of advertising and performance arts, this technique recently gained popularity labelled as 'Projection Mapping': projecting on buildings, cars or other large objects, replacing traditional screens as display means, cf. Figure 5. In such cases, theatre projector systems are used that are prohibitively expensive (> 30,000 euros). The principle of spatial projection-based technologies is shown in Figure 6. Casting an image on a physical object is considered complementary to constructing a perspective image of a virtual object by a pinhole camera. If the physical object is of the same geometry as the virtual object, a straightforward 3D perspective transformation (described by a 4 × 4 matrix) is sufficient to predistort the digital image. To obtain this transformation, it suffices to indicate 6 corresponding points in the physical world and the virtual world: an algorithm entitled Linear Camera Calibration can then be applied (see Appendix). If the physical and virtual shapes differ, the projection is viewpoint-dependent and the head position needs to be tracked. Important projector characteristics involve weight and size versus the power (in lumens) of the projector.
Figure 5. Two projections on a church chapel in Utrecht (Hoeben, 2010).
Figure 6. Projection-based display principle (adapted from Raskar and Low, 2001); on the right the Dynamic Shader Lamps demonstration (Bandyopadhyay et al., 2001).
There are initiatives to employ LED lasers for direct holographic projection, which also decreases power consumption compared to traditional video projectors and ensures that the projection is always in focus without requiring optics (Eisenberg, 2004). Both fixed and hand-held spatial projection-based systems have been demonstrated. At present, hand-held projectors measure 10 × 5 × 2 cm and weigh 150 g, including the processing unit and battery. However, the light output is low (15–45 lumens).
The advantage of spatial projection-based technologies is that they support the perception of all visual and tactile/haptic depth cues without the need for shutter glasses or HMDs. Furthermore, the display can be shared by multiple co-located users. It requires less expensive equipment, which is often already available at design studios. Challenges to projector-based AR approaches include optics and occlusion. First, only a limited field of view and focus depth can be achieved. To reduce these problems, multiple video projectors can be used. An alternative solution is to employ a portable projector, as proposed in the iLamps and the I/O Pad concepts (Raskar et al., 2003; Verlinden and Horváth, 2008). Other issues include occlusion and shadows, which are cast on the surface by the user or other parts of the system. Projection on non-convex geometries depends on the granularity and orientation of the projector. The perceived quality is sensitive to projection errors (also known as registration errors), especially projection overshoot (Verlinden et al., 2003).
A solution for this problem is either to include an offset (dilatation) of the physical model or to introduce pixel masking in the rendering pipeline. As projectors are now being embedded in consumer cameras and smartphones, we expect this type of augmentation in the years to come.
3. input technologies
In order to merge the digital and physical, position and orientation tracking of the physical components is required. Here, we will discuss two different types of input technologies: tracking and event sensing. Furthermore, we will briefly discuss other input modalities.

3.1 Position tracking
Welch and Foxlin (2002) presented a comprehensive overview of the tracking principles that are currently available. In the ideal case, the measurement should be as unobtrusive and invisible as possible while still offering accurate and rapid data. They concluded that there is currently no ideal solution ('silver bullet') for position tracking in general, but some respectable alternatives are available. Table 2 summarises the most important characteristics of these tracking methods for Augmented Reality purposes. The data have been gathered from commercially available equipment (the Ascension Flock of Birds, ARToolkit, Optotrak, Logitech 3D Tracker, MicroScribe and Minolta VI-900). All of these should be considered for object tracking in Augmented prototyping scenarios. There are significant differences in tracker/marker size, action radius and accuracy. As the physical model might consist of a number of parts, or a global shape and some additional components (e.g., buttons), the number of items to be tracked is also of importance. For simple tracking scenarios, either magnetic or passive optical technologies are often used.

Table 2. Summary of tracking technologies.

Tracking type        Size of tracker (mm)          Typical number of trackers  Action radius (accuracy)  DOF  Issues
Magnetic             16 × 16 × 16                  2                           1.5 m (1 mm)              6    ferro-magnetic interference
Optical, passive     80 × 80 × 0.01                >10                         3 m (1 mm)                6    line of sight
Optical, active      10 × 10 × 5                   >10                         3 m (0.5 mm)              3    line of sight, wired connections
Ultrasound           20 × 20 × 10                  1                           1 m (3 mm)                6    line of sight
Mechanical linkage   defined by working envelope   1                           0.7 m (0.1 mm)            5    limited degrees of freedom, inertia
Laser scanning       none                          infinite                    2 m (0.2 mm)              6    line of sight, frequency, object recognition

In some experiments we found that a projector could not be equipped with a standard Flock of Birds 3D magnetic tracker due to interference; other tracking techniques should be used for this paradigm. For example, the ARToolkit employs complex patterns and a regular webcam to determine the position, orientation and identification of the marker. This is done by measuring the size, 2D position and perspective distortion of a known rectangular marker, cf. Figure 7 (Kato and Billinghurst, 1999).
Passive markers enable a relatively untethered system, as no wiring is necessary. The optical markers are obtrusive when they are visible to the user while handling the object. Although computationally intensive, marker-less optical
tracking has been proposed (Prince et al., 2002). The employment of laser-based tracking systems is demonstrated by the Illuminating Clay system by Piper et al. (2002): a slab of Plasticine acts as an interactive surface; the user influences a 3D simulation by sculpting the clay, while the simulation results are projected on the surface. A laser-based Minolta Vivid 3D scanner is employed to continuously scan the clay surface. In the article, this principle was applied to geodesic analysis, yet it can be adapted to design applications, e.g., the sculpting of car bodies. This method has a number of challenges when used as a real-time tracking means, including the recognition of objects and their posture. However, with the emergence of depth cameras for gaming such as the Kinect (Microsoft), similar systems are now being devised with a very small technological threshold.
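Readers who want to experiment with the marker principle described above (recovering pose from the perspective distortion of a known square) can do so in a few lines of code. The sketch below is an illustration using OpenCV's ArUco module rather than ARToolkit itself; the intrinsics, marker size and input frame are placeholder assumptions, and the pose API shown is the one from OpenCV 4.x before version 4.7.

```python
import cv2
import numpy as np

# Placeholder camera intrinsics (a real system calibrates these first).
camera_matrix = np.array([[800.0, 0.0, 320.0],
                          [0.0, 800.0, 240.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)                 # assume negligible lens distortion
marker_side_m = 0.20                      # known physical marker size: 20 cm

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
frame = cv2.imread("frame.png")           # one captured video frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Detect the 2D corners of visible markers, then recover each marker's
# 3D pose from the perspective distortion of the known square.
corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary)
if ids is not None:
    rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
        corners, marker_side_m, camera_matrix, dist_coeffs)
    print("marker", ids[0][0], "at", tvecs[0].ravel(), "m from the camera")
```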
Figure 7. Workflow of the ARToolkit optical tracking algorithm, http://www.hitl.washington.edu/artoolkit/documentation/userarwork.html
Figure 8. Illuminating Clay system with a projector/laser scanner (Piper et al., 2002).
In particular cases, a global measuring system is combined with a different local tracking principle to increase the level of detail, for example, to track the position and arrangement of buttons on the object's surface. Such local positioning systems might have less demanding technical requirements; for example, the sampling frequency can be decreased to only once a minute. One local tracking system is based on magnetic resonance, as used in digital drawing tablets. The Sensetable demonstrates this by equipping an altered commercial digital drawing tablet with custom-made wireless interaction devices (Patten et al., 2001). The Senseboard (Jacob et al., 2002) has similar functions and uses an intricate grid of RFID receivers to determine the (2D) location of an RFID tag on a board. In practice, these systems rely on a rigid tracking table, but it is possible to extend this to a flexible sensing grid. A different technology was proposed by Hudson (2004): using LED pixels as both light emitters and sensors. By operating one pixel as a sensor whilst its neighbours are illuminated, it is possible to detect light reflected from a fingertip close to the surface. This principle could be applied to embedded displays, as mentioned in Section 2.3.
3.2 event sensing
Apart from location and orientation tracking, Augmented prototyping applications require interaction with parts of the physical object, for example, to mimic the interaction with the artefact. This interaction differs per AR scenario, so a variety of events should be sensed to cater to these applications.

Physical sensors
The employment of traditional sensors, labelled physical widgets ('phidgets'), has been studied extensively in the Computer-Human Interface (CHI) community. Greenberg and Fitchett (2001) introduced a simple electronics hardware and software library to interface PCs with sensors (and actuators) that can be used to discern user interaction. The sensors include switches, sliders, rotation knobs and sensors to measure force, touch and light. More elaborate components like a mini joystick, Infrared (IR) motion sensor, air pressure sensor and temperature sensor are commercially available. A similar initiative is iStuff (Ballagas et al., 2003), which also hosts a number of wireless connections to sensors. Some systems embed switches with short-range wireless connections, for example, the Switcheroo and Calder systems (Avrahami and Hudson, 2002; Lee et al., 2004) (cf. Figure 9). This allows greater freedom in modifying the location of the interactive components while prototyping. The Switcheroo system uses custom-made RFID tags. A receiver antenna has to be located nearby (within a 10-cm distance), so the movement envelope is rather small, while the physical model is wired to a workstation. The Calder toolkit (Lee et al., 2004) uses a capacitive coupling technique that has a smaller range (6 cm with small antennae), but is able to receive and transmit for long periods on a small 12 mm coin cell. Other active wireless technologies would draw more power, leading to a system that would only last a few hours. Although the costs for this system have not been specified, only standard electronics components are required to build such a receiver.
Hand tracking
Instead of attaching sensors to the physical environment, fingertip and hand tracking technologies can also be used to generate user events. Embedded skins represent a type of interactive surface technology that allows the accurate measurement of touch on the object's surface (Paradiso et al., 2000). For example, the SmartSkin by Rekimoto (2002) consists of a flexible grid of antennae. The proximity or touch of human fingers changes the capacity locally in the grid and establishes a multi-finger tracking cloth, which can be wrapped around an object. Such a solution could be combined with embedded displays, as discussed in Section 2.3. Direct electric contact can also be used to track user interaction; the PaperButtons concept (Pedersen et al., 2000) embeds electronics on the objects and equips the finger with a two-wire plug that supplies power and allows bidirectional communication with the embedded components when they are touched. Magic Touch (Pederson, 2001) uses a similar wireless system; the user wears an RFID reader on his or her finger and can interact by touching the components, which have hidden RFID tags. This method has been adapted to Augmented Reality for design by Kanai et al. (2007). Optical tracking can be used for fingertip and hand tracking as well. A simple example is the LightWidgets system (Fails and Olsen, 2002) that traces skin colour and determines finger/hand position by 2D blobs. The OpenNI library enables hand and body tracking with depth range cameras such as the Kinect (OpenNI.org). A more elaborate example is the virtual drawing tablet by Ukita and Kidode (2004); fingertips are recognised on a rectangular sheet by a head-mounted infrared camera. Traditional VR gloves can also be used for this type of tracking (Schäfer et al., 1997).
Figure 9. Mockup equipped with wireless switches that can be relocated to explore usability (Lee et al., 2004).
3.3 other input modalities
Speech and gesture recognition require consideration in AR as well. In particular, pen-based interaction would be a natural extension of the expressiveness of today's designers' skills. Oviatt et al. (2000) offer a comprehensive overview of the so-called Recognition-Based User Interfaces (RUIs), including the issues and Human Factors aspects of these modalities. Furthermore, speech-based interaction can also be useful to activate operations while the hands are used for selection.
4. conclusions and further reading
This article introduced two important hardware systems for AR: displays and input technologies. To superimpose virtual images onto physical models, head-mounted displays (HMDs), see-through boards, projection-based techniques and embedded displays have been employed. An important observation is that HMDs, though best known by the public, have serious limitations and constraints in terms of field of view and resolution, and lend themselves to a kind of isolation. For all display technologies, the current challenges include an untethered interface, the enhancement of graphics capabilities, visual coverage of the display and improvement of resolution. LED-based laser projection and OLEDs are expected to play an important role in the next generation of IAP devices because these technologies can be employed by see-through or projection-based displays.
To interactively merge the digital and physical parts of Augmented prototypes, position and orientation tracking of the physical components is needed, as well as additional user input means. For global position tracking, a variety of principles exist. Optical tracking and scanning suffer from issues concerning line of sight and occlusion. Magnetic, mechanical linkage and ultrasound-based position trackers are obtrusive, and only a limited number of trackers can be used concurrently.
The resulting palette of solutions is summarized in Table 3 as a morphological chart. In devising a solution for your AR system, you can use this as a checklist or a source of inspiration for display and input choices.
Table 3. Morphological chart of AR enabling technologies.

Display
- Imaging principle: video mixing | projector-based | see-through
- Display arrangement: head-attached | handheld/wearable | spatial

Input technologies
- Position tracking: magnetic | optical (passive markers / active markers) | ultrasound | mechanical | 3D laser scanning
- Event sensing: physical sensors (wired connection / wireless) | virtual (surface tracking / 3D tracking)
further reading
For those interested in research in this area, the following publication venues offer a range of detailed solutions:
International Symposium on Mixed and Augmented Reality (ISMAR): ACM-sponsored annual convention on AR, covering both specific applications and emerging technologies. Accessible through http://dl.acm.org
Augmented Reality Times: a daily update on demos and trends in commercial and academic AR systems: http://artimes.rouli.net
Procams workshop: annual workshop on projector-camera systems, coinciding with the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). The resulting proceedings are freely accessible at http://www.procams.org
Raskar, R. and Bimber, O. (2004) Spatial Augmented Reality, A.K. Peters, ISBN: 1568812302; a personal copy can be downloaded for free at http://140.78.90.140/medien/ar/SpatialAR/download.php
BuildAR download: simple webcam-based application that uses markers, http://www.buildar.co.nz/buildar-free-version
Appendix: linear camera calibration
This procedure has been published in (Raskar and Bimber, 2004) to some degree, but is slightly adapted here to be more accessible for those with less knowledge of the field of image processing. C source code that implements this mathematical procedure can be found in appendix A1 of (Faugeras, 1993). It basically uses point correspondences between original x, y, z coordinates and their projected u, v counterparts to resolve the internal and external camera parameters.
In general cases, 6 point correspondences are sufficient (Faugeras 1993, Proposition 3.11). Let I and E be the internal and external parameters of the projector, respectively. Then a point P in 3D space is transformed to:

$p = [I\,E]\,P$   (1)

where p is a point in the projector's coordinate system. If we decompose the rotation and translation components in this matrix transformation, we obtain:

$p = [R\;t]\,P$   (2)

in which R is a 3×3 matrix corresponding to the rotational components of the transformation and t the 3×1 translation vector. We then split the rotation matrix of formula (2) into row vectors $R_1$, $R_2$ and $R_3$. Applying the perspective division results in the following two formulae:

$u_i = \dfrac{R_1 P_i + t_x}{R_3 P_i + t_z}$   (3)

$v_i = \dfrac{R_2 P_i + t_y}{R_3 P_i + t_z}$   (4)

in which the 2D point $p_i$ is split into $(u_i, v_i)$.
Given n measured point-point correspondences $(p_i, P_i)$, $i = 1 \ldots n$, we obtain 2n equations:

$R_1 P_i - u_i R_3 P_i + t_x - u_i t_z = 0$   (5)

$R_2 P_i - v_i R_3 P_i + t_y - v_i t_z = 0$   (6)
We can rewrite these 2n equations as a matrix multiplication with a vector of 12 unknown variables, comprising the original transformation components R and t of formula (2). Due to measurement errors, an exact solution usually does not exist; we wish to estimate the transformation with a minimal estimation deviation. In the algorithm presented in (Raskar and Bimber, 2004), the minimax theorem is used to extract these values based on determining the singular values. In a straightforward manner, the internal and external transformations I and E of formula (1) can then be extracted from the resulting transformation.
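For readers who prefer code to matrices, below is a minimal sketch of this estimation step in Python/NumPy. It stacks equations (5) and (6) and takes the singular vector with the smallest singular value as the least-squares estimate; it illustrates the idea and is not the C reference code from (Faugeras, 1993).

```python
import numpy as np

def calibrate_linear(world_pts, image_pts):
    """Estimate the 3x4 projection matrix [R t] (up to scale) from
    n >= 6 correspondences. world_pts: (n, 3) array of x, y, z;
    image_pts: (n, 2) array of u, v."""
    n = world_pts.shape[0]
    A = np.zeros((2 * n, 12))
    for i in range(n):
        X = np.append(world_pts[i], 1.0)   # homogeneous world point
        u, v = image_pts[i]
        A[2 * i, 0:4] = X                  # row encoding equation (5)
        A[2 * i, 8:12] = -u * X
        A[2 * i + 1, 4:8] = X              # row encoding equation (6)
        A[2 * i + 1, 8:12] = -v * X
    # The singular vector with the smallest singular value minimises
    # ||A m|| over all unit vectors m: the least-squares estimate.
    _, _, Vt = np.linalg.svd(A)
    M = Vt[-1].reshape(3, 4)
    return M / np.linalg.norm(M[2, :3])    # unit-norm third rotation row
```

Internal and external parameters can then be factored out of the resulting matrix, for example with an RQ decomposition.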
References
Avrahami, D. and Hudson, S.E. (2002) Forming interactivity: a tool for rapid prototyping of physical interactive products, Proceedings of DIS '02, pp. 141–146.
Azuma, R. (1997) A survey of augmented reality, Presence: Teleoperators and Virtual Environments, Vol. 6, No. 4, pp. 355–385.
Azuma, R., Baillot, Y., Behringer, R., Feiner, S., Julier, S. and MacIntyre, B. (2001) Recent advances in augmented reality, IEEE Computer Graphics and Applications, Vol. 21, No. 6, pp. 34–47.
Ballagas, R., Ringel, M., Stone, M. and Borchers, J. (2003) iStuff: a physical user interface toolkit for ubiquitous computing environments, Proceedings of CHI 2003, pp. 537–544.
Bandyopadhyay, D., Raskar, R. and Fuchs, H. (2001) Dynamic shader lamps: painting on movable objects, International Symposium on Augmented Reality (ISMAR), pp. 207–216.
Bimber, O. (2002) Interactive rendering for projection-based augmented reality displays, PhD dissertation, Darmstadt University of Technology.
Bimber, O., Stork, A. and Branco, P. (2001) Projection-based augmented engineering, Proceedings of International Conference on Human-Computer Interaction (HCI 2001), Vol. 1, pp. 787–791.
Bochenek, G.M., Ragusa, J.M. and Malone, L.C. (2001) Integrating virtual 3-D display systems into product design reviews: some insights from empirical testing, Int. J. Technology Management, Vol. 21, Nos. 3–4, pp. 340–352.
Bordegoni, M. and Covarrubias, M. (2007) Augmented visualization system for a haptic interface, HCI International 2007 Poster.
Eisenberg, A. (2004) For your viewing pleasure, a projector in your pocket, New York Times, 4 November.
Faugeras, O. (1993) Three-Dimensional Computer Vision: a Geometric Viewpoint, MIT Press.
Fails, J.A. and Olsen, D.R. (2002) LightWidgets: interacting in everyday spaces, Proceedings of IUI '02, pp. 63–69.
Greenberg, S. and Fitchett, C. (2001) Phidgets: easy development of physical interfaces through physical widgets, Proceedings of UIST '01, pp. 209–218.
Hoeben, A. (2010) Using a projected Trompe L'oeil to highlight a church interior from the outside, EVA 2010.
Hudson, S. (2004) Using light emitting diode arrays as touch-sensitive input and output devices, Proceedings of the ACM Symposium on User Interface Software and Technology, pp. 287–290.
Jacob, R.J., Ishii, H., Pangaro, G. and Patten, J. (2002) A tangible interface for organizing information using a grid, Proceedings of CHI '02, pp. 339–346.
Kanai, S., Horiuchi, S., Shiroma, Y., Yokoyama, A. and Kikuta, Y. (2007) An integrated environment for testing and assessing the usability of information appliances using digital and physical mock-ups, Lecture Notes in Computer Science, Vol. 4563, pp. 478–487.
Kato, H. and Billinghurst, M. (1999) Marker tracking and HMD calibration for a video-based augmented reality conferencing system, Proceedings of International Workshop on Augmented Reality (IWAR '99), pp. 85–94.
Klinker, G., Dutoit, A.H., Bauer, M., Bayer, J., Novak, V. and Matzke, D. (2002) Fata Morgana: a presentation system for product design, Proceedings of ISMAR '02, pp. 76–85.
Oviatt, S.L., Cohen, P.R., Wu, L., Vergo, J., Duncan, L., Suhm, B., Bers, J., et al. (2000) Designing the user interface for multimodal speech and gesture applications: state-of-the-art systems and research directions, Human Computer Interaction, Vol. 15, No. 4, pp. 263–322.
Paradiso, J.A., Hsiao, K., Strickon, J., Lifton, J. and Adler, A. (2000) Sensor systems for interactive surfaces, IBM Systems Journal, Vol. 39, Nos. 3–4, pp. 892–914.
Patten, J., Ishii, H., Hines, J. and Pangaro, G. (2001) Sensetable: a wireless object tracking platform for tangible user interfaces, Proceedings of CHI '01, pp. 253–260.
Pedersen, E.R., Sokoler, T. and Nelson, L. (2000) PaperButtons: expanding a tangible user interface, Proceedings of DIS '00, pp. 216–223.
Pederson, T. (2001) Magic touch: a simple object location tracking system enabling the development of physical-virtual artefacts in office environments, Personal Ubiquitous Comput., January, Vol. 5, No. 1, pp. 54–57.
Piper, B., Ratti, C. and Ishii, H. (2002) Illuminating clay: a 3-D tangible interface for landscape analysis, Proceedings of CHI '02, pp. 355–362.
Prince, S.J., Xu, K. and Cheok, A.D. (2002) Augmented reality camera tracking with homographies, IEEE Comput. Graph. Appl., November, Vol. 22, No. 6, pp. 39–45.
Raskar, R., Welch, G., Low, K-L. and Bandyopadhyay, D. (2001) Shader lamps: animating real objects with image based illumination, Proceedings of Eurographics Workshop on Rendering, pp. 89–102.
Raskar, R. and Low, K-L. (2001) Interacting with spatially augmented reality, ACM International Conference on Virtual Reality, Computer Graphics and Visualization in Africa (AFRIGRAPH), pp. 101–108.
Raskar, R., van Baar, J., Beardsley, P., Willwacher, T., Rao, S. and Forlines, C. (2003) iLamps: geometrically aware and self-configuring projectors, SIGGRAPH, pp. 809–818.
Raskar, R. and Bimber, O. (2004) Spatial Augmented Reality, A.K. Peters, ISBN: 1568812302.
Rekimoto, J. (2002) SmartSkin: an infrastructure for freehand manipulation on interactive surfaces, Proceedings of CHI '02, pp. 113–120.
Saakes, D.P., Chui, K., Hutchison, T., Buczyk, B.M., Koizumi, N., Inami, M. and Raskar, R. (2010) Slow Display, SIGGRAPH 2010 Emerging Technologies: Proceedings of the 37th annual conference on Computer graphics and interactive techniques, July 2010.
Schäfer, K., Brauer, V. and Bruns, W. (1997) A new approach to human-computer interaction: synchronous modelling in real and virtual spaces, Proceedings of DIS '97, pp. 335–344.
Schall, G., Mendez, E., Kruijff, E., Veas, E., Sebastian, J., Reitinger, B. and Schmalstieg, D. (2008) Handheld augmented reality for underground infrastructure visualization, Journal of Personal and Ubiquitous Computing, Springer, DOI 10.1007/s00779-008-0204-5.
Schmalstieg, D. and Wagner, D. (2008) Mobile phones as a platform for augmented reality, Proceedings of the IEEE VR 2008 Workshop on Software Engineering and Architectures for Realtime Interactive Systems, pp. 43–44.
Sutherland, I.E. (1968) A head-mounted three-dimensional display, Proceedings of AFIPS, Part I, Vol. 33, pp. 757–764.
Ukita, N. and Kidode, M. (2004) Wearable virtual tablet: fingertip drawing on a portable plane-object using an active-infrared camera, Proceedings of IUI 2004, pp. 169–175.
Verlinden, J.C., de Smit, A., Horváth, I., Epema, E. and de Jong, M. (2003) Time compression characteristics of the augmented prototyping pipeline, Proceedings of Euro-uRapid'03, p. A/1.
Verlinden, J. and Horváth, I. (2008) Enabling interactive augmented prototyping by portable hardware and a plugin-based software architecture, Journal of Mechanical Engineering, Slovenia, Vol. 54, No. 6, pp. 458–470.
Welch, G. and Foxlin, E. (2002) Motion tracking: no silver bullet, but a respectable arsenal, IEEE Computer Graphics and Applications, Vol. 22, No. 6, pp. 24–38.
LIKE RIDING A BIKE. LIKE PARKING A CAR.
PORTRAIT OF THE ARTIST IN RESIDENCE
MARINA DE HAAS
BY HANNA SCHRAFFENBERGER
"Hi Marina. Nice to meet you!
I have heard a lot about you."
I usually avoid this kind of phrases. Judging from
my experience, telling people that you have
heard a lot about them makes them feel uncom-
fortable. But this time I say it. After all, its no
secret that Marina and the AR Lab in The Hague
share a history which dates back much longer
than her current residency at the AR Lab. At the
lab, she is known as one of the frst students
who overcame the initial resistance of the fne
arts program and started working with AR. With
support of the lab, she has realized the AR art-
works Out of the blue and Drops of white in the
course of her study. In 2008 she graduated with
an AR installation that shows her 3d animated
portfolio. Then, having worked with AR for three
years, she decided to take a break from technol-
ogy and returned to photography, drawing and
painting. Now, after yet another three years,
she is back in the mixed reality world. Convinced
by her concepts for future works, the AR Lab
has invited her as an Artist in Residence. That is
what I have heard about her, and made me want
to meet her for an artist-portrait. Knowing quite
a lot about her past, I am interested in what she is currently working on, in the context of her residency. When she starts talking, it becomes clear that she has never really stopped thinking about AR. There's a handwritten notebook full of concepts and sketches for future works. Right now, she is working on animations of two animals. Once she is done animating, she'll use AR technology to place the animals, an insect and a dove, in the hands of the audience.
"They will hold a little funeral monument in the shape of a tile in their hands. Using AR technology, the audience will then see a dying dove or a dying crane fly with a missing foot."
Marina tells me her current piece is about impermanence and mortality, but also about the fact that death can be the beginning of something new. Likewise, the piece is not only about death but also intended as an introduction and beginning for a forthcoming work. The AR Lab makes this beginning possible through financial support, but also provides technical assistance and serves as a place for mutual inspiration and exchange. Despite her long break from the digital arts, the young artist feels confident about working with AR again:
"It's a bit like biking: once you've learned it, you never unlearn it. It's the same with me and AR. Of course I had to practice a bit, but I still have the feel for it. I think working with AR is just a part of me."
After having paused for three years, Marina is positively surprised by how AR technology has developed in the meantime:
"AR is out there, it's alive, it's growing and finally, it can be markerless. I don't like the use of markers. They are not part of my art, and people see them when they don't wear AR glasses. I am also glad that so many people know AR from their mobile phones or at least have heard about it before. Essentially, I don't want the audience to wonder about the technology, I want them to look at the pictures and animations I create. The more people are used to the technology, the more they will focus on the content. I am really happy and excited about how AR has evolved in the last years!"
I ask how working with brush and paint differs from working with AR, but there seems to be surprisingly little difference.
"The main difference is that with AR I am working with a pen-tablet, a computer and a screen. I control the software, but if I work with a brush I have the same kind of control over it. In the past, I used to think that there was a difference, but now I think of the computer as just another medium to work with. There is no real difference between working with a brush and working with a computer. My love for technology is similar to my love for paint."
Marina discovered her love for technology at a young age:
"When I was a child I found a book with code and so I programmed some games. That was fun, I just understood it. It's the same with creating AR works now. My way of thinking perfectly matches how AR works. It feels completely natural to me."
Nevertheless, working with technology also has its downside:
"The most annoying thing about working with AR is that you are always facing technical limitations and there is so much that can go wrong. No matter how well you do it, there is always the risk that something won't work. I hope for technology to get more stable in the future."
When realizing her artistic augmentations, Marina sticks to an established workflow:
"I usually start out with my own photographs and a certain space I want to augment. Preferably I measure the dimensions of the space, and then I work with that
room in my head. I have it in my inner vision and I think in pictures. There is a photo register in my head which I can access. It's a bit like parking a car. I can park a car in a very small space extremely well. I can feel the car around me and I can feel the space I want to put it in. It's the same with the art I create. Once I have a clear idea of the artwork I want to create, I use Cinema4D software to make 3d models. Then I use BuildAR to place my 3d models in the real space. If everything goes well, things happen that you could not have imagined."
A result of this process is, for example, the AR installation Out of the blue, which was shown at the TodaysArt festival in The Hague in 2007:
"The idea behind Out of the blue came from a photograph I took in an elevator. I took the picture so that the lights in the elevator looked like white ellipses on a black background. I took this basic elliptical shape as a basis for working in a very big space. I was very curious if I could use such a simple shape and still convince the audience that it really existed in the space. And it worked: people tried to touch it with their hands and were very surprised when that wasn't possible."
The fact that people believe in the existence of her virtual objects is also important for Marina's personal understanding of AR:
"For me, Augmented Reality means using digital images to create something which is not real. However, by giving meaning to it, it becomes real and people realize that it might as well exist."
I wonder whether there is a specific place or space she'd like to augment in the future, and Marina has quite some places in mind. They have one thing in common: they are all well-known museums that show modern art.
"I would love to create works for the big museums such as the Tate Modern or MoMA. In the Netherlands, I'd love to augment spaces at the Stedelijk Museum in Amsterdam or the Boijmans museum in Rotterdam. That's my world. Going to a museum means a lot to me. Of course, one can place AR artworks everywhere, also in public spaces. But it is important to me that people who experience my work have actively chosen to go somewhere to see art. I don't want them to just see it by accident at a bus stop or in a park."
Rather than placing her virtual models in a specific physical space, her current work follows a different approach. This time, Marina will place the animated dying animals in the hands of the audience. The artist has some ideas about how to design this physical contact with the digital animals.
"In order for my piece to work, the viewer needs to feel like he is holding something in his hand. Ideally, he will feel the weight of the animal. The funeral monuments will therefore have a certain weight."
It is still open where and when we will be able to experience the piece:
"My residency lasts 10 weeks. But of course that's not enough time to finish. In the past, a piece was finished when the time to work on it was up. Now, a piece is finished when it feels complete. It's something I decide myself, I want to have control over it. I don't want any more restrictions. I avoid deadlines."
Coming from a fine arts background, Marina has a tip for art students who want to follow in her footsteps and are curious about working with AR:
"I know it can be difficult to combine technology with art, but it is worth the effort. Open yourself up to art in all its possibilities, including AR. AR is a chance to take a step in a direction of which you have no idea where you'll find yourself. You have to be open for it and look beyond the technology. AR is special. I couldn't live without it anymore..."
Jeroen van Erp graduated from the Faculty of Industrial Design at the Technical University of Delft in 1988. In 1992, he was one of the founders of Fabrique in Delft, which positioned itself as a multidisciplinary design bureau. He established the interactive media department in 1994, focusing primarily on developing websites for the World Wide Web - brand new at that time.
BIOGRAPHY - JEROEN VAN ERP
Under Jeroen's joint leadership, Fabrique has grown through the years into a multifaceted design bureau. It currently employs more than 100 artists, engineers and storytellers working for a wide range of customers: from supermarket chain Albert Heijn to the Rijksmuseum.
Fabrique develops visions, helps its clients think about strategies, branding and innovation, and realises designs. Preferably cutting straight through the design disciplines, so that the traditional borders between graphic design, industrial design, spatial design and interactive media are sometimes barely recognisable. In the bureau's vision, this cross-media approach will be the only way to create apparently simple solutions for complex and relevant issues. The bureau also opened a studio in Amsterdam in 2008.
Jeroen is currently CCO (Chief Creative Officer) and a partner, and in this role he is responsible for the creative policy of the company. He has also been closely involved in various projects as art director and designer. He is a guest lecturer for various courses and is a board member at NAGO (the Netherlands Graphic Design Archive) and the Design & Emotion Society.
www.fabrique.nl
The moment I was confronted with the technology of Augmented Reality, back in 2006 at the Royal Academy of Arts in The Hague, I was thrilled. Despite the heavy helmet, the clumsy equipment, the shaky images and the lack of a well-defined purpose, it immediately had a profound impact on me. From the start, it was clear that this technology had a lot of potential, although at first it was hard to grasp why. Almost six years later, the fog that initially surrounded this new technology has gradually faded away. To start with, the technology itself is developing rapidly, as is the equipment. But more importantly: companies and cultural institutions are starting to understand how they can benefit from this technology. At the moment there is a variety of applications available (mainly mobile applications for tablets or smartphones) that create added value for the user or consumer. This is great, because it allows not only the audience but also the industry to gain experience in the field of this still-developing technology. But to make Augmented Reality a real success, the next step will be of vital importance.
A MAGICAL LEVERAGE
IN SEARCH OF THE KILLER APPLICATION
BY JEROEN VAN ERP
INNOVATING OR INNOVATING?
Let's have a look at different forms of innovating in figure 1. On the left we see innovations with a bottom-up approach, and on the right a top-down approach to innovating. A bottom-up approach means that we have a promising new technique, concept or idea, although the exact goal or matching business model isn't clear yet. In general, bottom-up developments are technological or art-based, and are therefore what I would call autonomous: the means are clear, but the exact goal has still to be defined. The usual strategy to take it further is to set up a start-up company in order to develop the technique and hopefully to create a market.
This is not always that simple. Innovating from a top-down approach means that the innovation is steered on the basis of a more or less clearly defined goal. In contrast with bottom-up innovations, the goal is well-defined and the designer or developer has to choose the right means and design a solution that fits the goal. This can be a business goal, but also a social goal. A business goal is often derived from a benefit for the user or the consumer, which is expected to generate an economic benefit for the company. A marketing specialist would state that there is already a market. This approach means that you have to innovate with an intended goal in mind. A business goal-driven innovation can be a product innovation (either physical products, services or a combination of the two) or a brand innovation (storytelling, positioning), but always with an intended economical or social benefit in mind. As there is an expected benefit, people are willing to invest.
It's interesting to note the difference on the vertical axis between radical innovations and incremental changes (Robert Verganti, Design-Driven Innovation). Incremental changes are improvements of existing concepts or products. This is happening a lot, for instance in the automotive industry. In general, a radical innovation changes the experience of the product in a fundamental way, and as a result of this often changes an entire business.
Figure 1.
This is something Apple has achieved several times, but it has also been achieved by TomTom, and by Philips and Douwe Egberts with their Senseo coffee machine.
HOW ABOUT AR?
What about the position of Augmented Reality? To start with, the Augmented Reality technique is not a standalone innovation. It's not a standalone product but a technique or feature that can be incorporated into products or services with a magical leverage. At its core it is a technique that was developed, and is still developing, with specialist purposes in mind. In principle there was no big demand from the market. Essentially, it is a bottom-up technological development that needs a concept, product or service.
You can argue about whether it is an incremental innovation or a radical one. A virtual reality expert will probably tell you that it is an improvement (incremental innovation) of the VR technique. But if you look from an application perspective, there is a radical aspect to it. I prefer to keep the truth in the middle. At this moment in time, AR is in the blue area (figure 1).
It is clear that bottom-up innovation and top-down innovation are different species. But when it comes to economic leverage, it is a challenge to be part of the top-down game. This provides a guarantee for further development and broad acceptance of the technique and its principles. So the major challenge for AR is to make the big step to the right part of figure 1, as indicated by the red arrow. Although the principles of Augmented Reality are very promising, it's clear we aren't there yet. An example: we recently received a request to do something with Augmented Reality. The idea was to project the result of an AR application onto a big wall. Suddenly it occurred to me that the experience of AR wasn't suitable at all for this form of publishing. AR doesn't do well on a projection screen. It does well in the user's head, where time, place, reality and imagination can play an intriguing game with our senses. It is unlikely that the technique of Augmented Reality will lead to mass consumption, as in experiencing the same thing with a lot of people at the same time. No, by their nature, AR applications are intimate and intense, and this is one of their biggest assets.
FUTURE
We have come a long way, and the things we can do with AR are becoming more amazing by the day. The big challenge is to make it applicable in relevant solutions. There's no discussion about the value of AR in specialist areas, such as the military industry. Institutions in the field of art and culture have discovered the endless possibilities, and now it is time to make the big leap towards solutions with social or economic value (the green area in figure 1). This will give the technique the chance to develop further in order to flourish in the end. From that perspective, it wouldn't surprise me if the first really good, efficient and economically profitable application emerges for educational purposes.
Let's not forget we are talking about a technology that is still in its infant years. When I look back at the websites we made 15 years ago, I realize the gigantic steps we have made, and I am aware of the fact that we could hardly imagine then what the impact of the internet would be on society today. Of course, it's hard to compare the concept of Augmented Reality with that of the internet, but it is a valid comparison, because it gives the same powerless feeling of not being able to predict its future. But it will probably be bigger than you can imagine.
THE POSITIONING OF VIRTUAL OBJECTS
ROBERT PREVEL
WHEN USING AUGMENTED REALITY (AR) FOR VISION, VIRTUAL OBJECTS ARE ADDED TO THE REAL WORLD AND DISPLAYED IN SOME WAY TO THE USER; BE THAT VIA A MONITOR, PROJECTOR, OR HEAD-MOUNTED DISPLAY (HMD). OFTEN IT IS DESIRABLE, OR EVEN UNAVOIDABLE, FOR THE VIEWPOINT OF THE USER TO MOVE AROUND THE ENVIRONMENT (THIS IS PARTICULARLY THE CASE IF THE USER IS WEARING A HMD). THIS PRESENTS A PROBLEM, REGARDLESS OF THE TYPE OF DISPLAY USED: HOW CAN THE VIEWPOINT BE DECOUPLED FROM THE AUGMENTED VIRTUAL OBJECTS?
To recap, virtual objects are blended with the real-world view in order to achieve an augmented world view. From our initial viewpoint we can determine what the virtual object's position and orientation (pose) in 3D space, and its scale, should be. However, if the viewpoint changes, then how we view the virtual object should also change. For example, if I walk around to face the back of a virtual object, I expect to be able to see the rear of that object.
The solution to this problem is to keep track of the user's viewpoint and, in the event that the viewpoint changes, to update the pose of any virtual content accordingly. There are a number of ways in which this can be achieved, for example by using positional sensors (such as inertia trackers), a global positioning system, or computer vision techniques. Typically, the best results come from systems that take the data from several tracking systems and blend them together.
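To give a flavour of such blending, here is a minimal sketch: a fast but drifting position estimate (say, from an inertia tracker) is periodically pulled towards an absolute estimate from a vision tracker using a simple complementary filter. The weighting factor and the toy sensor values are assumptions for illustration, not part of any particular system.

```python
import numpy as np

def blend_poses(inertial_pos, vision_pos, alpha=0.9):
    """Complementary filter: trust the smooth inertial estimate for
    high-frequency motion and the absolute vision estimate for
    low-frequency drift correction. alpha is an invented weight."""
    return alpha * inertial_pos + (1.0 - alpha) * vision_pos

# Toy example: the inertia tracker drifts 1 cm per step along x,
# while the vision tracker occasionally reports the true position.
position = np.zeros(3)
true_position = np.zeros(3)
for step in range(100):
    position = position + np.array([0.01, 0.0, 0.0])  # drifting dead reckoning
    if step % 10 == 0:                                # vision fix every 10 frames
        position = blend_poses(position, true_position)
print(position)  # stays near the true position instead of drifting away
```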
At TU Delft, we have been researching and developing techniques to track position using computer vision. Often, video cameras are already part of an AR system; indeed, where the AR system uses video see-through, the use of cameras is necessary. Using computer vision techniques, we can identify landmarks in the environment and, from these landmarks, determine the pose of our camera with basic geometry. If the camera is not used directly as the viewpoint (as is the case in
optical see-through systems), then we can still keep track of the viewpoint by attaching the camera to it. Say, for example, that we have an optical see-through HMD with an attached video camera. If we calculate the pose of the camera, we can then determine the pose of the viewpoint, provided that the camera's position relative to the viewpoint remains fixed.
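That fixed offset is simply a constant rigid transform, so the viewpoint pose follows by composing transforms. A minimal sketch with 4x4 homogeneous matrices; the particular offset (an invented 3 cm mounting distance) and the identity rotations are placeholders:

```python
import numpy as np

def make_pose(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Tracked pose of the camera in the world frame (from marker/feature tracking).
world_T_camera = make_pose(np.eye(3), np.array([0.5, 0.0, 2.0]))

# Fixed transform from camera to eye, measured once by calibration;
# here an invented 3 cm vertical offset.
camera_T_eye = make_pose(np.eye(3), np.array([0.0, 0.03, 0.0]))

# The viewpoint pose is the composition of the two transforms.
world_T_eye = world_T_camera @ camera_T_eye
print(world_T_eye)
```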
The problem, then, has been reduced to identifying landmarks in the environment. Historically, this has been achieved by the use of fiducial markers, which act as points of reference in the image. Fiducial markers provide us with a means of determining the scale of the visible environment, provided that enough points of reference are visible, we know their relative positions, and these relative positions don't change. A typical marker often used in AR applications consists of a card with a black rectangle in the centre, a white border, and an additional mark to determine which edge of the card is considered the bottom. As we know that the corners of the black rectangle are all 90 degrees, and we know the distance between corners, we can identify the marker and determine the pose of the camera with regard to the points of reference (in this case the four corners of the card).
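In computer vision terms this is a perspective-n-point problem, and libraries such as OpenCV solve it directly. The sketch below shows one possible formulation for a single square marker; the marker size, pixel coordinates and camera intrinsics are placeholder values, and a real system would first detect the corners and calibrate the camera.

```python
import numpy as np
import cv2

# The four marker corners in the marker's own coordinate system (metres):
# an 8 cm square card, centred on the origin, lying in the z = 0 plane.
s = 0.08 / 2
object_points = np.array([[-s, -s, 0], [s, -s, 0],
                          [s, s, 0], [-s, s, 0]], dtype=np.float64)

# Where those corners were detected in the image (placeholder pixel values).
image_points = np.array([[310, 260], [390, 255],
                         [395, 330], [305, 335]], dtype=np.float64)

# Intrinsics from a one-off camera calibration (placeholder values).
camera_matrix = np.array([[800, 0, 320],
                          [0, 800, 240],
                          [0, 0, 1]], dtype=np.float64)
dist_coeffs = np.zeros(5)  # assume negligible lens distortion

# Solve for the camera pose relative to the marker.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                              camera_matrix, dist_coeffs)
print(ok, rvec.ravel(), tvec.ravel())  # rotation (Rodrigues) and translation
```

The same call generalizes to any set of known 3D points, which is why multiple linked markers simply mean more correspondences and, typically, a more stable pose.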
A large number of simple desktop AR applications make use of individual markers to track camera pose or, conversely, to track the position of the markers relative to our viewpoint. Larger applications require multiple markers linked together, normally distinguishable by a unique pattern or barcode in the centre of each marker. Typically, the more points of reference that are visible in a scene, the better the results when determining the camera's pose. The key advantage of using markers for tracking the pose of the camera is that an environment can be carefully prepared in advance and, provided the environment does not change, should deliver the same AR experience each time. Sometimes, however, it is not feasible to prepare an environment with markers. Often it is desirable to use an AR application in an unknown or unprepared environment. In these cases, an alternative to using markers is to identify the natural features found in the environment.
The term natural features can be used to describe the parts of an image that stand out; examples include edges, corners and areas of high contrast. In order to use natural features to track the camera position in an unknown environment, we need to be able to first identify the natural features, and then determine their relative positions in the environment. Whereas you could place 20 markers in an environment and still only have 80 identifiable corners, there are often hundreds of natural features in any one image. This makes using natural features a more robust solution than using markers, as there are far more landmarks we can use to navigate, not all of which need to be visible. One of the key advantages of using natural features over markers is that, as we already need to identify and keep track of those natural features seen from our initial viewpoint, we can use the same method to continually update a 3D map of features as we change our viewpoint. This allows our working environment to grow, which could not be achieved in a prepared environment.
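Detecting and matching such features is well supported by off-the-shelf tools. As an illustration (not necessarily the detector used in our research), the sketch below uses OpenCV's ORB detector to find natural features in two hypothetical views of a scene and match them, which is the raw material for both pose estimation and map building.

```python
import cv2

# Two views of the same scene from slightly different viewpoints
# (hypothetical file names).
frame1 = cv2.imread("view1.png", cv2.IMREAD_GRAYSCALE)
frame2 = cv2.imread("view2.png", cv2.IMREAD_GRAYSCALE)

# Detect corner-like natural features and compute binary descriptors.
orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(frame1, None)
kp2, des2 = orb.detectAndCompute(frame2, None)

# Match descriptors between the two views; each good match is a landmark
# observed from both viewpoints.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"{len(matches)} matched natural features")
```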
Although we are able to determine the relative distance between features, the question remains: how can we determine the absolute position of features in an environment without some known measurement? The short answer is that we cannot; either we need to estimate the distance or we must introduce a known measurement. In a future edition we will discuss the use of multiple video cameras and how, given the absolute distance between the cameras, we can determine the absolute position of our identified features.
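The effect of a known measurement is easy to see in a toy example: if the map says two features are 1.7 units apart, and a ruler says they are really 0.85 m apart, a single scale factor converts the entire map to metric units. The numbers below are made up for illustration.

```python
import numpy as np

# A reconstructed map in arbitrary units (made-up feature positions).
map_points = np.array([[0.0, 0.0, 0.0],
                       [1.7, 0.0, 0.0],
                       [0.4, 2.1, 3.3]])

# One known real-world measurement: the first two features are 0.85 m apart.
measured_metres = 0.85
reconstructed = np.linalg.norm(map_points[1] - map_points[0])

# Rescaling every point by the same factor yields a metric map.
metric_map = map_points * (measured_metres / reconstructed)
print(metric_map)
```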
MEDIATED REALITY FOR CRIME SCENE INVESTIGATION ¹
STEPHAN LUKOSCH

¹ This article is based upon (Poelman et al., 2012).
² http://www.sk.tbm.tudelft.nl
³ http://3me.tudelft.nl/en/about-the-faculty/departments/biomechanical-engineering/research/dbl-delft-biorobotics-lab/people/

CRIME SCENE INVESTIGATION IN THE NETHERLANDS IS PRIMARILY THE RESPONSIBILITY OF THE LOCAL POLICE. FOR SEVERE CRIMES, A NATIONAL TEAM SUPPORTED BY THE NETHERLANDS FORENSIC INSTITUTE (NFI) IS CALLED IN. INITIALLY CAPTURING ALL DETAILS OF A CRIME SCENE IS OF PRIME IMPORTANCE (SO THAT EVIDENCE IS NOT ACCIDENTALLY DESTROYED). NFI'S DEPARTMENT OF DIGITAL IMAGE ANALYSIS USES THE INFORMATION COLLECTED FOR 3D CRIME SCENE RECONSTRUCTION AND ANALYSIS.

Within the CSI The Hague project (http://www.csithehague.com), several companies and research institutes cooperate under the guidance of the Netherlands Forensic Institute in order to explore new technologies that improve crime scene investigation by combining different technologies to digitize, visualize and investigate the crime scene. The major motivation for the CSI The Hague project is that one can investigate a crime scene only once: if you do not secure all possible evidence during this investigation, it will not be available for solving the committed crime. The digitalization of the crime scene provides opportunities for testing hypotheses and witness statements, but can also be used to train future investigators. For the CSI The Hague project, two groups at the Delft University of Technology, Systems Engineering ² and Biomechanical Engineering ³, joined their efforts to explore the potential of mediated and Augmented Reality for future crime scene investigation and to tackle current issues in crime scene investigation. In Augmented Reality, virtual data is spatially overlaid on top of physical reality. With this technology the flexibility of virtual reality can be used while remaining grounded in physical reality (Azuma, 1997). Mediated reality refers to the ability to add information to, subtract information from, or otherwise manipulate one's perception of reality through the use of a wearable computer or hand-held device (Mann and Barfield, 2003).

In order to reveal the current challenges for supporting spatial analysis in crime scene investigation, structured interviews with five international experts in the area of 3D crime scene reconstruction were conducted. The interviews showed a particular interest in current challenges in spatial reconstruction and the interaction with the reconstruction data. The identified challenges are:

Time needed for reconstruction: data capture, alignment, data clean-up, geometric modelling and analyses are manual steps.

Expertise: dedicated software must be deployed and evidence secured at the crime scene.

Complexity: situations differ significantly.

Time freeze: data capture is often conducted only once, after a scene has been contaminated.

The interview sessions ended with an open discussion on how mediated reality can support crime scene investigation in the future. Based on these open discussions, the following requirements for a mediated reality system that is to support crime scene investigation were identified:

Lightweight head-mounted display (HMD): It became clear that the investigators who arrive first on the crime scene currently carry a digital camera. Weight and ease of use are important design criteria; experts would like something close to a pair of glasses.

Contactless augmentation alignment (no markers on the crime scene): The first investigator who arrives on a crime scene has to keep the crime scene as untouched as possible. Technology that involves preparing the scene is therefore unacceptable.

Bare-hand gestures for user interface operation: The hands of the CSIs have to be free to physically interact with the crime scene when needed, e.g. to secure evidence, open doors, climb, etc. Additional hardware such as data gloves, or physically touching an interface such as a mobile device, is not acceptable.

Remote connection to and collaboration with experts: Expert crime scene investigators are a scarce resource and are not often available on location on request. Setting up a remote connection to guide a novice investigator through the crime scene and to collaboratively analyze the crime scene has the potential to improve the investigation quality.

To address the above requirements, a novel mediated reality system for collaborative spatial analysis on location has been designed, developed and evaluated together with experts in the field and the NFI. This system supports collaboration between crime scene investigators (CSIs) on location who wear an HMD (see Figure 1) and expert colleagues at a distance.

Figure 1. Mediated reality head-mounted device in use during the experiment in the Dutch forensic field lab.

The mediated reality system builds a 3D map of the environment in real-time, allows remote users to virtually join and interact together in a shared augmented space with the wearer of the HMD, and uses bare-hand gestures to operate the 3D multi-touch user interface. The resulting medi-
ated reality system supports a lightweight head-mounted display (HMD), contactless augmentation alignment, and a remote connection to and collaboration with expert crime scene investigators.
The video see-through of a modified Carl Zeiss Cinemizer OLED (cf. Figure 2) for displaying content fulfills the requirement for a lightweight HMD, as its total weight is ~180 grams. Two Microsoft HD-5000 webcams are stripped and mounted in front of the Cinemizer, providing a full stereoscopic 720p resolution pipeline. Both cameras record at ~30 Hz in 720p; the images are projected in our engine, which renders 720p stereoscopic images to the Cinemizer.
As for all mediated reality systems, robust real-time pose estimation is one of the most crucial parts, as the 3D pose of the camera in the physical world is needed to render virtual objects correctly at the required positions. We use a heavily modified version of PTAM (Parallel Tracking and Mapping) (Klein and Murray, 2007), in which the single camera setup is replaced by a stereo camera setup using 3D natural feature matching and pose estimation based on natural features. Using this algorithm, a sparse metric map (cf. Figure 3) of the environment is created. This sparse metric map can be used for pose estimation in our Augmented Reality system.
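The stereo setup is what makes the map metric: the known distance between the two cameras fixes the absolute scale. The following sketch shows only the core triangulation step with OpenCV, for one matched feature in a rectified stereo pair; the calibration values and pixel coordinates are placeholders, not those of our system.

```python
import numpy as np
import cv2

# Projection matrices of a rectified stereo pair (placeholder calibration):
# identical intrinsics, right camera shifted 6 cm along the x axis.
K = np.array([[700.0, 0.0, 640.0],
              [0.0, 700.0, 360.0],
              [0.0, 0.0, 1.0]])
baseline = 0.06  # metres between the two cameras
P_left = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P_right = K @ np.hstack([np.eye(3), np.array([[-baseline], [0.0], [0.0]])])

# A natural feature matched in both images (placeholder pixel coordinates).
pt_left = np.array([[652.0], [371.0]])
pt_right = np.array([[631.0], [371.0]])

# Triangulate to a homogeneous 3D point and dehomogenize; because the
# baseline is metric, the resulting map point is metric too.
point_h = cv2.triangulatePoints(P_left, P_right, pt_left, pt_right)
point_3d = (point_h[:3] / point_h[3]).ravel()
print(point_3d)  # landmark position in metres, in the left camera frame
```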
In addition to the sparse metric map, a dense 3D map of the crime scene is created. The dense metric map provides a detailed copy of the crime scene, enabling detailed analysis, and is created from a continuous stream of disparity maps that are generated while the user moves around the scene. Each new disparity map is registered (combined) using the pose information from the pose estimation module to construct or extend the 3D map of the scene. The point clouds are used for occlusion and collision checks, and for snapping digital objects to physical locations.
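As a rough illustration of this dense mapping step (a sketch with invented parameters, not the project's actual code), OpenCV can turn a rectified stereo pair into a disparity map, reproject it to a camera-frame point cloud, and register that cloud into the world frame using the tracked pose:

```python
import numpy as np
import cv2

def disparity_to_world_cloud(left_gray, right_gray, Q, world_T_camera):
    """Disparity map -> camera-frame point cloud -> world-frame point cloud."""
    # Semi-global block matching; parameters are illustrative, not tuned.
    stereo = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64,
                                   blockSize=7)
    # compute() returns fixed-point disparities scaled by 16.
    disparity = stereo.compute(left_gray, right_gray).astype(np.float32) / 16.0

    # Q is the 4x4 disparity-to-depth matrix from stereo rectification.
    points = cv2.reprojectImageTo3D(disparity, Q)
    valid = disparity > 0
    cloud = points[valid].reshape(-1, 3)

    # Register the cloud: transform camera-frame points by the tracked pose,
    # so clouds from successive viewpoints extend one shared world map.
    homogeneous = np.hstack([cloud, np.ones((len(cloud), 1))])
    return (world_T_camera @ homogeneous.T).T[:, :3]
```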
By using an innovative hand tracking system, the mediated reality system can recognize bare-hand gestures for user interface operation. This hand tracking system utilizes the stereo camera rig to detect hand movements in 3D.
Figure 2. Head-mounted display: a modified Cinemizer OLED (Carl Zeiss) with two Microsoft HD-5000 webcams.
Figure 3. Sparse 3D feature map generated by the pose estimation module.
Figure 4. Graphical user interface options menu.
The cameras are part of the HMD, and an adaptive algorithm has been designed to determine whether to rely on the colour, on the disparity, or on both, depending on the lighting conditions. This is the core technology to fulfill the requirement of bare-hand interfacing. The user interface and the virtual scene are general-purpose parts of the mediated reality system: they can be used for CSI, but also for any other mediated reality application. The tool set, however, needs to be tailored to the application domain. The current mediated reality system supports the following tasks for CSIs: recording the scene, placing tags, loading 3D models and bullet trajectories, and placing restricted-area ribbons. Figure 4 shows the corresponding menu attached to a user's hand.
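The article leaves the adaptive algorithm unspecified, so the following is merely a plausible sketch of the idea: estimate how well lit the image is, and fall back from a skin-colour cue to a pure depth cue when colour becomes unreliable. All thresholds and the fusion rule are invented for illustration.

```python
import numpy as np
import cv2

def hand_mask(frame_bgr, disparity, min_brightness=60):
    """Fuse a colour cue and a depth cue to segment a bare hand.
    In good light, skin colour is informative; in poor light we lean on
    disparity (the hands are the closest objects to the head-mounted rig)."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    skin = cv2.inRange(hsv, (0, 40, 60), (25, 255, 255))  # rough skin range
    near = (disparity > np.percentile(disparity, 95)).astype(np.uint8) * 255

    brightness = hsv[..., 2].mean()
    if brightness < min_brightness:      # too dark: colour is unreliable
        return near
    return cv2.bitwise_and(skin, near)   # otherwise require both cues
```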
The mediated reality system has been evaluated on a staged crime scene at the NFI's lab with three observers, one expert and one layman with only a limited background in CSI. Within the experiment the layman, facilitated by the expert, conducted three spatial tasks: tagging a specific part of the scene with information tags, using barrier tape and poles to spatially secure the body in the crime scene, and analyzing a bullet trajectory with ricochet. The experiment was analyzed along seven dimensions (Burkhardt et al., 2009): fluidity of collaboration, sustaining mutual understanding, information exchanges for problem solving, argumentation and reaching consensus, task and time management, cooperative orientation, and individual task orientation. The results show that the mediated reality system supports remote spatial interaction with the physical scene as well as collaboration in shared augmented space while tackling current issues in crime scene investigation. The results also show that there is a need for more support to identify whose turn it is and who wants the next turn. Additionally, the results show the need to represent the expert in the scene, to increase the awareness and trust of working in a team and to counterbalance the feeling of being observed. Knowing the expert's focus and current activity could possibly help to overcome this issue. Whether traditional patterns for computer-mediated interaction (Schümmer and Lukosch, 2007) support awareness in mediated reality, or rather new forms of awareness need to be designed, will be the subject of future research.
Further tasks for future research include the design and evaluation of alternative interaction possibilities, e.g. using physical objects that are readily available in the environment; sensor fusion with image feeds from spectral cameras or previously recorded laser scans to provide more situational awareness; and the privacy, security and validity of captured data. Finally, though the system is being tested and used for educational purposes within the CSI Lab of the Netherlands Forensic Institute (NFI), only the application and testing of the mediated reality system in real settings can show its added value for crime scene investigation.
REFERENCES
R. Azuma, A Survey of Augmented Reality, Presence: Teleoperators and Virtual Environments, Vol. 6, No. 4, 1997, 355-385

J. Burkhardt, F. Détienne, A. Hébert, L. Perron, S. Safin, P. Leclercq, An approach to assess the quality of collaboration in technology-mediated design situations, European Conference on Cognitive Ergonomics: Designing beyond the Product - Understanding Activity and User Experience in Ubiquitous Environments, 2009, 1-9

G. Klein, D. Murray, Parallel Tracking and Mapping for Small AR Workspaces, Proc. International Symposium on Mixed and Augmented Reality, 2007, 225-234

S. Mann, W. Barfield, Introduction to Mediated Reality, International Journal of Human-Computer Interaction, 2003, 205-208

R. Poelman, O. Akman, S. Lukosch, P. Jonker, As if Being There: Mediated Reality for Crime Scene Investigation, CSCW '12: Proceedings of the 2012 ACM Conference on Computer Supported Cooperative Work, ACM, New York, NY, USA, 2012, 1267-1276, http://dx.doi.org/10.1145/2145204.2145394

T. Schümmer, S. Lukosch, Patterns for Computer-Mediated Interaction, John Wiley & Sons, Ltd., 2007
DIE WALKÜRE

On Friday, December 16th 2011, the symphony orchestra of the Royal Conservatoire played Die Walküre (Act 1) by Richard Wagner at the beautiful concert hall De Vereeniging in Nijmegen. The AR Lab was invited by the Royal Conservatoire to provide visuals during this live performance. Together with students from different departments of the Royal Academy of Art, we designed a screen consisting of 68 pieces of transparent cloth (400x20 cm), hanging in four layers above the orchestra. By projecting on this cloth we created visuals giving the illusion of depth.

We chose 7 leitmotivs (a recurring theme associated with a particular person, place, or idea) and created animations representing these using colour, shape and movement. These animations were played at key moments of the performance.
CONTRIBUTORS

WIM VAN ECK
Royal Academy of Art (KABK)
w.vaneck@kabk.nl
Wim van Eck is the 3D animation specialist of the AR Lab. His main tasks are developing Augmented Reality projects, supporting and supervising students and creating 3D content. His interests are, among others, real-time 3D animation, game design and creative research.

JEROEN VAN ERP
Fabrique
jeroen@fabrique.nl
Jeroen van Erp co-founded Fabrique, a multi-disciplinary design agency in which the different design disciplines (graphic, industrial, spatial and new media) are closely interwoven. As a designer he was recently involved in the flagship store of Giant Bicycles, the website for the Dutch National Ballet and the automatic passport control at Schiphol airport, among others.

PIETER JONKER
Delft University of Technology
P.P.Jonker@tudelft.nl
Pieter Jonker is Professor at Delft University of Technology, Faculty of Mechanical, Maritime and Materials Engineering (3ME). His main interests and fields of research are: real-time embedded image processing, parallel image processing architectures, robot vision, robot learning and Augmented Reality.

YOLANDE KOLSTEE
Royal Academy of Art (KABK)
Y.Kolstee@kabk.nl
Yolande Kolstee has been head of the AR Lab since 2006. She holds the post of Lector (Dutch for researcher in professional universities) in the field of Innovative Visualisation Techniques in higher Art Education for the Royal Academy of Art, The Hague.

MAARTEN LAMERS
Leiden University
lamers@liacs.nl
Maarten Lamers is assistant professor at the Leiden Institute of Advanced Computer Science (LIACS) and board member of the Media Technology MSc program. His specializations include social robotics, bio-hybrid computer games, scientific creativity, and models for perceptualization.

STEPHAN LUKOSCH
Delft University of Technology
S.g.lukosch@tudelft.nl
Stephan Lukosch is associate professor at the Delft University of Technology. His current research focuses on collaborative design and engineering in traditional as well as emerging interaction spaces such as augmented reality. In this research, he combines recent results from intelligent and context-adaptive collaboration support, collaborative storytelling for knowledge elicitation and decision-making, and design patterns for computer-mediated interaction.

FERENC MOLNÁR
Photographer
info@baseground.nl
Ferenc Molnár is a multimedia artist based in The Hague since 1991. In 2006 he returned to the KABK to study photography, and that's where he started to experiment with AR. His focus is on the possibilities and the impact of this new technology as a communication platform in our visual culture.

ROBERT PREVEL
Delft University of Technology
r.g.prevel@tudelft.nl
Robert Prevel is working on a PhD focusing on localisation and mapping in Augmented Reality applications at the Delft Biorobotics Lab, Delft University of Technology, under the supervision of Prof.dr.ir. P.P. Jonker.

HANNA SCHRAFFENBERGER
Leiden University
hkschraf@liacs.nl
Hanna Schraffenberger works as a researcher and PhD student at the Leiden Institute of Advanced Computer Science (LIACS) and at the AR Lab in The Hague. Her research interests include interaction in interactive art and (non-visual) Augmented Reality.

ESMÉ VAHRMEIJER
Royal Academy of Art (KABK)
e.vahrmeijer@kabk.nl
Esmé Vahrmeijer is graphic designer and webmaster of the AR Lab. Besides her work at the AR Lab, she is a part-time student at the Royal Academy of Art (KABK) and runs her own graphic design studio ooxo. Her interests are in graphic design, typography, web design, photography and education.

JOUKE VERLINDEN
Delft University of Technology
j.c.verlinden@tudelft.nl
Jouke Verlinden is assistant professor at the section of computer aided design engineering at the Faculty of Industrial Design Engineering. With a background in virtual reality and interaction design, he leads the Augmented Matter in Context lab, which focuses on the blend between bits and atoms for design and creativity. He is co-founder and lead of the minor programme on advanced prototyping and editor of the International Journal of Interactive Design, Engineering and Manufacturing.
SPECIAL THANKS

We would like to thank Reba Wesdorp, Edwin
van der Heide, Tama McGlinn, Ronald Poelman,
Karolina Sobecka, Klaas A. Mulder, Joachim
Rotteveel and last but not least the Stichting
Innovatie Alliantie (SIA) and the RAAK (Regionale
Aandacht en Actie voor Kenniscirculatie) initia-
tive of the Dutch Ministry of Education, Culture
and Science.
NEXT ISSUE
The next issue of AR[t] will be out in October 2012.
