
Future Generation Computer Systems 81 (2018) 516–527


Evaluation of spatial interaction techniques for virtual heritage applications: A case study of an interactive holographic projection

Giuseppe Caggianese, Luigi Gallo *, Pietro Neroni

Institute for High Performance Computing and Networking, National Research Council of Italy (ICAR-CNR), Naples, Italy

highlights

• Interaction design of a holographic projection system for use in a museum context.
• Design of task- and domain-specific touchless interaction techniques.
• Quantitative and qualitative user studies.

article info

Article history:
Received 1 March 2017
Received in revised form 12 June 2017
Accepted 21 July 2017
Available online 6 August 2017

Keywords:
Holographic projection
Touchless interaction
Interaction technique
Museum
User study

abstract

The increasing use of information and communication technologies (ICT) in museums is providing curators with new opportunities for the display of cultural heritage content, making it possible to merge real and digital works of art in a coherent exhibition space. However, humans learn and perceive by following an interactive process, a fact that is particularly true in relation to the understanding, analysis and interpretation of the cultural heritage. In order to allow visitors to fully exploit the potential of this new hybrid cultural communication, interactivity is essential. This paper analyzes interaction design focusing on a holographic projection system equipped with a gesture-based interface and discussing the results of both quantitative and qualitative user studies aimed at empirically investigating users' preferences in relation to interaction techniques when used in a museum context. The experimental findings suggest the adoption of task-specific patterns in the design of touchless user interfaces for the exploration of digital heritage content.

© 2017 Elsevier B.V. All rights reserved.

* Corresponding author.
E-mail addresses: giuseppe.caggianese@icar.cnr.it (G. Caggianese), luigi.gallo@cnr.it (L. Gallo), pietro.neroni@icar.cnr.it (P. Neroni).
http://dx.doi.org/10.1016/j.future.2017.07.047

1. Introduction

A huge number of cultural objects are not accessible to visitors due to the lack of exhibition space in museums or to temporary restoration works. Moreover, even when such objects are accessible, they can often only be seen at a single site, since they cannot be transported either due to their fragility or because they are part of the urban space and are simply immovable. In this context, there is an increasing interest in the introduction of ICT (information and communications technology) to increase the number of cultural heritage objects that can be exhibited and observed and to enhance the quality of their presentation to the interested public [1]. In line with this trend, Google introduced its Art & Culture project, which allows users to effortlessly navigate via web-based interfaces among artworks from over a thousand museums across seventy countries. Moreover, many important cultural heritage institutions, being increasingly involved in the management of digitalized cultural heritage content, have recently released web-based virtual exhibitions [2]. Examples include the Museum of the Louvre, the Smithsonian National Museum of Natural History of Washington, the Metropolitan Museum of New York and the National Archaeological Museum of Naples.

In addition to the adoption of ICT for the building of virtual, off-site visits, more recently we have also witnessed an increasing interest in applying such technology within conventional exhibitions in museums. Digital representations of real but inaccessible works of art offer new opportunities to curators in the display and dissemination of cultural heritage content, making it possible to merge visible and invisible, hypothetical and imaginary objects into a single, coherent space. Digital works of art can coexist with real ones, providing more complete, dynamic and visitor-specific information than static museum information boards. The result is that nowadays it is common to hear about 'museums of the digital age' [3]. In such an exhibition, rather than replacing the real visit with a virtual one, the goal is to support the visitor

by stimulating her/his curiosity through a more personalized and informed experience.

Typically, such exhibitions involve the use of smartphone apps to provide in-depth, location-aware cultural services to visitors. However, although mobile technologies are nowadays pervasive, they involve only a single visitor at a time. Using smartphones makes it difficult to share experiences with other visitors, a fact which often has a negative effect on the enjoyment of the visit. An attempt to make mobile technologies more shareable between visitors is the 'Mobile Digital Museum', a completely itinerant digital exhibition designed for the Inner Mongolia Museum in China [4]. However, even if this problem can be overcome, mobile technologies also present other limitations: first, not all visitors have, or know how to use, a modern smartphone; secondly, in some museums, an internet connection is not available; and thirdly, and most importantly, visitors may not want to concentrate on their smartphones in preference to looking at the cultural space they are actually visiting.

To make the presence of digital installations within museums more user-friendly, and to allow visitors to switch seamlessly between the different realities of the cultural information, holographic projection systems may present an interesting opportunity. They can provide an effective, 3D-like visualization of virtual reconstructions of cultural heritage objects that can be shared by a number of visitors at the same time. However, the more photorealistic the virtual representation of the cultural artifact appears, the more essential interactivity becomes. Humans learn and perceive by following an interactive process, a fact that is particularly true in relation to the understanding, analysis and interpretation of the cultural heritage. Digital installations within museums should allow visitors to interact with, explore and manipulate data in order to examine the cultural artifacts from their own perspective. The interested visitor should be able to find her/his own path to cultural knowledge, by performing actions not permissible with respect to the real cultural objects. In this way, the digital installation can be used not only when the work of art is inaccessible, but also in combination with it, when it is on display, allowing the user to see and possibly feel the real artifact but, at the same time, to inspect it and gather information about it by taking advantage of new tools.

In this work, we present a holographic projection system that allows users to manipulate digital content by using spatial interaction techniques in a touchless way. Using a touchless interface presents many advantages: it does not require a long training time before being used, since it employs spatial interaction metaphors that are typical of everyday life; it is safe to use, since it does not require the user to physically touch any device; and, finally, it is unobtrusive for the museum, not requiring flashy hardware configurations that would not be compatible with some museum facilities. Furthermore, in this work we present the results of a user study, the aim of which was to empirically investigate how spatial interaction techniques perform with respect to different hardware configurations. In more detail, the goal was to determine the most efficient interaction technique to use in order to allow visitors to manipulate digital cultural content within a holographic projection system, and to understand how performance changes according to the sensor configuration. Finally, since the interactive holographic projection system was installed at the National Archaeological Museum of Naples for a period of four months, where it was used by hundreds of visitors, we also evaluate and discuss users' preferences with respect to the spatial interaction techniques and the whole system when effectively used in a museum context.

The rest of the paper is structured as follows. Section 2 summarizes the related work on digital technologies in today's museums, specifically focusing on volumetric displays and natural interfaces. Section 3 introduces the holographic projection system, detailing the design choices and hardware configuration. Next, Section 4 presents the touchless user interface designed to explore and manipulate the digital content. In Section 5 we discuss the results of a user study in which several combinations of hardware configurations and interaction techniques have been compared in the lab, whereas in Section 6 we discuss the results of the experiments carried out within the museum. Finally, Section 7 summarizes and concludes the paper.

2. Related work

Over the past quarter century, different ICT systems have been designed with the goal of enhancing the cultural heritage experience. One of the first examples is the design of cultural guides to interactively support the visitor during a visit to a museum. Early applications worked with personal digital assistants (PDAs), which were equipped with location-awareness capabilities [5–7]. Solutions based on RFID tags [8] or IrDA beacons [9] placed near to every point of interest along the cultural path within the exhibition were investigated to further integrate context-awareness capabilities [10]. Since the context data generated through the interaction between visitors (equipped with smartphones, tablets, etc.) and artworks (equipped with sensors) continuously evolves, a recent challenge has been the design of systems able to handle and take advantage of such a large amount of data [11]. The latest generation of cultural guides includes various kinds of multimedia content, to convey information about the cultural artifacts but especially to improve the visitor's experience by offering personalized access to that information [12–15]. In this context, visitors become active elements, able to discover cultural content and also share it through social recommendation sites on social media [16].

With respect to virtual reality technologies, in the last decades many virtual systems have been designed to make it possible to show cultural content to remote visitors in an attractive manner [1,17]. A considerable amount of cultural digital heritage content has become available online on museum websites and platforms. This allows visitors to enjoy virtual replicas of artworks without the risk of damaging them, and to access the multimedia content associated with a specific exhibit [18,19]. The usability of the user interface has been a primary concern from the early days, since an easy-to-use interface could increase the level of cognitive engagement, so leading to a better learning process [20]. Several research works have focused on how to avoid leaving the visitor restricted to the formal aspects of the cultural heritage, instead aiming to involve her/him more actively in satisfying specific individual interests. To achieve this goal, in [21] several interfaces for tabletop displays that enable visitors to perform a more active interaction with digital systems in museum exhibitions were analyzed. In [22], the opportunity of using serious games to facilitate the learning of cultural content by taking advantage of a fun experience was also investigated. In the last decade, additionally, holographic installations based on the Pepper's Ghost effect have been developed for different application domains. An example is EducHolo [23], which allows students of mechanical engineering to improve their study of objects orthographically by visualizing the holograms of 3D mechanical parts. However, although commercial versions of holographic installations are also available, most of them do not allow users to interact directly with the 3D content.

While mobile applications in the cultural heritage domain are now widely accepted, the design of systems to navigate virtual representations of cultural objects is still in its infancy. Since mouse and keyboard-based interaction systems are of limited use in the exploration of 3D content, in recent years many researchers have been working on the design of task- and domain-specific natural interfaces. One of the main goals of such interfaces is transparency: the visitor should forget that she/he is interfacing with a

Fig. 1. The main components of the holographic projection system.

computer, instead focusing primarily on the cultural content. The achievement of this objective has been facilitated by the advent of new and affordable sensing devices, which make it easy to design gesture-based interfaces that allow users to replicate with virtual objects the interaction techniques used with real ones. In [24], the authors employed the Kinect sensor to navigate interactively in a 3D museum. Another example of a gesture-based interface was introduced in [25], where a natural user interface to interact with digital artifacts without the need for special tools or training was evaluated in the Museum of Palazzo Medici Riccardi and in the Palazzo Vecchio in Florence, Italy. Complex natural user interfaces for virtual heritage environments based on hand and arm movements were described in [26,27]. Another type of natural interface for accessing cultural content makes use of cameras worn by the users. An example is the system presented in [28], a gestural interface to interact with artworks in museums. Finally, a novel user interface was recently introduced in [29] to allow visitors to interact with cultural content also in outdoor conditions by using wearable augmented reality technologies.

Fig. 2. The cultural heritage artifact: on the left the real one, in the middle its 3D reconstruction, and on the right the hologram as visualized in the pyramid.

3. The holographic projection system

The interactive projection system is based on the Pepper's Ghost effect [30], which creates a hologram illusion by reflecting an image using a glass or mylar film placed at a 45° angle from the audience. It is also equipped with a Leap Motion-based touchless user interface, which allows visitors to rotate and to magnify the objects in a touchless way by using single hand motions. In the following sections, we describe the system and the proposed interaction techniques.

3.1. Holographic pyramid design

We designed a three-sided holographic pyramid, which allows visitors to visualize the 3D object from three different points of view. The choice of a three-sided pyramid, instead of the commonly used four-sided configuration, was made for two main reasons: to reduce the problem of ambient light that can disturb the visualization of the hologram; and to enlarge the projecting area, it being equal to a 40-inch display. To project a different view of the object on each side of the pyramid, the rendering window of the display placed on top of the pyramid was organized in three different areas (see Fig. 1). The system visualizes a 3D reconstruction of the statue of the 'Head of Apollo of Omphalos', which was actually present in the exhibition close to the system itself (see Fig. 2).

3.2. Input device

In order to allow the visitors to manipulate a virtual object while close to the holographic system, we needed a short-range sensor able to capture visitors' fine-grained hand movements. Among the short-range off-the-shelf sensors, the Leap Motion [31] provides a robust hand segmentation even with the hand in a glove or in the presence of cluttered backgrounds. It works by using two infrared cameras arranged so that their fields of view (FOVs) intersect. The Leap Motion sensor provides a horizontal FOV of 135° and a vertical FOV of 120°, with a maximum operating distance of 0.60 m and an active interaction space that takes the shape of an inverted pyramid, resulting from the intersection of the binocular camera FOVs.

In our experience, among all the short-range IR sensors, the Leap Motion is the one that suffers the least from solar radiation interference. This aspect was relevant in our case study, since direct sunlight is often present in the museum where we performed our evaluation. Furthermore, the Leap Motion can be used in either a horizontal or a vertical configuration, both of which were employed in our experiments.

The Leap Motion is also the smallest off-the-shelf sensor of its category (75 × 30 mm). Due to this small size, it can be easily integrated into a holographic system, being placed in front of the pyramid below the projection area (see Fig. 1). Finally, the choice of the Leap Motion can be explained on the grounds that, compared with other sensors in the same price range, it achieves the highest precision, with sub-millimeter accuracy [32].

3.3. User interface

The touchless interface allows a visitor to play audio content related to the cultural object, rotate it and magnify a specific part of it to appreciate its details. To activate the interaction, the visitor has to place her/his hand close to the tracking sensor, at a distance ranging from 15 to 25 cm, in the area defined as the interaction area [33]. Fig. 3 shows a state diagram of the system.
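Such a state diagram is naturally implemented as a table-driven finite state machine. The sketch below illustrates the idea; the state and event identifiers are our own illustrative choices, not names from the authors' implementation.

```python
from enum import Enum, auto

class State(Enum):
    IDLE = auto()      # no visitor: the object rotates continuously
    LANGUAGE = auto()  # visitor selects the audio language
    READY = auto()     # waiting for a manipulation gesture
    ROTATION = auto()
    ZOOM_IN = auto()

# (current state, event) -> next state; unknown pairs leave the state unchanged
TRANSITIONS = {
    (State.IDLE, "hand_enters_area"): State.LANGUAGE,
    (State.LANGUAGE, "language_selected"): State.READY,
    (State.READY, "rotation_gesture"): State.ROTATION,
    (State.READY, "zoom_gesture"): State.ZOOM_IN,
    (State.ROTATION, "gesture_ended"): State.READY,
    (State.ZOOM_IN, "gesture_ended"): State.READY,
    (State.READY, "hand_leaves_area"): State.IDLE,
}

def step(state, event):
    """Return the next state for an event, ignoring undefined transitions."""
    return TRANSITIONS.get((state, event), state)
```

Keeping the transitions in a flat table, rather than nested conditionals, makes it easy to audit that every gesture eventually returns the interface to the Ready or Idle state.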

Fig. 3. The finite state machine describing the state transitions of the interface.

In more detail:

• When no visitor interacts with the system, it is in the Idle State; in this state, the cultural object continuously rotates in order to show all its sides;
• Whenever a visitor enters the interaction area, the system switches to the Language State, to let her/him select the language for the reproduction of the audio content related to the 3D content;
• After the language selection, the system goes into the Ready State; the user can start rotating the hologram (Rotation State) or magnifying a specific part of it (Zoom-in State) by performing the appropriate hand gesture.

4. The interaction techniques

The interaction techniques were specifically designed for the application, the task, the domain, the device and the 'typical' user in a museum context. In order to improve the usability of the developed techniques, we primarily considered visitor requests; the visitors outlined some requirements which had to be taken into due consideration:

Uni- vs. bi-manual interaction: it is tiring to use both hands for the interaction and, in any case, displaying two virtual hands simultaneously inside the pyramid would result in an increased occlusion of the cultural object; therefore, symmetric one-handed interaction techniques are preferred;

Virtual hand representation: the users prefer to visualize within the holographic system a virtual representation of the physical hand with which they are interacting (the control hand);

Ease of learning: since the system is designed for museum exhibitions, the users have to be considered as infrequent users; therefore, the interaction techniques should be easy to learn;

Embodied interaction: the users rely on their embodied skills in everyday life when interacting with virtual content; accordingly, interaction metaphors that come from everyday life actions should be employed;

Responsiveness: a lag between the event occurrence and the system response strongly degrades the interaction experience; therefore, the system should be designed to react as quickly as possible;

Degrees of freedom: when possible, the number of degrees of freedom that have to be controlled simultaneously should be reduced, to make the interaction easier and more precise.

In accordance with these general considerations, we designed two techniques for the rotation task (namely Swipe and Clutching), and two for the zooming task (namely Magnify and Grab), which are detailed in the next sections (see Figs. 4 and 5 and Table 1).

4.1. Rotation technique—clutching

In the Clutching technique (Fig. 4(a)), to rotate the 3D object the user has to arrange the virtual hand next to it, assume the grasp posture and, while maintaining this hand posture, move her/his hand in the direction in which she/he wants to rotate the hologram (position control mode). To conclude the rotation, the user has to assume the open hand posture again. Since the rotation axis is locked, rotating becomes a one degree of freedom task. Similarly to the Virtual Sphere technique [34], small hand movements result in a small angle of rotation, whereas a larger hand motion causes a larger rotation angle. Basically, this technique is simple since it resembles the drag-and-drop technique commonly used in desktop interfaces. However, to carry out large rotations, users have to grab, move and release several times.

4.2. Rotation technique—swipe

In the Swipe technique (Fig. 4(b)), the object rotates according to the user's control hand position with respect to the sensor: when the hand is to the right (or left) of the sensor, the hologram continuously rotates to the right (or left) with a speed of rotation that changes according to the distance of the hand position from the center of the interaction area (rate control mode). To stop the rotation, the user has to move her/his control hand either toward the center or outside of the interaction area. The rotation axis is locked, this being a one degree of freedom task. Differently from the first technique, no clutching is required this time. However, a more coordinated movement is necessary to stop the rotation.

4.3. Zoom-in technique—magnify

With the Magnify technique (Fig. 5(a)), when the user assumes the grasp posture and her/his hand is placed over the 3D object, the virtual hand becomes a magnifying glass, which allows the user to zoom in on details of the object according to the position of the hand within the plane orthogonal to the sensor. The zoom level is controlled by the visitor by moving her/his hand forward and upward. Finally, in order to stop the zoom, the user has to assume the open hand posture again. In total, this is a task involving three degrees of freedom.
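The two rotation techniques reduce to two different mappings from hand position to rotation: position control (Clutching) maps hand displacement to an angle increment, while rate control (Swipe) maps hand offset from the center to an angular speed. A minimal sketch follows; the gain constants and the dead-zone width are illustrative assumptions, not parameters taken from the paper.

```python
# Position control (Clutching): while grasping, the rotation angle tracks
# the hand displacement, so a larger movement yields a larger angle.
def clutching_delta(hand_dx_m, gain_deg_per_m=300.0):
    """Angle increment (degrees) for a hand displacement (meters)."""
    return gain_deg_per_m * hand_dx_m

# Rate control (Swipe): the rotation *speed* grows with the hand's offset
# from the center of the interaction area; a small dead zone around the
# center lets the user stop the rotation.
def swipe_rate(hand_x_m, dead_zone_m=0.03, gain_dps_per_m=400.0):
    """Angular speed (degrees/second) for a hand offset from center."""
    if abs(hand_x_m) < dead_zone_m:
        return 0.0
    return gain_dps_per_m * hand_x_m
```

The per-frame update differs accordingly: Clutching adds `clutching_delta` once per hand move, whereas Swipe integrates `swipe_rate` over the frame time, which is why stopping Swipe requires the coordinated move back to the center noted above.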

Table 1
Description of the interaction techniques.

Clutching (rotation task, 1 DOF)
  Engage: Place the hand next to the 3D object in grasp posture.
  Perform: In grasp posture, move the hand in the desired direction.
  Disengage: Assume the open hand posture.

Swipe (rotation task, 1 DOF)
  Engage: Place the hand in open posture in the interaction area.
  Perform: Move the hand left or right with respect to the sensor.
  Disengage: Move the hand into the center or outside of the interaction area.

Magnify (zoom-in task, 3 DOFs)
  Engage: Place the hand in grasp position, the virtual hand thus becoming the magnifier.
  Perform: In grasp posture, move the magnifier on to the desired detail. To zoom in, move the hand toward the object.
  Disengage: Assume the open hand posture.

Grab (zoom-in task, 3 DOFs)
  Engage: Hand in grasp position.
  Perform: In grasp posture, move the hand in the space to translate the 3D object. To zoom in, move the hand away from the object.
  Disengage: Assume the open hand posture.
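The engage/disengage pattern shared by Clutching, Magnify and Grab in Table 1 hinges on reliably telling the grasp posture from the open hand. A common way to do this is to threshold a normalized grab-strength value with hysteresis, so that tracking jitter near a single threshold does not toggle the state; the sketch below is an illustrative assumption, not the authors' code (the Leap Motion API exposes a comparable per-hand grab strength in the 0–1 range).

```python
class GraspDetector:
    """Turn a noisy grab-strength stream (0.0 = open hand, 1.0 = fist)
    into stable engage/release events using two thresholds (hysteresis)."""

    def __init__(self, engage_at=0.8, release_at=0.3):
        assert release_at < engage_at  # the gap between them is the hysteresis band
        self.engage_at = engage_at
        self.release_at = release_at
        self.grasping = False

    def update(self, grab_strength):
        """Feed one tracking sample; return 'engage', 'release', or None."""
        if not self.grasping and grab_strength >= self.engage_at:
            self.grasping = True
            return "engage"
        if self.grasping and grab_strength <= self.release_at:
            self.grasping = False
            return "release"
        return None
```

With a single threshold, a hand hovering near it would rapidly alternate between grasp and open; the band between `release_at` and `engage_at` absorbs that jitter, which matters for infrequent museum users holding imprecise postures.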

Fig. 4. The rotation techniques: (a) Clutching and (b) Swipe.

Fig. 5. The zoom-in techniques: (a) Magnify and (b) Grab.

4.4. Zoom-in technique—grab

In the Grab technique (Fig. 5(b)), the user can enlarge (or reduce) the 3D object by performing a gesture that simulates the action of grabbing the object and bringing it toward (or moving it away from) her/him. Therefore, when the virtual hand is over the 3D object, the user should assume the grasp position to choose a specific point, then adjust the zoom level by moving her/his hand forward and upward. This technique should be easy to learn, since the interaction metaphor derives from real life interactions with objects. However, when the object is enlarged, some of its parts leave the visualization area, disturbing the holographic effect.

5. In-the-lab user study

5.1. Goal

We wanted to assess the efficiency and effectiveness of, and user preferences relating to, various combinations of touchless interaction techniques when used to manipulate 3D content visualized in a holographic projection pyramid. We were also interested in investigating the interaction between the user's performance and the arrangement of the tracking sensor. In more detail, our goal was to assess whether the sensor arrangement, either pointed toward the user (vertical) or pointed toward the ceiling (horizontal), affects the user's performance in executing manipulation

tasks and, if so, to understand its relation with the combination of techniques used to perform the task.

5.2. Design

We conducted a set of experiments employing a mixed-design analysis of variance in which the between-subject factor is the arrangement of the acquisition sensor, at one of two levels (vertical or horizontal), whereas the within-subject factors are: i) in the first study, the rotation techniques (Clutching and Swipe) and angles (3 levels); and ii) in the second study, the zoom techniques (Magnify and Grab) and zoom factors (3 levels). The performance was measured in terms of execution time by repeated measures on all the six combinations of the two within-subject factors for each study. In more detail, the performance was measured, as already done in [35], in terms of time to completion at 95% accuracy and time to completion at 80% accuracy. Measuring the time to completion at 80% accuracy as well was undertaken to discriminate between the ballistic phase, in which the user roughly rotates and zooms in on the object, and the correction phase, generally present in the time to completion at 95% accuracy, in which the user performs small, precise adjustments [36]. We consider time an especially important measure because, in the cultural heritage domain, technological devices like the one we are proposing in this paper have to be easy to use and to learn. For this reason we decided to compute the time to completion at 80% accuracy, because at this level of accuracy a typical user will probably have already satisfied her/his requirements without feeling the need to reach the target position perfectly.

5.3. Participants

The study involved twenty-four unpaid volunteers (19 males and 5 females). Their ages ranged from 23 to 38 years old (M ≃ 32, sd ≃ 4). All the participants were right-handed and had normal or corrected-to-normal vision. They were randomly assigned to two different groups: the first group used the acquisition sensor arranged in the vertical position, whereas the second group used it in the horizontal position. For each interaction technique, the subjects were required to complete three different tasks, and to repeat each task three times.

5.4. Procedure

The tasks proposed to the volunteers consisted in the manipulation of a 3D reconstruction of a sculpture visualized in the middle of the pyramid. In the rotation tasks, the subjects were required to rotate the object to a pre-defined position; in the zoom tasks, they were required to magnify a specific area of the object to a pre-defined level. In the rotation tasks, we defined three different target positions, achievable by rotating the object around its vertical axis by 45°, 90° and 135°. For the zooming tasks, the target positions were identified as different details of the statue (the mouth, left eye, and right eye) with a specific zoom level.

Before the test, a facilitator explained the entire procedure to the subjects, presenting the techniques and the interaction modality for the sensor arrangement assigned to each group. The participants were given the freedom to practice by executing the target selection task twice in all the conditions. We asked the subjects to execute the tasks as quickly and accurately as possible.

For each sensor arrangement, there were twelve different conditions to test (considering the two studies, rotation and zooming), due to the combination of the within-subject factors, namely four techniques (Clutching, Swipe, Magnify and Grab) with three different tasks. Therefore, each of the 24 participants, divided into two different groups, completed three tasks for each of the four proposed techniques, repeating them three times, for a total of 24 × 3 × 4 × 3 = 864 trials. The order of presentation of the tasks to each group was counterbalanced using a balanced Latin square to offset any learning effects. Before the execution of each trial, the system revealed to the subjects the required target orientation of the 3D object for the rotation tasks, and the target view for the zoom tasks. To complete the task, the subjects were required to keep the object still in the final position for two seconds.

For each test, the dependent variables collected were the times to completion (ET): the times taken to complete the required task with a 95% accuracy and with an 80% accuracy. Each of the twelve trials was executed three times consecutively in order to log an average of the subject's performances. The ET was counted from the moment the subject started to interact with the system, by placing her/his hand in the sensor range, until the task was completed. The two seconds required to confirm the completion of the task were not included in the count.

Finally, at the end of each task the facilitator proposed to the subject a task-level questionnaire in order to evaluate the perceived difficulty. In more detail, the questionnaire was the Single Ease Question (SEQ), with a rating scale ranging from 1 (very easy) to 7 (very difficult). The SEQ has been proven to be reliable, sensitive, and valid, while also being easy to answer [37].

5.5. Results and interpretation

5.5.1. Time to completion at 95-percent accuracy

The results (see Table 2) indicate that the between-groups variable of sensor arrangement is not statistically significant, either on the rotation techniques (p = 0.140 > 0.05) or on the zoom techniques (p = 0.588 > 0.05). In fact, averaging the ET over the six test conditions given by the combinations of techniques and tasks, respectively for rotation and zoom, the obtained value did not differ between the two sensor configurations. These findings do not support the hypothesis that subjects feel more confident when performing the manipulation tasks with the sensor placed in the horizontal position, instead suggesting that the interaction techniques work similarly in both sensor configurations.

However, a deeper analysis of the two rotation techniques revealed a two-way interaction involving technique and sensor arrangement (F(1,22) = 8.768, p < 0.05). Observing the estimated marginal means, the Swipe technique shows a considerable difference between the two sensor arrangements, achieving a better performance with the vertical arrangement (horizontal: M = 5.838, sd = 0.295 vs. vertical: M = 4.751, sd = 0.295). With the Clutching technique, the ETs with both sensor configurations were very close, with a slightly faster time reported with the horizontal arrangement (horizontal: M = 5.182, sd = 0.227 vs. vertical: M = 5.322, sd = 0.227). These values are clearly visible in Fig. 6.

These results can be explained by considering the different interaction modalities of the two rotation techniques. When using the Swipe technique the subject, once the direction of the rotation had been chosen, actually interacted with half of the interaction area, moving her/his hand from the middle toward one of the two sides. Differently, with the Clutching technique the subject exploited all the interaction area, moving her/his hand from one border to the opposite one. Since the interaction area is given by the intersection of the binocular camera FOVs of the sensor, the interaction area becomes smaller when the hand is too close to the sensor. During the execution of the tasks with the sensor in the horizontal position, the subjects were inclined to place their hand closer to the sensor, reducing in that way the interaction area usable to complete the task. The same behavior did not occur with the sensor in the vertical position since, once the initial position had been acquired (most of the time with the
522 G. Caggianese et al. / Future Generation Computer Systems 81 (2018) 516–527
Table 2
Average time to completion (ET) at 80% and 95% accuracy.

Sensor       Manipulation  Technique  Task       ET at 80%          ET at 95%
arrangement  task                                Mean (s)  sd (s)   Mean (s)  sd (s)
Horizontal   Rotation      Swipe      45°        2.134     1.024    4.278     1.566
                                      90°        3.021     1.409    5.260     1.889
                                      135°       5.771     1.971    8.560     1.573
                           Clutching  45°        1.736     1.112    3.753     1.034
                                      90°        3.169     1.622    5.002     1.593
                                      135°       4.672     1.473    6.791     1.310
             Zooming       Magnify    Mouth      2.593     1.387    5.373     1.824
                                      Left eye   10.754    7.517    14.335    8.025
                                      Right eye  4.970     4.242    6.973     4.538
                           Grab       Mouth      6.391     5.252    10.679    9.717
                                      Left eye   6.415     3.396    10.381    4.418
                                      Right eye  11.366    7.409    14.542    7.148
Vertical     Rotation      Swipe      45°        1.236     0.778    3.351     1.085
                                      90°        2.944     0.896    4.606     1.050
                                      135°       3.840     1.079    6.297     1.647
                           Clutching  45°        2.100     1.581    3.440     1.589
                                      90°        2.776     1.220    4.882     1.397
                                      135°       5.309     1.100    7.643     1.047
             Zooming       Magnify    Mouth      3.103     3.079    5.000     2.905
                                      Left eye   8.459     10.635   10.650    12.085
                                      Right eye  7.942     5.339    11.474    7.419
                           Grab       Mouth      9.977     6.245    12.864    6.962
                                      Left eye   8.090     4.458    9.800     4.650
                                      Right eye  13.856    9.179    18.194    9.126
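The "correction time" analyzed later (the time needed to move from 80% to 95% accuracy) can be derived directly from the per-condition means in Table 2 as the difference ET95 − ET80. A small illustrative script, with the values copied from the rotation rows of Table 2; note that the per-technique averages below are simple cell means, so they can differ slightly from the model-based estimated marginal means reported in the text:

```python
# Rotation rows of Table 2: (arrangement, technique, task) -> (ET80 mean, ET95 mean)
et_means = {
    ("horizontal", "swipe", "45"):      (2.134, 4.278),
    ("horizontal", "swipe", "90"):      (3.021, 5.260),
    ("horizontal", "swipe", "135"):     (5.771, 8.560),
    ("horizontal", "clutching", "45"):  (1.736, 3.753),
    ("horizontal", "clutching", "90"):  (3.169, 5.002),
    ("horizontal", "clutching", "135"): (4.672, 6.791),
    ("vertical", "swipe", "45"):        (1.236, 3.351),
    ("vertical", "swipe", "90"):        (2.944, 4.606),
    ("vertical", "swipe", "135"):       (3.840, 6.297),
    ("vertical", "clutching", "45"):    (2.100, 3.440),
    ("vertical", "clutching", "90"):    (2.776, 4.882),
    ("vertical", "clutching", "135"):   (5.309, 7.643),
}

# Correction time of a condition: time spent refining from 80% to 95% accuracy.
correction = {k: et95 - et80 for k, (et80, et95) in et_means.items()}

def technique_mean(technique):
    """Average correction time of a technique over its six conditions."""
    values = [c for (_, t, _), c in correction.items() if t == technique]
    return sum(values) / len(values)

for technique in ("swipe", "clutching"):
    print(f"{technique}: mean correction time = {technique_mean(technique):.3f} s")
```

On the Table 2 rotation means this gives roughly 2.23 s for Swipe and 1.96 s for Clutching, consistent with the direction of the comparison reported in the text (Swipe corrections take longer than Clutching ones).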
Fig. 6. Estimated marginal means of execution time to completion at 95% accuracy of the two rotation techniques in the two sensor configurations.

Fig. 7. Estimated marginal means of execution time to completion at 95% accuracy of the three rotation tasks in the two sensor configurations.
arm fully extended), the subjects were not motivated to move their whole body toward the sensor.

For the rotation technique the analysis also revealed a significant effect of the factor task (F2,44 = 69.532, p < 0.001) and a three-way interaction involving technique, task and sensor arrangement (F2,44 = 4.712, p < 0.05). The grand mean of the ET averaged over the other four conditions increases with the complexity of the task (M = 3.706, sd = 0.198 for task 1, M = 4.792, sd = 0.276 for task 2 and M = 7.323, sd = 0.235 for task 3). Moreover, this result was confirmed by the Scheffé post-hoc analysis, which shows a significant quadratic trend of the three task means (F1,22 = 5.594, p < 0.05). This trend is depicted in Fig. 7.

An explanation for this quadratic trend may be found in the difficulty encountered by the subjects in performing the third task. In fact, the subjects complained of difficulties with both the rotation techniques proposed. With the Swipe technique the problem was the loss of control of rotation (the technique working in a rate control mode) that forced them to perform a correction. Differently, the problem with the Clutching technique was related to the need to complete the task in multiple steps, requiring them to grab and release the object many times.

The ET of the zooming tasks was generally higher, and with a wider sd, with respect to the rotation tasks. This can be explained by the fact that while the rotation was a single-degree-of-freedom task, the zooming required the control of three degrees of freedom at the same time: two for the positioning and one for the zoom. The analysis revealed a significant effect of the technique (F1,22 = 9.935, p < 0.05) and task (F2,44 = 4.142, p < 0.05) on the ET, and a two-way interaction between technique and task (F2,44 = 10.518, p < 0.001).

The effect of the technique on the ET can be explained by taking into account the comments of the subjects collected at the end of each task, together with their attitude during the task completion. Almost all the subjects considered it easier to perform
Fig. 8. Estimated marginal means of execution time to completion at 95% accuracy of the three tasks with the two zooming techniques.

Fig. 9. Estimated marginal means of execution time to completion at 80% accuracy of the two rotation techniques in the two sensor configurations.
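The technique-by-task interaction summarized in Fig. 8 can also be read directly off the Table 2 zooming means: Magnify beats Grab on the mouth and right-eye targets in both sensor arrangements, while the ordering flips on the left-eye target. A stdlib-only sketch of this crossover check, with the values copied from the ET at 95% columns of Table 2:

```python
# Zooming rows of Table 2: (arrangement, target) -> (Magnify ET95, Grab ET95)
et95 = {
    ("horizontal", "mouth"):     (5.373, 10.679),
    ("horizontal", "left eye"):  (14.335, 10.381),
    ("horizontal", "right eye"): (6.973, 14.542),
    ("vertical", "mouth"):       (5.000, 12.864),
    ("vertical", "left eye"):    (10.650, 9.800),
    ("vertical", "right eye"):   (11.474, 18.194),
}

# A crossover (disordinal interaction) shows up as a sign change in the
# Magnify-minus-Grab difference across the task levels.
for (arrangement, target), (magnify, grab) in et95.items():
    faster = "Magnify" if magnify < grab else "Grab"
    print(f"{arrangement:10s} {target:9s}: faster technique = {faster}")
```

In both arrangements the difference changes sign only on the left-eye target, which is the anomalous task 2 discussed in the text.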
the zoom task with the Magnify technique rather than with the Grab. This finding is due to the fact that the virtual reproduction of a magnifier allows the user to easily frame the required details. On the contrary, the subjects considered the Grab technique more difficult because, in order to zoom, she/he needed to drag the sculpture toward her/himself, so obtaining a reduction of the visible part of the sculpture and therefore an increased difficulty in staying focused on the details required in the task. These considerations are confirmed by the estimated marginal means averaged over the six other conditions, which show that the Magnify technique performs better than the Grab technique (M = 8.968, sd = 1.033 vs. M = 12.743, sd = 1.071).

The effect of the factor task on the ET is explained in terms of the increasing difficulty of the proposed tasks. In fact, the grand mean of the ET averaged on the four other conditions increases with the complexity of the task (M = 8.479, sd = 0.989 for task 1, M = 11.291, sd = 1.319 for task 2 and M = 12.796, sd = 1.358 for task 3). This result was confirmed by the Scheffé post-hoc analysis, which shows a significant linear trend of the three task means (F1,22 = 9.554, p < 0.05).

The two-way interaction of technique and task revealed a discrepancy with the previously reported result. In fact, as shown in Fig. 8, the marginal means of the technique by task interaction averaged on the sensor arrangement present an anomalous trend with respect to task 2 (zoom on the left eye) with both techniques, Magnify (M = 5.187, sd = 0.517 for task 1, M = 12.493, sd = 2.187 for task 2 and M = 9.224, sd = 1.311 for task 3) and Grab (M = 11.771, sd = 1.802 for task 1, M = 10.090, sd = 0.967 for task 2 and M = 16.368, sd = 1.748 for task 3). In fact, in both sensor configurations the Magnify technique performs differently only on task 2 (zoom on the left eye), in which exceptionally the Grab technique performs better.

In relation to the Magnify technique, the subjects showed a greater difficulty in the execution of the second task because of the specific combination of the position on which to zoom and the hand used to interact with the system. In fact, the second task required the subjects to zoom in on the right eye of the sculpture, which the subject saw on her/his left. During the execution, we noticed that almost all subjects (all of whom were right-handed), after having positioned the virtual magnifier on the area corresponding to the right eye of the sculpture, were forced to move their head because their hand was occluding the object. Moreover, the Magnify technique for the second task proved to be slower with the sensor in the horizontal arrangement, because especially in that configuration the subjects were inclined to start the interaction by aligning the hand to the sculpture, leading to an occlusion of the object visualized in the pyramid. This problem in task 2 (zoom on the left eye) resulted in a better performance of the Grab technique. In fact, with the Grab technique the subject usually grabbed the sculpture, again by positioning the target area in the middle of the interaction area, so avoiding operating with her/his hand exactly in her/his FOV.

Finally, the analysis showed a better performance of the vertical configuration of the sensor, confirmed by the ET value averaged on all the other six conditions (M = 5.510, sd = 0.219 for the horizontal configuration vs. M = 5.036, sd = 0.219 for the vertical configuration), and a preference for the Swipe technique. Concerning the two zoom techniques, the results generally show a better performance with the sensor in the horizontal configuration, as confirmed by the ET averaged on all the other six conditions (M = 10.381, sd = 1.223 for the horizontal configuration vs. M = 11.330, sd = 1.223 for the vertical configuration), and a general user preference for the Magnify interaction technique.

5.5.2. Time to completion at 80% accuracy

The analysis of the ET at 80% accuracy does not reveal a significant main effect of the sensor configuration either on the rotation technique (p = 0.232 > 0.05) or on the zoom technique (p = 0.311 > 0.05). The estimated marginal means averaged over the six conditions given by the combinations of technique and task, for both rotation and zoom, showed values not too dissimilar to each other. This finding suggests that the ballistic phase in all the proposed techniques does not appear to be affected by the sensor arrangement.

A closer analysis of these collected times for the two rotation techniques revealed a main effect of task on the ET (F2,44 = 76.131, p < 0.001), together with a two-way interaction involving technique and sensor arrangement (F1,22 = 10.932, p < 0.05) and a three-way interaction involving technique, task and sensor arrangement (F2,44 = 3.476, p < 0.05), as with the 95% ET. The first result was confirmed by the grand mean of the ET averaged on the other four conditions that, as expected, increased with the
Fig. 10. Estimated marginal means of execution time to completion at 80% accuracy of the three rotation tasks in the two sensor configurations.

Fig. 11. Estimated marginal means of execution time to completion at 80% accuracy of the three zooming tasks in the two sensor configurations.
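The linear and quadratic trends reported by the Scheffé post-hoc analyses correspond to standard orthogonal polynomial contrasts over the three ordered task means. A stdlib-only sketch, using the textbook contrast coefficients for three equally spaced levels and, as sample input, the grand means of the three rotation tasks for the 95% accuracy ET quoted earlier (M = 3.706, 4.792, 7.323); the contrast value only indicates the direction and shape of the trend, while the significance tests in the text come from the ANOVA:

```python
# Orthogonal polynomial contrast coefficients for 3 equally spaced levels.
LINEAR = (-1, 0, 1)
QUADRATIC = (1, -2, 1)

def contrast(means, coeffs):
    """Weighted sum of the level means; a non-zero value indicates that trend."""
    return sum(c * m for c, m in zip(coeffs, means))

# Grand means of the three rotation tasks (45°, 90°, 135°) for the 95% ET.
rotation_95 = (3.706, 4.792, 7.323)

print("linear:   ", contrast(rotation_95, LINEAR))     # overall increase
print("quadratic:", contrast(rotation_95, QUADRATIC))  # upward curvature
```

The positive quadratic component reflects the disproportionate jump from the 90° to the 135° task that the text attributes to the difficulty of the third rotation.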
complexity of the task (M = 1.801, sd = 0.166 for task 1, M = 2.978, sd = 0.261 for task 2 and M = 4.898, sd = 0.204 for task 3). The velocity of rotation in the ballistic phase was limited by the tendency of the subject to place her/his hand too close to the sensor. Indeed, the reduction of the interaction area led to a slower phase that was not perceived as a problem by the subject, because she/he focused on achieving the required angle. As reported before, this problem was more evident with the Swipe technique compared to the Clutching one, and was worse when the sensor was in the horizontal configuration (see Fig. 9). The Scheffé post-hoc analysis showed that the three task means are significant according to a quadratic trend (F1,22 = 9.587, p < 0.05) (see Fig. 10). This outcome supports the finding reported in the previous discussion. In fact, with the Swipe technique the subjects took a longer time to correct the position of the sculpture compared with the Clutching technique. Indeed, the time to increase the accuracy from 80% to 95%, averaged over the other six conditions, was greater for the Swipe technique (M = 2.170, sd = 0.855 vs. M = 1.958, sd = 0.958).

Concerning the zoom, in this case also the analysis revealed a main effect of the technique (F1,22 = 8.119, p < 0.05) and task (F2,44 = 4.330, p < 0.05) on the ET, and a two-way interaction involving technique and task (F2,44 = 12.426, p < 0.001). The structure of the magnifying glass helped to frame the required details also in the ballistic phase, as confirmed by the estimated marginal means averaged over all the other six conditions, which show that the Magnify technique (M = 6.304, sd = 0.882) performs better than the Grab technique (M = 9.349, sd = 0.908). These results, together with those previously discussed, were also confirmed by considering the correction times (the time to achieve 95% accuracy from 80%). This time, averaged over all the other six conditions, was shorter for the Magnify technique (M = 2.715, sd = 1.784) compared to that achieved with the Grab technique (M = 3.394, sd = 2.344). This effect of the factor task was expected due to the different levels of difficulty proposed in the tasks, a fact supported by the grand mean of the ET averaged on the other four conditions, which increases with the task difficulty (M = 5.516, sd = 0.724 for task 1, M = 8.430, sd = 1.185 for task 2 and M = 9.534, sd = 1.268 for task 3). This result was confirmed by the Scheffé post-hoc analysis, which shows a significant linear trend of the three task means (F1,22 = 9.508, p < 0.05). The last two-way interaction, of technique and task, shows the same trend seen for the 95% accuracy ET. The ballistic phase of the Magnify technique proves to be shorter than that of the Grab technique in tasks 1 and 3, but not in task 2, in which, conversely, the Grab technique seems to perform well. This finding is confirmed by the marginal means of the technique by task interaction averaged on the sensor arrangement, for Magnify (M = 2.848, sd = 0.509 for task 1, M = 9.607, sd = 1.963 for task 2 and M = 6.456, sd = 1.028 for task 3) and Grab (M = 8.184, sd = 1.230 for task 1, M = 7.253, sd = 0.845 for task 2 and M = 12.611, sd = 1.778 for task 3), and by the graph in Fig. 11. The explanation for this result is that the ballistic phase is also affected by the previously discussed problem. The subjects, while attempting to position the synthetic magnifying glass on the target area of the sculpture, disturbed their vision with their own hand. Moreover, the results in the ballistic phase in task 2 when using the Grab technique were better than those achieved in task 1 because the sculpture needed to be moved to a lesser extent.

Finally, the ET recorded to achieve 80% accuracy does not differ from that discussed previously, confirming that the identified problems affect also the first phase of the interaction. In more detail, the performance of the ballistic phase of the rotation techniques supported the positioning of the sensor in the vertical configuration, as confirmed by the ET averaged over all the other six conditions (M = 3.417, sd = 0.220 for the horizontal configuration vs. M = 3.034, sd = 0.220 for the vertical configuration), with a preference for the Swipe technique. The results on the zooming techniques showed a better performance with the sensor in the horizontal configuration (M = 7.082, sd = 1.015 for the horizontal configuration vs. M = 8.571, sd = 1.015 for the vertical configuration) and a preference for the Magnify technique.

5.5.3. General considerations and the subjects' perceived difficulties

In summary, in relation to the rotation techniques, the experimental results revealed a better performance for the Swipe technique compared to the Clutching technique, with a slightly better performance achieved with the sensor in the vertical configuration. These results are confirmed also by the evaluation collected by means of the SEQ questionnaires (see Fig. 12), which shows a slightly higher perceived difficulty for the Clutching technique (M =
1.750, sd = 0.694). However, both the techniques were rated as very simple to use by the subjects. The Swipe was considered slightly less difficult to use with the sensor placed in the vertical configuration, while the Clutching technique achieved an equal evaluation for both the sensor configurations.

Fig. 12. Subjects' perceived difficulty averaged on the six rotation tasks in the two sensor configurations (scale ranges from 1 to 7, with higher scores representing greater difficulty).

In relation to the zoom techniques, the Magnify technique achieved a better performance compared to the Grab, especially with the sensor placed in the horizontal configuration. In this case also, the evaluation of the results of the SEQ questionnaires confirms the previous analysis. The Grab technique proved to have a higher perceived difficulty (M = 3.264, sd = 1.302) compared with the Magnify technique (M = 2.569, sd = 1.072). However, both the techniques were evaluated as easy to use, scoring better with the sensor in the horizontal configuration (see Fig. 13).

Fig. 13. Subjects' perceived difficulty averaged on the six zooming tasks in the two sensor configurations (scale ranges from 1 to 7, with higher scores representing greater difficulty).

6. In-the-field user study

6.1. Goal

This section introduces the experiments conducted in the field, namely in a real museum. The holographic projection system was part of the exhibition 'Oltre il visibile. I Campi Flegrei' (Beyond the Visible. The Phlegraean Fields) at the National Archaeological Museum of Naples for almost four months. The first goal was to test the acceptability and attractiveness of the installation in the museum. The second, since the pyramid was specifically designed to analyze novel ways of interacting with a virtual cultural object, was to test two different modalities of supplying information about the visualized sculpture. Taking into account the results of the in-the-lab user study, the interface used the Swipe rotation technique and the Magnify zooming technique, with the sensor placed in the horizontal configuration.

6.2. Design

To test the acceptability of the installation, we placed the holographic projection system side by side with the real 'Head of Apollo of Omphalos' statue, whose digital model was visualized in the pyramid. In the evaluation, we considered the number of user sessions and their length. The start of a new session (engagement) was identified by the voluntary action of the user in placing her/his hand over the sensor, while the length of the user session was measured as the total engagement time. In order to better detect the end of the interaction in an unsupervised environment, we considered a session closed only if there were no further interactions in the forty seconds after the last captured interaction. All this information was collected by the system in a log file.

Finally, we evaluated two different modalities of providing additional information about the statue, each installed for a similar period of time during the four months of the presence of the holographic projection system in the exhibition. The first modality was based exclusively on textual information, visualized during the interaction; in the second, instead of text, a female voice was used.

6.3. Results and interpretation

After the four months of the holographic projection system installation, the analysis of the collected user interactions showed, on average, 38 interactions per day, with an average interaction length greater than 4 min. Furthermore, the analysis of the engagement time across the two different modalities (text vs. voice) suggests that the users preferred the vocal comments during the exploration of the artifact. In fact, although the number of interactions remained approximately the same during the two different periods, the average session time increased significantly, rising from 3 min in the first two months, when the text system was used, to 5 min in the two following months, when the voice was used.

Finally, and most interestingly, we often noticed that the visitors who used the holographic pyramid were most of the time interested in seeing the real artifact up close, in a continuous transition between virtual and real, so as to directly verify the reliability of the obtained information. This finding suggests that virtual content can not only replace a real artifact which is inaccessible, but can also complement it, when present, by providing a more complete analysis.

7. Conclusions and future work

The rapid development and dissemination of digital technologies in museum exhibitions has the potential to significantly improve the enjoyment of the cultural heritage by visitors, actively involving them in the exploration of cultural content following their own personal experiential path. However, the design of interactive systems to allow visitors to easily navigate across virtual content is still in its infancy.
In this work we have detailed the interaction design of a gesture-controlled holographic projection system. Different interaction techniques, manipulation tasks and sensor configurations have been evaluated both quantitatively and qualitatively. The experimental findings have implications for the design of applications where users are expected to interact through the use of natural user interfaces for the exploration of 3D cultural content. In particular, the analysis of the relations between the interaction space, limited by the tracking sensor choice and configuration, and the interaction techniques could be very beneficial for the research community and could stimulate further research in this quickly evolving area.

A future improvement would be to integrate cognitive and conversational AI systems with interactive holographic projection, so designing a system able to interact with the visitor also using natural language, mimicking a human-to-human conversation. The ultimate goal is to improve upon the information boards, which cannot provide either exhaustive or personalized information to the visitors. The use of conversational AI and interactive holographic projection could instead enable visitors to follow a personalized path in the acquisition of knowledge, achieving more naturally and effectively the discovery of cultural heritage information.

Acknowledgments

The interactive holographic projection system was exhibited at the National Archeological Museum of Naples from 27th July to 8th November 2016, during the exhibition 'Oltre il Visibile. I Campi Flegrei' (Beyond the Visible. The Phlegraean Fields). We would like to thank the director of the museum, Paolo Giulierini, the curator of the exhibition, Rossana Valenti, and the domain experts, Marco De Gemmis and Simone Foresta, for helping us to understand the cultural heritage perspective beyond the technological perspective when designing the system. The authors are participants in the SNECS project of the High Technology Consortium for Cultural Heritage of the Campania Region, and are grateful for its support.

Giuseppe Caggianese received the Laurea degree in computer science magna cum laude in 2010 and the Ph.D. degree in Methods and Technologies for Environmental Monitoring in 2013 from the University of Basilicata, Italy. He is a Research Fellow at the National Research Council of Italy and a Lecturer of Informatics at the University of Basilicata. His research interests include natural user interfaces and human interface aspects of virtual/augmented reality in relation to cultural heritage and medical applications.
Luigi Gallo received the Laurea degree in computer engineering magna cum laude from the University of Naples Federico II, Italy, in 2006 and the Ph.D. degree in information engineering from the University of Naples Parthenope, Italy, in 2010. He is a Research Scientist at the National Research Council of Italy and at the University of Naples Federico II. His research interests include theory, implementation and applications of natural interaction and computer vision.

Pietro Neroni received the Laurea degree in computer science magna cum laude from the University of Naples Federico II, Italy, in 2013. He is a Research Associate at the National Research Council of Italy. His research interests include natural user interfaces and human interface aspects of virtual/augmented reality in relation to cultural heritage and medical applications.