
Tangible Interfaces for Download: initial observations on users’ everyday environments

Politecnico di Torino
III Facoltà di Ingegneria

Tesi di Laurea Magistrale in Ingegneria Informatica
(Master's Thesis in Computer Engineering)

Author: Matteo Giaccone
Supervisor, École Polytechnique Fédérale de Lausanne: Enrico Costanza
Supervisor (Relatore), Politecnico di Torino: Prof. Aldo Laurentini

February 2010


Acknowledgements

This work would not have been possible without the help and motivation received over the years from my family, from Agnese, and from many old friends. I also have to thank my friends in ACMOS, who helped me achieve things I would not have thought possible on my own. Thanks to the Media and Design Laboratory at EPFL for hosting my master project, and to Enrico, Tal and Olivier, whose Swiss hospitality made my days in Lausanne wonderful. Thanks to Professor Jeffrey Huang at EPFL for agreeing to host me in the LDM lab, and to everyone else there for their hospitality. Thanks to Professor Aldo Laurentini for giving me the opportunity to go to Switzerland and for supervising my master project from the Politecnico di Torino. Finally, thanks to all the users of d-touch who had the courage to try these instruments and made this incredible adventure happen.



Abstract

Tangible User Interfaces (TUIs) have been studied and discussed broadly in the Human Computer Interaction (HCI) community over the past 15 years. Most reported TUI projects were studied only in laboratories or in controlled environments, for short amounts of time. In this study we distributed d-touch, a low cost tangible interface for music composition and performance, through a website, and we remotely recorded interactions in users' everyday environments. We also collected user generated content posted on public websites or sent directly to us, to get a better picture of the users' impressions of the experiment. We then qualitatively analyzed this content with an approach grounded in data, and we supported our theory with a statistical analysis of the interaction logs. The results show that, even though the instruments were at an early stage, the tangible nature of the interface was readily accepted by the users, as if it were obvious to use.


Contents

Acknowledgements
Abstract
1 Introduction
2 Related Work
  2.1 Early Work
  2.2 Audio TUI
  2.3 Markers
  2.4 TUI User Studies
  2.5 Large Scale User Studies
  2.6 Grounded Theory
3 Audio d-touch
  3.1 Working Principle
  3.2 The Software
  3.3 d-touch Drum Machine
  3.4 d-touch Sequencer
  3.5 Logging System
  3.6 Distribution
  3.7 Diary of the Launch
4 Logs collection and analysis
  4.1 Data Collection
  4.2 Data Analysis
    4.2.1 Quantitative Log Analysis
    4.2.2 Qualitative Video Analysis
5 User Generated Content: collection and analysis
  5.1 Data Collection
  5.2 Data Analysis
    5.2.1 User Generated Content: Text
    5.2.2 User Generated Content: Videos and Photos
6 Discussion
7 Future Work
8 Conclusions
A UGC texts
B Open Codes and Axial Coding
C UGC photos and videos analysis
D Riassunto in Italiano (Italian Summary)
  D.1 Introduzione
  D.2 Audio d-touch
    D.2.1 Come funziona
    D.2.2 d-touch Drum Machine
    D.2.3 d-touch Sequencer
    D.2.4 Sistema di Registrazione Remota
    D.2.5 Distribuzione
    D.2.6 Il Lancio Online
  D.3 Raccolta e Analisi dei Log
  D.4 Raccolta e Analisi di Contenuti Creati dagli Utenti
    D.4.1 Analisi dei Testi
    D.4.2 Analisi delle Foto e dei Video
  D.5 Discussione
  D.6 Lavoro Futuro
  D.7 Conclusione
Bibliography


Chapter 1

Introduction

Human Computer Interaction (HCI) is the field of work in which this project is situated. The range of topics studied in HCI is very broad, and it includes subjects as diverse as computer science, design, cognitive psychology and ergonomics. A clear definition of HCI is given by ACM SIGCHI [1]: "Human-computer interaction is a discipline concerned with the design, evaluation and implementation of interactive computing systems for human use and with the study of major phenomena surrounding them." With the unstoppable spread of computing systems in all forms and inside every object, the interaction between humans and computers becomes more present and more important every day. Computer science should be interested in human-computer interaction studies at least because a pleasurable and efficient interactive object sells better than others. It is interesting to note that studies in HCI began during World War II, when the first telecommunication devices started to appear in war airplanes. The goal was to make these new devices efficient and easy to use in moments of high tension and danger. The setting has now completely changed, and we are constantly interacting with intelligent devices, from the personal computer to the VCR and tens of other invisible intelligent companions. This trend is described as ubiquitous or pervasive computing [2]. The importance of these studies is shown by the growing interest of the major software and hardware companies in the academic debate: all the major conferences on HCI, like CHI (ACM Conference on Human Factors in Computing Systems) or UIST (ACM Symposium on User Interface Software and Technology), are sponsored by companies like Microsoft, Google, Autodesk, IBM and Nokia1.

One particular field of work in HCI is around Tangible User Interfaces (TUIs). The basic concept is that TUIs couple physical representation (e.g., spatially manipulable physical objects) with digital representation (e.g., graphics and audio), yielding user interfaces that are computationally mediated but generally not identifiable as "computers" per se [3]. Tangible interaction is characterized by: tangibility and materiality of the interface, physical embodiment of data, whole-body interaction by the user, and embedding of the user and the interface in real spaces and contexts [4]. To give an analogy with the more familiar Graphical User Interfaces2, in TUIs physical objects represent digital information and controls, just as icons and graphic metaphors are used in GUIs. TUIs have been popular in HCI since the first half of the 1990s, when Wellner, Fitzmaurice, Ishii and Ullmer [5, 6, 3, 7] started developing these new interfaces. TUIs have been debated in the HCI community since the pioneering works cited above, and since 2007 the ACM has participated in the organization of a specific conference on tangible interfaces, the TEI conference3. Some areas of application are education [8, 9, 10], creativity [11, 12] and public exhibitions. Tangible interaction has proved
1. CHI 2010 website: ; UIST 2009 website:
2. One classical example of GUI is the Desktop metaphor, used daily by millions of people.
3. Tangible and Embedded Interaction Conference.


interesting for certain types of educational tasks, and it has improved creativity, especially when some form of cooperation was made possible. Tabletop tangible interaction is one branch of TUI experiments, in particular where the interface is placed horizontally and the interaction happens on it; the history of tabletop TUIs ranges from Wellner's DigitalDesk [5] to the Reactable [11]. Augmented Reality (AR) is another key aspect of TUIs, and it goes far beyond this specific application. AR is the process in which some digital (or virtual) information is added to the perception of the real world. Examples include digital projection over real objects, or the use of head-mounted displays to superimpose video on real images. Some tangible interfaces use the possibilities given by AR to improve the interaction with digital data through physical objects. A common approach in AR uses Computer Vision algorithms to detect markers, which serve as references to draw virtual images over the real environment, through projection or in a video. Several kinds of markers have been developed to support these applications. One of the most popular is the ARToolKit system, developed in 1999 and used in hundreds of projects, some of them in the mainstream advertising field4. Audio d-touch uses the d-touch marker recognition system, similar in concept to ARToolKit. The distinctive point of d-touch is that the markers are human-designable, so that they can convey information both to humans and to the machine. This feature is useful for TUIs to further enrich the object containing data and to strengthen the link between virtual data and physical object. Until now, no tangible interface has been tested in a real usage environment on a large scale. The problems that blocked this testing
4. See for example:


were the high cost and the advanced technology usually needed by a TUI, for example a sensing system or a retro-projected surface [13, 11]. Other typical uses of TUIs were in museums or in controlled environments such as school classes with trained teachers [14, 10]. The suitability of TUIs in everyday environments with untrained users remains an open question. A recent and emerging trend in HCI concerns the Web's potential to create User Generated Content (UGC), and its use as a source of data for user studies. Examples are the use of UGC to evaluate commercial products and their emotional impact [15], the use of crowd-sourcing web services such as the Amazon Mechanical Turk [16] to recruit user study participants, and the public Internet release of GUI prototypes to run user studies on them [17]. This thesis is a study of TUIs used in everyday environments, based on Internet-based log observation and on UGC collection and analysis. For this purpose we used Audio d-touch, a pre-existing TUI for musical composition and performance: we made the software freely available through a website and gave instructions on how to build and use it. We then logged all the interaction produced by people's usage and collected the UGC produced by users on the Internet; after a statistical analysis of the logs and an analysis of all the collected data, we concluded with some observations and future improvements for TUIs used in real settings. The main interesting finding of this study is that the musical part of the application was not enough for our audience and that, surprisingly, the tangible interface was hardly perceived, as if it was completely obvious to use.


Chapter 2

Related Work
This chapter will analyze previous work on Tangible User Interfaces and inspirational material for user studies.


Early Work

The first documented example of a tabletop tangible interface is the DigitalDesk by Pierre Wellner [5]. The aim of this project was to create a mixed reality application, where a real desktop was augmented with computational capabilities. Two cameras analyzed the desktop and one projector sent the output directly onto the desk.

Figure 2.1: DigitalDesk. 1991.

In this project there are all the basic concepts found in many subsequent experiments: the camera watching the scene from above (in later projects

camera and projectors are under the table), the projector that augments the surface near the camera, and the computer vision program to process the data seen by the camera. This setup also inspired the d-touch applications, which use the same strategy. After Wellner's experiment, Bricks, a project by Fitzmaurice, Ishii and Buxton, is another well known experiment [6]. This experiment, with the related paper, is very important for the history of TUIs, which were still called Graspable User Interfaces at the time of Bricks, in 1995.

Figure 2.2: Bricks. 1995.

The basic idea was to have one or more physical objects, bricks, that you could grasp and use as input for the interface. The output was projected back under the bricks, creating another tabletop interface like the DigitalDesk. The importance of this paper lies in the fact that, for the first time, a series of important features of TUIs (still called Graspable UIs) were analyzed. Among all the conclusions, two important points should be highlighted. The experiments of Fitzmaurice et al. showed that humans have many two-handed manipulation skills that traditional GUIs do not use at all. The obvious conclusion for them was to use two bricks as input to the interface, showing also that a TUI can be more consistent than a GUI in the interaction. The fact that GUIs have space-multiplexed output and time-multiplexed input restricts the possibilities, since there is always a single point of interaction instead of multiple possible ones. A TUI, instead, gives space-multiplexed output and space-multiplexed input, enriching the interaction, as they showed in some practical experiments with CAD programs.


The term Tangible User Interface was born with the paper Tangible Bits by Ishii and Ullmer [7]. This paper is synergic with Weiser's paper on Ubiquitous Computing (UbiComp) [18]. The focus of the two papers is that UbiComp and TUIs make it possible to use natural human abilities, like the ability to interact with physical objects, and humans' background perception abilities. Weiser's objective is to move as many actions as possible into the perceptive background. The underlying concept is that if someone is doing something in his perceptive background, he can do something else in his focus, thus doing more without more effort. With this idea in mind, Ishii and Ullmer developed experiments like the ambientROOM, where background perception is used to react to ambient changes. Another example is the Live Wire of Jeremijenko [19], where a wire hanging from the ceiling was connected to a little motor activated by Ethernet signals: when network traffic was higher, the wire spun faster, producing visual and audio feedback in the room. The people in the room quickly became unaware of the wire's slow spinning, but when a change in speed happened everybody could perceive it and get information on network activity without actively watching the network statistics.

Figure 2.3: Urp. 1999.

The project Urp [20], by Underkoffler and Ishii at the MIT Media Lab, explores the possibility of tangible objects that also express a physical meaning. Urp is a tangible tabletop interface for urban planning and design. The buildings in the application are represented by objects with the shape of the real buildings, in order to understand how they look and to see the shadows projected on the table after Urp's calculations. At the end of the paper the authors explore the possibilities given by objects on TUIs. In Urp, objects can have the same shape as real buildings, or the shape of a clock to regulate the time used in shadow calculation and projection. Starting from these differences, they explore the possibilities of tangibles and their different possible meanings when used in TUIs. In 2000 Ullmer and Ishii wrote a paper [3] that described the state of the art of TUIs and posed a discussion framework, used afterwards in hundreds of papers. A new interaction model was developed, starting from the MVC (Model-View-Controller) programming paradigm. The new model is called MCRpd, which stands for Model-Control-Representation (physical) and Representation (digital). The idea of the MVC is that data is input through the Control part, while the Model keeps a digital representation of the data, elaborates it, and gives back an output in the View. The MCRpd still has a digital Model of the data, but the Control is tied to the physical Representation, acting as input and output at the same time. The digital Representation is an additional output (which might be optional), as digital information in the form of video or audio.

Figure 2.4: Han's multi-touch screen. 2005.

A good example used in the paper to explain MCRpd is the abacus. In the abacus the representation is tightly coupled with the calculation model, and there is no distinction between input and output. Acting on the physical representation is the same as acting directly on the data, since

the model is directly connected to Representation and Control. In parallel with the approach of tangible objects as control interfaces, completely different experiments were carried out, for example using only touch screens. Han's multi-touch screen [21] is a good example of this different trend. The objective is always to obtain a better interaction and to use human abilities not explored by traditional interfaces, for example the two-handed interaction already described above [6]. One strong difference is that in this approach tangible objects are not (or are less) important in the interaction, while in TUIs the tangibles are the focus of the interface.


Audio TUI

As expressed in [3], one application field for TUIs is the artistic/expressive one. Here we analyze the musical applications that have gained a lot of media attention in recent years1. Audiopad [13] is the first tabletop TUI for musical performance that gained media and scientific attention. The interface is a tabletop TUI with a sensing plane based on RF tags. Inspired by the DigitalDesk [5], the output is projected from above onto the interface, allowing the visual augmentation of the knobs used to control the application.

Figure 2.5: Audiopad. 2002.

The performer could play samples, use audio effects and regulate audio parameters with the knobs on the interface. The TUI then produced a MIDI output used to control the external audio application. The sensing table,
1. For an extended list:


the video feedback and the complex interaction are parts of the project common to several subsequent experiments. Audio d-touch differs from this approach in the marker tracking (a Computer Vision approach) and in its simple and essential interface, allowing easy reproduction and installation. Reactable [11] is by far the best known and most publicized tangible interface, used in concerts by famous groups2 and sold by a company3.

Figure 2.6: Reactable. 2005.

The concept of this application is the same as the others already presented: there is a table with a camera and a projector. The camera is used to track markers and hand gestures; the projector is used to give video feedback on the interface. The setup is different from the others because the projector and the camera are under the table, allowing the Reactable to be a table with nothing else around it, easy to move and to exhibit, and making it possible to sell it as a single object. The audio application is a modular synthesizer controlled by knobs with markers (passive plastic blocks, in contrast with Audiopad) placed on the table. The size of the table (around 2 meters in diameter) allows cooperation in music playing, exploiting another point of interest of TUIs. Another very popular tangible interface for music is BeatBearing [22]. This application probably received a lot of attention because it was easy to understand and it was released online with instructions and source code. This is not a classic tabletop musical TUI, but the tangibility of the balls used to close electronic contacts that trigger musical samples is inspiring. The output of the interface was an
2. Bjork used Reactable in numerous concerts.
3.







LCD screen used horizontally as the base of the instrument. Over the screen were placed the electronic circuits, with holes that were closed with steel balls to trigger the music.



Markers

The use of markers to study novel interfaces was born in the Augmented Reality (AR) field, where the work of Kato and Billinghurst [23] has been fundamental. They started a new trend in marker recognition that later became the famous ARToolKit4. Many AR projects now use this kind of marker, which permits tracking and recognition in 3D space after some camera calibration.

Figure 2.7: ARToolKit marker.

The particularity of these markers is the square border that characterizes them. The algorithm searches for the square and then performs a template recognition of the content to understand which marker has been recognized. This marker recognition system and the AR applications made with ARToolKit are very famous, and easy to build thanks to BuildAR from HITLabNZ5, which enables users to very easily create AR applications on a desktop PC. Probably the most famous examples of AR applications at this moment are FLARToolKit6-based web advertisements. FLARToolKit is an ActionScript 3 port of the ARToolKit system for Adobe Flash 9+7, allowing developers to easily build AR applications with Flash. This framework has been heavily used to build all kinds of advertising on the web, probably having the greatest success with cars8,9 and collectible cards for children10,11.
4.
5.
6.
7.
8. Mini advertisement:
9. Z4 advertisement:
10. Pokemon collectible cards:
11. Baseball collectible cards:


Another framework to build AR applications on the desktop is DART [24]. The focus of DART and DART-TUI is to easily prototype Tangible Interface applications through Adobe Director12 extensions. In the paper they also illustrate an example application built with their framework, about physics and mathematics experiments in a school. The Audio d-touch instruments use the Computer Vision approach to identify the position of tangible controllers, instead of the more expensive and complex sensing table found in other TUI experiments. The visual marker recognition approach used in d-touch [25] is similar to the one used in Reactable [26]. The algorithm detects markers using an adaptive image thresholding, followed by region adjacency analysis.

Figure 2.8: d-touch drum machine marker.

In this way neither geometry nor color are inherent properties of the markers, giving freedom to design your own shapes. The design has been proved to work and the results are encouraging [27]. For the d-touch drum machine we designed two particular markers that demonstrated the algorithm's flexibility. One is visible in Figure 2.8.
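The two-stage pipeline named above (adaptive thresholding, then region analysis) can be sketched in a few lines. This is an illustrative toy in pure Python, not the d-touch source: the function names, the local-mean threshold rule, and the tiny grayscale "image" are all assumptions made for the example.

```python
# Illustrative sketch (not the d-touch implementation): adaptive
# thresholding followed by connected-region analysis.

def adaptive_threshold(image, window=3, bias=0):
    """Binarize a grayscale image: a pixel is foreground (1) when it is
    darker than the mean of its local (2*window+1)^2 neighbourhood."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            ys = range(max(0, y - window), min(h, y + window + 1))
            xs = range(max(0, x - window), min(w, x + window + 1))
            vals = [image[j][i] for j in ys for i in xs]
            mean = sum(vals) / len(vals)
            out[y][x] = 1 if image[y][x] < mean - bias else 0
    return out

def label_regions(binary):
    """Label 4-connected foreground regions; return (labels, count).
    d-touch goes further, analyzing how such regions nest and touch
    to decide which marker they form."""
    h, w = len(binary), len(binary[0])
    labels = [[0] * w for _ in range(h)]
    count = 0
    for y in range(h):
        for x in range(w):
            if binary[y][x] == 1 and labels[y][x] == 0:
                count += 1
                stack = [(y, x)]  # flood fill from this seed pixel
                while stack:
                    j, i = stack.pop()
                    if 0 <= j < h and 0 <= i < w \
                            and binary[j][i] == 1 and labels[j][i] == 0:
                        labels[j][i] = count
                        stack += [(j - 1, i), (j + 1, i),
                                  (j, i - 1), (j, i + 1)]
    return labels, count
```

Because the threshold adapts to the local mean, the same marker is detected under uneven lighting, which is what makes a cheap uncalibrated webcam sufficient.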


TUI User Studies

Paul Marshall tries to understand whether tangible interfaces are a good tool for learning [9]. He analyzes previous experiments and potentially interesting domains of action, and he gives guidelines on possible benefits and on areas that still need scientific study. Confirming Marshall's suppositions, there are many user studies on tangible interfaces, and many of them are in the field of learning [28, 12, 8, 10, 14].


Two user studies on TUIs that we analyzed [29, 28] were done with adults, so it was possible to administer a questionnaire (seven-point Likert scale or NASA-TLX) and run a statistical analysis on the answers. In both cases the results were used to verify that the interface was working as expected and to compare it with a pre-existing model. All the other user studies analyzed used informal questions, or the analysis of videotaped experiments, to gain a better understanding of the study and to find problems or possible improvements. We therefore decided not to use questionnaires nor to ask people to fill them in, since this was an exploratory study and we did not find good previous work to compare against on a large scale. We tried to gain some insight from the users by answering emails or posts on the d-touch forum13, in the form of informal questioning, or just by reading the thoughts that people sent us spontaneously. One common limitation that we tried to overcome was the small number of people involved in user studies. In the cited papers the number of participants was never larger than 50, because user testing takes time and because the technology used was expensive and difficult to reproduce. Often it was impossible to run a large number of experiments because the TUI setup was difficult and researchers were needed to build it. Audio d-touch goes in the opposite direction: all that is needed is a webcam and paper. The setup of the system is easy and working (although still not very well explained to users), as we verified with hundreds of users. Another common problem is that users do not have enough time to get acquainted with the interface, since user studies are rarely longer than one hour. With our experiment we tried to overcome this problem by giving the interface to the users for free, with complete freedom to use it whenever and for as long as they wanted. Learning could thus proceed at the user's own speed, and we could gather as much data as possible.



Large Scale User Studies

Previous work has used UGC collected on the Internet to gain understanding of products from informal comments on websites. One example is the work done by Blythe et al. on YouTube comments regarding the launch of the iPhone 3G [15]. In this paper the grounded theory approach was used to gather and organize hundreds of YouTube comments and draw conclusions from them. This paper was very inspiring for our work, and it encouraged us to follow the same path and analyze the UGC collected on the Internet. Another work that has made good points using the distributed intelligence of Internet users is the study on Mechanical Turk [16]. In this study the Amazon Mechanical Turk14 was used to understand whether the shared intelligence of Internet users is as valuable as the intelligence of experts. The result is, unsurprisingly, not completely satisfactory, but if the work is designed expressly for the Mechanical Turk, good results can be obtained. This paper is useful for us to understand that, if one searches for the right things, tests with a large number of users are a valuable source of information. A more traditional user study through the Internet is Note to Self [17], which uses the Internet to recruit users to study the impact of a particular GUI on note-taking tasks. The interesting part is that the user study is held almost completely through usage logs taken remotely, as we did in the study of Audio d-touch. Some statistical analysis was then done on the data, and a questionnaire was requested. The difference from our research is that the participants were explicitly contacted and a prize was offered, while in our study no one was directly asked to use the system and the interest was purely personal.
14. Amazon Mechanical Turk is a marketplace for small tasks that require human intelligence. Its full name is Mechanical Turk, Artificial Artificial Intelligence. Website at


A previous large-scale user study on a particular interface or device is the Scent Field Trial [30] by Nokia. This study is peculiar because it was held internally on the Nokia intranet, using a specific Nokia mobile phone available to 800 employees. The basis of this study is that the device was already available to the users, so that they did not have to spend money. For a novel TUI this approach is impossible, unless the participants decide to buy the product under study, as might happen with a mobile phone company or with other kinds of commercial products.


Grounded Theory

To analyze the data that we collected on the Internet in the form of comments, emails, blog posts, etc., we used a method inspired by a theoretical framework called Grounded Theory, as explained by Sharp et al. in [31]. This approach is useful to analyze qualitative data in a systematic way and to gain insight into the collected data, especially when the amount of data is significant, as we experienced with Audio d-touch, collecting and categorizing around 600 sentences about the application. The method is strongly tied, or "grounded", to the analyzed data. In the beginning data are collected, in the form of sentences for example, and iteratively categorized with open codes. Each time a new theme is encountered, a new code is used to categorize the sentence. During the process of categorization the researcher has to focus on a specific theme and find relations with the categories already created. This process continues until the codes are saturated, meaning that no new codes are added when processing the data. The next phase, axial coding, is the reduction of open codes by grouping and relating existing categories. The axial codes strongly support the theory that grows grounded in the data. The last step is selective coding, where the axial codes are organized around a central theme, the grounded theory, which can be better supported if more data are present. In our research this last step produced two different ideas grounded in data, rather than a single theory that could comprehend everything.
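The open-coding loop described above can be illustrated with a toy sketch: sentences arrive in batches, each is tagged with a code, and coding saturates once a whole batch introduces no previously unseen code. The keyword-based "coder" and the sentences below are invented for the example; real open coding is of course a human, interpretive activity, not a mechanical one.

```python
# Toy illustration of open-code saturation, with invented data.

def code_sentences(batches, assign_code):
    """Tag each sentence with a code, batch by batch. Return the set of
    codes seen and the first batch index (1-based) after which no new
    code appeared, i.e. where coding saturated (None if it never did)."""
    seen = set()
    saturated_at = None
    for i, batch in enumerate(batches, start=1):
        new = {assign_code(s) for s in batch} - seen
        if new:
            saturated_at = None   # a new theme re-opens the coding
            seen |= new
        elif saturated_at is None:
            saturated_at = i      # this batch added nothing new
    return seen, saturated_at
```

The reset on encountering a new theme mirrors the iterative nature of the method: saturation is only declared once further data stops producing new categories.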


Chapter 3

Audio d-touch
The d-touch instruments are tabletop tangible interfaces for real-time musical composition and performance, presented for the first time in 2003 by Costanza, Shelley and Robinson [32, 33].


Working Principle

The working principle is similar to other tangible interfaces already presented: a camera is positioned over a table, looking at an interactive board (or briefly, the board), an A4 sheet of paper characterized by four markers used to calibrate the system. The user can then place on the board the interactive blocks (also called, in the literature, Bricks [6], Phicons [7, 3] or tokens [13]), which are tracked by the camera; from the position and angle of the blocks, the application produces sounds according to the logic of the instrument. Both the markers on the blocks and the calibration markers on the board are d-touch markers [27], used by the application to track the blocks and to convey information to the user: since the markers are designable, they can be drawn to give cues to the user. An example is the marker used in the drum machine, visible in

Figure 2.8, which represents one hand and is associated with a medium playback volume of the sample. The marker in Figure 3.1, displaying two hands, was instead used to play samples at a higher volume.

Figure 3.1: two-hands marker, d-touch drum machine.

The markers were tested to be recognizable at a resolution of 320 by 240 pixels with the A4 board completely visible. The speed of the webcam only matters for getting a low-latency response from the instrument: anything that can work at 15 frames per second or more is good. With these low requirements the webcam can be very cheap. The recognition algorithm was developed in 2003 and roughly tested for performance in previous papers [25]; it was never the bottleneck of the application, as the speed was always regulated by the camera frame rate. The application is capable of tracking in real time (at 25/30 FPS) the four calibration markers, adjusting the calibration of the system in case of small movements of the camera or of the interactive board, while at the same time each block on the board is tracked with its x-y position, its angle and its type, according to the marker printed on it. With the A4 board, to have good recognition with the minimum block size, we designed the blocks to be 2.5 by 3 cm, so that around 25 blocks can be present and actively used on the board simultaneously. The system can be scaled to any desired size. From the audio point of view, both applications are loop-based sequencers. They scan the interactive board from left to right and then loop indefinitely: it is as if a virtual cursor looped over the board, and each time the cursor crosses a block the application plays the sound associated with it. Different arrangements of blocks on the board create different musical compositions and different patterns. Each application, as we will see, can have

different behaviors and different uses. Both instruments were designed to be simple and easy to learn. One rule that we followed to keep the interface coherent was the object-action paradigm from the GUI literature [34]: under this paradigm it is easier to avoid mode problems, and the object is always in the user's focus of attention. Mode problems derive from the action-object paradigm, where the user first selects an action, thus entering a mode, and then the object. That approach is error prone, since it is very easy for the user to forget the current mode and perform actions that lead to unexpected results. We translated this modeless approach into our TUIs by using interactive blocks as objects and areas of the horizontal surface as actions. In other words, if the blocks are positioned in certain areas of the board, they activate actions on themselves; we will explore the particular behaviors and mappings in the later sections describing each instrument.
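The object-action mapping described above can be sketched in a few lines: the block is the object, and the named region of the board it falls into determines the action applied to it. This is only an illustration; the area names and coordinates below are hypothetical (loosely modeled on the sequencer's play/record/store layout described later), not taken from the d-touch source.

```python
# Minimal sketch of the object-action mapping: a block (object) placed
# inside a named area of the board triggers that area's action on itself.
# Area names and coordinates are illustrative, not from the original code.

AREAS = {
    "play":   (0.0, 0.0, 0.7, 1.0),   # (x0, y0, x1, y1), normalised [0, 1]
    "record": (0.7, 0.0, 1.0, 0.5),
    "store":  (0.7, 0.5, 1.0, 1.0),
}

def area_of(x, y):
    """Name of the board area a block at (x, y) falls into, if any."""
    for name, (x0, y0, x1, y1) in AREAS.items():
        if x0 <= x < x1 and y0 <= y < y1:
            return name
    return None
```

Because the action is looked up from the block's own position, there is no mode for the user to remember: moving the block is the whole interaction.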


The Software

Audio d-touch is written in C++ with some external libraries. The Computer Vision engine, developed in C++ by Enrico Costanza, is publicly available on SourceForge and relies on several libraries for cross-platform webcam support. The audio engine is written using the C++ STK library developed at CCRMA.
Figure 3.2: the d-touch Sequencer board


The last part of the software, the remote logging engine and the activation GUI, has been written in C++ with Qt. All the libraries are cross-platform and we did not experience particular problems using them.


d-touch Drum Machine

The d-touch drum machine is the first instrument that we designed and published online. The aim of this instrument is to be very simple and playful, a first exploration of the possibilities offered by TUIs. The board is very simple, with a single active area where all the action takes place. The area is divided into eleven rows and sixteen columns: each row is a different sound, identified by a description written next to it, and each column corresponds to a different playback time within the loop. The design of the board can be seen in Figure 3.3.
Figure 3.3: the d-touch drum machine board
The objects used in this interface are of two different kinds, identified by the markers in Figures 2.8 and 3.1. The first marker is used to play a normal drum hit, while the second one plays a louder hit, used to create patterns with accented hits. The playback of a sample is triggered when a block is placed on the board and the virtual cursor that loops over the board passes the position where the marker currently is.
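The board logic just described can be sketched as follows. Assumptions not stated in the text are flagged in the comments: coordinates are taken as normalised to [0, 1], and the 2-second default loop length of the sequencer is reused here purely for illustration.

```python
# Sketch of the drum machine's board logic: an 11 x 16 grid maps a block's
# position to a sound (row) and a time step (column), and a virtual cursor
# sweeps the board once per loop. Normalised [0, 1] coordinates and the
# 2-second loop length are assumptions for this illustration.

ROWS, COLS = 11, 16      # 11 sounds, 16 time steps
LOOP_SECONDS = 2.0       # assumed loop duration

def grid_cell(x, y):
    """Map a normalised block position to a (row, column) grid cell."""
    row = min(int(y * ROWS), ROWS - 1)
    col = min(int(x * COLS), COLS - 1)
    return row, col

def step_time(col, loop_seconds=LOOP_SECONDS):
    """Time within the loop at which a block in this column is played."""
    return col / COLS * loop_seconds

def cursor_crossed(prev_x, cur_x, block_x):
    """True if the virtual cursor passed over block_x since the last frame."""
    if prev_x <= cur_x:
        return prev_x < block_x <= cur_x
    return block_x > prev_x or block_x <= cur_x  # cursor wrapped around
```

The wrap-around branch matters because the cursor jumps from the right edge back to the left edge once per loop, and blocks near either edge must still be triggered.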



d-touch Sequencer

The d-touch Sequencer is an evolution of the drum machine. The basic concept of looped sample playback is the same. The difference is that the samples reproduced by the d-touch Sequencer are not fixed drum hits but samples recorded by the user and associated on the fly with a block which, from the moment of recording, will trigger the new sample. The interface, as in the drum machine, is essential. The main areas are: the play area, divided horizontally into two equal parts, where the blocks are played back by the instrument; the record area, where blocks trigger the recording action; and finally the store area, where blocks trigger the recording of the currently playing sequence into the block itself, allowing the construction of recursively complex sequences. In the play area the vertical axis maps the volume of the block, and the horizontal axis, as in the drum machine, maps the relative time in the looped sequence at which the sample is played. The record and store areas are sized so that only one block at a time can fit. Another difference from the drum machine is that the rotation of a block influences the playback speed of the sample associated with it: the speed can be doubled or halved by rotating the block clockwise or counterclockwise. The d-touch Sequencer blocks are of two different types: the Sound Container blocks, visible in Figure 3.4a, and the Control blocks, visible in Figures 3.4b and 3.4c.
Figure 3.4: d-touch Sequencer markers: (a) Sound Container block, (b) Start/End Control block, (c) Time Control block
There are 18 different sound blocks representing 18 different sounds; more than one block of the same type can be present simultaneously

in the interface, but they will reproduce the same sample. The colored arrow, when pointing to the right like a play symbol, indicates normal playback speed for the sample associated with the block. The Start block has a twin called the End block. The two blocks can be used together or one at a time, and they indicate that only the subset of the board they delimit is looped to produce the sequence. These blocks can be used to break the constant duration of the loop or, in conjunction with the store action, to store a subset of the played sequence into a block. There are four Time Control blocks, differentiated by the durations they carry (2 sec, 4 sec, 8 sec, 16 sec), and they are used to change the loop duration of the sequencer. The default duration of the entire loop is 2 seconds, but it can be set to 4, 8 or 16 seconds. To use the Control blocks the user simply places them on the board, and they change the behavior of the instrument. If a Time Control block is covered or removed, the last duration that was set is kept; if the Start or End blocks are removed, the sequence is played completely, not only the last indicated subset.
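The rotation-to-speed behavior described above (clockwise doubles, counterclockwise halves) can be illustrated with a small sketch. The exact curve used by d-touch is not documented here, so a continuous exponential mapping over half a turn in each direction is assumed.

```python
# One plausible sketch of the rotation-to-speed mapping: rotating a block
# clockwise up to half a turn doubles playback speed, counterclockwise
# halves it. The exponential curve and the 180-degree range are
# assumptions, not taken from the d-touch source.

def playback_speed(angle_degrees):
    """Speed factor for a block rotated by `angle_degrees`.

    0 degrees -> 1.0 (normal), +180 -> 2.0 (doubled), -180 -> 0.5 (halved).
    """
    angle = max(-180.0, min(180.0, angle_degrees))  # clamp to one half-turn
    return 2.0 ** (angle / 180.0)
```

An exponential mapping keeps the control symmetric in musical terms: equal rotations in either direction multiply or divide the speed by the same factor.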


Logging System

To enable the remote user study, a new part of the d-touch applications was developed that sends usage logs to a remote server, which stores them in a database. To obtain data as accurate as possible, we also developed a registration and software activation mechanism. Users have to register on our website to download the software, and the same activation data is requested by the d-touch instruments, which, on startup, query the server to verify the account data. If everything is correct the instrument starts; otherwise it shuts down. The logging part of the application was developed in C++ using the Qt library, both for the TCP/IP implementation and for the GUI used

to input activation data. To limit the sharing of accounts between users, a MAC address check on activation was implemented. When the user activates the software for the first time, the application sends to the server a hash of the username, the password and the MAC address of the machine, and the hash is stored. Each subsequent time the application starts, the hash is recomputed and sent to the server; if it differs, another activation is required. Thus, if someone shares an account with someone else, every application start triggers the annoyance of re-activation. We hoped that this behavior would push users to register and use their own accounts, preserving per-user usage patterns, such as the time span between the first and the last play, which we will see in the next section. With these measures in place we noticed, from IP addresses belonging to different countries, only one case of a shared account, and we removed it from our database. The web registration phase was also useful to gather some background information about the users: we asked for their age, sex, profession, musical knowledge and practice, and TUI knowledge and practice. These data were later used to group the logged data and try to understand how people with different backgrounds interact differently with the d-touch instruments. In the applications, as in the registration, we decided to collect only essential data, interfering as little as possible with the users' privacy. The application registers the position and angle of markers and calibration points, the timestamp, the user and the IP address of the connection. Every 25 frames the logging part of the application sends the data to the remote server, which stores it along with the user identifier. We never gathered pictures from the webcam to reconstruct the usage.
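The activation check above can be sketched as follows. The thesis does not specify the hash function or how the three values are combined, so SHA-256 and a simple concatenation scheme are assumptions here.

```python
# Sketch of the activation check: the client hashes username, password and
# MAC address together; the server compares the result against the hash
# stored at first activation. SHA-256 and the ':'-joined input format are
# assumptions; the thesis does not specify them.

import hashlib

def activation_hash(username, password, mac_address):
    """Hash identifying one (account, machine) pair."""
    data = f"{username}:{password}:{mac_address}".encode("utf-8")
    return hashlib.sha256(data).hexdigest()

def check_activation(stored_hash, username, password, mac_address):
    """True if this machine matches the first-activation record."""
    return stored_hash == activation_hash(username, password, mac_address)
```

A hash that mixes in the MAC address means the server never needs to store the address itself, yet any start from a different machine produces a mismatch and forces re-activation.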
For the sequencer we decided to also log the recorded sounds, since otherwise we could not understand what the user was doing; we considered this acceptable from a privacy standpoint because audio should only be recorded while playing, and in fact we never registered personal conversations. In any case, we always stated clearly and as prominently as possible that the application was free because we

were doing a user study and that we were registering user actions during usage. The fact that the application only works with an Internet connection also raised attention and awareness of this point. We received many emails and messages from users complaining about the need for an Internet connection, but only once did a user ask to be removed from our database. On the server side, the logging was implemented by modifying an existing system based on TikiWiki, a PHP-based content management system. A PHP script was developed to receive the data arriving from the applications in the form of HTTP POST requests. The data is gathered, associated with the user who sent it, and stored in a MySQL database. The system has proved fairly robust, gathering data simultaneously from multiple users apparently without losing any.
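The client side of this pipeline, buffering marker data and flushing it to the server as an HTTP POST every 25 frames, can be sketched as below. The endpoint URL, payload format and field names are hypothetical; the real client is written in C++ with Qt.

```python
# Sketch of the client-side logging: marker data is buffered and flushed
# to the server as one HTTP POST every 25 frames, as described in the
# text. URL, JSON payload shape and field names are assumptions.

import json
import urllib.request

FLUSH_EVERY = 25  # frames per POST, as stated in the thesis

class LogSender:
    def __init__(self, url, user):
        self.url, self.user, self.buffer = url, user, []

    def log_frame(self, frame):
        """Buffer one frame of marker data; flush every FLUSH_EVERY frames."""
        self.buffer.append(frame)
        if len(self.buffer) >= FLUSH_EVERY:
            self.flush()

    def flush(self):
        """POST the buffered frames to the server and clear the buffer."""
        payload = json.dumps({"user": self.user, "frames": self.buffer})
        req = urllib.request.Request(
            self.url, data=payload.encode("utf-8"),
            headers={"Content-Type": "application/json"})
        urllib.request.urlopen(req)  # fire-and-forget; no retry logic here
        self.buffer = []
```

Batching every 25 frames keeps the network overhead at roughly one request per second at 25/30 FPS, which matches the robustness the server showed under multiple simultaneous users.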



Low cost and easy setup were design goals of the d-touch instruments from the beginning of development. No soldering, special electronic components or particular handcraft is needed: just a webcam (or a consumer-grade DV camera), a computer running Windows, Mac OS X or Linux, an audio card and an ink-jet printer. The complete package, with the software and the printable images of blocks and board, can be downloaded in less than 10 megabytes. The cross-platform compatibility of the software was required by the nature of the project, since music makers and artists are often Mac users; the Linux version was produced with almost no changes, but never publicly released, only given to people who explicitly asked for it. The fact of distributing the software out of the lab on multiple


unknown hardware and software configurations revealed itself to be a non-trivial technical challenge. The system was heavily tested and some deep software bugs were found. We can say that the public release forced us to clean up and review a large part of the audio and computer vision code of d-touch. For the first time we tested the cross-platform capabilities on OS X, and finally we had to prepare installation packages with external libraries for Windows and OS X. To make the tangible interface easy to set up, we prepared ready-to-cut blocks, visible in Figure 3.5, instructions to build a webcam stand with a paper outline to be glued on cardboard, and the image of the board to print. We also provided plain markers for lazier users, suggesting an easy and quick way to build the d-touch system: the webcam can be hung from a lamp and the plain markers glued onto objects such as small chocolate bars or nuts, as we demonstrated in a video posted on YouTube in which we used chocolate and walnuts as marker supports. We also suggested filling the blocks with lentils to give them some weight and allow better control. We made these videos mainly to convey the easy-to-build, low-cost nature of the project, as well as to attract some media attention to it.
Figure 3.5: a d-touch marker on a ready-to-build block



Diary of the Launch

Since its beginnings in 2003, the Audio d-touch instruments have been informally tested by a number of people, including professional musicians. Given the easy-to-build nature of the project, the situations were numerous: laboratory open houses, friends trying it on our laptops, and even live on stage.
Figure 3.6: the complete d-touch setup
The responses have always been enthusiastic: people loved the playfulness of the system, its simplicity and its low cost. It also received good comments from the musical field. The professional composer and cellist Giovanni Sollima used the d-touch Sequencer in concerts starting from 2006. The musicians appreciated interacting with the computer without the need for a screen, and the physical approach to music, reminiscent of the use of audio effects pedals. Some d-touch videos were posted on YouTube at the end of 2006 and in two years, without any promotion, they received more than 30000 views overall. This informal positive feedback encouraged us to organize a large-scale user study, since the potential audience looked wide enough, thanks also to the success of the Reactable [11]. The fact that the instruments were low cost and completely downloadable was the key that pushed us towards online publishing. The ongoing large-scale user studies in the HCI community [15, 16, 17] convinced us to go for remote logging and usage analysis,



with the collection of informal comments. The first step was the creation of the website to promote and publish the applications. On the site we made the Windows and OS X applications available for download, together with the PDF files of blocks and board, after a registration procedure. In a first phase we made only the drum machine publicly available, while we completed the revision of the sequencer. To explain how d-touch works and to promote its diffusion, we made a short video, published it on YouTube and featured it on the homepage. We planned the release day for June 28 2009, and in the preceding days we prepared the online material to promote d-touch. We thought that the Do-It-Yourself side of the project might interest people, so we posted an entry on a popular DIY website with a complete set of photos and instructions to build the system, and a link to our site to download the software. We used Twitter, the popular micro-blogging platform, to communicate with other people, and on the release day we sent emails to popular blogs announcing the new d-touch project online. In a few days we were featured on the DIY site's homepage, getting more than 5000 views of the instructions, the YouTube video quickly passed 20000 views, and we started to gain followers on Twitter. Then d-touch started to appear in blogs, until high-profile blogs published posts on our system and we received a huge response. By July 5, a week after the launch, we had 671 users registered on our website; 208 had started the application and 112 had used d-touch successfully (more than 1 minute of usage). During the following 6 weeks, until mid August, we released 4 updates of the drum machine, fixing bugs that we discovered only thanks to the incredible feedback of our users, while in the meantime we continued working on the sequencer. The website


continued to be viewed: in total we received 25000 visits, and the project was featured on more than 30 blogs, including 2 original hands-on reviews. On August 17 we officially launched the sequencer together with a new version of the drum machine, and we renewed all the promotional material. A new video was made, with new footage, modified according to the comments we received on the first one. We prepared high-quality photos and a press kit to reach blogs and magazines as effectively as we could. The second launch gained less attention than the first: in the same time span we received 7800 visits to our website, roughly half of the first time. Surprisingly, the YouTube videos received more attention than the actual website, as if talking about the system were more interesting than the system itself. Despite the lower visibility, after the second launch we had more time to gather data to analyze, since we decided to stop on December 15 2009. The numbers after the second launch are still interesting: 1252 users registered, 389 tried the interface at least once and 273 used it for more than one minute, playing a total of 199 hours.



Chapter 4

Logs collection and analysis
In this chapter we discuss the interaction logs gathered by the system described in Section 3.5. The collection and analysis of the User Generated Content collected from Internet sites will be covered in the next chapter.


Data Collection

The data analyzed in the next section is what we collected from users who registered between August 17 and December 15. We could not take earlier data into consideration because we discovered a bug that invalidated what we had collected before. All the following analysis is therefore based on 4 months of correct data, stored in a database larger than 3 GB. Each entry in the database is characterized by username, timestamp, time relative to the start of the session for each frame, and position and angle of markers for each frame. We followed two different approaches to data analysis. The first is a statistical approach that produced quantitative results useful for future design guidelines. The second is a qualitative approach based on the interaction videos that we reconstructed; it is similar to direct observation of users and has been used

to try to understand the actions taken by the people. The combination of the two approaches, together with the UGC analysis presented in the next chapter, will help us draw some conclusions.


Data Analysis

In this section the analysis of the interaction logs is described. To compute statistics on the data and draw graphs we used Python scripts with the Matplotlib library. Another kind of analysis was done through the playback of logs. We wrote a Python log player that sent data to the drum machine exactly as the computer vision engine does during normal usage, so that we could play and record all the sounds the users produced. In parallel, another Python script drew all the received frames, so that we ended up with a video track and an audio track for each session played back. Combining them, we could replay all the collected logs in the form of videos. An example frame can be seen in Figure 4.1.
Figure 4.1: example of frame produced by the log player
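Before the log player can feed the application, the flat database rows have to be regrouped into per-frame events. The sketch below assumes a row shape of (session-relative time, marker id, x, y, angle), based on the fields listed in the previous section; the real row layout may differ.

```python
# Sketch of regrouping flat log rows into per-frame events for replay.
# The assumed row format is (session-relative time, marker id, x, y,
# angle); the actual database schema is not reproduced here.

from collections import defaultdict

def rows_to_frames(rows):
    """Group flat log rows into frames ordered by session-relative time."""
    frames = defaultdict(list)
    for t, marker_id, x, y, angle in rows:
        frames[t].append({"id": marker_id, "x": x, "y": y, "angle": angle})
    return [frames[t] for t in sorted(frames)]
```

Each resulting frame can then be pushed to the instrument in place of the computer vision engine's output, and drawn in parallel to produce the video track.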


Quantitative Log Analysis

The quantitative analysis of the logs started with the self-reported demographic data from registration. We report percentages from


                                  Sequencer           Drum Machine        Overall
Avg session length (min.)         8.75 (σ = 10.12)    6.75 (σ = 8.05)     7.34 (σ = 8.76)
Avg no. sessions per user         3.33 (σ = 3.13)     5.45 (σ = 6.35)     5.55 (σ = 6.34)
Max no. sessions per user         15                  30                  39
Avg blocks in session             1.80 (σ = 1.71)     3.06 (σ = 3.75)     2.69 (σ = 3.33)
Max blocks in session             5.49                7.11                6.64
Sessions with 8 blocks or more    96 (8.35%)          318 (27.70%)        414 (25.43%)
No. of sessions                   479 (29.42%)        1149 (70.58%)       1628 (100%)
Minutes of usage                  4193.25 (35.10%)    7752.62 (64.90%)    11945.87 (100%)

Table 4.1: Audio d-touch usage data, gathered from the interaction logs. In brackets, the standard deviation of the average values.

the 273 users who successfully started Audio d-touch and used it for more than one minute, excluding 7 people who filled the forms with useless data. 27% of our users were under 20 years of age, 73% under 30 and 94% under 40. Females were very few, only 2% of users. Few users reported previous knowledge of TUIs, and even fewer reported previous experience with them. Continuing with the quantitative analysis, we tried to understand people's engagement with this kind of tangible interface. We collected statistics from the sessions, such as the average session duration, the number of sessions per user and the number of blocks per session; all the gathered data is summarized in Table 4.1. We used boxplots to better understand the overall data and to compare sequencer and drum machine usage. From the boxplots in Figure 4.2 and from Table 4.1 we can get some insights into user preferences. Even though the drum machine was used much more than the sequencer over the same time span, the sequencer was used for longer sessions in the large majority of cases. For the number of blocks the picture is mixed: the mean is higher for the drum machine, as Table 4.1 shows, but Figure 4.2a shows that the majority of sequencer sessions have more blocks than the majority of drum machine sessions. The drum machine has a high number of outliers, probably due to the “tests” done

(a) Boxplot of maximum number of blocks.

(b) Boxplot of session length.

Figure 4.2: Comparison between instruments. In these boxplots the box spans the lower to upper quartile (25% to 75%), the red line is the median, and the whiskers extend to 1.5 times the interquartile range; the majority of the data lies within the box, the whiskers indicate the extremes of the non-outlying data, and blue crosses mark the outliers.
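The boxplot convention used in Figure 4.2 can be computed directly. This sketch reproduces it in plain Python (the thesis used Matplotlib, which applies the same 1.5 × IQR rule by default); the interpolation scheme for quartiles is an assumption.

```python
# Sketch of the boxplot statistics used in Figure 4.2: quartiles, median,
# whiskers at 1.5 times the interquartile range, and everything outside
# the whiskers reported as outliers. Quartile interpolation is a common
# convention, assumed here rather than taken from the original scripts.

def boxplot_stats(values):
    """Quartiles, whisker ends and outliers for one set of values."""
    xs = sorted(values)

    def quantile(q):
        # linear interpolation between the closest ranks
        pos = q * (len(xs) - 1)
        lo, hi = int(pos), min(int(pos) + 1, len(xs) - 1)
        return xs[lo] + (pos - lo) * (xs[hi] - xs[lo])

    q1, med, q3 = quantile(0.25), quantile(0.5), quantile(0.75)
    iqr = q3 - q1
    lo_fence, hi_fence = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    inside = [x for x in xs if lo_fence <= x <= hi_fence]
    return {"q1": q1, "median": med, "q3": q3,
            "whisker_lo": inside[0], "whisker_hi": inside[-1],
            "outliers": [x for x in xs if x < lo_fence or x > hi_fence]}
```

On heavily skewed data like session lengths, this rule is what pushes the long drum machine "test" sessions out past the whiskers as outliers.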


with the complete A4 sheet full of markers that we provided as a PDF. We can say that on average the sequencer sessions showed more blocks, probably because of the two available tracks on which to place markers, and because the more interesting instrument engages users more. In general, usage of d-touch was infrequent: 21% of users interacted with d-touch over a period longer than 2 days and 11% for more than one week. We can also see, from the histogram in Figure 4.3, that in the large majority of cases usage was compressed into less than two hours.
Figure 4.3: histogram of the number of sessions per time slot. The bins are half an hour wide in the global image and 2 minutes in the zoomed part.
All this quantitative analysis is useful for a better understanding of the usage patterns we will see later and of the UGC presented in the next chapter. In the final discussion we will review this data in conjunction with the rest to get a complete picture of the experiment. Before carrying out the video analysis we had to find criteria to guide it: we could not watch the entire video production, because it would have been too time consuming and not informative enough, as the initial sessions were often similar. So we first performed a quantitative pass in order to watch only a meaningful subset of all the sessions. To select potentially interesting users we plotted the number of blocks present in a frame over time, resulting in graphs like Figure 4.4a. We selected users who had sessions with more than 5 blocks, or whose general usage showed some evident pattern. Then, if a user had many sessions, we

(a) Usage graph. The number of blocks in the interface is mapped on the y-axis and time on the x-axis.

(b) Usage plot. A heat map showing where markers were positioned during a session, to understand whether any movement happened.

Figure 4.4: Selection principles. The video analysis was done by selecting videos that presented interesting patterns in the pictures above.

drew heat maps of usage, based on where the blocks were most present during a session. The plotting resulted in images like Figure 4.4b: a pixel is more strongly colored the longer a block was present in that position. From these plots we can see whether a user moved blocks during a session, or whether the session was uniform because the user left the instrument running and went away. With this preliminary analysis we found interesting sessions, and then we produced the videos covering the whole history of those users, to see the learning curve and any interesting results. Using this selection approach we managed to analyze around 24 hours of video, instead of the 90 present at the time of video generation, and we were satisfied with the quality of the selected sessions. The results of the video analysis are presented in the next section.
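The heat map accumulation just described can be sketched as a simple per-cell counter: every frame, each block's position increments one cell of a coarse grid, so blocks that sit still produce bright cells. The grid resolution below is arbitrary; the original scripts' resolution is not stated.

```python
# Sketch of the usage heat map: per frame, each block position increments
# a cell of a coarse grid, so long-stationary blocks produce bright cells.
# The 32 x 24 resolution and normalised coordinates are assumptions.

GRID_W, GRID_H = 32, 24

def accumulate_heatmap(frames, width=GRID_W, height=GRID_H):
    """Count, per grid cell, how many frames contained a block there."""
    heat = [[0] * width for _ in range(height)]
    for frame in frames:
        for block in frame:
            col = min(int(block["x"] * width), width - 1)
            row = min(int(block["y"] * height), height - 1)
            heat[row][col] += 1
    return heat
```

A session where the user walked away shows up as a few very bright cells and nothing else, while an active session spreads counts across many cells.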


Qualitative Video Analysis

Since we analyzed only a part of the videos, we cannot give percentages of what we observed; moreover, we mostly report observed patterns, so we will describe trends of usage that might be useful for the development of d-touch and of other TUIs with a similar setup. Among the people who managed to start d-touch successfully, we

saw that everyone managed to produce a basic rhythm with a few blocks, or enjoyed themselves with one or two blocks while trying to understand how the system works. We will refer to this latter kind of behavior as exploration of the interface. Almost everyone, on first starting the application, explored its possibilities by moving one block in front of the camera and trying to calibrate the system by showing all the board markers to the camera. Only a small minority started immediately with rhythm creation, probably users with a background in tangible interfaces. Exploration of the interface sometimes also happened with the uncut sheet of markers that we provided on the website: this behavior leads to a high number of markers seen (around 30) for a short time, and to indistinct noise. This pattern was observed multiple times, as it is the easiest way to test whether the instrument can produce sound with nothing more than a printed sheet. We sometimes observed that after this exploration the board remains empty and markers appear slowly in the interface, probably because the user was cutting out markers and placing them on the instrument as soon as they were cut. These exploration patterns are visible only in users' first sessions. Later sessions start with the setup already calibrated, and sometimes with blocks already in the interface, as if the setup had been left untouched since the previous session in a fixed location. Besides these exploration patterns and basic rhythm generation, as we saw in Table 4.1, few sessions contained a number of blocks large enough to create a complex sequence. However, these sessions were often long and musically interesting, demonstrating that those users grasped the concept of the instruments well and that good results are achievable. From the observations we noticed another recurring phenomenon: markers recognized intermittently, indicating difficult lighting conditions.
This worsens the user experience, because the block layout seen by the user does not correspond to the block positions seen

by the application, leading to incorrect audio generation. Given that the algorithm had been tested for five years, and that a tracking filter actively smooths this behavior, we assume that the lighting conditions in these situations were really poor. After observing this phenomenon, we realized that we had not clearly explained how to check whether recognition was happening correctly, apart from a small sign on the markers visible on screen. Relatedly, we also realized that we had not provided any indication in the GUI that the calibration of the system was working correctly. In general, we observed rhythmical patterns when the sessions were correctly calibrated and the lighting conditions were good. In the other cases, users could not get the correct responses from the system and were only able to explore it superficially: even when they tried with a significant number of markers, since the sound was not coherent with the tangible setup, they quickly moved from music making to audio exploration, and then abandoned the instruments.


Chapter 5

User Generated Content: collection and analysis
The focus of our research has been the observation of tangible interface users in their everyday environments. Through the interaction logs we gathered information about their actual usage, but probably the most interesting information arrived from their informal comments and from the videos and photos they shared with us. In this chapter this type of content is analyzed.


Data Collection

The d-touch instruments raised a lot of interest through the Internet promotion. We observed and gathered all the data that we were able to find in the period from the pre-launch on June 28 2009 until September 9 2009. In this time we collected more than 120 emails, more than 330 posts on the forum, more than 50 blog posts, hundreds of comments on social websites and more than 220 Twitter posts. Through Twitter and email we asked our users to share photos and videos of their setups, to see how they were using the system and obtain more information about the

real usage scenarios. Six videos were posted on YouTube, one photo on Flickr, and one photo was sent to us via email. We also found two hands-on reviews of the drum machine, which included various photos of the setup used. The content that we found came from all around the world, mostly from the USA, Germany, the UK, Chile, Brazil and Japan. We tracked the most interesting items on our website.


Data Analysis

All the collected data, as explained before, has been analyzed with an approach inspired by grounded theory [31, 15], especially for the text-based content, while the videos and photos were analyzed to see how the setups were built and whether any recurring usage patterns emerged. In two cases we saw interesting examples of user appropriation of d-touch.

5.2.1 User Generated Content: Text

Figure 5.1: a user setup publicly posted on Flickr
To analyze the text-based contents, we divided them at the sentence level and categorized them with the open coding method. The open coding phase consists in assigning each sentence to an appropriate group and


Axial Codes                       Sentences
Actual Usage Feedback             305
  Technical                       202
  Improve and extend              40
  Audio                           25
  Physicality of the interface    21
  Applications and real use       10
  Field trial related             7
Generic Comments                  286
  Viral sharing                   153
  First impressions               118
  Personal data sharing           15
Total                             591

Table 5.1: Axial coding and the two main groups.

create a new group if the sentence does not fit a previously created one. In this way, from a large amount of data it is possible to obtain a significantly smaller number of categories describing the complexity of all the sentences. The next step, called axial coding, consists in grouping the previously created categories into a small number of describable groups. Using this grounded approach we collected 591 sentences and grouped them under 50 open codes. With the axial coding we created 9 broader categories: physicality of the interface, audio, technical, improve and extend, field trial related, applications and real use, first impressions, personal data sharing, viral sharing. To present the results better, we divided them into two groups: actual usage feedback, which refers to real usage and real experience with d-touch, and generic comments, which contains all impressions of the system not grounded in real usage experience, based only on what was seen on the web. A summary of the categories can be found in Table 5.1; the contents of each category are explored later.
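The bookkeeping behind Table 5.1 amounts to mapping each sentence's open code to its axial category and tallying. The sketch below illustrates this; the open codes and the mapping shown are made-up examples, not the actual 50 codes used in the study.

```python
# Small sketch of the tallying behind Table 5.1: each sentence carries an
# open code, open codes map to axial categories, and sentences are counted
# per category. The codes and the mapping here are illustrative only.

from collections import Counter

AXIAL = {
    "bug report": "Technical",
    "setup problem": "Technical",
    "midi request": "Audio",
    "diy praise": "Physicality of the interface",
}

def axial_counts(coded_sentences):
    """Tally sentences per axial category from their (sentence, code) pairs."""
    return Counter(AXIAL[code] for _, code in coded_sentences)
```

Keeping the open-to-axial mapping as explicit data makes it easy to revise the grouping, which is exactly what the axial coding step iterates on.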


Actual Usage Feedback

Almost every comment in this group was sent directly to us via email or on the forum. Some exceptions come from the hands-on reviews and from some Twitter posts exchanged between users.

The large majority of technical messages are bug reports or problems with a non-working setup. We also received questions about the software architecture, and suggestions. We tried to reply to as many of these questions as possible, to help our users use d-touch. We noticed that the people interacting with us for bug reporting showed good technical competence, and helped us solve problems that would never have been found in normal laboratory conditions.

Improve and extend

With d-touch we targeted the Do-It-Yourself crowd as a potential audience: people who enjoy spending time building things and improving existing objects. For them, the time spent building the system is the most interesting part of the project, as some comments show:

"The system is made mostly of paper and cardboard; software and instructions of how to build the system are given on the website, in pure style Do-It-Yourself (probably the most appealing part of the project, from the user perspective)."

or

"I spent a happy half an hour cutting out the shapes and putting the little boxes together."

and

"Really enjoyed the glue, lentil and paper time though, reminds me how good it can be to get down and dirty with materials."

Instead of discouraging people, the DIY side of the project pushed them to suggest more complex and advanced setups, as if to demonstrate their amusement:

"Currently I made a stand with 2 pvc pipes, 1 pvc L joint, and a metal flange for the base, and an adapter to go from the pvc pipe to the metal flange. I then threaded the camera cord through that, cut notches out at the top to secure the webcam 'clip', and the height is just enough to accommodate the board being on a 8.5 by 11 inch paper board. The fiducials are just taped onto 1x1x1 inch cubes I had from a board game."

We were worried that people could be discouraged by the time needed to build the interface, but we never received comments on that point. We only saw someone, short on time, trying to convince himself to get to the project as soon as possible: "I'll get back to this on the weekend. Pretty much blocked for now" (a slightly ironic turn of phrase, given the nature of the project).

The audio part of the project received a good number of comments. Some were about the easiness and playfulness of the project, suggesting enjoyment on the part of the users, such as:


"Everyone in this house has now put together a radio-worthy beat by pushing little scraps of paper around under a webcam."

Other people suggested using d-touch in schools to teach music, or reported that they used the instruments to play with their children. The audio part of d-touch also received the largest share of the criticism: the audio synthesis was too limited, without enough features for long-term engagement and real musical use. Several users requested the ability to send MIDI or OSC signals out of the drum machine, or to load custom samples. One post on our forum is illustrative of these problems for musicians:

"it's just a short-time toy because: 1) can't send midi 2) can't even load custom samples"

or, again from the forum:

"I Could probably live without the Midi Sync if there was a BPM selector in the D-touch program in the video window."

Physicality of the interface

We collected sentences that expressed an understanding of the tangible interface and an appreciation of a tangible approach to music creation. One example:

"When I showed my good bands and songwriters the setup, they loved the realness and ability to touch and move something real to make the sound"

or

"Software doesn't have to mean virtualizing everything and letting go of physical objects. On the contrary, it can create all sorts of imaginative, new ways of mapping musical ideas to the physical world. And that's how we wind up with a walnut drum sequencer."

Others appreciated the ease of use of the interface:

"The Audio d-touch is a collection of tools and applications which allow you to compose music in real time and the interfaces are extremely user friendly"

and the simple design of the projects:

"It's easy and it's really fun. All other examples of similar technologies involved being a computer genius and the step by step instructions are just pain awesome for everyone, this is the right way to do this kind of things, by making them available and enjoyable to everyone."

Applications and real use

We received proposals from users for educational projects, along with ideas on how to market d-touch in the real world. This shows that users were interested in real-world applications of d-touch, and they proposed usage scenarios related to their personal projects and ideas.

Field trial related

Regarding our user study, we received both positive and negative comments. Some users complained that the registration questions were too invasive, or complained about the logging system, but these comments were very few (only two in total). The majority of the issues concerned the Internet connection required while playing, which limited the possible use of d-touch on stage.

This was generally well accepted, as the applications were freely downloadable and the setup was really cheap. We even received offers of small payments to remove the logging system and allow unrestricted usage of the applications.

Generic Comments

In this section we analyze comments that are not directly related to usage experience, but only to the information available about the d-touch applications. The media promotion provoked emotional reactions from users before, or instead of, actual usage of the system. This material is less interesting than that presented above, but it shows that tangible interfaces can attract a lot of media attention, especially when they are freely available for download.

Viral sharing (Twitter)

We used the Twitter micro-blogging platform to promote and publicize d-touch. As a consequence, many Twitter users posted content related to d-touch and their feelings about it. Often the interest was expressed with just a link to our website or to a blog post about d-touch. In any case, this viral form of communication brought in a substantial number of users who actually tried the system.

First impressions

Often d-touch was surrounded by a "wow effect", especially at the first launch. This provoked a series of emotional comments such as: "Sci-Fi??? No more! You can actually build it!" or "This is an awesome UI." These impressions are representative of the common feelings of people who saw videos or photos of d-touch, probably even before trying it.


Figure 5.2: The first video that we received. It was made with an old version of the drum machine, and the user hears wrong sounds caused by a bug in the audio card settings, which we fixed in later versions. We can see that the user has very little space to place the board, which sits between two different computers. The markers were made by folding the paper blocks that we proposed on our website. This is a typical case of a technology-interested user: the music is not reproduced correctly, but no comment is made about it.

Personal data sharing

Personal data sharing refers to the fact that our users often shared, with us and among themselves, personal information such as age, location or other data. We find this interesting insofar as it shows how quickly a community of users can gather around a project, grow fond of it and help its development. We observed this especially in our forum, which was heavily used during the launch phase and for as long as we provided news, software updates and replies to specific questions.


User Generated Content: Videos and Photos

The photos and videos that we received were probably the most informative part of our study, since we could see where the users set up the system and how they used it. To analyze them, we first created a set of questions to answer for each video, and then watched all of them trying to answer those questions: What was used for the blocks? What kind of stand? Near the computer? What kind of room? Is it on a table? What audio setup? Which operating system? A summary of the analysis and other material can be found in Appendix C. On the DIY side, we observed that some people preferred to keep things as simple as possible, skipping the folded 3D paper blocks and just using

Figure 5.3: This user built wooden blocks for a polished setup, and in the video we can also hear a very good rhythm created in a few seconds. His ability and control of the interface suggest that he practiced a bit before making the video. He is more interested in the musical part: while speaking in the video he always refers to the drum machine and never to the tangible interface.

Figure 5.4: Probably the most artistic video made with d-touch. Its purpose was to promote a creativity day in Aosta, Italy, where people could interact with novel interfaces in a new multimedia library. The video shows the whole building process and, at the end, a bit of playing. As discussed, strong lighting sometimes leads to bad recognition, as happened during this video shoot, resulting in inconsistent music production. In this video we also see sequencer markers, their only appearance so far.

Figure 5.5: In this video we see two people using the drum machine in a small space, with a quick setup. The markers are cut from paper without folding blocks, or even used in strips. The playing is clearly just for fun: the board is placed the wrong way round, the webcam sits on a music stand, and a light makes the interaction even more awkward. Nevertheless the two users have fun playing; they do not even try to make a rhythm, and at the end they congratulate us, as if the only purpose of the drum machine was to be a toy.


the flat markers printed on paper. Sometimes they even used the uncut strip of markers, to construct repetitive patterns or to test the interface quickly, as we also observed in the interaction logs. At the opposite end, the DIY lovers built polished setups with wooden blocks and professional tripods. We never saw the cardboard stand that we provided as a PDF: users preferred to hang the webcam from shelves or camera tripods. In some cases the interactive board was raised on books to bring it closer to the camera. We always observed d-touch setups on desks; they never appeared on casual surfaces like sofas or floors. The desks were often cluttered with other things, making the available space really small, and the movements needed to play were often awkward and unnatural. Sometimes the board was placed in a wrong position with respect to the user, because the camera was at the side of the table or the only available space was not in the right position, so the board ended up diagonal or not in front of the user. Obviously the results in these cases were not perfect, but these users were mostly amusing themselves or technically testing the system rather than trying to get real music out of their playing. The usage settings were very different: we saw small rooms, bedrooms, individual offices, a cubicle and a music studio. The hardware and operating systems were varied: laptops, desktop computers, Windows and OS X. Audio systems were also various, from laptop speakers to professional hi-fi systems. Two videos were of particular interest, since we observed two cases of users appropriating the technology in radical ways. The first example is in a university setting, where a user printed the board around 8 times bigger (as she herself explained) than the standard A4 we proposed. She then affixed the board vertically on a magnetic whiteboard using office magnets. The markers were also glued on office magnets and used on the board as in the original

(a) The d-touch drum machine placed vertically and used collaboratively.

(b) The tangible interface subverted and made virtual.

Figure 5.6: Two video documented cases of user appropriation.

idea. The video then shows her explaining the complete setup and how she was using the classroom hardware to complete it, and it finishes with two fellow students collaboratively using the setup to build a drum sequence. The other interesting video was, if possible, an even more radical subversion. The user managed to use d-touch without paper at all! He pointed the webcam at his computer screen where, using a graphic design application (similar to Adobe Illustrator), he displayed an interactive board with markers drawn on it. He then moved the markers with the standard computer mouse, completely subverting the original idea of a tangible interface!
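The per-video questions listed above translate naturally into one small record per video. A sketch of such a coding sheet follows; the field names are our own, and the example record is drawn from the whiteboard video detailed in Appendix C:

```python
# Sketch of a per-video coding sheet for the UGC analysis described
# above. Field names are illustrative; the example record is based
# on the whiteboard video detailed in Appendix C.
video_analysis = [
    {
        "video": "d-touch drum machine on a whiteboard",
        "blocks": "plain markers glued on magnets",
        "stand": "webcam duct-taped to a music stand",
        "near_computer": False,
        "room": "big classroom",
        "on_table": False,  # vertical magnetic whiteboard
        "audio": "external M-Audio FireWire interface",
        "os": "OS X",
    },
]

# With records in this form, cross-video questions become one-liners,
# e.g. how many setups were not on a table:
off_table = sum(1 for v in video_analysis if not v["on_table"])
```

Keeping the answers in a uniform structure makes tallies across videos (operating systems, stands, rooms) mechanical rather than manual.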


Chapter 6

So far we have analyzed the data separately; now we try to build a comprehensive view of all of it and draw some inspiration for future work. From the analysis of the reconstructed interaction videos and from the videos sent by users, we can say that the simple set-up works. The initial idea of spreading an experimental technology to a large user base through a low-cost system has been demonstrated to be successful. d-touch has been used in different settings with different hardware setups, showing a good degree of versatility. The basic concept, a board with interactive blocks on it watched by a webcam, is clear: we received no questions or comments about it. However, from the reconstructed video analysis we have seen that the recognition algorithm is not robust enough in uncontrolled lighting conditions and that calibration was not always achieved. Seeing these common problems, we realized that we gave almost no feedback about calibration and recognition in the application GUI, and that the instructions delivered with the system were probably not sufficient. Most users explored the interface and produced basic rhythms. The creation of advanced patterns was infrequent,

as the applications were probably seen more as toys than as musical instruments, as some comments pointed out. The lack of features important to musicians was widely communicated by the users, as reported previously. The trends in the interaction logs, summarized in Table 4.1, support this supposition. The more playful drum machine was used more than the sequencer (65% vs. 35%), and the time spent on the interfaces mostly spans two days; 21% of our users continued to use the instruments for longer. The signs that more blocks were used in the sequencer, and that sessions on it were longer, suggest that, thanks to the record and store functions, more complex sequences were possible, making the sequencer potentially more interesting over a longer time span. From the emails and videos sent by our users we encountered several examples of user appropriation. Two people wanted to use d-touch in their classes, one to teach music to young children and the other to teach game design. Some changed the Audio d-touch setup to better fit their purpose, putting it vertical to allow collaborative work and teaching, or scaling it up much bigger for artistic purposes in a festival setting. Others tried to enhance the block design, building wooden blocks in one case and proposing different weights to represent different sounds in another. Yet another user managed to build a virtual tangible interface, using the computer monitor as the board and moving the markers with the mouse in a graphics application. From these examples we see that users grasped the idea of the tangible interface and used it to fit their needs in the most diverse settings. We observed a high number of people registering on the website and a significantly smaller number of users successfully trying out the interface. This may be due in part to emotional interest in the applications combined with a lack of motivation to build or try them.
In other cases, users may have encountered problems with their hardware configuration when using d-touch. We suspect that the open source third-party libraries that we used to support webcams and audio cards on different operating systems may, in combination with our software, create some incompatibilities. Moreover, we discovered bugs related to specific hardware configurations only after delivering the software to our users, who helped us a lot in discovering and fixing them. Generally, d-touch users are medium to advanced computer users, DIY lovers and musicians. We consistently received good criticism of the audio part of the application and good reports about technical problems, while no one had specific negative comments about the tangible interface. Overall we received few comments about the interface, even though it is a novel interface for musical instruments and, from the registration data, few users reported previous interactions with tangible interfaces. Nevertheless, from the interaction logs, the videos sent by users and the comments, we observed a rapid learning curve and no problems in approaching the interface. To sum up, the web distribution and user study of a tangible interface had never been done before the d-touch experiment, and in the end we can say that it has been a success. Even if the applications were perceived more as toys than as proper musical instruments, we received huge media attention, and users were really involved in the project and helped us a lot to improve the applications. A large number of users managed to use the instruments, even in the presence of technical difficulties, and they explored the possibilities of tangible interfaces. About the interface itself we received very few comments, and all of them were positive and encouraging. We interpret this lack of comments as evidence that users found the interface "obvious" to use, even though very few of them reported previous experience with TUIs. This may be due to prior information about TUIs gathered from the media, which in recent years have reported many works in this field with great emphasis, as they did with ours.
Finally, the cases of user appropriation are signals of strong interest in, and an advanced understanding of, this novel type of interface.
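The log aggregates cited above (drum machine vs. sequencer share, multi-day retention) could be computed with bookkeeping along these lines; the log schema and the numbers below are illustrative, not the real d-touch data:

```python
from datetime import date

# Hypothetical sketch of the interaction-log aggregation behind the
# figures discussed above. Each entry: (user, application, session day).
log = [
    ("u1", "drum_machine", date(2009, 7, 1)),
    ("u1", "drum_machine", date(2009, 7, 2)),
    ("u2", "sequencer",    date(2009, 7, 1)),
    ("u3", "drum_machine", date(2009, 7, 1)),
    ("u3", "drum_machine", date(2009, 7, 9)),
]

# Share of sessions spent on the drum machine.
drum = sum(1 for _, app, _ in log if app == "drum_machine")
share_drum = drum / len(log)

# Retention: users whose sessions span more than two days.
spans = {}
for user, _, day in log:
    lo, hi = spans.get(user, (day, day))
    spans[user] = (min(lo, day), max(hi, day))
long_term = [u for u, (lo, hi) in spans.items() if (hi - lo).days > 2]
```

The same two passes over a real log would yield the usage share and the fraction of users who kept playing beyond the first couple of days.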


Chapter 7

Future Work
The top priority for future development is visual feedback in the GUI, to help users obtain a correct calibration and good recognition. We think this can give users a generally better experience, removing the annoying situation in which what you see on the interface does not correspond to what you hear produced by the software. We could add to the GUI a very visible message stating that the system is uncalibrated, together with instructions to make the four calibration markers visible to the camera in order to obtain the correct audio. We also thought of adding some simple parameter settings for the computer vision algorithm, to improve recognition based on the user's current lighting. A possible improvement could be to replace the current adaptive thresholding method with a fixed-threshold method that users can tune with an on-screen slider, according to their lighting setup. We observed that the current algorithm is not perfect under very strong lights, a situation that may happen on stage, for example.

We are also looking into mechanisms that would allow users to share their compositions with the community of users through the website, to foster the development of a user base and to encourage them to play more. The compositions could be shared simply in the form of images that could be downloaded and printed as a musical score. MIDI, OSC or other output signals are not interesting from the research point of view, because they would lead to uncontrollable usage that could not be studied remotely. In any case, it is already possible to create this kind of interface with the d-touch recognition library, freely downloadable from the web as open source1.

To improve the remote user study we should fix a bug in the logging of calibration markers: at the moment we cannot tell whether an image is correctly calibrated, because the four calibration markers are not logged correctly. Fixing this would be a useful addition for future user studies, in synergy with the GUI modifications. With Adobe Alchemy2 it is now possible to port the entire recognition library to Adobe Flash, reducing hardware compatibility problems and making the system faster to set up. If the performance loss due to the Flash port proves sustainable, this approach would make d-touch available to an even broader audience, directly from their Internet browser.

In the future we are interested in mixing visual feedback directly into the tangible interface, to enhance the possibilities of the instruments. This could be implemented with small low-cost picoprojectors, which could be hung together with the webcam thanks to their light weight. Another possible solution could be the use of an LCD monitor as the base of the interface, although the constant backlight of the screen might interfere with the recognition algorithm.
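The slider-tuned fixed threshold described above could look like the following NumPy sketch; the function names and the naive local-mean "adaptive" baseline are illustrative, not d-touch's actual implementation:

```python
import numpy as np

# Hedged sketch of the proposed slider-tunable fixed threshold,
# alongside a naive local-mean "adaptive" baseline for comparison.

def binarize_fixed(gray, thresh):
    """One global threshold for the whole frame: the value an
    on-screen slider would control (0-255)."""
    return (gray >= thresh).astype(np.uint8) * 255

def binarize_adaptive(gray, block=15, c=5):
    """Compare each pixel to the mean of its block x block
    neighbourhood, minus a small constant c. Box sums are computed
    with an integral image, keeping this pure NumPy."""
    pad = block // 2
    padded = np.pad(gray.astype(np.float64), pad, mode="edge")
    # Integral image with a zero row/column prepended.
    ii = np.pad(padded.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    h, w = gray.shape
    box = (ii[block:block + h, block:block + w]
           - ii[:h, block:block + w]
           - ii[block:block + h, :w]
           + ii[:h, :w])
    local_mean = box / (block * block)
    return (gray >= local_mean - c).astype(np.uint8) * 255
```

On stage, under strong direct light, a single global value chosen by the performer may behave more predictably than a per-pixel adaptive rule, which is the motivation for exposing it as a slider.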

1 2


Chapter 8

The work of this thesis focused on the public release of a tangible interface for musical performance and composition, to allow a large-scale user study in the field of human-computer interaction. We started by creating a logging system to record user interactions over the Internet; we then packaged the software and released it online. We tried to create interest in the Internet community by publishing videos and content on sites like YouTube and Twitter. We helped, and were helped by, our users to make the applications as good as possible and to get them working on the broadest range of hardware. We then developed ways to view the interaction logs gathered over the months of usage, and we collected all the material that we found online regarding d-touch after the public launch of the musical applications. The contribution of this thesis to HCI is the novel kind of user study that we employed which, even though it still has to be fine-tuned, has given good results at a very low cost. We cannot reach the level of an in-situ observation, but a lot of information can still be retrieved. The time and money spent on these remote studies are dramatically less than what an in-situ study requires, making it possible to study more products even with a small team and a small budget. The best results can probably be obtained by mixing the

two approaches: using the large user base to obtain statistically significant numbers while, on the other side, using the direct exchange with users to ask direct questions and to observe common behaviors. In our case we had to hypothesize what our statistical analysis meant, whereas if we had had the possibility of asking users questions directly we could have obtained precise answers. A positive consequence of our approach, however, is that by being completely absent from the setting we can assure low invasiveness and a greater level of ecological validity for the experiments. With both approaches it would be possible to relate the analyzed patterns, especially in the interaction log videos, to in-situ observations, explaining the gathered data much better. To sum up, despite some technical difficulties experienced by the users and the lack of advanced musical features, we received a very good response. Few comments were directed at the interface per se, and all of them were strongly positive, emphasizing the novelty of having "tangible sounds" and the improved creative possibilities. We argue that these observations show that the time is ripe to distribute tangible user interfaces outside laboratories and controlled environments, and in domains other than audio as well, in an inexpensive and democratic way. It's time to bring TUIs to the masses!


Appendix A

UGC texts
A part of the analyzed text content, with the open code assigned to each sentence.

Table A.1: Some sentences collected, with the related open code.

Source: Mail
Code: educational
Text: "I think with projects like this the kids will having a lot of fun to perform handicraft work & making experiments with music."

Source: -
Code: DIY love
Text: "in pure style Do-It-Yourself (probably the most appealing part of the project, from the user perspective)"

Source: -
Code: low cost appreciation
Text: "Bored with mouse pushing and knob twiddling? The d-touch tangible sequencer/drum machine makes a cheap interface (with free downloadable software) for assembling sequences."

Source: -
Code: tangible interface awareness
Text: "yes, drums are tangible. We know. What this is, however, is a tangible interface that is a drum machine."

Source: Blog
Code: low cost appreciation
Text: "Global economic recession cutting into your gear purchases? The team at may have a solution: a drum machine that you print with your computer printer."

Source: Mail
Code: not good design
Text: "All the musicians I showed the unit to, commented on how sterile and clean the board looked."

Source: Mail
Code: Internet availability issue
Text: "And i'd say you have to be on the net because the software cost $0."

Source: Mail
Code: personal data complain
Text: "I just tried registering on the d-touch website and it didn't like my rather too expansive answer to "how often do you play". It still went ahead and created the account (username: Dunx)"

Source: Mail
Code: tangible appreciation
Text: "Most of my friends used the heavier objects for the lower sounds instinctively (metal and glass for kicks and toms) and the wood for snare and paper for the hats and cymbals."

Source: Forum
Code: lighting problem
Text: "Had some problem setting the lighting (some beat inputs skipped)"

Source: Forum
Code: Try-to-improve
Text: "I will try to improve my setup and experiment a bit with velcro... Will keep you posted. Beesh"

Source: Forum
Code: social sharing
Text: "Hi my names Gideon and I'm from England. I live in a small town called Chichester which is near Brighton."


Appendix B

Open Codes and Axial Coding
The complete open codes produced from the collected data.

Table B.1: Open codes and axial coding (entries per code)

Viral sharing: 153
    Twitter link: 103
    ReTwit: 50
Personal data sharing: 15
    social sharing: 15
First impressions: 118
    impressed/excited: 79
    fun: 2
    skeptical: 1
    ease of use: 2
    low cost appreciation: 23
    like usage: 2
    fake: 9
Applications and real use: 10
    educational: 3
    build for others: 5
    commercial issues: 2
Improve and extend: 40
    Try-to-improve: 8
    DIY love: 16
    source interest: 3
    camera under table: 2
    Try-to-do: 11
Physicality of the interface: 21
    tangible interface awareness: 12
    TUI comparison: 3
    tangible appreciation: 6
Technical: 202
    activation problem: 63
    osx problem: 26
    camera problem: 24
    audio problem: 20
    vista problem: 9
    registration problem: 5
    forum problem: 1
    lighting problem: 4
    linux problem: 1
    paper issue: 1
    linux desire: 10
    sequencer desire: 7
    lack of documentation: 8
    general complain: 3
    problem fixed: 11
    calibration: 1
    augmented reality: 7
    Cross-platform appreciation: 1
Audio: 25
    MIDI: 11
    use on stage: 4
    custom sounds: 2
    music tool: 4
    tempo selection: 1
    not good design: 1
    audio complexity: 1
    time issues: 1
Field trial related: 7
    research interest: 1
    Internet availability issue: 5
    personal data complain: 1
Total: 591


Appendix C

UGC photos and videos analysis
This appendix shows all the sources of our analysis of photo and video UGC.

D-touch by cybermolex on 03 July 2009 (
DESCRIPTION:
- What was used for the blocks? The blocks have been made with the proposed shape.
- What kind of stand? The webcam's own stand, without modifications. It is a nice long stand that can be twisted to point wherever you prefer; it can also point vertically down.
- Near the computer? Near two computers! The laptop with Windows is running d-touch, while in the background we can see a Linux box.
- What kind of room? A dark room... probably the user's bedroom.


- Are they on a table? Yes, but in a very small and awkward position. The board is placed diagonally (probably to fit the webcam position) and the space is very small, between a monitor, a laptop and a multimeter...
- Audio setup? Unknown, probably laptop speakers.
- Operating system? Windows on a laptop.

d-touch drum machine on a whiteboard by mutedharmony on 28 July 2009 (
DESCRIPTION:
- What was used for the blocks? Plain markers have been glued on magnets.
- What kind of stand? The webcam has been duct-taped to a music stand pointed at the magnetic whiteboard.

Figure C.1: The vertical setup of d-touch

- Near the computer? No, the computer is far away and poorly visible, or not visible at all, from the interface.
- What kind of room? A big classroom is the setting of this video.

- Are they on a table? No, the setup is on a vertical magnetic whiteboard. The interactive board has also been put up vertically with magnets.
- Audio setup? An external M-Audio FireWire interface.
- Operating system? OS X on a MacBook Pro 15".

Bumm-Tschak by control617 on 28 September 2009 ( and with photos.
DESCRIPTION:
- What was used for the blocks? Nothing!
- What kind of stand? A cup filled with coffee beans...
- Near the computer? Yes, on it!

Figure C.2: The virtual setup of d-touch and its setting

- What kind of room? A private space.
- Are they on a table? No, just on the computer.
- Audio setup?

An internal audio card, or nothing special.
- Operating system? Windows on a desktop computer.

D-Touch by EATYone on 9 October 2009 (
DESCRIPTION:
- What was used for the blocks? Wooden blocks with paper glued on them.
- What kind of stand? A bracket standing over the table.
- Near the computer? Yes, the laptop is just at the side of the table.
- What kind of room? Probably a laboratory.
- Are they on a table? Yes, on a small table with only the board and some blocks.
- Audio setup? Laptop speakers?
- Operating system? OS X on a MacBook Pro 15".

Lovebytes D-Touch Promo by presenceelectronique on 30 October 2009 (
DESCRIPTION:

- What was used for the blocks? Paper blocks and simple paper markers; mostly the simple markers for actual playing.
- What kind of stand? A professional camera stand with a DV camera.
- Near the computer? Yes, the computer is on the same table as the interface.
- What kind of room? Different rooms in a public building.
- Are they on a table? Yes.
- Audio setup? Not seen, but probably an external audio card; some musical instrument brands (Alesis) are visible.
- Operating system? OS X on a MacBook Pro 15".

Musical Experiment Tangible drum machine - tangible musical interface by AlienKarma1512 on 24 November 2009 (
DESCRIPTION:
- What was used for the blocks? Plain paper; no block is constructed.
- What kind of stand? A music stand with the camera attached to it.
- Near the computer?

Yes, on the same table.
- What kind of room? It seems to be a small room, probably also used to play music; a private space.
- Are they on a table? Yes, a small one, with a strong light pointing at the system.
- Audio setup? Computer speakers.
- Operating system? Windows on a desktop computer.

3689219303/
DESCRIPTION:
- What was used for the blocks? Plain paper.
- What kind of stand? A bracket standing over the table.
- Near the computer? Yes, the computer is visible in the photo to the left of the board.
- What kind of room? A personal room.
- Are they on a table? Yes, a very small table with a light near the board (probably used to get good recognition).


- Audio setup? A computer speaker is visible at the top of the photo.
- Operating system? Windows on a laptop.

html/image2.jpg.html
DESCRIPTION:
- What was used for the blocks? Simple paper and some blocks. Some markers are glued on small, unrecognizable objects.
- What kind of stand? A professional camera stand with a consumer-grade webcam.
- Near the computer? Yes, just in front of it.
- What kind of room? A cubicle.
- Are they on a table? Yes, but the space is very small. Nevertheless the setup is good and gives good movement possibilities.
- Audio setup? Unknown, probably headphones.
- Operating system? Windows Vista.

DESCRIPTION:
- What was used for the blocks? Paper blocks and simple paper markers.
- What kind of stand? A professional camera stand.
- Near the computer? Yes, to the left of the computer, in front of a big speaker.
- What kind of room? Probably the personal studio of the user.

Figure C.3: One photo from the hands-on review

- Are they on a table? Yes, on a very clumsy table with little or no space.
- Audio setup? Probably connected to an external audio card (visible in a photo).
- Operating system? Windows Vista.


Appendix D

Riassunto in Italiano
This thesis describes my work carried out at the Media and Design Laboratory of EPFL, directed by Prof. Jeffrey Huang, between March and September 2009, under the supervision of Enrico Costanza. The goal of the project was to study how a large number of users interact with a kind of interface that is still little studied: the tangible interface. To do so, we took two musical instruments based on a tangible interface, created in 2003 by Enrico Costanza and Simon Shelley, the d-touch drum machine and the d-touch sequencer, and made them downloadable from the Internet and easy for users to reproduce. After collecting interaction logs and user-generated material published on the Internet, we studied how people relate to this kind of interface in their own environment and over an extended period of use.



The thesis falls within the field of Human-Computer Interaction (HCI). This field is broad and heterogeneous: HCI draws on computer science, design, psychology and much more. HCI defines itself as the discipline concerned with the design, evaluation and implementation of interactive computing systems for human use, and with the study of all the phenomena surrounding them [1]. In recent years the importance of these studies has grown steadily, as has the interest of large companies in the academic debate on HCI. All the main international HCI conferences, such as CHI (ACM Conference on Human Factors in Computing Systems) and UIST (ACM Symposium on User Interface Software and Technology), are sponsored by companies such as Microsoft, Google, Autodesk, IBM and Nokia1. This interest stems from the fact that interactive systems that better fit users' needs are perceived better and sell more, besides being more pleasant to use.

A particular HCI research area focuses on Tangible User Interfaces (TUIs). The core concept of TUIs is that a physical representation of an object (such as a graspable physical object) is coupled with a digital representation (audio or video), leading to interfaces that are computationally mediated but not identifiable as "computers" [3]. Academic interest in TUIs arose in the mid-1990s, when Wellner, Fitzmaurice, Ishii and Ullmer [5, 6, 3, 7] began developing these new interfaces. Since then the debate around these interfaces has grown constantly, until a dedicated conference on these topics, TEI2, was organized in 2007 and has been held annually ever since. From this interest in TUIs many other experiments have emerged in the most diverse fields, such as education, music, entertainment and more. The musical domain in particular has seen substantial growth and media relevance with projects such as Audiopad and Reactable [13, 11].
1 CHI 2010 and UIST 2009 conference websites.
2 Tangible and Embedded Interaction Conference.


TUIs have always been tested and studied in controlled environments such as laboratories and museums, and always in small numbers because of the technical difficulty of reproducing them. With d-touch we tried to address this problem by creating, and distributing on the Internet, a very low-cost TUI. To study the use of this interface we recorded the movements users made with the applications and we collected material found on the Internet about d-touch, inspired by a study approach already tried in HCI in recent years [15, 16, 17]. At the end of the observation period we had collected a substantial amount of data, far above the average of the other TUI studies we analyzed [28, 12, 8, 10, 14, 29], at a definitely lower cost: our team consisted of three people, none dedicated full time to the project, and the only supporting infrastructure was a web server provided by the university. From the analysis of these data we can observe that the tangible interface was well accepted by the users, who criticized exclusively the musical possibilities of the application, as if the way of interacting were obvious to everyone, even though very few had tried anything like it before, as the users themselves stated when registering on the website.


Audio d-touch

Audio d-touch is a pair of applications, the d-touch drum machine and the d-touch sequencer, based on a tangible interface for composing music and playing live.



How It Works

The setup of Audio d-touch is simple: as in other tabletop tangible interfaces, a webcam connected to a computer films from above a table on which an A4 sheet is placed, and this sheet becomes the application's interactive area. For tracking and recognizing the interactive area and the blocks used to play, a computer vision algorithm, d-touch, developed by Costanza et al. [25, 27], is used. The algorithm tracks in real time (at 25/30 frames per second) the interactive area and the markers placed on it. So far, with an A4 sheet as the usage area, we have tested the application with 25-30 markers and noticed no performance drop. From the musical point of view, the Audio d-touch applications are based on the concept of a loop: a sequence of sounds is repeated indefinitely while parts of it can be modified. It is as if a virtual cursor swept across the interactive area and, whenever it crossed a block, that block produced the sound it represents. This instrument model is very popular in the electronic music industry, which inspired us. The applications are written in C++ with the support of a few libraries. Audio is handled with STK, developed at Stanford's CCRMA. The computer vision part uses libdtouch, freely downloadable from SourceForge, which also handles the camera. For the graphical interface and networking we used the C++ library Qt. All these libraries are free to use and cross-platform, which allowed the application to be developed and distributed for Windows and OS X, plus a development version for Linux.
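The loop model described above, in which a virtual cursor sweeps the interactive area and triggers whatever blocks it crosses, can be sketched as follows. This is a minimal illustration, not the actual d-touch code; the `Marker` fields and the step resolution are assumptions made for the example.

```cpp
#include <algorithm>
#include <vector>

// A tracked marker, as the vision algorithm might report it:
// normalized coordinates in [0,1] relative to the A4 interactive area.
struct Marker {
    double x;  // horizontal position, mapped to time within the loop
    double y;  // vertical position, mapped to a sound or a volume
};

// Map a normalized horizontal position to one of `steps` loop steps.
int toStep(double x, int steps) {
    int s = static_cast<int>(x * steps);
    return std::min(std::max(s, 0), steps - 1);
}

// One tick of the virtual cursor: return the indices of the markers
// whose step matches the cursor's current step, i.e. the sounds to fire.
std::vector<int> firingMarkers(const std::vector<Marker>& ms,
                               int cursorStep, int steps) {
    std::vector<int> out;
    for (size_t i = 0; i < ms.size(); ++i)
        if (toStep(ms[i].x, steps) == cursorStep)
            out.push_back(static_cast<int>(i));
    return out;
}
```

Advancing `cursorStep` cyclically at a fixed tempo and firing the returned markers reproduces the endlessly repeating loop while blocks can be moved at any time.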



d-touch Drum Machine

The drum machine is the first instrument built and distributed online. The goal of this instrument is to be easy and fun, as a first introduction to the possibilities offered by tangible interfaces. The interactive area is very simple, divided into eleven rows and sixteen columns: each row represents a different sound, identified by a label written on the row itself, while each column represents a position in time within the loop. Two different markers are used for the sounds: one indicating normal playback volume and another indicating a louder volume, useful for accenting certain sounds.
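The 11-by-16 grid just described can be sketched as a simple coordinate mapping. This is an illustrative sketch, not the original implementation: the normalized coordinates and the gain values for normal versus accented markers are assumptions.

```cpp
// Grid geometry of the drum machine area, as described in the text:
// 11 sound rows by 16 time columns.
const int kRows = 11;
const int kCols = 16;

struct Cell { int row; int col; };

// Map a normalized marker position (x, y in [0,1]) to a grid cell:
// the column selects the time step, the row selects the sound.
Cell toCell(double x, double y) {
    Cell c;
    c.col = static_cast<int>(x * kCols);
    c.row = static_cast<int>(y * kRows);
    if (c.col >= kCols) c.col = kCols - 1;
    if (c.row >= kRows) c.row = kRows - 1;
    return c;
}

// Two marker types: normal and accented (louder) playback.
// The gain values are arbitrary placeholders for the example.
enum MarkerType { NORMAL, ACCENT };
double gain(MarkerType t) { return t == ACCENT ? 1.0 : 0.7; }
```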


d-touch Sequencer

The d-touch Sequencer is an evolution of the drum machine. The underlying concept, playing samples in a continuous loop, is the same. The difference is that with the sequencer it is possible to record and play back one's own samples during use. This instrument provides 18 different markers corresponding to the different available sounds. Several identical markers can be present in the interface at the same time, but they will play the same sound. The interactive area includes a zone where markers trigger recording from the microphone, and one where they trigger recording of what is currently being played, so that increasingly complex sequences can be built. In this interface the horizontal axis has the same meaning as in the drum machine, while the vertical axis indicates a different playback volume. In addition, rotating a marker makes its sample play at a different speed, from half to double.
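One plausible way to realize the rotation-to-speed mapping described above is an exponential curve, so that an upright marker plays at normal speed and a half turn in either direction reaches the half-speed and double-speed extremes. This is an assumption for illustration, not the mapping used by the original code.

```cpp
#include <cmath>

// Map a marker's rotation angle (radians, clamped to [-pi, pi]) to a
// playback rate in [0.5, 2.0]: rate = 2^(angle / pi).
// angle 0 -> 1.0x, angle pi -> 2.0x, angle -pi -> 0.5x.
double playbackRate(double angle) {
    const double pi = 3.14159265358979323846;
    if (angle < -pi) angle = -pi;
    if (angle >  pi) angle =  pi;
    return std::pow(2.0, angle / pi);
}
```

An exponential mapping keeps equal rotations perceptually equal (each quarter turn multiplies the rate by the same factor), which is usually preferable to a linear one for pitch and speed controls.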



Remote Logging System

To allow remote observation, a system was implemented that records movements and sends them to a remote server, which archives the data in a database. To obtain precise data, a software activation system was also developed: users must use their website registration credentials to activate the software, so that usage can be traced across multiple sessions of the same user, making it possible to visualize behaviors or to understand the learning curve. Some measures were taken to discourage account and software sharing as much as possible, such as tracking the MAC address of the machine or the IP address. The usage data we record are completely anonymous and concern only the movements of the blocks in the interface; no image is ever captured by the software. During registration it was clearly stated that the software would record usage for scientific purposes, and only one person asked for their data to be removed from the database. Others complained about the invasiveness of the registration questions or about needing an Internet connection to use the software, but the fact that everything was free often made these limits acceptable.

On the server side, data logging was developed by modifying an existing system based on TikiWiki, a PHP content management system. A PHP script was developed that, every time it receives an HTTP POST call, collects the data (generally 25 frames of usage) and stores them in a MySQL database. The data contain a timestamp, a time offset from the start of the session, the user who produced the data, and the position and angle of every marker present in the interface. The system proved quite robust, as it withstood the use of hundreds of users, sometimes several users simultaneously, apparently without losing data.



The simple, low-cost nature of the project has always been one of its main goals: no electronic components to solder, no particular manual skill required, just a webcam, a computer with a sound card and a printer. The complete package of software, block images and interactive area can be downloaded in less than 10 megabytes. Having cross-platform software that deals with hardware at so many levels required non-trivial work: distributing this kind of software to unknown computers is complicated, and it is very easy to run into unforeseen problems. To make installation simple we prepared the block images to be cut out, instructions for building the webcam stand, and photographs explaining all the steps needed to reach the final product. To better promote the instruments we also shot and edited two videos, published on the Internet, which attracted a fair number of views. The goal of this material was to convey the simple nature of the project and to capture some media attention, as other similar projects had done before us.



The Online Launch

Since early 2003 Audio d-touch has received informal comments and has been tried informally by a small number of people, including some musicians. The comments were always positive and motivated us to publish the instruments online, believing that a certain niche might be interested in this kind of instrument. So, following the success of similar projects such as Reactable [11] and the publication of online studies conducted with information gathered through the Internet, even informally [15, 16, 17], we decided to publish Audio d-touch online and collect usage data. After creating the website we published photo and video material about the tangible interfaces, along with the software needed to reproduce the instruments. On the day of the software release, 28 June 2009, we published material on other sites for publicity and used the Twitter platform to communicate with our future users. Within a week we received more than 20000 views of the video; 671 users registered on the site, 208 people activated the application and 112 used it for more than a minute. Media attention on the project was considerable, and we collected the main and most interesting articles on our site.

On 17 August we officially launched the sequencer and a new version of the drum machine, with new supporting promotional material. A press kit was sent to some relevant online publications, and we tried to be featured again by the sites that had already covered us during the first launch. Probably because of the reduced novelty, the enthusiasm was lower than at the first launch, but over the longer period, up to 15 December 2009, 1252 users registered, of whom 389 tried the interface and 273 played for at least one minute, resulting in 199 hours of recorded use.


Log Collection and Analysis

The data analyzed in this research were collected from the users who registered between 17 August and 15 December. We could not use the earlier data because of an error in the data recording that made them unusable. The quantitative log analysis began with a demographic survey based on the data voluntarily entered at registration. The data concern the 273 users considered in all the log analyses, except for 7 people who entered nonsensical values. 27% of the users are under 20 years old, 73% under 30 and 94% under 40. Very few were female, only 2% of the users. Few users reported previous knowledge of tangible interfaces, and even fewer said they had used one.

The main statistics we collected on the use of the interfaces are gathered in Table 4.1, which contains the average session length, the number of sessions per user and the number of blocks per user. From the boxplots on page 32 one can see that, although the drum machine was used more than the sequencer in the period analyzed, the sequencer had longer sessions in most cases. In general the use of d-touch was infrequent: 21% of the users used it over a period longer than 2 days and 11% over a period longer than a week, and the histogram in Fig. 4.3 shows that in most cases usage was compressed into less than two hours. These data will be useful to better interpret the users' comments presented later, and will be taken up again in the final discussion to draw conclusions about the experiment.

Another kind of analysis we pursued was based on videos reconstructed from the logs. We built a log player that displays on screen the blocks recognized by the application, and paired this video with the audio produced by the applications when fed the recorded log as input. In this way it was possible to review all the actions performed by the users and to listen again to their results. Only a subset of the videos was considered, selected after a statistical analysis of the sessions, so as to view only the most interesting ones rather than all of them, which would have taken too long. Since we did not watch every session, we will give only general indications of common behaviors rather than precise percentages; we nonetheless consider them useful as guidelines for interpreting the data and as suggestions for future work on tangible interfaces.

Among the people who managed to use d-touch, we noticed that everyone either produced a simple rhythm with a few blocks or spent some time playing with the instruments, with a couple of blocks, trying to understand how they worked. We will refer to this second behavior as interface exploration. Almost everyone started using the instruments with an exploratory phase, moving blocks in front of the camera or trying to calibrate the interactive area by showing the four corner markers. Only a small portion of the users started producing rhythms right away, probably those with a background in music or in tangible interfaces.

Exploration was sometimes carried out using the sheet with the markers still uncut, as if users had printed the PDF published on the site without any further work. This behavior leads to a high number of recognized markers (about 30) for a short time, resulting in indistinct noise. It was spotted more than once, as it is the fastest way to check that the application works by producing some sound. We also observed that after this test the interface often remains empty for a while and then slowly fills up, little by little, as if the user were cutting out the markers and placing them under the webcam one at a time. These exploratory patterns were seen only in each user's first session; later sessions often started already calibrated or with markers already present, as if the setup had been left intact since the last use.

Apart from these exploratory patterns and basic rhythm generation, few sessions contain a significant number of blocks enabling the creation of more complex sequences, as can be seen in Table 4.1. Even so, these few long sessions demonstrate the possibility of creating complex and musically interesting sequences.

Another interesting phenomenon was observed: markers were often recognized intermittently, betraying difficult lighting conditions. This clearly worsens the user experience, because the physical arrangement of the blocks no longer matches the arrangement seen by the application, leading to wrong musical output. Although the algorithm had been tested at length over five years, and despite the presence of a simple tracking algorithm, this phenomenon occurred more often than expected. We therefore believe that the lighting conditions must have been truly poor to cause such difficult recognition, and that we probably should have given more guidance on how to light the area, along with more cues in the application's graphical interface.

In general we observed interesting rhythmic sessions when calibration and good lighting conditions were present. In the other cases users were only able to explore the interface quickly and then abandon it after a short time without achieving anything interesting.


Collection and Analysis of User-Generated Content

The main objective of the research was to observe users of tangible interfaces in their own environments. Through the interaction logs we collected information about their usage, but probably the most interesting information comes from their informal comments and from the photos and videos they shared with us. A lot of media interest arose around d-touch, partly thanks to the promotion we did alongside the launch of the applications. We observed and collected everything we found on the Net from the preview period, i.e. from 28 June 2009, until 9 September 2009, plus some videos on later dates. In this time we collected more than 120 emails, more than 330 posts on the forum, more than 50 blog posts, hundreds of comments on social sites and more than 220 posts on Twitter. Through Twitter and email we asked our users to share photos and videos of their installations, to better understand how the system was used in uncontrolled environments. Six videos were published on YouTube, one photo on Flickr and one photo was sent to us by email. We also found two articles describing the system in depth after a hands-on trial, with descriptive photos. The content we found comes from places all over the world, in particular from the United States, Germany, Great Britain, Chile, Brazil and Japan. The most interesting items can be found on our site. All the collected data were analyzed with an approach inspired by grounded theory [31, 15], especially the text-based content, while videos and photos were analyzed to better understand in what conditions d-touch is used and whether recurring behaviors emerge. In two cases we found interesting episodes of user appropriation.


Text Analysis

To analyze the textual content we split it at the sentence level and categorized the sentences with a method called open coding. The open coding phase consists of assigning each sentence to an appropriate group, creating a new group if the sentence does not fit any previously created one. In this way a large amount of data can be meaningfully reduced to a smaller number of categories that adequately describe its complexity. The next step is axial coding, which consists of grouping the previously created categories into smaller sets that can be described. Using this approach, strongly grounded in the data, we collected 591 sentences grouped into 50 categories (open codes), while with axial coding we created 9 broader categories: physicality of the interface, audio, technical, improve and extend, hands-on review, applications and real use, first impressions, sharing of personal data, viral sharing. To better present these categories we created two groups, one relating to actual use of the instruments and one referring to generic comments, often consisting of first impressions users had of the system, even without having tried it. A summary of the categories can be found in Table 5.1, while the most interesting content can be found in Section 5.2.1, with further examples in Appendices A and B.


Photo and Video Analysis

The photos and videos we received were probably the most informative part of our study, as they made it possible to see the installations in their environment and how users actually used d-touch. To analyze the material we methodically asked ourselves a set of questions: What was used for the blocks? What kind of stand for the webcam? Near the computer? What kind of camera? Are they on a table? What audio setup? What operating system? A summary of the analyses and further material can be found in Appendix C.

On the fabrication side, we observed that some people preferred to keep things as simple as possible: instead of folding the paper blocks as suggested, they used the markers by merely cutting up the sheets, at the expense of usability. Sometimes uncut paper strips full of markers were observed, used to build repetitive rhythms or to try out the interface quickly, as we had already seen from the logs. Other people instead preferred to build wooden blocks onto which they glued the markers, obtaining a nice setup that is comfortable to use. Some used professional tripods for the webcam; others used furniture and shelves, propping books under the playing surface to reach the optimal framing.

We always observed d-touch on tables; it was never set up on sofas, floors or other improvised surfaces. The tables were often cluttered, with little available space, leading to uncomfortable and unnatural movements. Sometimes the base sheet was not placed in front of the user, because the webcam was in a lateral or skewed position. Obviously, in these non-optimal cases the musical results were not very interesting, nor much sought after by the users, who focused more on the technical or playful side of the application. The usage scenarios were very diverse: small rooms, bedrooms, offices and music studios. Hardware and operating systems varied: laptops, desktops, Windows, OS X. Audio setups ranged from laptop speakers to professional hi-fi systems.

Two videos were of particular interest, since they showed radical cases of users appropriating the technology. The first example was at a university, where a user built d-touch 8 times larger than normal and attached it vertically to a magnetic whiteboard with office magnets, doing the same with the markers by sticking them onto small magnets. In this way d-touch was used vertically and collaboratively by several people, as can be seen in the video. The other interesting video was, if possible, even more revolutionary: the user managed to use d-touch without paper! He pointed the webcam at the screen where, using a graphics program, he displayed the interactive area with the markers and moved them with the mouse, completely subverting the idea of a tangible interface.



After analyzing the different data separately, we now proceed to an overall analysis to understand the directions for future work. From the overall analysis of the data we can infer that the simple, low-cost, printable tangible interface format works. A high number of people managed to make it work without our physical intervention. d-touch was used in many different environments and showed a high degree of versatility. The concept of a webcam and an interactive area filmed from above with movable blocks works and is well understood. A negative point was the recognition algorithm, which did not prove robust enough: despite years of testing, being used in uncontrolled environments is something that cannot be tested in advance. Calibration of the interactive area also gave poor results, and because of these two problems the system often produced sounds that did not match the physical configuration. When these problems appeared, we realized that the system gave few cues as to whether the lighting was correct or the system calibrated, because for us these were obvious facts we had not worried about.

The data in Table 4.1 support the idea that the instruments were perceived more as toys than as real musical instruments. This can be seen from the fact that the simpler drum machine was preferred to the sequencer (65% vs 35%) and that the time elapsed between first and last use rarely exceeds two days, with only 21% of the users using the interfaces for longer. Moreover, the fact that more blocks were used on the sequencer suggests that the more complex instrument is more interesting for musical use, even if less interesting from a media point of view.

From the material found on the Internet or received by email, we noticed several cases of user appropriation. Two people wanted to use d-touch for teaching: music to children and design at university. Some changed the d-touch setup, enlarging it for a festival or mounting it vertically to teach and make the work collaborative. Some improved the blocks by making them out of wood or by proposing better solutions exploiting the weight of objects. One user even managed to create a virtual tangible interface by using the webcam to film the screen on which he displayed the markers with a graphics program (similar to Adobe Illustrator). From these cases we can understand that users grasped the concept of a tangible interface well and adapted it to their interests in the most diverse scenarios.

As a final reflection, we can claim that the online distribution and remote study of a tangible interface was a success, even though it was the first experiment of this kind. Although the applications were perceived as toys more than as musical instruments, we received enormous media attention, and the users involved in the experiment were of great help in improving the applications. Many comments were collected, but very few about the interface, and those few were very positive. User appropriation was an encouraging signal that the interface was deeply perceived and understood. One can say that the interface was "obvious" to use even for those who had never tried anything like it before.


Future Work

The top priority at the moment, for upcoming development, is to add visual cues in the GUI to help users calibrate the system better and achieve better lighting. We believe this can be of great help in providing a better experience, making the physical arrangement of the blocks match the sound being played. Two improvements in this direction could be a clear signal indicating that the application is currently not calibrated, and a simple explanation of how to calibrate it. On the recognition side, the thresholding algorithm, which currently works poorly under strong direct light, could be simplified and replaced with a simpler threshold that users can calibrate according to the light they have available. We believe these two improvements can considerably increase the possibilities of Audio d-touch in the near future.

We are also interested in creating the conditions to make this kind of instrument more interesting in the long run. One possible idea would be to let users share their compositions on the website and let others download, print and replay them, as if they were musical scores. Still on the Web side, there is the possibility of porting the application to Adobe Flash thanks to Alchemy, which would further widen the possible audience, also removing the need to install the software: a website used from one's browser would suffice. We are also interested in adding video feedback directly on the interface to increase the instruments' possibilities. To keep costs low, pico-projectors placed next to the webcam could prove useful or, if the constant backlight problem is solved, an LCD monitor could be used as the interactive area in place of the A4 sheet.



The work of this thesis consists of the public release of a tangible interface for music production and of a study of its use by a large number of users in their natural environments. The work began with the creation of a system for recording users' movements; we then published the software online and, after a few months of observation and data collection, we moved on to the analysis, also taking into account all the information shared by the users that we found on the Internet.


The contribution of this thesis to the study of human-computer interaction lies in the new observation method used, which, although still to be refined and improved, gave good results considering the extremely low cost it entailed compared to traditional methods of studying tangible interfaces. The method could probably be improved considerably if combined with more traditional user observation, which was not possible for us given the small size of our research group. Using both approaches would have eliminated part of the speculation on the data that led to uncertain results, for example when the statistics were contradictory. A few sample interviews would have given us a better understanding of recurring behaviors, which would have greatly simplified the analysis and led to more reliable results. In any case, apart from the technical problems and the lack of the complexity required of musical instruments, we received an excellent response from the Audio d-touch audience and from the media. A very small share of the comments addressed the interface; all of them were positive and emphasized the aspect of "tangible sound" or the new creative possibilities offered by the instrument. We believe this kind of observation shows that the time is ripe for tangible interfaces to leave laboratories and controlled environments, also in domains other than music, in a cheap and more democratic way. It is time to bring tangible interfaces to the masses!