
Interacting with Computers 23 (2011) 40–56

Contents lists available at ScienceDirect

Interacting with Computers


journal homepage: www.elsevier.com/locate/intcom

Towards the ubiquitous visualization: Adaptive user-interfaces based on the Semantic Web
Ramón Hervás, José Bravo
Castilla-La Mancha University, Paseo de la Universidad, 13071 Ciudad Real, Spain

Article info

Article history: Received 3 February 2010; Received in revised form 22 August 2010; Accepted 24 August 2010; Available online 15 September 2010.

Keywords: Ambient Intelligence; Information visualization; Information retrieval; Context-awareness; Ontology; Intelligent user interfaces.

Abstract

This manuscript presents an infrastructure that contributes to ubiquitous information. Advances in Ambient Intelligence may help to provide us with the right information at the right time, in an appropriate manner and through the most suitable device for each situation. It is therefore crucial for such devices to have contextual information; that is, to know the person or persons in need of information, the environment, and the available devices and services. All of this information, in appropriate models, can provide a simplified view of the real world and let the system act more like a human and, consequently, more intelligently. A suitable context model is not enough; proactive user interface adaptation is necessary to offer personalized information to the user. In this paper, we present mechanisms for the management of contextual information, reasoning techniques and adaptable user interfaces to support visualization services, providing functionality to make decisions about what and how available information can be offered. Additionally, we present the ViMos framework, an infrastructure to generate context-powered information visualization services dynamically.

© 2010 Elsevier B.V. All rights reserved.

1. Introduction

The real world is wide and complicated, and the human brain requires complex cognitive processes to understand it. In fact, we are used to creating models that describe the environment while hiding its complexity to some degree. Computer systems also require models that describe the real world and abstract away these difficulties in order to understand it (at least in part), thus acting more like humans. Consequently, numerous applications can be developed to facilitate people's daily lives. A large amount of information from humans' everyday lives can be recognized: newspapers, sales, mail, office reports, and so on. All this information can be managed by an intelligent environment offering the contents needed, when needed, no matter where we are. The services mentioned above require a high-quality method of visualizing information. Our objective is to offer the desired information at the right time and in a proper way. Advances in the Semantic Web combined with context-awareness systems and visualization techniques can help us accomplish our main goal. Applications capable of managing a model of context, represented by an ontology describing parts of the surrounding world, assist us by offering information from heterogeneous data sources in an integrated way. This will reduce the interaction effort (it is possible
Corresponding author. Tel.: +34 926295300x6332; fax: +34 926295354.
E-mail addresses: ramon.hlucas@uclm.es (R. Hervás), jose.bravo@uclm.es (J. Bravo).
0953-5438/$ - see front matter © 2010 Elsevier B.V. All rights reserved. doi:10.1016/j.intcom.2010.08.002

to deduce part of the information needed to analyze the user's situation) and generate information views according to the user and the display's characteristics. The generation of user interfaces based on the user's situation requires advanced techniques to adapt content at run-time. It is necessary to automate the visualization pipeline process, transforming the selected raw data into visual contents and adapting them to the final user interface. This paper is structured as follows: Section 2 is dedicated to the modeling of context-aware information applying advances in Semantic Web languages. Section 3 introduces information visualization services in pervasive environments. Section 4 presents our infrastructure to generate ontology-powered user interfaces dynamically, retrieving information based on the user's situation and adapting its visual form to the display. A case study is described in Section 5, analyzing infrastructure functionality for this particular case. In Section 6 we evaluate the infrastructure. Sections 7 and 8 include related work that uses context for generating and adapting user interfaces, followed by contributions and discussion. Finally, Section 9 concludes the paper.

2. Context-awareness through the Semantic Web

Only by understanding the world around us can applications be developed that are capable of making daily activities easier. Users' actions can be anticipated by looking at the situations they are in (Schilit et al., 1994). Context is, by nature, broad, complex and ambiguous. We need models to represent reality or, more


precisely, to characterize the context as a source of information. These models define the context factors relevant to the user, his or her environment and situation. At the same time, it is possible to share this real-world perception between different applications and systems (Henricksen et al., 2002). Recently, Semantic Web languages have been used for context modeling, for example the CONON model (Gu et al., 2004), which implements mechanisms for representing, manipulating and accessing contextual information; the SoaM architecture (Vázquez Gómez et al., 2006), a Web-based environment reactivity model that uses orchestration to coordinate existing smart objects in pervasive scenarios in order to achieve automatic adaptation to user preferences; and the CoBrA architecture (Chen et al., 2003), a multi-agent infrastructure for sharing contextual information. In general, there are benefits associated with the rich expressiveness of modeling languages such as OWL and RDF, their semantic axioms and their standardization. Despite the well-known benefits of these languages, they were not originally designed to model intelligent environments. For this reason, there are some difficulties in modeling contextual information: distinguishing between different information sources, allowing for inconsistencies, temporal aspects of the information, information quality, and privacy and security policies. By adapting Semantic Web technologies to context-aware environments, we can implement solutions to these problems. We presented context management strategies based on the Semantic Web in previous publications (Hervás et al., in press). Thus, the following sections of this paper focus on the design decisions, the constraints and capabilities of the user interface generator to realize the design, and prototypes for a particular service: information visualization.
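To make these difficulties concrete, the sketch below (an illustrative assumption, not the authors' OWL/RDF implementation; the class and field names are hypothetical) shows a contextual statement annotated with the metadata that plain Semantic Web languages lack: its source, a timestamp giving it temporal validity, and a quality score.

```python
from dataclasses import dataclass, field
import time

# Illustrative only: a context statement carrying the annotations the text
# identifies as missing from plain OWL/RDF -- source, temporal validity
# and information quality.
@dataclass
class ContextStatement:
    subject: str
    predicate: str
    obj: str
    source: str                  # which sensor or system asserted the fact
    timestamp: float = field(default_factory=time.time)
    confidence: float = 1.0      # information quality (0..1)
    ttl: float = 60.0            # seconds before the fact expires

    def is_valid(self, now=None):
        # temporal aspect: a fact is only trusted within its time-to-live
        now = time.time() if now is None else now
        return now - self.timestamp <= self.ttl

stmt = ContextStatement("user:ramon", "locatedIn", "room:lab13",
                        source="rfid-reader-1", confidence=0.9)
assert stmt.is_valid()
assert not stmt.is_valid(now=stmt.timestamp + 120)  # expired after TTL
```

Keeping the asserting source alongside each fact also lets an application resolve inconsistencies between sensors, one of the difficulties listed above.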
3. Pervasive information visualization

In our daily life, we manage and analyze a great variety of personal information such as calendars, news, emails and digital documents. Many actions and decisions are based on information we obtain from various and heterogeneous sources. In fact, information is a ubiquitous part of our everyday tasks. As a result, advances in the visualization of information may greatly support the development and acceptance of the Ambient Intelligence paradigm (ISTAG, 2001). Visualization in smart environments has been studied from different perspectives. For example, public displays have received considerable interest in recent years. Demand is increasing for ubiquitous and continuous access to information and for interactive and embedded devices. Typically, public displays can enhance user collaboration and coordinate multi-user and multi-location activities. Displayed information may offer an overview of a workflow, revealing its status and enabling communication between users and the management of contingencies or unexpected situations (José et al., 2006). Toward this end, we can find significant contributions in the literature. Gross et al. (2006) introduced Media Spaces, a collaborative space with displays that connect two physical spaces through video and audio channels. Other authors have presented proposals based on wall displays (Baudisch, 2006; Vogl, 2002), including interesting advances in interaction techniques for public displays. Most of these proposals include adaptive mechanisms based on contextual parameters. For example, Muñoz et al. (2003) developed a display-based coordination system for hospitals that adapts its behavior based on user tasks, their status and environmental contingencies. Another study (Mitchell and Race, 2006) adapts the displayed information depending on the space characteristics (distinguishing between transient spaces, social spaces, public or open spaces, and informative spaces).

The transition from collaborative desktop computers to public displays raises a wide range of research questions in the areas of user interfaces and information visualization. Using applications designed for desktop computers on public displays may be problematic. One important difference is the spontaneous and sporadic nature of public displays, but the main question is how to adapt to a variety of situations, multiple users and a wide range of required services. Focusing on the use of public displays, we can identify several differences to keep in mind:

• Wide size ranges and capabilities: element visualization depends on the absolute position in the interface. The visual perception of an element differs between the middle and the corner of the display. Moreover, visual capabilities (such as size, resolution, brightness, and contrast) affect the final interface view.

• Interaction paradigms: the classic Windows-Icons-Menus-Pointers (WIMP) paradigm requires reconsideration if visualization is to operate coherently, because this paradigm is fundamentally oriented toward processing a single stream of user input with respect to actions undertaken on relatively small, personal screens. Innovative interaction techniques and paradigms are thus necessary, such as implicit interaction (Schmidt, 2000), touching approaches (Bravo et al., 2006), and gesture recognition (Wang, 2009). It is important to analyze the particular characteristics of the available interaction paradigms when developing user interfaces. In particular, interaction flow is an essential issue to take into account. One study presents a classification of interaction according to the interaction flow (Vincent and Francis, 2006); the authors distinguish between three types. One-way interaction includes applications that only need to be able to receive content from users. Two-way interaction requires that data can be sent from users to the display landscape and vice versa. Finally, a high degree of interaction occurs when applications require permanent interaction between the display landscape and the users in both directions. Another classification, which focuses on the tasks that users wish to perform during information visualization, organizes interaction more abstractly, for example: prepare, plan, explore, present, overlay, and re-orient (Hibino, 1999).

• Multi-user: multiple parallel inputs and, consequently, multiple parallel outputs may be allowed on public displays. Furthermore, the users' social organization is an important information source, distinguishing between single users, user groups and multi-group settings.

• Privacy vs. content: these concepts are sometimes contradictory on public displays. Whenever applications are designed to offer information to users through public displays, the visualization of personalized contents may endanger the privacy of users. However, inflexible levels of privacy assurance typically make it difficult to offer a broad set of relevant contents. Nowadays, most public visualization services lack context-awareness, making a valid compromise between privacy and personalized contents impossible.

On the other hand, it is important to consider the dynamic and continuous evolution of pervasive environments. The kinds of users, the available displays and the visualization requirements are only some of the characteristics that may change over time. Consequently, it is necessary to reconsider the design process and to provide proactive mechanisms to generate user interfaces dynamically. By representing context information, the environment will be able to react to situation changes and determine the services to be displayed. In this section, we have identified and described several characteristics that context-sensitive user interfaces must analyze in selecting which contents should be offered and in which visual


form. However, these characteristics are not always directly observable, whereas user interfaces focus on observable behaviors. Advances in context-awareness improve the connection between human behavior and its computational representation, for example, by abstracting or compounding unobservable characteristics from observable ones. The next section introduces our ontological context model, intended to enhance implicit communication between the users' immediate environment and the generated user interfaces.

4. Ontology-powered information visualization: our proposal

The main challenges in generating context-driven visualization services at run-time are: (a) to determine relevant changes in the context elements and the correlation between these changes and the reconfiguration of the displayed user interfaces; (b) to make heterogeneous visualization services interoperable so that they work together uniformly, since several services should share information and complement one another; and (c) to integrate the set of services in order to display them in a homogeneous view. By modeling the world around the applications and users, in a simplified way, through Semantic Web languages (such as OWL, RDF and SWRL), we can address these problems.
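Challenge (a) can be illustrated with a minimal sketch: relevant context changes are detected and mapped, through independently handled condition-action rules in the spirit of SWRL-style behavior rules, to user-interface reconfigurations. The names below (ContextBroker, add_rule, update) are hypothetical illustrations, not the authors' API.

```python
# Hypothetical sketch of challenge (a): only a *change* in a context fact
# fires the rules, and each rule independently decides whether the change
# is relevant enough to trigger a reconfiguration of the displayed view.
class ContextBroker:
    def __init__(self):
        self.facts = {}      # e.g. {"user.location": "lab13"}
        self.rules = []      # (predicate, action) pairs, handled independently

    def add_rule(self, predicate, action):
        self.rules.append((predicate, action))

    def update(self, key, value):
        old = self.facts.get(key)
        self.facts[key] = value
        if old != value:                       # only relevant changes fire
            for predicate, action in self.rules:
                if predicate(key, value, self.facts):
                    action(self.facts)

reconfigured = []
broker = ContextBroker()
broker.add_rule(lambda k, v, facts: k == "user.location",
                lambda facts: reconfigured.append(facts["user.location"]))

broker.update("user.location", "lab13")
broker.update("user.location", "lab13")   # no change -> no reconfiguration
broker.update("user.location", "hall")
assert reconfigured == ["lab13", "hall"]
```

Keeping each rule as an isolated (condition, action) pair mirrors the idea, developed later in the paper, that each gathered rule is handled independently.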

4.1. User context and visualization model

We have defined the context model from two perspectives: information visualization and Ambient Intelligence issues. The first perspective pertains to perceptive and cognitive issues, graphic attributes, and data properties. The second one recognizes environmental characteristics to improve information visualization. The environmental issues are described through three OWL ontologies: (a) the User Ontology, describing the user profile, the user's situation (including location, activities, roles, and goals, among others) and their social relationships; (b) the Device Ontology, a formal description of the relevant devices and their characteristics, associations and dependencies; and (c) the Physical Environment Ontology, defining the space distribution. The principal elements of these ontologies are shown in Fig. 1. These models represent the main elements of the context, the three elements described above and the service model. This formal description is intended to be generic enough to support a variety of services that intelligent environments provide to users. This study focuses on personalized information visualization, which is a particular, required service for the development of many activities, whether they are intended for work or leisure, social or personal use, or for daily or infrequent deployment.

Fig. 1. Principal concepts and properties of the context model.

As such, we also propose an ontological definition of information visualization concepts and properties. Information visualization is a multi-disciplinary area, so it is hard to construct an ontological representation for it. For this reason, we have identified the most important concepts, classifying them according to the criteria for constructing a taxonomy (Hervás et al., 2008) to guide the process of building the corresponding ontology, called PIVOn (Pervasive Information Visualization Ontology, shown in Fig. 2). We have organized the ontology elements as follows:

• The relationship between information visualization issues and the relevant elements of the context: the process of visualizing information should not be limited to the visual data representation, but should rather be understood as a service offered to one or more users with specific characteristics and capabilities, all immersed in an environment, and presented through devices of different features and functionalities.

• Metaphors and patterns: the way in which information is presented should facilitate rapid comprehension and synthesis, making use of design principles based on human perception and cognition. One way to achieve these principles is through patterns.

• Visualization pipeline: the model represents the main elements involved in the visual mapping. Data sets are transformed into one or more visual representations, which are chosen to be displayed to the user, along with associated methods or interaction techniques.

• Methods and interaction paradigms: it is possible to interact with the visualization service through many different paradigms and techniques. The model has to represent these two features to provide the mechanisms needed to offer consistent information according to the devices that interact with the environment. Displays and other devices can be involved in the interaction processes through pointers, infrared sensors, Radio Frequency Identification (RFID) or Near Field Communication (NFC) devices, and so on.

• Structure and characteristics of the view: information is not usually displayed in isolation. On the one hand, visualization devices have graphical capabilities for displaying various types of contents at once. Moreover, providing a set of related contents makes knowledge transmission easier and provides more information than the separate addition of all the considered contents.

• Related social aspects: the visualization can be optimized depending on the social groups of its users. At this point, it is possible to observe the relationship between this model and the user model. The latter represents the relationships in the group, specifying the objectives and tasks, individual or grouped. Moreover, the user model reflects the fact that individual users or groups can be located at the same place or in different places.

• Data characteristics: again, regarding the process of transforming data sets into their visual representation, studying the data characteristics can improve the process: data source, type, data structure, expiration, truth and importance. Data sources of information present challenges mainly because of

Fig. 2. Information visualization ontology: a simplified representation.


their diversity, volume, dynamic nature and ambiguity. By understanding the nature of the data, we can provide mechanisms that help the visualization process. Regarding the data source, we consider several data types: text, databases, images, video and contextual data (typically obtained from sensors, but possibly inferred).

• Scalability: another core concept is scalability. Usually, filter methods are necessary to scale the data, reducing the amount, defining the latency policy or adapting the complexity. These concepts are input variables; we can also analyze scalability as an output variable that determines the capability of visualization representations and visualization tools to effectively display massive data sets, called visual scalability (Eick and Karr, 2002). Some factors are directly related to visual scalability: display characteristics such as resolution, communication capabilities, and size; visual metaphors; cognitive capabilities; and interaction techniques (Thomas and Cook, 2005). It is plausible that the amount of information required could be reduced by increasing the number of views and, therefore, the amount of interaction. There are various techniques for information scalability. The model describes some of them: zooming, paging, filtering, latency and scalability of complexity.

PIVOn conceptualizes a considerable number of concepts and relationships. However, it may not be complete for use in a particular environment. This is why special attention has been paid to avoiding the inclusion of elements that are inconsistent in certain domains. In addition, we offer the mechanisms needed to extend the model in order to satisfy new requirements and integrate it with other ontological models. In general, the context ontologies are adequately generic and have sufficient detail to represent concepts involved in many typical scenarios related to Ambient Intelligence, particularly those that take a user-centered perspective.
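As a toy illustration of the information scalability techniques the model describes, the hypothetical policy below (an assumption for illustration, not part of PIVOn or ViMos) picks a technique according to how far the data set exceeds the display's capacity:

```python
# Illustrative sketch: choose among the scalability techniques named in the
# model (zooming, paging, filtering) based on the overflow ratio between
# the number of items and what the display can show at once. The ratio
# thresholds are arbitrary assumptions.
def choose_scalability(n_items, display_capacity):
    if n_items <= display_capacity:
        return "none"            # everything fits as-is
    ratio = n_items / display_capacity
    if ratio <= 2:
        return "zooming"         # slight overflow: shrink elements
    if ratio <= 10:
        return "paging"          # moderate overflow: split into views
    return "filtering"           # massive overflow: drop low-relevance items

assert choose_scalability(8, 10) == "none"
assert choose_scalability(15, 10) == "zooming"
assert choose_scalability(60, 10) == "paging"
assert choose_scalability(500, 10) == "filtering"
```

Note how the choice trades display space against interaction, echoing the observation above that more views mean more interaction.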
However, this model's generality requires a specialization process that includes the domain-specific concepts of each concrete application. Thus, our context model includes general concepts and relationships and, as such, serves as a guide for taking relevant aspects of context into account in order to obtain a specific context model depending on application needs. For example, the user profile (as an expansion of the FOAF ontology) includes the concept of cognitive characteristic; depending on the context-sensitive application to be developed, this concept should be specialized, for example, by relating it to an OWL or RDF vocabulary that includes cognitive characteristics. Authors such as Abascal et al. (2008) and Golemati et al. (2006) have studied user cognitive characteristics that affect interaction with Ambient Intelligence services. These kinds of taxonomies can be easily integrated into our context model to enable a definition of adaptive behavior based on them. The same specialization process has been necessary to develop the prototypes described in Section 5. The interaction-related concepts have been expanded to describe characteristics of the different interaction techniques (in this case, touch screens and mediated interaction through Near Field Communication). In addition, the pattern concept in the visualization ontology takes values of specific patterns developed in our prototypes. As described in previous work (Hervás, 2009), the COIVA architecture manages contextual information. This architecture, in addition to providing a specialization mechanism, supports the dynamic maintenance of context information. COIVA includes a reasoning engine that hastens the start-up process, enabling the automatic generation of ontological individuals. Moreover, context-aware architectures tend to generate excessive contextual information at run-time. The reasoning engine can support the definition of update and deletion policies, thereby keeping the context model accurate and manageable. This reasoning engine is based on description logics and behavior rules in the Semantic Web Rule

Language (SWRL; Horrocks et al., 2004) in order to endow the architecture with inference capabilities. To support the highly dynamic nature of Ambient Intelligence, COIVA enables adaptive behavior at two stages: at design-time and at run-time. In anticipation of this requirement, we refactored the rule-monitoring mechanism into dynamic context-event handlers, which react to context changes. The active rules are obtained from plain text files, and each gathered rule is handled independently. Moreover, COIVA includes an abstraction engine that fits raw context data into the context model, which is needed to abstract and compound context information and to reduce redundancy and ambiguity. An important limitation is that COIVA does not directly manage the sources of raw data, i.e., the sensors or data collections to be transformed into ontological individuals in the context model. COIVA has been designed under the premise of generality, and thus any transformation of the model is highly dependent on the environment in which the services are deployed and on the application domain itself. It is therefore necessary that data collections are annotated with meta-data (or that another technique is used to associate semantics with data) and that the user context is acquired. Section 5 describes how context and data information is generated and acquired in particular prototypes, as well as how this information is transformed into ontological individuals and then used in the user interface generation and adaptation processes.

4.2. Visualization mosaics

Our framework, called visualization mosaics (ViMos), generates user interfaces dynamically. ViMos is an information visualization service that applies context-awareness to provide adapted information to the user through devices embedded in the environment. The displayed views are called mosaics because they are formed by independent but related pieces of information creating a two-dimensional user interface.
These pieces of information are developed as user interface widgets whose principal objective is to present diverse contents. In this sense, they have several associated scalability techniques to adapt themselves according to the contents to display and the area available in the user interface for the given piece of information. ViMos includes a library of these pieces in order to display multiple kinds of data (e.g., plain text, images, multimedia, and formatted documents) using different visualization techniques (e.g., lists of elements and 2D trees) and providing adaptive techniques to fit the visual form (e.g., zoom, pagination, and scrolling). Initially, we simply have several sets of information. By analyzing users' situations (described in the context model), the best sets are selected. Each item of content has several associated characteristics (such as optimum size or elasticity). These characteristics can be described in the visualization information ontology and make the final generation process of the mosaic possible, adapting the user interface according to the situation. We can thus make interfaces more dynamic and adaptive and improve the quality of content. The mosaic generation process is based on Garrett's proposals (Garrett, 2002) for developing Web sites and hypermedia applications by identifying the elements of user experience. Garrett's proposals focus on design-time development, while ViMos generates user interfaces at run-time. However, the process has similar steps in both cases: analysis of the user situation and the visualization objectives, content requirements, interaction design, information design and, finally, visual design. The COIVA architecture provides the information needed to generate the user interface dynamically. The principal characteristics of the ViMos framework can be summarized as follows:

• ViMos is a framework that can analyze contextual information about users and their environments in order to improve the


Fig. 3. Generation process of ontology-powered visualization mosaics.


quantity as well as the quality of the offered information, at the right time and using the most suitable device.

• The information and its visual form are auto-described by an ontological model that represents relevant attributes based on knowledge representation and information visualization. The formal model enables interoperability between heterogeneous services and the combination of diverse application domains.

• ViMos includes mechanisms to dynamically adapt and personalize the interface views whenever users need them. Toward this end, high-level control libraries have been developed, letting the user interface be proactive.

• Integration of well-known design patterns in order to improve the final views offered to the user. Pattern selection is driven dynamically by the analysis of the contextual information.

• Abstract interaction layers to support the diversity of techniques, methods and paradigms applied in Ambient Intelligence.

• ViMos includes mechanisms to consider important social factors in intelligent environments, shifting the traditional individual interaction toward group communication assisted by visualization devices in the environment.
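The self-adapting information pieces described above can be sketched as follows; the attribute names (optimum, min_size) and the fallback order are illustrative assumptions, not taken from the ViMos implementation:

```python
# Hedged sketch of an "information piece": a widget with an optimum size
# and an elastic range that adapts itself to the area the mosaic assigns
# it, falling back to scrolling when it cannot shrink enough.
class InformationPiece:
    def __init__(self, content, optimum, min_size):
        self.content = content
        self.optimum = optimum    # preferred area (arbitrary units)
        self.min_size = min_size  # smallest acceptable area

    def fit(self, available):
        if available >= self.optimum:
            return ("full", self.optimum)
        if available >= self.min_size:
            return ("shrunk", available)       # elastic resize
        return ("scrolling", self.min_size)    # partial view plus scrolling

piece = InformationPiece("agenda", optimum=40, min_size=20)
assert piece.fit(50) == ("full", 40)
assert piece.fit(30) == ("shrunk", 30)
assert piece.fit(10) == ("scrolling", 20)
```

A mosaic can then be assembled by letting each selected piece negotiate its area with the layout, which is the role the framework assigns to its library of pieces.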

The organization of the visualization mosaics has been designed following these principles, based on the proposals of Norman (1993), Tversky et al. (2002) and Thomas and Cook (2005):

• Appropriateness principle: the visual representation should provide neither more nor less information than what is needed for the task at hand. Additional information may be distracting and makes the task more difficult. The contextual situation of the users determines the contents to show in a mosaic view. Each item of content has an ontological definition of the information it includes, based on the PIVOn model. Matching this definition against the current user's context model yields a quantitative measure of the relevance of each item of content.

• Naturalness principle: experiential cognition is most effective when the properties of the visual representation most closely match the information being represented. This principle supports the idea that new visual metaphors are only useful for representing information when they match the user's cognitive model of the information. Purely artificial visual metaphors can actually hinder understanding. ViMos's view generation is pattern-driven. ViMos relates several design patterns to each view role, an important ontological concept obtained through several situation attributes.

• Matching principle: representations of information are most effective when they match the task to be performed by the user. Effective visual representations should correspond to tasks in ways that suggest the appropriate action. The selected contents in a ViMos view and their visual design depend on the user-task ontological concept. Combining the previously described mechanisms to achieve the appropriateness and naturalness principles, ViMos matches the task performed by the user to the displayed view.

• Congruence principle: the structure and content of the external representation should correspond to the desired structure and content of the internal representation. The pieces of information that include each kind of content are organized in a taxonomic structure that preserves their independence and models the semantic relationships among items of content. This organization is a metaphor for the cognitive representation of the information, easing information assimilation and awareness.

• Apprehension principle: the structure and content of the external representation should be readily and accurately perceived and comprehended. The proactive behavior enabled by the context-aware architecture supports suitable changes in the contents and their organization based on the user and surrounding events.

The ViMos architecture comprises several functional modules, implemented in Microsoft .NET. The business logic has been developed using the C# language and the user interface layer with Windows Presentation Foundation.1 The generation of user interface views can be described through a stepwise process (Fig. 3) using the ViMos modules and the COIVA functionalities.

• Acquisition of the context: at the start of the service, ViMos obtains the sub-model required for the concrete visualization service from the COIVA architecture.
The sub-model includes the OWL classes and properties that contain valid individuals; the sub-model is refreshed at run-time by deleting elements whose individuals have disappeared and by including elements for which new individuals have appeared after data acquisition. In this way, the run-time management of the ontologies is optimized. After extracting the sub-model, the context broker maintains and updates it based on the visualization requirements and on situational changes that occur when an individual changes its value. Moreover, the context broker keeps temporal
1 Windows Presentation Foundation. http://msdn.microsoft.com/es-es/library/ms754130.aspx.


R. Hervás, J. Bravo / Interacting with Computers 23 (2011) 40–56

references about the requests made by the visualization service. In this way, whenever a service makes a request for context information, the context broker offers an incremental response; that is, it provides only the newly acquired, modified or inferred individuals relevant to the request.

• Selection of candidate data: The significant items of content to be offered to the user are selected based on the criteria defined for a specific visualization service. The selection mechanism consists of obtaining a quantitative measure of the significance of each item of content, based on the context-model instances, and retrieving those that exceed a certain threshold. This threshold is determined according to the display characteristics described in the device ontology.

• Selection of the design pattern: Several factors affect pattern selection, for example, the role of the visualization service, the social and group characteristics of the audience, and the quantity of candidate data.

• Selection of information pieces: The ViMos broker selects the container widgets (information pieces) that are appropriate for visualizing the candidate data by analyzing the characteristics of the data described in the visualization ontology.

• Mosaic design: All information pieces include adaptability mechanisms in order to adjust themselves to the selected pattern proactively. These mechanisms consist of zoom policies, latency, pagination and scrolling.

• Incorporation of awareness elements: ViMos recognizes abstract interactions, that is, general events that cause changes in an information piece or in the general view (e.g., next element, previous element, view element, and discard element). The device model includes the interaction techniques available on a specific display. This information enables the inclusion of elements that help users interact with the visualization service.
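As an illustrative sketch of this stepwise process (Python rather than the actual C#/.NET implementation; the data structures, thresholds and pattern names are hypothetical stand-ins for the ontology-driven mechanisms):

```python
# Illustrative sketch of the ViMos view-generation steps (hypothetical
# data structures; the real system is implemented in C#/.NET and driven
# by OWL ontologies and SWRL rules).

def select_candidates(contents, relevance, threshold):
    """Candidate-data step: keep items whose relevance exceeds the display threshold."""
    return [c for c in contents if relevance[c] >= threshold]

def select_pattern(user_interacting, main_document):
    """Pattern step (simplified): document viewer if a user is interacting and a
    principal document is known; otherwise the default news panel."""
    if user_interacting and main_document:
        return "document-viewer"
    return "news-panel"

def build_view(contents, relevance, threshold, user_interacting, main_document):
    candidates = select_candidates(contents, relevance, threshold)
    pattern = select_pattern(user_interacting, main_document)
    # Information-piece and mosaic-design steps would bind each candidate
    # to an adaptive widget here.
    return {"pattern": pattern,
            "items": sorted(candidates, key=relevance.get, reverse=True)}

view = build_view(
    contents=["location", "reminder", "wip", "doc"],
    relevance={"location": 0.9, "reminder": 0.7, "wip": 0.6, "doc": 0.3},
    threshold=0.5,
    user_interacting=False,
    main_document=None,
)
print(view["pattern"])  # news-panel
print(view["items"])    # ['location', 'reminder', 'wip']
```

The sketch only captures the control flow; in ViMos each step is parameterized by the context, device and visualization ontologies.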

5. Information visualization services for collaborative groups

5.1. The scenario

This scenario involves groups of users that share interests and agendas, work collaboratively and have a dynamic information flow. The prototype supports the daily activities of research

groups by means of information visualization, using the public displays in the environment. This specific prototype can be applied to similar scenarios that involve people working together, for example, in an office. The prototype environment is equipped with several public displays, including plasma and LCD TVs with screen sizes between 32 and 50 inches and 21-inch touch screens. Interaction with the TVs is mediated through NFC mobile phones; the displays bear several NFC tags with associated actions that depend on the visualization service displayed at run-time. Thus, tag functionality changes dynamically. Whenever users touch a tag, their mobile device sends the associated information via Bluetooth (if available) or a GPRS connection to the context server. The visualization service uses two dedicated servers, namely, the COIVA server to manage the context model and the ViMos server to generate user interfaces. The user interfaces are sent using WiFi-VGA extenders that enable the wireless transmission of VGA signals to the public displays. The main objective of the visualization service is to provide users with quick and easy access to shared information and to coordinate collaborative activities. Specifically, the prototype implements six services: user location and user state, work-in-progress coordination, document recommendation, events and deadlines, meeting support, and group agenda management. We previously noted that neither COIVA nor ViMos directly handles the acquisition of raw data from sensors and content collections. For this reason, we have developed several mechanisms to gather data and transform them into contextual individuals for this prototype. First, we capture the users' location and actions through NFC interaction with tagged objects in the environment. Second, contents are annotated with meta-data (e.g., author, title, keywords, and document type) when they are added to the repositories.
Finally, we implemented two software components: a collaborative agenda that facilitates user activities while acquiring schedule information, and a document supervisor that gathers information, with the user's approval, about which documents a user views or modifies. Table 1 shows some examples of sources of raw data, gathered data and mappings to valid entities in the context model. Fig. 4 shows a user interacting with the visualization service and a personalized mosaic. When the users begin interacting with

Table 1
Examples of data acquired from sensors.

Type of sensor                      | Gathered data                                   | Meaning                                                                                                        | Generated/updated individuals
Touch screen (Hw sensor)            | c: interactive content; a: application          | Someone is interacting with the application a. Someone is performing the task associated with content c in a.   | Pivon:interacting; Pivon:InteractionMethod; User:Task
NFC (Hw/Sw sensor)                  | t: tag ID; b: Bluetooth address; a: application | The user related to b is interacting with the application a. The user related to b is performing a task associated with the sensor t. | Pivon:interacting; Pivon:InteractionMethod; User:Task; User:locatedIn; User:userAvailability
Document monitor (software sensor)  | d: document; a: application; dv: device         | The user related to dv is interacting with application a. The user related to dv is editing or reviewing document d. | Pivon:interacting; Pivon:InteractionMethod; User:Task; User:locatedIn; Foaf:document
Document repository (repository)    | d: document; u: user ID                         | The user logged in as u is interacting with application a. The user logged in as u is interested in document d. | Pivon:InteractionMethod; Pivon:interacting; User:Task; User:locatedIn; Foaf:interest
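The mappings in Table 1 can be read as simple translation rules from gathered data to ontological individuals. A minimal sketch of the NFC row (Python; the resolver functions and their return values are invented here purely for illustration, while the individual names follow Table 1):

```python
# Sketch: translating a gathered NFC event into context-model updates,
# following the mappings of Table 1 (resolver names and data are illustrative;
# the real acquisition layer produces OWL individuals on the context server).

def map_nfc_event(tag_id, bluetooth_addr, application):
    """An NFC touch yields interaction, task and location individuals."""
    user = lookup_user(bluetooth_addr)  # resolve b -> user
    return {
        "Pivon:interacting": (user, application),
        "Pivon:InteractionMethod": "NFC",
        "User:Task": (user, task_for_tag(tag_id)),
        "User:locatedIn": (user, display_location(tag_id)),
    }

# Hypothetical resolvers standing in for the real context-server lookups.
def lookup_user(addr): return {"00:11:22": "GoyoCasero"}.get(addr, "unknown")
def task_for_tag(tag): return {"tag-7": "review-document"}.get(tag, "none")
def display_location(tag): return "room 2"

individuals = map_nfc_event("tag-7", "00:11:22", "ViMos-display")
print(individuals["User:Task"])  # ('GoyoCasero', 'review-document')
```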


Fig. 4. A user interacting with the visualization service via NFC. Ontological individuals and inferred information.

the display (using NFC technology, in this case), the environment recognizes them, analyzes their situation and infers their information needs and current tasks. Additionally, COIVA uses the user interactions to update instances of the context model: for example, assigning the display location to the user location property. All these behaviors and functionalities are defined using SWRL. In the next subsections, we detail the ViMos functionalities and mechanisms launched to generate these visualization services.

5.2. Proactive information retrieval

The information that is retrieved to be displayed to users is selected on the basis of three principal criteria: the expected functionalities of the specific visualization service, the contextual situation of the users closest to the visualization device, and the behavioral rules defined for the specific visualization service. The first criterion takes precedence over the others. The second and third criteria generate a collection of contents and a quantitative measure of relevance for each. Additionally, we promote or penalize the selected contents based on user interactions. The final formula used to set the measure of relevance was obtained experimentally. We cannot guarantee that it can be applied to other similar systems; nevertheless, it provides encouraging results about the relevance of contents, as discussed in Section 6. Focusing on the example in Fig. 4, we can describe in detail the mechanisms used to select the displayed contents and the formula (shown in the next subsections).

5.2.1. Explicit requirements of the specific visualization service

Requirements can be associated with a particular display in order to affect the different visualization services offered to users; these requirements are independent of the context situation and have priority over other factors. These requirements define the default configuration and general functionalities for inclusion in the service displays.
The definition of these conditions primarily depends on the principal functionalities associated with a particular display. In the example, the requirements define the main role of the service, such as reviewing personal documents individually as well as collaboratively. In addition, there is certain mandatory content, namely, the location of all known users in the environment.

5.2.2. Existence of certain individuals in the ontological context model

Fig. 5 shows the relevant context sub-model in this example and the OWL individuals that influence the content selection. The context captures the location of the users GoyoCasero (line 5) and AlbertMunoz (line 13), in the same place as the display currently showing the visualization service (lines 18–20). All items of content related to these users have been pre-selected and are filtered based on their context. Concretely, GoyoCasero is the user who is interacting with the display (line 9); thus, his items of content are preferred. Additionally, the context definition describes that GoyoCasero supervises AlbertMunoz's work (line 8) and, consequently, AlbertMunoz's work-in-progress is selected. Finally, the context framework keeps information about the last content looked up by the user (line 11), independently of his location. This information determines the most important content, which is shown in the main area of the mosaic. We obtain a quantitative measure of significance based on the existence of certain individuals in the context model. This measure is inversely proportional to the distance between the content element and the user class, that is, the number of relationships between these two ontological classes. For example, GoyoCasero (i.e., an individual of the user class) is the author (i.e., a relationship) of CFPucami2010 (i.e., an individual item of content). The distance between both classes is one. We can see another example related to Fig. 5: GoyoCasero is located in room 2; this room also includes AlbertMunoz, who is the author of MasterMemoryV2.1. Thus, the distance between the user GoyoCasero and this document is three. This criterion for selecting contents quantifies the relevance of the content for a user with the value RC2 in [0, 1].
The distance between the content and the user (Dc) inversely affects the measure of relevance; the b factor determines to what extent the distance Dc affects RC2 and takes a value from 0.1 to 1. In our case, the experimental tests imply that b = 0.5. Fi promotes or penalizes relevance based on previous user interactions: explicit rejection of a content results in Fi = 0.25, and explicit interaction results in Fi = 1.50. The formula also takes into account the relevance of the content in any previous visualization service launched for a specific user, RC2T-1; this factor models the human tendency to continue a task even though the context situation may have changed. The weight of RC2T-1 is determined by a, which takes a value from 1 to N. Our prototype sets a = 3.
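As a sketch of this criterion (Python rather than the C# implementation; the graph edges and individuals are taken from the Fig. 5 example, and the relevance computation follows one plausible reading of the formula described in this subsection):

```python
from collections import deque

# Sketch: ontological distance Dc as the shortest path between a user and a
# content individual, and the resulting relevance RC2 (a = 3, b = 0.5, as in
# the prototype). Edges follow the Fig. 5 example.

graph = {
    "GoyoCasero": ["CFPucami2010", "room2"],      # author-of, located-in
    "room2": ["GoyoCasero", "AlbertMunoz"],       # includes
    "AlbertMunoz": ["room2", "MasterMemoryV2.1"], # author-of
}

def distance(start, goal):
    """Breadth-first search: number of relationships between two individuals."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, d = queue.popleft()
        if node == goal:
            return d
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, d + 1))
    return None

def rc2(dc, fi=1.0, prev=0.0, a=3, b=0.5):
    """One plausible reading of the RC2 formula given in this subsection."""
    return (a * fi * (1.0 / dc) ** b + prev) / (a + 1)

print(distance("GoyoCasero", "CFPucami2010"))      # 1
print(distance("GoyoCasero", "MasterMemoryV2.1"))  # 3
print(rc2(1) > rc2(3))  # True: closer content is more relevant
```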


Fig. 5. Individuals and sub-model involved in the information retrieval of the scenario.

RC2 = (a · Fi · (1/Dc)^b + RC2T-1) / (a + 1)
RC2: relevance based on criterion 2; RC2T-1: last relevance for this user and content; Dc: distance between the user and content classes; Fi: interaction factor; a, b: adjustment factors.

5.2.3. Behavior rules

Content personalization cannot be based only on the existence of certain individuals in the context model. It is necessary to include more complex mechanisms to select the candidate contents to be offered; concretely, mechanisms based on SWRL rules, powered by built-in constructors and XQuery operations, that enable selection by the particular value of an ontological instance and the application of math, Boolean, string or date operations. List 1 shows three behavioral rules. The first one is used to offer contents whose deadline is close to the current date. The view in Fig. 4 includes information about an upcoming event; concretely, the content is an image that is augmented with contextual data, for example, the deadline date and the representative name of the event. The second and third rules help to quantify content relevance and select the final items of content to be shown. The rule in lines 8 and 9 modifies content relevance based on the user interacting with the display. The last rule promotes the items of content whose supervisor and author are located in the same place (lines 12–15). The rules generate a relevance level in the interval [0, 10]. This criterion for selecting contents based on rules is quantified by the value RC3, which takes a value in [0, 1]. The formula is very similar to that of the second criterion, but in this case, the distance between classes is not relevant.
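The three kinds of behavioral rules just described are expressed in SWRL in the actual system; purely to illustrate their effect, they can be mimicked with simple functions (Python; the boosts, thresholds and field names below are invented for the sketch and are not the actual rule bodies):

```python
from datetime import date

# Illustration of the three behavioral rules described above (the real
# system encodes them as SWRL rules with built-ins; boosts and thresholds
# here are hypothetical).

def deadline_rule(content, today, days=14):
    """Rule 1: offer contents whose deadline is close to the current date."""
    return (content["deadline"] - today).days <= days

def interaction_rule(content, interacting_user, boost=2.0):
    """Rule 2: raise the relevance of contents related to the interacting user."""
    if interacting_user in content["related_users"]:
        content["relevance"] = min(10.0, content["relevance"] * boost)
    return content["relevance"]

def colocation_rule(content, locations, boost=1.5):
    """Rule 3: promote contents whose supervisor and author share a location."""
    if locations.get(content["supervisor"]) == locations.get(content["author"]):
        content["relevance"] = min(10.0, content["relevance"] * boost)
    return content["relevance"]

event = {"deadline": date(2010, 3, 1), "related_users": ["GoyoCasero"],
         "relevance": 4.0, "supervisor": "GoyoCasero", "author": "AlbertMunoz"}
print(deadline_rule(event, date(2010, 2, 20)))  # True
print(interaction_rule(event, "GoyoCasero"))    # 8.0
print(colocation_rule(event, {"GoyoCasero": "room2", "AlbertMunoz": "room2"}))  # 10.0
```

The resulting relevance level stays in [0, 10], matching the interval the rules produce in the prototype.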

RC3 = (a · Fi + RC3T-1) / (a + 1)

RC3: relevance based on criterion 3; RC3T-1: last relevance for this user and content; Fi: interaction factor; a: adjustment factor.

5.3. Automatic generation of user interface views

After ViMos selects the items of content to show, it launches the automatic design process. First of all, the expected functionalities of the service and the context determine the design pattern. The example mainly uses two patterns: the news panel pattern and the document viewer pattern. The news panel pattern is a well-known design that is used in many web portals. The user interface is divided into columns (typically three of them) and rows. The sizes of the areas are similar because there are no criteria to elevate the significance of one content item over the others. This is the design pattern applied in Fig. 4 (left). This pattern is chosen when no one is explicitly interacting with the display and there are no planned events at the moment; thus, it can be considered the default pattern. In other prototypes, for example, visualization services for academic conferences or for a classroom, this pattern would be applied during breaks to offer general information. The document viewer pattern is applied when a user is interacting with the mosaic and the context model can determine a principal related document, as is the case in this scenario. The criteria for selecting the pattern in this prototype are simple and depend on very few contextual elements; however, it is possible to define more complex criteria using SWRL rules. Once the design pattern is set, ViMos must create the visual form of the selected content items. The description of the content is retrieved from the PiVon model in order to determine the nature of the information. The ViMos library includes abstract widgets that implement adaptability in order to match the visual form of the content with the design pattern. The example includes four kinds of content:

• Location component: The data are a set of locatedIn individuals whose range is the User concept and that have associated Image items. The most suitable information piece to generate the visual form is the piece called multiImageViewer. This piece shows a set of images vertically or horizontally. It adapts the images to the available area and includes a footnote (the value of the user individual).

• Reminder component: In this case, the selected datum is an individual of the Event class associated with an Image, a textual description, a deadline and several involved users. The selected piece is called richTextViewer. It implements zoom and flow-text techniques to adapt content, typically arbitrary textual data and associated images.

• Work-in-progress component: The nature of the data is similar to that of the reminder component, but it includes several blocks of data. This abstract widget is known as multiRichTextViewer and it includes an adaptive list of richTextViewer pieces.

• MainDocument component: This is a simple element that includes a document as original data. The selected piece is a general document viewer that adapts the visualization to the assigned area.

Fig. 6 shows the particular data transformation from the ontological individuals to the final visual form. The last step is to incorporate the awareness elements depending on the technology able to interact with the displays. This prototype is configured to work with touch screens as well as with NFC interaction; the characteristics of these interaction techniques are conceptualized in the device model as part of the ontological specialization process performed in this prototype. For example, the device model includes information about the kind and available number of NFC tags for each display, the kind of interaction that the touch screen accepts (such as single-touch, double-touch


List 1. Rules that modify content relevance according to the user context.

and multi-touch) and so on. Based on this information, ViMos includes awareness elements to facilitate user interaction, for example, by framing interactive contents in the case of touch screens or by using labels to describe the actions associated with each NFC tag.

5.4. Interoperability

5.4.1. Interoperability through instances of the shared model

In order to illustrate the services that share contextual information to improve the output, we present the following specific scenario: John is a professor at the university and takes part in a research group. He is at a conference to present a paper, and he has a meeting with his colleague Albert, who is also present at this event.

The personal agenda is part of the user model represented in COIVA. It is usual for users' agendas to complement each other. The definition of the agenda elements and their visual representation can be very diffuse, but the semantic model provides enough shared knowledge to understand them. In our scenario, John has an agenda that is associated with upcoming activities. Furthermore, the congress organizers define the conference program, which is another kind of agenda. The integration of these two agendas serves not only to offer the user a complete picture of the activities to be carried out with less effort, but also to provide additional information to the system; this information allows new inferences and a set of instances providing a wider context. List 2 shows individuals that have been generated by combining both agendas and including meta-context information.
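As a toy sketch of this agenda integration (Python; the event structures and titles below are invented for illustration, whereas the actual integration operates over the OWL individuals shown in List 2):

```python
# Sketch: merging a personal agenda with a conference program into one
# time-ordered agenda (illustrative data, not the List 2 individuals).

personal = [
    {"start": "2010-06-15T16:00", "title": "Meeting with Albert"},
]
program = [
    {"start": "2010-06-15T09:00", "title": "Keynote"},
    {"start": "2010-06-15T11:30", "title": "Paper presentation (John)"},
]

def merge_agendas(*agendas):
    """Union of all agenda entries, sorted by starting time."""
    merged = [e for agenda in agendas for e in agenda]
    return sorted(merged, key=lambda e: e["start"])

agenda = merge_agendas(personal, program)
print([e["title"] for e in agenda])
# ['Keynote', 'Paper presentation (John)', 'Meeting with Albert']
```

In the semantic setting, the merge additionally yields new instances (e.g., shared attendance of an activity) that widen the context available for inference.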

Fig. 6. ViMos visualization pipeline process from the ontological elements to the final view.


List 2. Individuals obtained from agenda matching.

5.4.2. Interoperability between heterogeneous visualization services

An important contribution of the COIVA and ViMos infrastructure involves the semantics that describe the data sets and the mechanisms for sharing this information. In fact, semantic-based technologies have been surveyed and used as a medium for integration and interoperability (Noy, 2004). So far, we have seen that several visualization services implemented within the ViMos framework can integrate information from other models and pieces of information from other services into their views. This section emphasizes the ability of COIVA to work with information visualization services other than the ViMos framework, such as Web pages. It is not new that an ontological model serves to enrich contents and enables more intelligent Web behavior; this is actually the basis of the Semantic Web. What should be emphasized is that context models for intelligent environments, especially when formalized with the languages of the Semantic Web, can increase the significance of the information published. To illustrate this, the next scenario is presented: Robert is a colleague from another university. He gives oral lectures, but most of the students cannot be present for them. This is why he offers the students the opportunity to follow his lectures through his Web page, either in real time or at any time afterwards. The students connect to the Web and can follow the slides or any other elements that are presented in class. In addition, he puts his mobile phone at their disposal for consultations by telephone, but only when he is in his office and not busy. Once again, we can integrate two types of information generated and managed with COIVA: contextual information and data sets described by the model of the visualization service. Fig. 7 (left) shows an example in which Robert's research group Web site obtains information from COIVA to define the current state of each member. Additionally, Fig. 7 (right) shows a Web site displaying the current slide shown in class.

6. Evaluation

We evaluated the prototypes through interviews and user studies. Twenty-one users (11 men, 10 women) participated in the experiment during a period of two weeks. The experiments were incorporated into their daily activities to simulate actual situations. On average, each user tested our prototypes for 35 min per day. The population included seven engineering undergraduates, four Ph.D. candidates, two professors, and eight users not linked with the university, between the ages of 20 and 61. The users associated with our university were familiar with the technology and the tasks, while the other users were not familiar with this kind of system and 50% of them had no familiarity with the tasks to perform. The objective was to validate the system from three perspectives: (a) developing a validation metric for the retrieved information, (b) agreement with the auto-generated user interfaces, and (c) the usefulness of the visualization services in daily activities:

• Adaptability of content according to context information: All prototypes implement autonomous mechanisms to adapt views to the context situation. The prototype considers the situation, the users' profiles and their specific needs, among other considerations. Users tested the prototypes and expressed their agreement or disagreement with the displayed contents, as shown in Table 2. We have applied basic statistical classification to evaluate the relevance of the offered contents; concretely, precision and recall measures (van Rijsbergen, 2004). In our system, precision represents the number of items of relevant content retrieved in a mosaic view divided by the total number of items retrieved for a particular situation. Recall is the number of relevant items retrieved in a mosaic view divided by the total number of relevant content items existing in the system. Additionally, we include two other well-known measures: the fall-out, that is, the proportion of offered irrelevant items out of all irrelevant items available in the system, and the F-score, a measure that combines precision and recall.

Prec = |{Rc} ∩ {Oc}| / |{Oc}|
Rcall = |{Rct} ∩ {Oc}| / |{Rct}|
Fout = |{nRc} ∩ {Oc}| / |{nRct}|
Fscore = 2 · Prec · Rcall / (Prec + Rcall)
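As a numerical check of these measures, applied to the aggregate counts reported below (188 items offered, of which 163 were relevant, out of 189 relevant items in the system; Python sketch):

```python
# Aggregate counts from the evaluation (Table 2, Total row).
offered = 188           # |{Oc}|
relevant_offered = 163  # |{Rc} ∩ {Oc}|
relevant_total = 189    # |{Rct}|

precision = relevant_offered / offered
recall = relevant_offered / relevant_total
f_score = 2 * precision * recall / (precision + recall)

print(round(precision, 3))  # 0.867
print(round(recall, 3))     # 0.862
print(round(f_score, 3))    # 0.865
```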
Rc: relevant contents; Oc: offered contents; Rct: total relevant contents; nRc: irrelevant contents; nRct: total irrelevant contents.

Table 2 shows the results per user and in general. The total number of content items in the repositories was 189. Adding up the different users' tests, out of a total of 188 items returned to the different users, 163 were adequate for them. These results provide a precision of 86.7%, a recall average of 86.2%, a fall-out of only


Fig. 7. Busy status inferred through COIVA engines and published on a personal Web site (left) and the current class slide shown on the subject's Web site (right).

0.7% and an F-score of up to 86%. Based on these values, we can determine that the general measure of relevance was on average up to 86%.

• Performing tasks using the prototypes versus traditional methods: We measured the time and interaction effort required for queries of shared documents through the visualization service. We evaluated two dimensions of this problem. The first dimension concerns the quantitative effort of tasks. The experiments consist of accessing a personal, randomly chosen document using the traditional procedure (the document may be stored on the local host or on the network) and using the visualization service. In this test, we do not analyze context-based adaptation; rather, we measured the effort needed to interact with a public display using NFC and using a personal computer. Fig. 8 shows the results for the ten users. Moreover, we measured the time at the moment of accessing the particular document and at the moment of navigating down to the final page. Concretely, the time to search for and access documents was 40–50% shorter via this service due to automatic information retrieval based on context. However, reviewing the document content was slower and more difficult, mainly due to the interaction
Table 2
Results of the adaptability of content according to context information.

          Oc    Rc    Rct   Prec    Rcall   Fout    Fscore
User 1    12    9     10    0.750   0.900   0.017   0.818
User 2    11    11    11    1.000   1.000   0.000   1.000
User 3    12    9     11    0.750   0.818   0.017   0.783
User 4    8     7     9     0.875   0.778   0.006   0.824
User 5    9     9     9     1.000   1.000   0.000   1.000
User 6    14    11    12    0.786   0.917   0.017   0.846
User 7    9     8     9     0.889   0.889   0.006   0.889
User 8    10    9     11    0.900   0.818   0.006   0.857
User 9    15    12    14    0.857   0.857   0.011   0.857
User 10   11    11    11    1.000   1.000   0.000   1.000
User 11   7     7     8     1.000   0.875   0.000   0.933
User 12   9     8     8     0.889   1.000   0.006   0.941
User 13   6     3     9     0.500   0.333   0.017   0.400
User 14   6     6     7     1.000   0.857   0.000   0.923
User 15   10    9     9     0.900   1.000   0.006   0.947
User 16   6     5     6     0.833   0.833   0.005   0.833
User 17   9     8     8     0.889   1.000   0.006   0.941
User 18   5     5     7     1.000   0.714   0.000   0.833
User 19   6     4     5     0.667   0.800   0.011   0.727
User 20   9     7     9     0.778   0.778   0.011   0.778
User 21   5     5     6     1.000   0.833   0.000   0.909
Total     188   163   189   0.867   0.862   0.007   0.865

technique used: NFC mobile phones. The user has to touch a tag with the cell phone to advance a page in the document, and the NFC device has a lag of 1.4 s due to Bluetooth communication. The second dimension focuses on user experience related to productivity. These items are defined in the MoBiS-Q questionnaire (Vuolle et al., 2008). The users evaluated the use of the visualization service in daily collaborative tasks, over 7 days and requiring access to personal and shared documents. The users gave high ratings to the control and gathering of information, coordinated and ubiquitous work, and system satisfaction. They gave lower ratings to the ease of task performance and the reduction of time for complex collaborative tasks. We divided the population based on technological familiarity and on daily performance of the evaluated tasks. Focusing on the groups with and without technological knowledge, we analyzed the variance based on this independent factor through a one-way ANOVA model with a = 0.05. We observed that there is a statistically significant difference between the groups divided according to technological familiarity with respect to most of the questions; higher P-values were obtained with regard to reductions in time and the ease of performing tasks, as well as regarding general satisfaction. However, the comparisons between groups divided by task familiarity do not yield significant results for a = 0.05 or a = 0.01.

• Agreement with inference and usability issues: It is known that a user interface able to adapt itself to the current situation of the user to better suit user needs and desires improves usability (Dey, 2001). However, we consider it necessary to test the general aspects of usability in our prototypes.
Thus, we have adapted some of Shackel's (2009) proposals as well as the MoBiS-Q questionnaire in order to design an opinion poll with 16 questions on the visualization characteristics of the mosaics, allowing us to evaluate user experiences with the information visualization prototypes. Fig. 9 shows the evaluation and results. The average agreement was 80.77% and the average rating was 4.09 out of 5. Again, we analyzed groups of users divided by technological and task familiarity and applied a one-way ANOVA. We rejected the hypothesis that the groups were equivalent regarding technological familiarity and task experience; we obtained significantly high P-values for questions related to scalability techniques, the minimization of user memory needs and the ease of navigation, both in the groups divided according to technological experience and in the groups divided according to task familiarity. In general, the evaluation of usability is influenced by these two factors, as


Fig. 8. Summarized results about using ViMos for tasks.

Fig. 9. Summarized results about the user experience with ViMos.

we observed that the group of users with low levels of technological experience and task familiarity provided the highest ratings.

7. Related work and contributions

There are three general bodies of research relevant to our work: the design of context-sensitive user interfaces, the automatic generation of context-aware user interfaces, and the development of run-time adaptive user interfaces based on context. We provide an overview of the most relevant findings in these areas and summarize the differences between them and our work, including the primary contributions of our approach. Our system determines which contents are appropriate for a user and automatically generates a pattern-driven user interface. Both the selection of content and the generation of the user interface use ontological contextual information to enhance these processes. We are not aware of any readily available system that generates and adapts complex user interfaces by means of contextual information at run-time. However, there have been a number of prior systems and proposals that partially use contextual information for user interface creation, most of which use this information during the design-time process. Jung and Sato (2005) introduce a conceptual model for designing context-sensitive visualization systems through the integration of mental models in the development process. Clerckx and Coninx (2005) focus their research on integrating context information into the user interface in early design stages. They take into account the distinctions between the user interface, functional application issues and context data. We agree with the need for such distinctions; however, their proposal is weak in

that it has difficulty predicting all possible contextual changes at design-time. This problem also influences the usefulness of the proposal of Luyten et al. (2006), a model-based method for developing user interfaces for Ambient Intelligence. This method leads to a definition of a situated task in an environment and provides a simulation system to visualize context influences on deployed user interfaces. There have also been several attempts to establish general languages for describing context-aware interfaces, such as UsiXML (Limbourg et al., 2005), which is based on transforming abstract descriptions that incorporate context into user interfaces, and CATWALK (Lohmann et al., 2006), which was designed to support the definition of various graphical user interface patterns using XSLT templates and CSS style sheets. These approaches are complex and difficult to use, as they require specialized tools for user interface designers; in some cases, modeling user interfaces and associated context-aware behaviors is more difficult than coding them. Moreover, systems that make use of style-sheet transformations, such as XSLT and CSS, are not rich enough to support a wide range of media and content characterization (Sebe and Tian, 2007). In summary, prior research on context-sensitive user interfaces was principally motivated by the desire to improve existing development processes. This approach, at least in today's design context, makes the designer's work difficult by requiring a large amount of upfront effort. Moreover, as we noted previously, a context-aware user interface must address changes in user behavior dynamically because it is difficult to predict adaptive requisites at design-time. With respect to related work on the automatic generation of context-aware user interfaces, there are relevant studies that should be noted. The Amigo project (Ressel et al., 2006) includes

R. Hervs, J. Bravo / Interacting with Computers 23 (2011) 4056

53

methods to personalize the logic of the menu structure in intelligent home environments by means of an ontological description of the available services. The functionalities of software components for different devices are bound into one operation environment to give the user the feeling of interacting with a solid system. News@hand (Cantador et al., 2008) is a news system that uses semantic technologies to provide online news recommendation services through the ontological description of contents and user preferences; recommendations are displayed in an auto-generated user interface that contains a paginated list of items. Abascal et al. (2008) discuss adaptive user interfaces oriented to the needs of elderly people living in intelligent environments and propose an interface based on coherent multimedia text messages that appear on a TV screen. Gilson et al. (2008) make use of domain knowledge in the form of ontologies to generate information visualizations from domain-specific web pages.

Multimedia retrieval and control interfaces for Ambient Intelligence environments have also been widely studied. The Personal Universal Controller (Nichols et al., 2002) represents one result of such studies; this system builds a user interface at run-time on a mobile device in order to unify the control of complex appliances such as TVs and DVD players. Wang et al. (2006) design and implement a personalized digital dossier to present media-rich collections, including video, images and textual information, each of which is generated as an independent window that is simultaneously displayed. The Huddle system (Nichols et al., 2006) provides user interfaces for dynamically assembled collections of audio-visual appliances. These prior studies consider the autonomous generation of user interfaces but assume static behavior at run-time.
Additionally, some of these systems focus only on particular aspects of user interfaces, for example, menus (Ressel et al., 2006), lists of items (Cantador et al., 2008) and message boxes (Abascal et al., 2008). Other studies analyze user interfaces in very limited application domains, such as multimedia control (Nichols et al., 2002), multimedia retrieval (Nichols et al., 2006), social information (Vazquez and López de Ipiña, 2008), and digital dossiers of artworks (Wang et al., 2006). The SUPPLE system (Gajos, 2008) is a notable exception; it can automatically generate complex interfaces adapted to a person's device, tasks, preferences, and abilities by formally defining the interface generation process as an optimization problem. We have found many interesting similarities between this system and our approach, especially with respect to the contextual aspects that must be taken into account. However, SUPPLE requires a formal and articulate description of each widget in an interface. As a result, despite the automatic generation process, the final creation of these model-based user interfaces requires a large amount of upfront effort. Additionally, the generated interfaces are focused on dialog box-like interfaces, a style that may be inappropriate for Ambient Intelligence user interfaces.

In general, and in contrast to our adaptive approach, all of the previously described works consider only input data and static contextual information. Very few systems consider autonomous run-time adaptation based on context, and most of them apply run-time adaptability to specific and delimited domains and/or basic user interfaces. For example, Ardissono et al. (2004) apply recommendation techniques in personal program guides for digital TV through a dynamic context model that handles user preferences based on user viewing behavior. The ARGUR system (Hartmann et al., 2008) is an exception, as it is motivated by the desire to create multi-domain interfaces. ARGUR is based on mapping context elements to input elements in the user interface; for example, the user's agenda may suggest a date and time of departure as the input of a travel agency web page. Typically, it is very difficult to establish a one-to-one relationship between input elements and contextual characteristics. In fact, we have found a many-to-many relationship to be more common in our adaptive system. That is,
several contextual elements affect several user interface components. In addition, there are also adaptive user interfaces that base their behavior on particular components of the context. The adaptability of interfaces to different kinds of devices is a common challenge. Butter et al. (2007) developed an XUL-based user interface framework that allows mobile applications to handle different screen resolutions and orientations. The SECAS project (Chaari et al., 2007) includes a generic XML user interface vocabulary that provides adaptive multi-terminal capabilities based on the description of each panel of the interface, visualization adaptation and navigation among panels.

In summary, our context-aware system for the autonomous generation and run-time adaptation of user interfaces takes the above-described work into account and makes the following contributions:

• Automatic user interface generation to reduce design effort. The ViMos framework automatically generates user interfaces at run-time and thus reduces design effort. The designer does not need knowledge of programming or design; only detailed knowledge of the application domain is required to specialize the context model for visualization. In addition, the user needs to define the dynamic behavior through SWRL. For this goal, there are many available tools, including, for example, Protégé.²

• Dynamic multi-modal complex user interfaces. The ViMos framework generates complex user interfaces based on different design patterns, provides a complete set of components to visualize data of different natures, and includes several kinds of scalability strategies. Although ViMos does not provide a mechanism to generate user interfaces for any specific purpose or task, it offers interfaces for a broad user interface subtype: information presentation. The implementation of ViMos has been explored in several domains, including collaborative groups, medical environments (Bravo et al., 2009a) and public services (Bravo et al., 2009b).

• Adaptive user interfaces with respect to context changes at run-time. ViMos provides mechanisms to readapt visualization services throughout the run-time life of an application. Adaptive context-aware user interfaces should implement a mechanism to enhance their behavior based on user needs, given the impossibility of detecting all needs at design-time. These mechanisms can be automatic or human-generated. Advances in machine learning can improve context-aware systems; such advances (e.g., incorporating a learning mechanism into ViMos as a functional engine) comprise a significant area of future study. At present, ViMos provides a human-generated mechanism to adapt visualization service behavior by changing the set of SWRL rules at run-time through re-factoring techniques.

• Complex ontology-powered context modeling. ViMos includes a context model composed of four ontologies: users, devices, environment, and visualization services. The proposed model is detailed and can be considered complete, though it must be specialized to particular application domains before implementation. A general model that encompasses any application domain requires a complex structure, but even with such a structure, it still cannot be universal. Our strategy is to isolate the principal elements shared across all user-centered context-aware applications and to provide mechanisms to rapidly specialize the context model and prototype applications. This approach provides mechanisms to formalize the conceptualization of context aspects, thereby enhancing semantic cohesion and knowledge-representation capabilities. Moreover, we provide a high-level abstraction model that relates the ontologies. This taxonomical organization enables exchange between adaptive services and specialization in particular domains.

² The Protégé Ontology Editor and Knowledge Acquisition System. http://protege.stanford.edu/.

Overall, these contributions help address the three main challenges discussed in Section 4. First, the formal model and the mechanisms to specify application behavior at run-time enable the identification of relevant changes in context, the correlation between these changes and the subsequent reconfiguration of user interfaces. Second, the ontological description of visualization services makes possible the interoperation of information presentation in user interfaces; contents managed by an application can be shared, as can descriptions and information about how those contents should be visualized. Finally, ViMos offers model- and pattern-driven mechanisms to adapt contents to different user interfaces.

8. Discussion

The first core question raised by this paper is whether systems like ViMos are practical. Our proposed automatic context-aware user interface generator focuses on a particular kind of interface, namely, information presentation. Thus, the primary requisite for using ViMos in practice is that the principal task to be performed by users interacting with it involves obtaining personalized information. It is important to keep in mind the two main high-level components of our system: the semantic-powered representation of context and behavior, and the mechanisms to generate and adapt user interfaces. The potential of ViMos emerges whenever we exploit these two characteristics in highly dynamic environments that greatly affect an application's behavior, as well as when using complex and heterogeneous information sources that require the adaptation of user interfaces at run-time. The prototype described in this paper illustrates the use of this system. However, ViMos increases the level of design effort for static user interfaces and for interfaces that are not highly influenced by context.
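The SWRL-based behavior definitions mentioned in the contributions above can be illustrated with a small example. The following rule is a hypothetical sketch in SWRL's human-readable syntax; the class and property names are illustrative, not ViMos's actual vocabulary. It states that a document relevant to a meeting is offered on a display located in the same room as an attendee of that meeting:

```
Person(?u) ∧ locatedIn(?u, ?room) ∧ Display(?d) ∧ locatedIn(?d, ?room)
    ∧ attends(?u, ?meeting) ∧ relevantTo(?doc, ?meeting)
    → offeredOn(?doc, ?d)
```

A run-time change to the rule set, for example substituting a stricter relevance property, alters the behavior of the visualization service without touching the interface-generation code, which is the kind of rule re-factoring described in the contributions above.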
For these cases, we have shown that the two high-level components described above can still provide useful functionality for certain kinds of applications when used independently. The proposed automatic user interface generation mechanism can be applied for the rapid prototyping of dynamic and non-context-influenced user interfaces. In addition, the context model and the context management system can provide effective information sourcing to improve external user interfaces that were not created by ViMos, or other kinds of services requiring contextual information.

In the introduction of this paper, we focused on the challenge of visualizing the right information to the right person in the right place. Our proposal offers a partial solution for achieving this goal. In the development and testing of ViMos, we identified several primary problems that affect this kind of system.

First, it is difficult to model the relationship between user context and information needs. This is a many-to-many relationship that depends on factors that may not be directly observable. For example, users usually change the task they are performing whenever they feel tired, though this fact may remain unnoticed by ViMos. As a result, our system may generate inappropriate contents. Last-minute changes in user planning, personal circumstances and the general unpredictability of human nature are unobservable factors that make the definition of application behavior difficult. To address this problem, ViMos includes a historical record that uses a meta-context engine to detect repeated information content that is rejected by the user; in this case, the rules that induced the visualization of these contents incur penalties. In the same way, this historical record temporarily promotes the rules that generate appropriate content, because we have observed that when some information needs are satisfied, other similar needs emerge. This promotion and penalization mechanism accommodates the dynamic and evolving task of searching for information; however, we believe that it can be improved through machine learning techniques that automatically change the behavior rule set instead of promoting and penalizing existing rules.

Second, a future challenge involves bridging the semantic gap between the extraction of the underlying raw feature data and the need for a semantic description of contents in order to retrieve and generate their visual form. In fact, ontology population (that is, transforming data into ontological instances when new content is retrieved from the Web or other kinds of sources) is an open research challenge. This issue motivated our decision to develop the context model with Semantic Web languages; future advances in the semantic description of content can thus be easily combined with our current context model. Meanwhile, we also plan to analyze how to enhance the semantic description of textual content in ViMos through language analyzers that extract and categorize relevant document terms and then compare these terms with the ontological individuals generated by the context model using fuzzy metrics. This is another line of future work for ViMos.

Finally, we started from the general idea that the automatic generation of user interfaces creates less aesthetically pleasing interfaces than those created by human designers. However, the user interfaces generated using Windows Presentation Foundation technology have obtained high evaluations from users. We do not intend to replace human designers, as handcrafted user interfaces are always more desirable and attractive because they reflect the creativity and experience of designers. However, we believe that ViMos generates sufficiently attractive interfaces from the users' perspective.

9. Conclusions

This paper presents an infrastructure to support information adaptability for users by describing context information with Semantic Web languages. The context information is represented by a general model that describes, in a simplified way, the world around the applications and users. We also presented a specific model to describe how raw data are transformed into a view, as well as their scalability, interaction and relationships. This approach allows the automatic generation of user interfaces at run-time, with the dynamism necessary to adapt to users' needs according to their context. A simple language such as SWRL, used to define which context changes should create new views on the display, has proven sufficient. Users have expressed a high level of acceptance of the manner in which they can access information. In general, the inference mechanisms selected the required documents with an overall acceptance rate of 80.77%. The visual representation of the items of content and their integration into mosaics has also been positively evaluated. ViMos successfully adapts different types and amounts of data to the user interface through scalability techniques at run-time. Additionally, the application of well-known design patterns to display the mosaics makes the service easier to use, giving a view similar to that provided by common desktop applications. Apart from the content and its visualization, this work emphasizes the way in which information can be associated with its visualization, supporting interoperability and the sharing of data between different applications and domains.

In the introduction, we described the ideal scenario in which users receive exactly the information they need, at any moment and throughout their activities. This ideal is far from reality.
This work offers a limited contribution towards that end; nevertheless, the retrieved information fits specific scenarios in which there are limited sets of information and of relevance criteria for the modeled context. To generalize further, exhaustive context modeling is not enough; a specification of the contents' semantics is also needed. In this direction, in the context of the Web, work is being done in the Semantic Web community. If this line is followed in parallel with efforts to describe information semantics and to integrate reasoning mechanisms, these goals may be achieved by combining advances in context-awareness and visualization techniques with intelligent environments.

To summarize, this paper contributes to the automatic generation of user interfaces focused on the visualization of information in intelligent environments. There are a great number of issues to consider when designing a user interface: for example, the user profile, the task to perform, and collaborative issues. ViMos manages these issues and the general user context to generate the user interface at run-time. This objective is achieved by combining Semantic Web languages, adaptability techniques and well-known design patterns. Previous studies (described in Sections 3 and 7) adapt their displayed content depending on context attributes; however, they do not support changes in the selection criteria or in the user interfaces themselves. ViMos provides Semantic Web-based mechanisms to modify the relevant context attributes, the information retrieval criteria and the visualization services, adapting itself to changes in the environment.

Acknowledgements

This work has been financed by the PII1I09-0123-27 and HITO-0950 projects from the Junta de Comunidades de Castilla-La Mancha, and by the TIN2009-14406-C05-03 project from the Ministerio de Ciencia e Innovación (Spain).

References
Schilit, B.N., Adams, N., Want, R., 1994. Context-aware computing applications. In: Proceedings of the 1st International Workshop on Mobile Computing Systems and Applications, Santa Cruz, USA.
Henricksen, K., Indulska, J., Rakotonirainy, A., 2002. Modeling context information in pervasive computing systems. In: Proceedings of Pervasive Computing and Communication, Zurich, Switzerland.
Gu, T., Wang, X.H., Pung, H.K., Zhang, D.Q., 2004. An ontology-based context model in intelligent environments. In: Proceedings of Communication Networks and Distributed Systems Modeling and Simulation, San Diego, USA.
Vazquez Gómez, J.I., López de Ipiña, D., Sedano, I., 2006. SoaM: a web-powered architecture for designing and deploying pervasive semantic devices. Int. J. Web Inform. Syst. 2 (3–4).
Chen, H., Finin, T., Joshi, A., 2003. Semantic Web in a pervasive context-aware architecture. In: Workshop on Ontologies and Distributed Systems, Proceedings of IJCAI, Acapulco, Mexico.
Hervás, R., Bravo, J., Fontecha, J., in press. A context model based on ontological languages: a proposal for information visualization. J. Univers. Comput. Sci. doi:10.3217/jucs-016-12.
ISTAG, 2001. Scenarios for Ambient Intelligence in 2010. http://www.cordis.lu/ist/istag.htm (retrieved February 2006).
José, R., 2006. Beyond application-led research in pervasive display systems. In: Proceedings of the Workshop on Pervasive Display Infrastructures, Interfaces and Applications, Dublin, Ireland.
Gross, T., 2006. Towards a cooperative media space. In: Proceedings of Information Visualization and Interaction Techniques for Collaboration across Multiple Displays, Workshop associated with CHI'06, Montreal, Canada.
Baudisch, P., 2006. Interacting with wall-size screens. In: Proceedings of Information Visualization and Interaction Techniques for Collaboration across Multiple Displays, Workshop associated with CHI'06, Montreal, Canada.
Vogl, S., 2002. Coordination of Users and Services via Wall Interfaces. PhD Thesis, Johannes Kepler Universität Linz, Linz, Austria.
Muñoz, M.A., Rodriguez, M., Favela, J., Martinez-Garcia, A.I., Gonzalez, V., 2003. Context-aware mobile communication in hospitals. IEEE Comput. 36, 38–46.
Mitchell, K., Race, N.J.P., 2006. Oi! Capturing user attention within pervasive display environments. In: Proceedings of the Workshop on Pervasive Display Infrastructures, Interfaces and Applications, Dublin, Ireland.
Schmidt, A., 2000. Implicit human–computer interaction through context. Pers. Technol. 4, 191–199.
Bravo, J., Hervás, R., Sánchez, I., Chavira, G., Nava, S.W., 2006. Visualization services in a conference context: an approach by RFID technology. J. Univers. Comput. Sci. 12, 270–283.
Wang, D., 2009. GIUC: a gesture interface for ubiquitous computing. In: Proceedings of the WRI International Conference on Communications and Mobile Computing, Kunming, China.
Vincent, V.J., Francis, K., 2006. Interactivity of information & communication on large screen displays in public spaces through gestures. In: Proceedings of Information Visualization and Interaction Techniques for Collaboration across Multiple Displays, Workshop associated with CHI'06, Montreal, Canada.
Hibino, S.L., 1999. Task analysis for information visualization. In: Proceedings of Visual Information and Information Systems, Amsterdam, The Netherlands.
Hervás, R., Nava, S.W., Chavira, G., Villarreal, V., Bravo, J., 2008. PIViTa: taxonomy for displaying information in pervasive and collaborative environments. In: Proceedings of the 3rd Symposium of Ubiquitous Computing and Ambient Intelligence, Advances in Soft Computing. Springer, Salamanca, Spain.
Eick, S., Karr, A., 2002. Visual scalability. J. Comput. Graph. Stat. 11, 22–43.
Thomas, J.J., Cook, K.A., 2005. Illuminating the Path: The Research and Development Agenda for Visual Analytics. IEEE CS Press.
Hervás, R., 2009. Context Modeling for Information Visualization in Intelligent Environments. PhD Thesis, Castilla-La Mancha University, Spain.
Garrett, J.J., 2002. The Elements of User Experience: User-centered Design for the Web. New Riders Publishing.
Norman, D.A., 1993. Things that Make us Smart. Addison-Wesley, Massachusetts, USA.
Tversky, B., Morrison, J.B., Bétrancourt, M., 2002. Animation: can it facilitate? Int. J. Hum.–Comput. Stud. 57, 247–262.
van Rijsbergen, K., 2004. The Geometry of Information Retrieval. Cambridge University Press.
Vuolle, M., Tiainen, M., Kallio, T., Vainio, T., Kulju, M., Wigelius, H., 2008. Developing a questionnaire for measuring mobile business service experience. In: Proceedings of the International Conference on Human Computer Interaction with Mobile Devices and Services (MobileHCI), New York, NY, USA.
Shackel, B., 2009. Usability: context, framework, definition, design and evaluation. J. Interact. Comput. 21, 21–31.
Jung, E.-C., Sato, K., 2005. A framework of context-sensitive visualization for user-centered interactive systems. In: Proceedings of the 10th International Conference on User Modeling, Edinburgh, UK, July 24–29, 2005.
Clerckx, T., Coninx, K., 2005. Towards an integrated development environment for context-aware user interfaces. In: Davies, N., Kirste, T., Schumann, H. (Eds.), Mobile Computing and Ambient Intelligence: The Challenge of Multimedia, Dagstuhl Seminar.
Limbourg, Q., Vanderdonckt, J., Michotte, B., Bouillon, L., López-Jaquero, V., 2005. USIXML: a language supporting multi-path development of user interfaces. In: Bastide, R., Palanque, P., Roth, J. (Eds.), Engineering Human Computer Interaction and Interactive Systems. Springer, pp. 89–107.
Lohmann, S., Kaltz, J.W., Ziegler, J., 2006. Dynamic generation of context-adaptive web user interfaces through model interpretation. In: Proceedings of Model Driven Design of Advanced User Interfaces, Genova, Italy.
Luyten, K., den Bergh, J.V., Vandervelpen, C., Coninx, K., 2006. Designing distributed user interfaces for ambient intelligent environments using models and simulations. J. Comput. Graph. 30, 702–713 (special issue on Mobile Computing and Ambient Intelligence).
Ressel, C., Ziegler, J., Naroska, E., 2006. An approach towards personalized user interfaces for ambient intelligent home environments. In: Proceedings of the 2nd IET International Conference on Intelligent Environments, Athens, Greece.
Nichols, J., Myers, B.A., Higgins, M., Hughes, J., Harris, T.K., Rosenfeld, R., Pignol, M., 2002. Generating remote control interfaces for complex appliances. In: Proceedings of the 15th Annual ACM Symposium on User Interface Software and Technology (UIST '02), Paris, France.
Sebe, N., Tian, Q., 2007. Personalized multimedia retrieval: the new trend? In: Proceedings of the 9th ACM SIGMM International Workshop on Multimedia Information Retrieval, Special Session on Personalized Multimedia Information Retrieval, Augsburg, Germany.
Abascal, J., Fernández de Castro, I., Lafuente, A., Cia, J.M., 2008. Adaptive interfaces for supportive ambient intelligence environments. In: Miesenberger, K., Klaus, J., Zagler, W.L., Karshmer, A.I. (Eds.), Computers Helping People with Special Needs, LNCS, vol. 5105. Springer, Heidelberg, pp. 30–37.
Wang, Y., Eliens, A., van Riel, C., 2006. Content-oriented presentation and personalized interface of cultural heritage in digital dossiers. In: Proceedings of the 1st International Conference on Multidisciplinary Information Sciences and Technologies, Merida, Spain.
Gilson, O., Silva, N., Grant, P.W., Chen, M., 2008. From web data to visualization via ontology mapping. Comput. Graph. Forum 27 (3), 959–966.
Hartmann, M., Zesch, T., Mühlhäuser, M., Gurevych, I., 2008. Using similarity measures for context-aware user interfaces. In: Proceedings of the 2nd International Conference on Semantic Computing, IEEE, Los Alamitos, USA.
Cantador, I., Bellogín, A., Castells, P., 2008. Ontology-based personalised and context-aware recommendations of news items. In: Proceedings of the Conference on Web Intelligence and Intelligent Agent Technology, IEEE/WIC/ACM, Sydney, Australia.
Gajos, K.Z., 2008. Automatically Generating Personalized User Interfaces. PhD Thesis, University of Washington, Seattle, WA, USA.
Nichols, J., Rothrock, B., Chau, D.H., Myers, B.A., 2006. HUDDLE: automatically generating interfaces for systems of multiple connected appliances. In: Proceedings of the 19th Annual ACM Symposium on User Interface Software and Technology, New York, NY, USA.
Ardissono, L., Gena, C., Torasso, P., Bellifemine, F., Chiarotto, A., Dino, A., Negro, B., 2004. User modeling and recommendation techniques for personalized electronic program guides. In: Ardissono, L., Kobsa, A., Maybury, M. (Eds.), Personalized Digital Television: Targeting Programs to Individual Users. Kluwer Academic Publishers.

Dey, A.K., 2001. Understanding and using context. Pers. Ubiquit. Comput. 5 (1), 4–7.
Bravo, J., Fuentes, C., Hervás, R., Villarreal, V., 2009a. Enabling NFC technology in hospital wards. In: Proceedings of the International IEEE Conference EUROCON 2009, Saint Petersburg, Russia.
Bravo, J., Hervás, R., Casero, G., Peña, R., Vergara, M., Nava, S.W., Chavira, G., Villarreal, V., 2009b. Enabling NFC technology to public services. In: Mikulecky, P., et al. (Eds.), Ambient Intelligence Perspectives. IOS Press, pp. 58–65.
Horrocks, I., Patel-Schneider, P.F., Boley, H., Tabet, S., Grosof, B., Dean, M., 2004. SWRL: A Semantic Web Rule Language Combining OWL and RuleML. W3C Member Submission, May 2004. <http://www.w3.org/Submission/SWRL/> (accessed 17.11.2009).
Vazquez, J.I., López de Ipiña, D., 2008. Social devices: autonomous artifacts that communicate on the Internet. In: Proceedings of the 1st Conference on Internet of Things, Zurich, Switzerland.
Butter, T., Aleksy, M., Bostan, P., Schader, M., 2007. Context-aware user interface framework for mobile applications. In: Proceedings of the 27th IEEE International Conference on Distributed Computing Systems Workshops (ICDCSW 2007), Toronto, Canada.
Chaari, T., Laforest, F., Celentano, A., 2007. Adaptation in context-aware pervasive information systems: the SECAS project. Int. J. Pervas. Comput. Comm. 3 (4), 400–425.
Golemati, M., Halatsis, C., Vassilakis, C., Katifori, A., Peloponnese, U., 2006. A context-based adaptive visualization environment. In: Proceedings of the Conference on Information Visualization, IEEE Computer Society, Washington, USA.
Noy, N.F., 2004. Semantic integration: a survey of ontology-based approaches. SIGMOD Rec. 33 (4), 65–70.
