
SmartProducts

Proactive Knowledge for SmartProducts

SmartProducts
D.5.1.3: Final Description of Interaction Strategies
and Mock-Up UIs for Smart Products

WP 5
Deliverable Lead: VTT
Contributing Partners:
CRF, EADS, PRE, TUD, OU

Delivery Date: 31.01.2011


Dissemination Level: Public
Version 1.0

Copyright SmartProducts Consortium 2009-2012

SmartProducts

WP5 Multimodal User Interfaces and Context-Aware User Interaction

Deliverable

D.5.1.3: Final Description of Interaction Strategies and Mock-Up UIs for Smart Products

Deliverable Lead
Jani Mäntyjärvi, VTT, Jani.Mantyjarvi@vtt.fi

Contributors
Elena Vildjiounaite, VTT, Elena.Vildjiounaite@vtt.fi
Ilkka Niskanen, VTT, Ilkka.Niskanen@vtt.fi
Marcus Ständer, TUD, staender@tk.informatik.tu-darmstadt.de
Aba-Sah Dadzie, USFD, a.dadzie@dcs.shef.ac.uk
Jerome Golenzer, EADS, jerome.golenzer@eads.net
Vanessa Lopez, OU, v.lopez@open.ac.uk
Boris de Ruyter, PRE, boris.de.ruyter@philips.com
Julien Mascolo, CRF, Julien.mascolo@crf.it

Internal Reviewer
Andreas Budde, SAP, andreas.budde@sap.com

Disclaimer
The information in this document is provided "as is", and no guarantee or warranty is given
that the information is fit for any particular purpose. The above referenced consortium
members shall have no liability for damages of any kind including without limitation direct,
special, indirect, or consequential damages that may result from the use of these materials
subject to any liability which is mandatory due to applicable law. Copyright 2011 by VTT, TUD,
USFD, EADS, PRE, CRF, OU.

SmartProducts_D_5_1_3.doc
Copyright SmartProducts Consortium 2009-2012

Dissemination Level: Public

Page 1

SmartProducts

WP5 Multimodal User Interfaces and Context-Aware User Interaction

Deliverable

D.5.1.3: Final Description of Interaction Strategies and Mock-Up UIs for Smart Products

Table of Contents
LIST OF FIGURES .......................................................... 4
LIST OF TABLES ........................................................... 7
EXECUTIVE SUMMARY ........................................................ 8
1  INTRODUCTION .......................................................... 9
2  INTERACTION STRATEGIES ............................................... 11
   2.1  Description of Interaction Strategies .......................... 11
   2.2  Interaction Types ............................................... 13
   2.3  A Model for Generating UIs from Workflows ...................... 15
        2.3.1  Displaying UIs ........................................... 17
        2.3.2  MUI/SUI based Mock-Up ................................... 18
   2.4  Formal Model for Using Interaction Types ....................... 24
        2.4.1  Definition of Sets and the States ....................... 24
        2.4.2  Definition of Functions ................................. 25
        2.4.3  Definition of Operations ................................ 27
3  MOCK-UP UIS .......................................................... 30
   3.1  Default Functionality Strategy ................................. 31
   3.2  Guide the User Strategy ........................................ 34
   3.3  Ask the User for Confirmation Strategy ......................... 44
   3.4  Advice/Notify Strategy ......................................... 51
   3.5  Response to the User's Request Strategy ........................ 55
   3.6  Explain Product Actions Strategy ............................... 58
   3.7  Acknowledge Task Strategy ...................................... 67
   3.8  Short-term Customisation Strategy .............................. 68
   3.9  Long-term Customisation Strategy ............................... 70
        3.9.1  Manual acquisition of user profile ...................... 70
        3.9.2  Learning ................................................ 72
        3.9.3  Ask for the user's feedback strategy .................... 74
4  REQUIREMENTS ......................................................... 77
5  CONCLUSION AND OUTLOOK ............................................... 84
ANNEX ................................................................... 85


A GLOSSARY ................................................................................................................................................... 86
B LIST OF ACRONYMS ................................................................................................................................. 88
REFERENCES ..................................................................................................................................................... 89


List of Figures
Figure 1: Categories of Interaction Types from [Ständer-2010] ................................ 13
Figure 2: Master and Slave UIs from the Cocktail Companion demonstrator ........................... 16
Figure 3: Sequential SUIs........................................................................................................... 17
Figure 4: Parallel SUIs ............................................................................................................... 17
Figure 5 : Cocktail Companion MUI/SUI usage ........................................................................ 19
Figure 6: Cocktail Companion MUI of the login activity .......................................................... 19
Figure 7: The MUI of the login screen in the real setting in the Cocktail Companion .............. 20
Figure 8: Cocktail Companion MUI welcome screen ................................................................ 21
Figure 9: Cocktail Companion MUI of the cocktail selection activity ...................................... 21
Figure 10: Cocktail Companion MUI for a cocktail recipe without any SUIs ........................... 22
Figure 11: Cocktail Companion MUI of the cocktail preparation with the steps as SUIs ......... 22
Figure 12: Warning when too much vodka has been added in the Cocktail Companion ......... 23
Figure 13: The Cocktail Companion SUI for measuring the amount of filled-in vodka in real
setting ......................................................................................................................................... 23
Figure 14: The Menu showing the sub-menu for browsing and searching the origami folds
database ...................................................................................................................................... 31
Figure 15: Task selection in origami application for a large screen........................................... 32
Figure 16: Task selection in the cooking assistant for a large screen ........................................ 33
Figure 17: User login for a small screen in the car assistant ...................................................... 34
Figure 18: Recipe guiding in a large screen in the cooking assistant......................................... 35
Figure 19: Recipe guiding in a small screen in the cooking assistant ........................................ 36
Figure 20: Guiding for a vehicle component mounting in automotive domain ......................... 36
Figure 21: Snow chain mounting Step x in automotive domain ................................................ 37
Figure 22: Snow chain mounting context sensing in automotive domain.................................. 38
Figure 23: Dual visualisation and synchronisation of displays in automotive domain .............. 38
Figure 24: Guiding for the "replace wiper" task in the car assistant ................................ 39
Figure 25: Guiding in aircraft assembly ..................................................................................... 40
Figure 26: Steps overview in aircraft assembly ......................................................................... 41
Figure 27: Guiding via images and text in origami application ................................................. 42
Figure 28: Guiding via videos in origami application ................................................................ 42
Figure 29: Beginner / Expert process sequence in aircraft assembly ......................................... 43
Figure 30: Instructions step in Beginner mode in aircraft assembly ........................... 44
Figure 31: Notification regarding detection of new snow chains on-board and asking for eLUM
update confirmation .................................................................................................................... 45


Figure 32: Asking the user for confirmation in the car assistant................................................ 45
Figure 33: Instructing the user to place a cup at the coffee dispenser in the Cocktail Companion
.................................................................................................................................................... 46
Figure 34: Details of instructing the user to place a coffee cup in the Cocktail Companion ..... 47
Figure 35: Cocktail Companion actively trying to get feedback from the user ......................... 47
Figure 36: Tools and material collection in aircraft assembly ................................................... 48
Figure 37: Smart Tool problem report in aircraft assembly ....................................................... 48
Figure 38: "Abort" procedure in aircraft assembly .................................... 49
Figure 39: "Retry" procedure in aircraft assembly..................................... 49
Figure 40: First reminder in the cooking assistant ..................................................................... 50
Figure 41: Repeated reminder in the cooking assistant .............................................................. 50
Figure 42: Asking the user to confirm profile update in the origami application ...................... 51
Figure 43: Work assignment in aircraft assembly ...................................................................... 52
Figure 44: A car servicing advice for cold climate in the car assistant ...................................... 53
Figure 45: A cooking advice for a hypertonic user in the cooking assistant .............................. 53
Figure 46: A cooking advice for weight watchers in the cooking assistant ............................... 54
Figure 47: GUI-based notification that meal is ready in the cooking assistant .......................... 54
Figure 48: Response to the user request for detailed information in the car assistant ............... 55
Figure 49: Different views on the task in aircraft assembly ....................................................... 56
Figure 50: Highlighted entries in the History Log browser in origami application, based on a
user request to View Log, in order to decide whether or not to accept the system prompt to
update their profile ..................................................................................................................... 57
Figure 51: Recommendation for the user after changing presentation modality preference to
Video Only in origami application .......................................................................................... 57
Figure 52: Question answering tool interface ............................................................................ 58
Figure 53: Introducing a sensor-augmented spoon in the cooking assistant .............................. 60
Figure 54: Explaining and reminding the availability of context-based support in automotive
domain ........................................................................................................................................ 60
Figure 55: Explanation regarding transition to the next step of instructions on a large screen in
the cooking assistant ................................................................................................................... 61
Figure 56: Explanation regarding transition to the next step of instructions on a small screen in
the car assistant ........................................................................................................................... 61
Figure 57: Explanation regarding disabling of audio output in the cooking assistant ............... 62
Figure 58: Explanation regarding message triggering in the cooking assistant ......................... 62
Figure 59: Explanation regarding a danger to disable reminders in the cooking assistant ........ 63
Figure 60: Explanation regarding a way to combine preferences of multiple users for a large
screen in the cooking assistant ................................................................................................... 63


Figure 61: Detailed explanation regarding a way to combine preferences of multiple users for a
large screen in the cooking assistant .......................................................................................... 64
Figure 62: Detailed explanation regarding a way to combine preferences of multiple users for a
small screen in the cooking assistant .......................................................................................... 64
Figure 63: The Menu showing the sub-menu for browsing the history log in origami
application .................................................................................................................................. 65
Figure 64: Browsing a user's history; the detail is shown for the initial (bottom row) and two
subsequent entries for the user's profile in origami application................................. 65
Figure 65: Recommendation in origami application, based on system settings only ................ 66
Figure 66: Recommendation in origami application, based on system settings only ................ 66
Figure 67: Acknowledging that the smart wrench is ready for fixing in aircraft assembly ....... 67
Figure 68: Acknowledgment of eLUM updating performed in automotive domain ................. 67
Figure 69: The Menu showing the sub-menu for setting user and system options in origami
application .................................................................................................................................. 70
Figure 70: Eliciting the user's interaction preferences in origami application .......... 71
Figure 71: Recommendation in origami application, based on user profile............................... 72
Figure 72: Dialog for updating system defaults, showing options for long-term customisation
for user interaction in origami application ................................................................................. 73
Figure 73: Dialog for updating system defaults in origami application, showing options for
influencing long-term customisation for user interaction, by setting how the system is to log
user actions. ................................................................................................................................ 74
Figure 74: Recommendation feedback form in origami application .......................................... 75
Figure 75: Asking for the user's feedback and consequent configuration in the cooking
assistant ...................................................................................... 76


List of Tables
Table 1: Sets of the Interaction Model ....................................................................................... 25
Table 2: Functions of the Interaction Model .............................................................................. 27
Table 3: Operations of the Interaction Model ............................................................................ 29
Table 4: Fulfilment of requirements from [D5.1.1] ................................................................... 83


Executive Summary
This document presents the final interaction strategies and mock-up user interfaces for
smart products. First we describe interaction strategies: a conceptual framework for
describing the behaviour of smart products in different situations. The main interaction
strategies for smart products are the following:
• Provide default functionality: allow users to give a task to a smart product
• Guidance: help the user to achieve a task that consists of multiple steps
• Ask the user for confirmation: ask the user to confirm his/her intentions or
  situational changes
• Advice/notification: inform the user about detected situational changes, or
  provide possibly useful task-related information
• Response to the user request: provide the user with information stored in the
  smart product
• Acknowledge received task: confirm to the user that the smart product knows its task
• Explain product actions: provide the reasons for product behaviour
• Short-term customisation: allow the user to quickly modify the product behaviour
  for the current interaction session
• Long-term customisation: allow the user to modify the product behaviour in the
  long term; if needed, with the help of the "Ask for the user's feedback" strategy:
  asking how the user liked the behaviour of the smart product

Then we describe the Interaction Types: patterns of conveying messages to the users.
They are characterised by their level of visibility to the users, by the type of required or
desired user response, and by the urgency of the expected user response. Then, using
the guiding interaction strategy as an example, we describe how user interfaces are built
and how interaction types are selected.

Next we present multimodal user interfaces illustrating how the different interaction
strategies can be realised, and list the main interface elements required for their realisation.
All interfaces presented in this document are parts of application prototypes, and the
majority of them were tested in user studies and accepted by the test subjects.
The mock-ups were implemented in several domains: cooking, automotive, aircraft
assembly and entertainment; some of the mock-ups were designed for large screens,
some for small screens and some for both. As interaction with smart products
depends on the specifics of the application domain and the devices, the presented
mock-ups do not aim to prescribe how the interface layout should look; instead,
they aim at presenting the required interface elements and their functions.


1 Introduction
The initial version of the interaction strategies and mock-ups was presented in [D5.1.2]. At that
time the mock-ups were not parts of functional application prototypes; they were visions of how
these applications should behave and how their interfaces should look. Since then, the
application prototypes have been implemented and tested in several user studies. The feedback from
the first study was used to update the prototypes, and the updated prototypes were tested
with users again. The findings of the user studies confirmed the feasibility of the interaction
strategies listed in [D5.1.2] and allowed us to add three more strategies: explanation of
product actions, short-term customisation and long-term customisation. Consequently, one of
the initially proposed interaction strategies, "ask for the user feedback", became part of the
long-term customisation strategy, because asking for user feedback is needed only for
learning user preferences. The results of the mock-up development and user studies
[D5.5.1] reinforce the importance of customising the interaction to the users and their contexts
(the aim of SmartProducts), either using the same base interface or different interfaces and
(underlying) systems, each of which corresponds to the user and task requirements.
Interaction strategies can often be used in combination with each other; for example, during
guiding it is often useful to explain the actions of the smart products, and to provide
users with the means to customise (in the short term or in the long term) the behaviour of smart
products along with the explanations. Explanations can also be provided upon user request; in
this case "response to the user request" is combined with the "explain product actions" strategy.
Work on the explanation strategy was done in close cooperation with the work on the SmartProducts
Monitor [D3.4.1].
This deliverable does not aim at an exhaustive presentation of all interaction modalities and
strategies in all possible use cases; instead, it lists the main interface elements required for
the realisation of the different interaction strategies, and presents implementation examples. The
mock-ups demonstrate the difference between the usage (selected parts of the PRE and CRF scenarios)
and manufacturing (selected parts of the EADS scenario) stages of the smart products lifecycle: at
the usage stage it is necessary to provide various customisation options, including options to
satisfy the preferences of several family members or friends involved in the same task. At the
manufacturing stage it is necessary to ensure the efficiency and correctness of operations; the
provided customisation functionality is much more limited and does not aim at satisfying the
preferences of multiple users, because every aircraft assembly operator uses his/her own device.
The document is organised as follows. The next chapter describes the main interaction
strategies and interaction types, and then the generation of interfaces from workflows that utilise
interaction types. Chapter 3 presents the mock-ups for each interaction strategy (the
screenshots of applications in different domains, used for testing the realisation of the


interaction strategy in this domain and on this device), and the last chapter presents
conclusions. The mock-ups presented in Chapter 3 do not have exactly the same functionality,
as they were developed for studying different aspects of the interaction between users and smart
products. For example, one of the mock-ups focuses on manual acquisition and learning of user
preferences: initial manual acquisition, comparison of these initial preferences with the choices
made by the users in the course of interaction with the smart products, and updating the
preferences in the course of interaction. Other mock-ups focus on helping the users to
accomplish practical tasks and to customise the smart products, but do not necessarily employ
learning of user preferences. These mock-ups also have different functionality, sometimes due
to the specifics of their domains (for example, mock-ups in the cooking domain must allow for the
users' desire to relax or to be creative during cooking, while in the aircraft assembly domain
relaxation or extra creativity may be dangerous). Sometimes the differences between the mock-ups
are due to their purposes: for example, we present two mock-ups in the automotive domain. The
first is focused on the realisation of the interaction features described in the SmartProducts
scenario, while the second was developed for studying differences between user perception of
various interaction features in the cooking and automotive domains. It has exactly the same UI
as the corresponding cooking mock-up, because otherwise differences in GUI appearance could
affect user opinions. However, we present all existing mock-ups here, because all of them are
successful examples of how interaction strategies can be implemented.


2 Interaction Strategies
As stated in the Description of Work of the SmartProducts project, one focus of WP5 lies on
"concepts how humans can interact with proactive knowledge and their technical
implications". Thus a twofold approach has been chosen to describe interaction strategies. The
first concept describes a set of different meta Interaction Strategies (IS), like guiding or
notifying the user. Different strategies have an impact on the components of the platform that
are used, as will be described in Section 2.1. The second concept, the concept of Interaction
Types (IAT), mainly addresses the "Adaptive interaction based on proactive knowledge" problem
(also stated in the Description of Work) and will be explained in Section 2.2. Finally, the
theoretical and practical results of how the concept of IATs is formalised and later realised
in the platform will be shown in Section 2.4.
These building blocks together form a conceptual approach for understanding and handling the
interaction between users and products.
2.1 Description of Interaction Strategies

In [D5.1.2], the initial set of ISs was introduced. These strategies resulted from the
analysis of the scenario and interaction requirements and formed the scope of our architecture.
In the following, we provide the final set of Interaction Strategies and describe how they
are reflected in the SmartProducts platform.

• Guide the user: explain which actions the user should perform in order to achieve his/
her goal (for example, explain how to assemble snow chains). This is the most central
IS. Analysis of the scenarios revealed that this strategy can be realised by procedural
knowledge, more exactly, workflows. Thus, interactive (context-aware) workflows
[Ständer-2011] have become fundamental building blocks of the smart products.

• Ask the user for confirmation: ask the user if he/she has performed some action (for
example, when it cannot be determined from context data), or offer help and ask
whether the user wants it (for example, ask the user whether he/she wants the product
to execute a certain task). This is product-initiated interaction, and since this type of
interaction may be annoying, a three-tier approach to automation in smart
environments was proposed [Ständer-2010]. If a product has to recognise user actions
or to execute a task, it should first try to do it by itself. If this is not possible, it should
try to find related products which could execute the task. If this also fails, the user has
to be approached.
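The three-tier escalation can be sketched as follows. This is an illustrative sketch only; the class and function names are our assumptions, not part of the SmartProducts platform API.

```python
from typing import Callable, Iterable

# Sketch of the three-tier automation approach from [Ständer-2010]:
# a product first tries to handle a task itself, then delegates to
# related products, and only as a last resort approaches the user.

class Product:
    def __init__(self, name: str, capabilities: set[str]):
        self.name = name
        self.capabilities = capabilities

    def can_execute(self, task: str) -> bool:
        return task in self.capabilities


def handle_task(task: str, product: Product,
                related_products: Iterable[Product],
                ask_user: Callable[[str], None]) -> str:
    # Tier 1: try to execute the task locally.
    if product.can_execute(task):
        return f"{product.name} executes '{task}' itself"
    # Tier 2: look for a related product that can execute it.
    for other in related_products:
        if other.can_execute(task):
            return f"{product.name} delegates '{task}' to {other.name}"
    # Tier 3: approach the user (product-initiated interaction).
    ask_user(f"Please perform '{task}' manually")
    return f"user asked to perform '{task}'"


# Example: a coffee machine that cannot grind beans delegates to a grinder.
machine = Product("coffee machine", {"brew"})
grinder = Product("grinder", {"grind"})
print(handle_task("grind", machine, [grinder], ask_user=print))
# -> coffee machine delegates 'grind' to grinder
```

The point of the ordering is that the user is only interrupted when neither the product itself nor any related product can complete the task.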

• Advice/Notify: inform about situational changes relevant to the user's tasks or
interests, as well as about problems: for example, the smart product can warn the user


that the plate in the kitchen is getting too warm. This is also product-initiated
interaction, but it is usually less annoying because it does not require a user response.

• Acknowledge task: inform the user that the smart product understood what it should do
and will perform the task. This IS is based on traditional user interface theory, which
has shown that users prefer an immediate response indicating that a program (or product,
in this case) understood what it should do and will perform the task [Spiekermann-2007].

• Response to the user's request: provide the user with information stored in the
smart product, e.g. to give more details regarding the user's task, or to answer when the
coffee machine or a car was last serviced.

• Default functionality: the set of possible actions a product can usually provide. In
general, all these functions should be available at any time. For example, if the
user is approaching a coffee machine, he/she should be able to get all available types of
coffee and tea, even if the product knows that this user does not like coffee.

• Explanations of product actions: provide the user with the reasons for the product's
behaviour, for example, to explain that an interface change was caused by a recognised
event.

• Short-term customisation: allow the user to quickly configure certain features of the
smart product's behaviour for the current interaction session, for example, to
temporarily disable audio output.

• Long-term customisation: allow the user to configure various features of the smart
product's behaviour for the current and future interaction sessions, for example, to
disable audio output until the user explicitly permits it. Long-term customisation may
utilise learning, and for learning one more interaction strategy may be useful:
  o Ask for the user's feedback: if the user has performed a task, the smart product
    may ask her for feedback to find out whether its current behaviour is suitable or
    whether the user is unsatisfied. This feedback can be used for updating the user
    preferences.


2.2 Interaction Types

[Figure omitted: the interaction types (Warning, Predicate Request, Request, Notification, Predicate Prompt, Prompt, Optional Predicate Prompt, Optional Prompt, Phrase) arranged by the importance of the main message and by the expected user response (product input): none, simple (yes/no), complex.]

Figure 1: Categories of Interaction Types from [Ständer-2010]

While the ISs form a conceptual framework for describing different types of situations, the
IATs are much closer to the actual interaction design. The focus of interaction of the
SmartProducts platform does not lie on free chat applications where the user talks with the
environment about random topics. Instead, products shall support the user in fulfilling his
goals, which often results in guidance realized by a workflow model in the background.
However, to enable the smart product to figure out when and how to approach the user, the
concept of IATs was introduced. As described in [D5.1.2], they have been derived in
accordance with the interaction models of speech act theory [Austin-2000] and
communicative acts [FIPA-2000].
For the reader's convenience, we now provide a copy of the IATs from [D5.1.2], which has
been slightly enhanced for a better understanding.
This instantiation focuses on practical product-initiated computer-to-human interaction to be
used in smart environments. The different types of interaction elements can be arranged
according to the importance of the main message for the user and the expected user response,
as shown in Figure 1. Concerning the importance, we differentiate between interaction that can
be omitted, that can be deferred and that cannot be deferred. The expected feedback can be
split up into no expected feedback, simple predicate feedback (yes/no) and complex feedback. Please
note that one of our core premises is that this feedback need not only be explicit feedback, as
provided e.g. by the user filling out visual forms or pressing buttons; we also incorporate
implicit feedback in the form of context. In these cases the definition of simple and complex
feedback holds as well. Example: when the user has to pick up a certain tool and the
SmartProducts platform highlights it, e.g. by letting the tool use a blinking LED, the platform
can expect simple yes/no feedback by recognizing whether the user picked up the tool or not.
In the corresponding complex case, the platform tells the user to pick up some tool of a
category and tries to figure out which tool he/she picked up.
These types can then be used to determine the modalities for interacting with the user in the
most suitable way. Further, it might be necessary to allow changing the type during runtime:
the subject of a notification which is disregarded for too long might become a critical issue
after some time.
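Such a runtime type change can be sketched as a simple lookup (hypothetical names; only the escalation rules PP to PR and P to R are taken from the formal model in Section 2.4):

```python
# Sketch: a deferrable interaction type whose deadline has expired is
# escalated to its urgent counterpart, mirroring the PP -> PR and P -> R
# rules of onDeadlineExpired in the formal model.

ESCALATION = {
    "PredicatePrompt": "PredicateRequest",  # PP -> PR
    "Prompt": "Request",                    # P  -> R
}

def escalate(ui_type, now, deadline):
    """Return the (possibly escalated) type of a queued UI."""
    if now >= deadline and ui_type in ESCALATION:
        return ESCALATION[ui_type]
    return ui_type

assert escalate("Prompt", now=10, deadline=5) == "Request"
assert escalate("Prompt", now=3, deadline=5) == "Prompt"
```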
Below is a brief description of these types, together with examples for the case of a smart
coffee maker (its description can be found in [Aitenbichler-2007]):
Phrase: Phrases only convey interaction of low importance, which can also be easily ignored,
like greetings or wishes.
Example: "Welcome back Charly", see Figure 8.
Optional Predicate Prompt: Optional Predicate Prompts can be used to get predicate (yes/no)
values. If no response is recognized, a predefined default value will be used.
Example: Figure 52 shows an example where the user can select to use a sensor-augmented
stirring spoon. If no reaction is received, the default behaviour is not to use it.
Optional Prompt: Optional Prompts allow more complex feedback. They also contain default
values that are used if no user feedback is received for some time.
Example: One example is shown in Figure 17, where a list of users can be selected. If no
additional user is selected, the default, in this case the recognized user "Lena", will be
used in the further progress.
Notification: The content of Notifications is related to all general information available and
can be deferred for some time.
Example: Figure 44 shows a notification about how to enhance the lifetime of wipers.
While this is not important for the overall progress, it is valuable knowledge for the user.
Predicate Prompt: Predicate Prompts are deferrable and expect simple yes/no feedback.
Example: Figure 40 shows such a case. At first the question is not that urgent and can be
deferred; if it gets more urgent, its type is changed to a Predicate Request.
Prompt: Prompts can be used to ask for more complex feedback. This type is also deferrable.
Example: The further process in Figure 21 depends on the type of snow chains used. Thus, it is
important to find out which snow chains the user wants to use. Such kind of information
could be asked for explicitly or implicitly. In such cases, the system not only needs to find
out that the user picked up the snow chains; it also needs to sense and process the correct
type. Thus, the expected feedback is much more complex.
Predicate Request: Predicate Requests describe the situation where a smart product asks for
non-deferrable, simple feedback. This type differs from the Predicate Prompt only in
urgency.
Example: The difference in urgency can be communicated by audio messages
and by visual means; compare Figure 34 (details of instructing the user to place a coffee
cup in the Cocktail Companion) and Figure 35 (the Cocktail Companion actively trying to
get feedback from the user).
Request: Like the Predicate Request, Requests require timely feedback. The expected
feedback can be more complex. This type differs from the Prompt only in urgency.
Example: Figure 16 shows the selection of the next task. This can be important/urgent
for the further progress. Thus, the task selection can be realized using a Request. In this
case, the feedback consists of a concrete selected task that shall be executed next.
Warning: It is crucial that Warnings are recognized by the user as soon as possible; thus they
cannot be delayed.
Example: Warnings can be used for different strategies, e.g. see Figure 41. Ignoring the
question for too long might be dangerous.
2.3 A Model for Generating UIs from Workflows

Workflows are one way to formalize procedural knowledge about tasks which can be
processed by, or with the help of, smart products. In 1999, the Workflow Management Coalition
(WfMC) defined a workflow as "[t]he automation of a business process, in whole or part,
during which documents, information or tasks are passed from one participant to another for
action, according to a set of procedural rules" [WfMC-1999]. As the user study from [D.5.5.1]
shows, users feel more comfortable while being guided if they have an overview of the steps
they have already processed or still have to process. W.l.o.g., the following methods will be
depicted with GUIs, but the statements also hold for other modalities like speech interfaces. The
easiest solution to create the required overview might be to just show all activities of the
workflow in one big list. However, this might cause serious confusion if the workflows describe
a more complex task containing many activities. Another approach might be to describe
exactly the UI to show. Since the UIs should represent the current state of the progress, e.g. by
highlighting the currently active steps, every predefined UI would need to contain the set of
steps and a mark indicating which ones have to be highlighted. This may look nice during runtime
but is not maintainable for the developers of workflows.
Thus, the concept of Master and Slave UIs (MUI/SUI) has been created. A MUI describes the
scope of a UI, while the SUIs describe certain sub-steps. In the SmartProducts demonstrator
"Cocktail Companion", for example, the user has to select a cocktail that she wants to mix. The
scope could in this case be the concrete cocktail, like the "Sweet Dreams". The SUIs are then
attached to the different steps like "adding ice" or "adding 5cl pineapple juice". The result is
then a screen as shown in Figure 2.

Figure 2: Master and Slave UIs from the Cocktail Companion demonstrator

So far, basic sequential and parallel combinations of MUIs and SUIs have been examined. If a
MUI is followed by a sequence of SUIs, the currently active activity can highlight its
representing UI (see Figure 3). If a MUI is followed by a parallel set of SUIs, all of them are
active and the execution order does not matter. Thus, all SUIs are marked as active (see Figure
4). In both cases, if the next MUI in a sequence is added, it replaces the old MUI together with
its SUIs.
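As a minimal sketch (hypothetical structure, not the actual platform code), the sequential/parallel activation rules could look like this:

```python
# Sketch of the MUI/SUI display model: a master UI owns a list of slave UIs.
# In a sequence only the current SUI is marked active; in a parallel set all
# SUIs are active at once.

class MasterUI:
    def __init__(self, scope, sub_uis, parallel=False):
        self.scope = scope
        self.sub_uis = list(sub_uis)
        self.parallel = parallel
        self.current = 0  # index of the active step in the sequential case

    def active_sub_uis(self):
        if self.parallel:
            return list(self.sub_uis)        # all SUIs marked active
        return [self.sub_uis[self.current]]  # only one SUI active

recipe = MasterUI("Sweet Dreams", ["add ice", "add 5cl pineapple juice"])
assert recipe.active_sub_uis() == ["add ice"]
recipe.current = 1
assert recipe.active_sub_uis() == ["add 5cl pineapple juice"]

tools = MasterUI("Select Tools", ["tool 1", "tool 2", "tool 3"], parallel=True)
assert tools.active_sub_uis() == ["tool 1", "tool 2", "tool 3"]
```

Replacing the old MUI together with its SUIs then simply amounts to discarding the whole `MasterUI` object when the next MUI arrives.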

[Figures omitted: workflow activities with attached MUIs and SUIs; in the sequential case only one SUI is active, in the parallel case all SUIs are active.]

Figure 3: Sequential SUIs

Figure 4: Parallel SUIs

2.3.1 Displaying UIs

The Interaction Manager (IAM) handles incoming and outgoing interaction as described in
[D5.4.1, D5.4.2]. It uses the loop from Listing 1 to select UIs from the UI queue that shall be
displayed next. In our case, a UI consists of three parts: (i) the ID of the UI to show, (ii) a
deadline for the UI and (iii) an Interaction Type. While the ID is used by later instances in the
UI processing chain (like the Multimodality Manager or the UI Adapter) to identify suitable
UIs, e.g. graphical or audible UIs, the deadlines and Interaction Types are directly used by the
Interaction Manager to select the next UI to display.
Please note that the sets, functions and operations are explained in Section 2.4. Further, it is
important to understand that the Interaction Manager only selects the UI that shall be
provided to the user, not the exact modality. As explained in [D5.4.1] and [D5.4.2], the
UI is only an abstract description and will be concretised by the Multimodality Manager and
the UI Adapter. Thus, operations like display(Z) are only responsible for selecting and
forwarding the UI to the components responsible for finally preparing and providing the UI.

Q = {};
D = {};
AA = {};
while (running) {
    // Select the element, which shall be displayed,
    // from the queue, if available
    if (|Q| >= 1) {
        display(Z);
    }
    SLEEP UNTIL {
        a workflow activity with a UI changed its state
        OR Q changes (maybe a UI is queued from outside the
            workflows, thus, recheck what to display)
        OR a registered deadline for a UI expires }

    // Trigger the set changes and add all required UIs to the queue
    if (workflow activity Ai is started) {
        onActivate(Ai, Z);
    } else if (workflow activity Ai is completed) {
        onDeactivate(Ai, Z);
    } else if (deadline expired for UI Ui) {
        onDeadlineExpired(Ui, Z);
    }
}

Listing 1: Main Processing Loop of the Interaction Manager

As long as the Interaction Manager is running, it selects the most urgent UI from the queue and
waits either for a workflow to continue, for a UI to be triggered externally or for a deadline of a
UI to expire, which also influences the real-time display order. After such a trigger has occurred,
the Interaction Manager takes the appropriate actions and rechecks which UI shall be shown
for the new situation.
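A simplified, single-threaded rendering of this loop could look as follows (the event tuples, handler names and the toy display policy are assumptions for illustration, not the project's API):

```python
# Sketch of the Interaction Manager loop from Listing 1: each trigger event
# updates the UI queue via the matching handler, after which the manager
# rechecks which UI shall be shown. A finite event stream replaces SLEEP UNTIL.

def interaction_manager(events, state, display, handlers):
    shown = []
    for kind, payload in events:
        # Dispatch the trigger and update the UI queue accordingly.
        if kind == "activity_started":
            handlers["onActivate"](payload, state)
        elif kind == "activity_completed":
            handlers["onDeactivate"](payload, state)
        elif kind == "deadline_expired":
            handlers["onDeadlineExpired"](payload, state)
        # Recheck which UI shall be shown for the new situation.
        if state["Q"]:
            shown.append(display(state))
    return shown

state = {"Q": []}
handlers = {
    "onActivate": lambda ui, s: s["Q"].append(ui),
    "onDeactivate": lambda ui, s: s["Q"].remove(ui),
    "onDeadlineExpired": lambda ui, s: s["Q"].remove(ui),
}
display = lambda s: s["Q"][-1]  # toy policy: show the newest queued UI
events = [("activity_started", "login"),
          ("activity_started", "select_cocktail"),
          ("activity_completed", "login")]
shown = interaction_manager(events, state, display, handlers)
```

In the real IAM, `display` would apply the priority ordering of the formal model rather than this toy newest-first policy.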
2.3.2 MUI/SUI-based Mock-Up

As already briefly introduced, the demonstrator of the Cocktail Companion was partly created
as a mock-up for representing and testing the MUI/SUIs. In this section, we provide a short
description of the Cocktail Companion together with some screenshots and images taken during usage.
The purpose of the Cocktail Companion is to assist the user in executing tasks, in this case
preparing cocktails. The tasks are modelled as workflows, see Figure 5. When the user logs
in, the Cocktail Companion greets the user and offers a set of cocktails. So far, every activity is
assumed to provide a MUI, thus fully replacing previous UIs. The user selects the desired
cocktail and the Cocktail Companion guides her through a step-by-step preparation process.
These step-by-step descriptions can be realized by the combination of MUIs and SUIs attached
to the activities of the workflows, as depicted in Figure 5. We realized a set of different
cocktails to make the UI more convincing.

Figure 5: Cocktail Companion MUI/SUI usage

The following figures show some of the central UIs of the Cocktail Companion, followed by
some pictures showing the UI during usage.

Figure 6: Cocktail Companion MUI of the login activity

Figure 7: The MUI of the login screen in the real setting in the Cocktail Companion

The login UI is realized by a single MUI (of type Prompt), as shown in Figure 5 to Figure 7.
When the workflow is activated and the Login activity is started, the UI for logging in is
queued in the Interaction Manager. The IAM checks which UI to display and sends a request to
the responsible components (here, the Multimodality Manager, which selects a GUI). Once
logged in, the workflow completes the login activity and switches to the next activity: Select
Cocktail. This activity is also annotated with a MUI and thus the IAM replaces the previous
UI in the queue, which means that the old UI is removed and the new UI is added. Figure 9
shows the UI of the newly active activity, which is again of the type Prompt.

Figure 8: Cocktail Companion MUI welcome screen

Figure 9: Cocktail Companion MUI of the cocktail selection activity

The next figures show the usage of combined MUIs/SUIs. While the previous UIs consisted of
one description each, this UI is assembled from the MUIs/SUIs. Figure 10 shows the MUI of such a
recipe. Note that the steps of the recipe are not listed inside the MUI itself; they are
generated from the different steps the user has to process. Besides technical details like the ID
of the UI, which is used for internal management, the description contains data like a short
and/or a longer, detailed description, or media like pictures. Figure 11 shows a sequential
ordering of five SUIs, providing the list of steps for making a certain cocktail. In this case, one
activity is marked as active, having black font, while the others are greyed out.

Figure 10: Cocktail Companion MUI for a cocktail recipe without any SUIs

Figure 11: Cocktail Companion MUI of the cocktail preparation with the steps as SUIs

For the current version of the Cocktail Companion, only very limited error handling has
been realized. If, e.g., the user adds too much of a certain ingredient, the visualization changes
its colour, as shown in Figure 12. Optionally, a more obtrusive warning could also be
initiated, depending on the importance of the activity. However, more complex error handling
would then need to be reflected in the workflows of the different recipes.

Figure 12: Warning when too much vodka has been added in the Cocktail Companion

Figure 13: The Cocktail Companion SUI for measuring the amount of filled-in vodka in
the real setting

2.4 Formal Model for Using Interaction Types

So far, only verbal descriptions of the principles of operation of the IAM have been presented.
The model introduced in this section is the mathematically founded theory behind it. It covers
the commonly used UIs; if, however, for example in Figure 4, the activity "SelectTool3"
itself consisted of a MUI followed by some SUIs, this is not yet described in the formal
model. Currently, this MUI would be treated like a SUI. The model describes the functionality
of the Interaction Manager as a finite state machine with states Z. It consists of a
definition of required sets, functions and operations, whereby operations are functions which
cause side effects and thus directly manipulate Z. The basic idea is to use a queue for UIs that
shall be shown and to select the most appropriate UI during runtime. Following this approach,
the formulas below define three kinds of rules:
- Rules describing conditions when UIs should be queued in the general UI queue
- Rules describing when UI conditions may change, e.g. by exceeding a deadline
- Rules describing when and how to select UIs from the queue
2.4.1 Definition of Sets and the States

The following sets describe the state of the workflows the interaction is based on, and the UIs that
shall be displayed either because of active workflows or for other external reasons. The interaction
can thereby be caused by different activities, e.g. in parallel.
A            Set of workflow activities as defined in [WfMC-1999]
T            Set of workflow transitions as defined in [WfMC-1999]
W = (A, T)   A workflow is a tuple of A and T
A* ⊆ A       Set of all activities with an attached UI
A^A ⊆ A      Set of all activities which are currently active
U            Set of all possible UIs
Q ⊆ U        UI queue that contains the UIs to display
D ⊆ Q        Currently displayed UIs, e.g. UIs visible on some screen
U_M ⊆ U      Set of all MUIs
U_S ⊆ U      Set of all SUIs
U_M ∩ U_S = ∅    A MUI must not be a SUI
U_M ∪ U_S = U    Every UI is either a MUI or a SUI
U_S,i ⊆ U_S      Set of all SUIs of master UI U_i:


∀ U_j ∈ U_S . isMUI(U_j, U_i) ⇔ U_j ∈ U_S,i

U_X ⊆ U | X ∈ IAT

IAT = {W, PHR, PR, R, N, PP, P, OPP, OP}

whereby IAT = InteractionType, W = Warning, PHR = Phrase,
PR = Predicate Request, R = Request, N = Notification, PP = Predicate Prompt,
P = Prompt, OPP = Optional Predicate Prompt, OP = Optional Prompt

U_IAT = {U_W, U_PHR, U_PR, U_R, U_N, U_PP, U_P, U_OPP, U_OP}

U_X      Set of all (master or slave) UIs with a certain IAT X
U_W      Set of all Warning UIs
U_PHR    Set of all Phrase UIs
U_PR     Set of all Predicate Request UIs
U_R      Set of all Request UIs
U_N      Set of all Notification UIs
U_PP     Set of all Predicate Prompt UIs
U_P      Set of all Prompt UIs
U_OPP    Set of all Optional Predicate Prompt UIs
U_OP     Set of all Optional Prompt UIs

The sets of interaction types are pairwise disjoint:

∀ X, Y ∈ IAT . X ≠ Y ⇒ U_X ∩ U_Y = ∅

Table 1: Sets of the Interaction Model

The interaction with workflows can thus be described as a finite state machine with states:

Z = (Q, D, A^A, U_IAT)
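Purely to make the state tuple concrete, Z could be transliterated into a structure like the following (an illustrative sketch; field names are assumptions, not project code):

```python
# Sketch: the state Z = (Q, D, A^A, U_IAT) of the Interaction Manager as a
# plain Python structure, mirroring the set definitions of Table 1.

from dataclasses import dataclass, field

@dataclass
class State:
    Q: list = field(default_factory=list)     # UI queue (ordered subset of U)
    D: set = field(default_factory=set)       # currently displayed UIs, D ⊆ Q
    active: set = field(default_factory=set)  # A^A, the currently active activities
    iat: dict = field(default_factory=dict)   # U_IAT: interaction type -> set of UIs

z = State()
z.Q.append("login_ui")
z.iat["P"] = {"login_ui"}  # the login UI is a Prompt
assert "login_ui" in z.iat["P"] and z.D <= set(z.Q)
```

The operations of Section 2.4.3 then become functions that take such a state and return a modified copy.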

2.4.2 Definition of Functions

Functions are either mappings or assertions. They are mainly used to support the expression of
operations, which are listed in the next section. First, each function is declared together
with a short explanation, remarks and facts; then the mathematical formula expresses the
concrete semantics.
ui : A* → U
    Returns: the user interface of a given activity
    - Bijective
    - Notation: let w.l.o.g. U_i be the UI of A_i

    ui(A_i) = U_i

follows : A × A → bool
    Returns: true, if the first passed activity logically follows the second one
    - Transitive
    - A_j follows A_i: A_i is required to be finished to start A_j
    - In a sequence of activities, there can only be one activity active:
      ∀ A_i, A_j ∈ A . (follows(A_j, A_i) ∨ follows(A_i, A_j)) ⇒ ¬(A_i ∈ A^A ∧ A_j ∈ A^A)

    follows(A_j, A_i) = true, if ∃ t ∈ T . t(A_i) = A_j
                              or ∃ A_{i+1}, …, A_{j−1} ∈ A . follows(A_{i+1}, A_i) ∧ … ∧ follows(A_j, A_{j−1})
                        false, else

concurrent : A × A → bool
    Returns: true, if the order of the two activities is not determinable during design time
    - Two activities are concurrent if there is no sequence relation between them

    concurrent(A_i, A_j) = true, if ¬(follows(A_i, A_j) ∨ follows(A_j, A_i))
                           false, else

followingMUI : U_M × U_M → bool
    Returns: true, if the UI given as first parameter is a direct successor
    master UI of the second UI; this means no other master UI is allowed in between

    followingMUI(U_j, U_i) = true, if ∃ A_i, A_j ∈ A* . follows(A_j, A_i)
                                     ∧ ¬∃ A_k ∈ A* . ui(A_k) ∈ U_M ∧ follows(A_k, A_i) ∧ follows(A_j, A_k)
                             false, else

isMUI : U_S × U_M → bool
    Returns: true, if the UI given as second parameter is the master UI of the
    first (slave) UI; this means no other master UI is allowed in between

    isMUI(U_j, U_i) = true, if ∃ A_i, A_j ∈ A* . follows(A_j, A_i)
                              ∧ ¬∃ A_k ∈ A* . ui(A_k) ∈ U_M ∧ follows(A_k, A_i) ∧ follows(A_j, A_k)
                      false, else

subUI : U_S × U_M → bool
    Returns: true, if the first passed UI is a slave UI of the passed MUI

    subUI(U_j, U_i) = true, if ∃ A_i, A_j ∈ A* . follows(A_j, A_i)
                              ∧ ¬∃ A_k ∈ A* . ui(A_k) ∈ U_M ∧ follows(A_k, A_i) ∧ follows(A_j, A_k)
                      false, else

activeUI : U → bool
    Returns: true, if the activity of this UI is active

    activeUI(U_i) = true, if A_i ∈ A^A
                    false, else

chooseDisplayUI : U → P(U)
    Returns: the set (since there might be slave UIs) of UIs that are of the
    same IAT as the given one but should be displayed before U_i

    chooseDisplayUI(U_i) =
        {U_j},           if U_i ∈ U_S ∧ ∃ X ∈ IAT . U_j ∈ U_S ∩ U_X ∧ ∀ U_k ∈ U_S ∩ U_X . queued(U_k) ≥ queued(U_j)
        {U_j} ∪ U_S,j,   if U_i ∈ U_M ∧ ∃ X ∈ IAT . U_j ∈ U_M ∩ U_X ∧ ∀ U_k ∈ U_M ∩ U_X . queued(U_k) ≥ queued(U_j)
        {U_i},           else

Table 2: Functions of the Interaction Model


2.4.3 Definition of Operations

Operations are functions that map from some preimage Z to an image within Z. They
manipulate the sets of Z, e.g. by queuing UIs in Q. This section is structured similarly to
Section 2.4.2: first an overview of the operations is given, together with explanations, remarks
and facts, followed by the mathematical description.
queueUI : U × Z → Z
    Modifies Q
    - U_i is added to Q (including all sub UIs, if U_i ∈ U_M)
    - A newly queued Phrase replaces any Phrase already in the queue

    queueUI(U_i, Z) = Z', where Z' = (Q', D, A^A, U_IAT) and
        Q' = (Q \ U_PHR) ∪ {U_i},    if U_i ∈ U_PHR ∧ ∃ U_j ∈ Q . U_j ∈ U_PHR
             Q ∪ ({U_i} ∪ U_S,i),    if U_i ∉ U_PHR ∧ U_i ∈ U_M
             Q ∪ {U_i},              else

unqueueUI : U × Z → Z
    Modifies Q
    - U_i is removed from Q (including all sub UIs, if U_i ∈ U_M)

    unqueueUI(U_i, Z) = Z', where Z' = (Q', D, A^A, U_IAT) and
        Q' = Q \ ({U_i} ∪ U_S,i),    if U_i ∈ U_M
             Q \ {U_i},              else

replaceUI : U × U × Z → Z
    Modifies Q
    - If a master UI is replaced by another master UI, all its sub UIs are removed as well

    replaceUI(U_i, U_j, Z) = Z', where Z' = queueUI(U_i, unqueueUI(U_j, Z))

onDeadlineExpired : U × Z → Z
    Modifies U_IAT
    - If the deadline of a Predicate Prompt (PP) expires, it becomes a Predicate Request (PR)
    - If the deadline of a Prompt (P) expires, it becomes a Request (R)
    - If the deadline of a Predicate Request (PR) expires, spawn a new Warning (W), pointing at the Request
    - If the deadline of a Request (R) expires, spawn a new Warning (W), pointing at the Request
    - If the deadline of an Optional Predicate Prompt (OPP) expires, it is removed from the queue
    - If the deadline of an Optional Prompt (OP) expires, it is removed from the queue

    onDeadlineExpired(U_i, Z) = Z', where Z' =
        move(U_i, U_PP, U_PR, Z),    if U_i ∈ U_PP
        move(U_i, U_P, U_R, Z),      if U_i ∈ U_P
        queueUI(U_j, Z),             if U_i ∈ U_PR ∪ U_R, with a new U_j ∈ U_W
        unqueueUI(U_i, Z),           if U_i ∈ U_OPP ∪ U_OP
        Z,                           else

move : U × U_IAT × U_IAT × Z → Z
    Modifies U_IAT
    - Moves an element U_i from one set of IATs to another set, thus changing its type

    move(U_i, U_X∈IAT, U_Y∈IAT, Z) = Z', where Z' = (Q, D, A^A, U'_IAT)
        and U'_X = U_X \ {U_i} and U'_Y = U_Y ∪ {U_i}

ignore : U × Z → Z
    Does not influence Z
    - Drop the UI without adding it to Q

    ignore(U_i, Z) = Z

setActive : A × Z → Z
    Modifies A^A
    - This method is used to add an activity to the set of active activities

    setActive(A_i, Z) = Z', where Z' = (Q, D, A^A', U_IAT), where A^A' = A^A ∪ {A_i}

onActivate : A × Z → Z
    Modifies A^A, maybe Q
    - Used when an activity A_i is started

    onActivate(A_i, Z) = Z', where Z' =
        setActive(A_i, replaceUI(U_i, U_j, Z)),    if A_i ∈ A* ∧ ∃ U_j . followingMUI(U_i, U_j)
        setActive(A_i, queueUI(U_i, Z)),           else if A_i ∈ A* ∧ U_i ∉ Q
        setActive(A_i, Z),                         else

setInactive : A × Z → Z
    Modifies A^A
    - This method is used to remove an activity from the set of active activities A^A

    setInactive(A_i, Z) = Z', where Z' = (Q, D, A^A', U_IAT), where A^A' = A^A \ {A_i}

onDeactivate : A × Z → Z
    Modifies A^A, maybe Q
    - Used when an activity A_i is finished
    - If U_i ∈ U_M, the UI and its sub UIs are removed

    onDeactivate(A_i, Z) = Z', where Z' =
        setInactive(A_i, unqueueUI(U_i, Z)),    if A_i ∈ A* ∧ U_i ∈ U_M
        setInactive(A_i, Z),                    else

display : Z → Z
    Modifies D
    - Display the most highly rated UI (preferred UI)
    - Also shows sub UIs, if U_i ∈ U_M
    - If there is a W, instantly show it
    - If there is a PR within the queue & no higher rated UI, show it
    - If there is an R within the queue & no higher rated UI, show it
    - If there is an N within the queue & no higher rated UI, show it
    - If there is a PP within the queue & no higher rated UI, show it
    - If there is a P within the queue & no higher rated UI, show it
    - If there is an OPP within the queue & no higher rated UI, show it
    - If there is an OP within the queue & no higher rated UI, show it
    - If there is a PHR within the queue & no higher rated UI, show it

    display(Z) = Z', where Z' = (Q, D', A^A, U_IAT) and D' = chooseDisplayUI(U_i), if
        U_i ∈ Q ∩ U_W
        U_i ∈ Q ∩ U_PR ∧ ¬∃ U_j ∈ Q . U_j ∈ U_W
        U_i ∈ Q ∩ U_R ∧ ¬∃ U_j ∈ Q . U_j ∈ U_W ∪ U_PR
        U_i ∈ Q ∩ U_N ∧ ¬∃ U_j ∈ Q . U_j ∈ U_W ∪ U_PR ∪ U_R
        U_i ∈ Q ∩ U_PP ∧ ¬∃ U_j ∈ Q . U_j ∈ U_W ∪ U_PR ∪ U_R ∪ U_N
        U_i ∈ Q ∩ U_P ∧ ¬∃ U_j ∈ Q . U_j ∈ U_W ∪ U_PR ∪ U_R ∪ U_N ∪ U_PP
        U_i ∈ Q ∩ U_OPP ∧ ¬∃ U_j ∈ Q . U_j ∈ U_W ∪ U_PR ∪ U_R ∪ U_N ∪ U_PP ∪ U_P
        U_i ∈ Q ∩ U_OP ∧ ¬∃ U_j ∈ Q . U_j ∈ U_W ∪ U_PR ∪ U_R ∪ U_N ∪ U_PP ∪ U_P ∪ U_OPP
        U_i ∈ Q ∩ U_PHR ∧ ¬∃ U_j ∈ Q . U_j ∈ U_W ∪ U_PR ∪ U_R ∪ U_N ∪ U_PP ∪ U_P ∪ U_OPP ∪ U_OP

Table 3: Operations of the Interaction Model
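The priority ordering used by display can be sketched as follows (data structures are assumptions; only the ordering W > PR > R > N > PP > P > OPP > OP > PHR is taken from Table 3):

```python
# Sketch: among queued UIs, the one whose interaction type has the highest
# priority is selected for display, walking the type list from most to
# least urgent.

PRIORITY = ["W", "PR", "R", "N", "PP", "P", "OPP", "OP", "PHR"]

def select_display_ui(queue):
    """queue: list of (ui_id, iat) pairs; returns the preferred ui_id or None."""
    for iat in PRIORITY:                 # most urgent type first
        for ui_id, ui_iat in queue:
            if ui_iat == iat:
                return ui_id
    return None

q = [("greeting", "PHR"), ("hot_plate", "W"), ("pick_tool", "PP")]
assert select_display_ui(q) == "hot_plate"   # the Warning wins
assert select_display_ui([("greeting", "PHR")]) == "greeting"
```

In the full model the selected UI is additionally passed through chooseDisplayUI, so that the slave UIs of a selected master UI are shown as well.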


3 Mock-Up UIs
This section presents mock-up Uis, developed in the SmartProducts project for testing various
interaction strategies and types in different application domains: cooking (guiding through
recipe preparation steps), automotive (guiding through mounting snow chains steps and wiper
changing procedure steps), aircraft assembly and entertainment (guiding through origami
folding steps) domains. Origami is an art of folding beautiful figures from paper, and guiding
for origami resembles guiding for smart products scenarios. Origami application was
developed for research purposes because arranging user tests in realistic origami scenario is
easier than arranging user tests in realistic car servicing scenario. Origami application was
found very useful for studying various generic interaction problems, despite entertainment
domain is not present in SmartProduct project scenarios.
The tested input modalities of the mock-ups included various sensors (accelerometer, RFID, audio, camera) and GUI input, and the output modalities included audio and GUI output (text and images/videos). Workpackages 2 and 3 also contributed to the mock-up development: first, all presented mock-ups utilise ontologies developed in cooperation with WP2; second, the question answering tool and the SmartProducts Monitor, both developed in WP3, directly implement certain interaction strategies.
The implementation of the mock-ups depends on the application domain and on the device, but all of them contain elements for enabling SmartProducts interaction strategies, although not necessarily all strategies. The presented screenshots often illustrate two or more strategies because, for example, it is natural to combine the explain user actions strategy with the short-term and/or long-term customisation strategies: users may get very annoyed if they are not allowed to correct a smart product's behaviour after they understand the reasons for it. Often different implementations of one strategy are presented, for example various guiding options, because implementation details depend on the application domain and device. We do not aim at comparing the various interfaces; we only aim at pointing out the main interface elements required for enabling each interaction strategy. However, we would like to point out that the majority of the presented mock-ups were tested in several user studies [D5.2.1.Annex, D5.5.1, Vildjiounaite-2011] and accepted by the test subjects. The preliminary mock-ups for aircraft assembly, which differ slightly from those presented in this document, were recently tested in a user study at EADS. The study confirmed the feasibility of the planned interfaces presented in this document. The Cocktail Companion was presented to several subjects in the course of development and at the ICT 2010 [ICT] and iTEC 2010 [iTEC] events.


3.1 Default functionality strategy

Default functionality is functionality that should always be accessible, and fairly quickly. The absolute minimum of default functionality is task selection. As smart products provide user-adapted interaction, default functionality also includes user login. Depending on the device size and the domain-dependent probability of how frequently the default functionality may be needed, these elements may be included in the main application screen or accessed by activating a separate screen.
Figure 14 illustrates that non-personalised task selection in the origami application is available via the main menu, which opens a separate screen with the list of tasks and their descriptions, presented in Figure 15. The main menu in the origami application also provides a personalised task selection option; its menu opens if the Get Recommendation menu item is selected or the corresponding shortcut key is pressed. (Each menu item has a shortcut key (using the Alt-key mask) and a mnemonic, allowing keyboard interaction in addition to the use of a mouse. Disabled menu options are greyed out.)

Figure 14: The Menu showing the sub-menu for browsing and searching the origami
folds database


Figure 15: Task selection in origami application for a large screen

Personalised task selection is enabled only after user login. Figure 51 and Figure 71 present personalised task suggestions for different user preferences. Figure 16 presents an example of personalised task selection in the cooking assistant (i.e., selection of favourite recipes), which does not require opening a separate screen but presents only the names of frequently cooked recipes. A favourite recipe can be selected fairly quickly in the large-screen GUI by pressing the select task button, which opens the recipe list. Selecting a recipe that is not so well known would require providing more information about the recipe at the same time. This may be achieved by providing additional information upon a click on a recipe name, or by immediately displaying the most important information about the recipe, as is done in Figure 15 for the origami selection task. Naturally, what is important to display differs between recipe selection and origami fold selection; for example, time is more important for recipe selection than for origami fold selection: even a person in a hurry may want to cook and eat, but a person in a hurry will hardly start folding origami.


Figure 16: Task selection in the cooking assistant for a large screen

Figure 16 also shows that user login in the laptop-based version of the cooking assistant is available via the main application screen, because new family members or guests may join the cooking process at any time, and because the screen is large enough. User login is performed by selecting user names from the list (when multiple users log in simultaneously, all their names are highlighted in the list). Figure 17 shows (using the example of a car servicing application running on a Nokia N900 mobile phone) how GUI-based login is implemented on the small screen of the phone: the user list is opened in a separate window after the button with the user name(s) is pressed. Task selection for a small screen would also require opening another window, due to the small screen size. In the aircraft assembly mock-up, task selection is likewise done in a separate window via the main menu, and Figure 25 shows that a button for reaching this main menu (button (1)) is always visible.
Figure 6 and Figure 7 show the sensor-based login, and Figure 9 presents the task selection screen of the Cocktail Companion mock-up. The user login in the Cocktail Companion utilises a SmartProducts Authentication Device (AD): an RFID reader for chipcards. The Cocktail Companion does not allow skipping the login step, since non-personalised task selection is infeasible: the list of cocktails displayed in the next step (see Figure 9) depends not only on the available ingredients, but also on the identified user. If the user is under 18, for example, she would not get cocktail recipes with alcohol.


Figure 17: User login for a small screen in the car assistant

3.2 Guide the user strategy

This strategy is the core of most application scenarios. Guiding through a task in the mock-ups requires interaction elements for:
- showing an overview of the task steps
- step-by-step presentation of instructions
- navigation between the steps
- recognition of user actions
Elements for recognition of user actions are optional, as they depend on the availability and type of sensors; presenting an overview of the task steps is convenient for the users, but may be moved to a separate screen, depending on the complexity of the instructions and on the device. The steps overview may also be skipped: for example, in aircraft assembly the correctness of procedures is very important, and allowing operators to freely navigate between steps may be dangerous.
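The elements listed above can be sketched as a small guiding controller. This is a minimal sketch under stated assumptions (class and method names are illustrative, not project APIs): a task is an ordered list of steps, the user navigates with next/previous, backwards navigation can be disabled, and an optional sensor event can also advance the guide.

```python
# Minimal sketch of a step-by-step guiding component: steps overview,
# navigation, and optional sensor-triggered transition to the next step.

class TaskGuide:
    def __init__(self, steps, allow_back=True):
        self.steps = steps            # ordered step instructions (task overview)
        self.index = 0                # current step
        self.allow_back = allow_back  # e.g. disabled in aircraft assembly

    def current(self):
        return self.steps[self.index]

    def next(self):
        if self.index < len(self.steps) - 1:
            self.index += 1
        return self.current()

    def previous(self):
        if self.allow_back and self.index > 0:
            self.index -= 1
        return self.current()

    def on_action_recognised(self, completed_step):
        # sensor-based recognition of a user action advances the guide
        if completed_step == self.index:
            return self.next()
        return self.current()
```

A GUI button, a speech command and a recognised user action can all call into the same navigation methods, which is what lets the mock-ups mix modalities freely.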


In the cooking domain, guiding through recipe preparation steps is the common case. Figure 18 presents a guiding mock-up for the large screen (laptop) for a Halloween sausages recipe: the steps overview is presented in the right part; instructions for the current step are presented to the left of the steps overview; navigation between the steps can be done by pressing the next and previous buttons or by selecting a step in the overview list. Additionally, next and back speech commands can be used for navigation between the steps, and sensor-based recognition of user actions can be used for triggering the transition to the next step. The controls in the lower part of the GUI will be explained below, when the short-term customisation strategies are presented. The guiding mock-ups of the Cocktail Companion are presented in the previous section.

Figure 18: Recipe guiding in a large screen in the cooking assistant

As users generally appreciate polite behaviour of computers, and as it is useless to instruct the users to do a certain step while they are distracted from the task by a conversation with each other or a phone call, the mock-ups were made capable of delaying audio instructions until a break in the conversation is detected (for more details, see D5.2.1.Annex [D5.2.1.Annex] and the paper [Vildjiounaite-2011]). Interaction via the GUI looks the same as above.
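The delaying behaviour can be sketched as follows. This is an illustrative sketch, not the project implementation (function names and the polling model are assumptions; the actual conversation-break detector is described in D5.2.1.Annex): the GUI shows the instruction immediately, while the audio channel waits for a break, with a bounded wait so that time-critical instructions are not postponed forever.

```python
# Sketch of delaying an audio instruction until a break in conversation
# is detected; GUI delivery is never delayed.

def deliver_instruction(text, conversation_active, show_on_gui, play_audio,
                        max_wait_ticks=10):
    """conversation_active: callable polled once per tick.
    Returns the tick at which the audio was actually played."""
    show_on_gui(text)  # GUI delivery happens immediately
    for tick in range(max_wait_ticks):
        if not conversation_active():
            play_audio(text)  # break detected: deliver the audio now
            return tick
    play_audio(text)  # give up waiting: the instruction may be time-critical
    return max_wait_ticks
```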
The guiding mock-up for a smaller screen in the cooking domain (for the Nokia N900 phone) is presented in Figure 19. The mobile version of the cooking assistant was developed because humans may also cook outside their own homes, for example in a courtyard or while travelling.


Due to the smaller screen size, only a next button was placed in the GUI, while backwards navigation is possible only by selecting a step in the overview list. The other controls will be explained when the short-term customisation strategy is presented.

Figure 19: Recipe guiding in a small screen in the cooking assistant

Figure 20 presents the mock-up of the guiding strategy in the automotive domain, for the snow chain mounting procedure.

Figure 20: Guiding for a vehicle component mounting in automotive domain

The figure presents the main interface to the user, with capabilities to:


- switch from proactive to manual mode: in manual mode, the system awaits user instructions, whether the press of a button or a vocal command, while in proactive mode it understands the current status of the procedure by using data coming from the vehicle. For example, the opening of the hood is detected and the procedure proceeds automatically in Figure 21 and Figure 22;
- use the video/text/voice communication channels based on the preferences and usage history;
- navigate through the procedure steps and increase the visibility of the displayed information.
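The proactive/manual mode switch described above can be sketched as follows. All names here are illustrative assumptions, not the project's API: in manual mode only explicit user commands advance the procedure, while in proactive mode a matching vehicle signal (e.g. the hood being opened) advances it as well.

```python
# Sketch of a procedure controller with a proactive/manual mode switch.

class ProcedureController:
    def __init__(self, steps):
        self.steps = steps
        self.index = 0
        self.proactive = True   # proactive mode is assumed as the default here

    def set_mode(self, proactive):
        self.proactive = proactive

    def on_user_command(self, command):
        # manual advance: button press or vocal command
        if command == "next":
            self.index = min(self.index + 1, len(self.steps) - 1)

    def on_vehicle_signal(self, signal, expected):
        # proactive advance: vehicle data matches the current step's condition,
        # e.g. expected = "hood_opened" for an "open the hood" step
        if self.proactive and signal == expected:
            self.index = min(self.index + 1, len(self.steps) - 1)
```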

Figure 21: Snow chain mounting Step x in automotive domain


Figure 22: Snow chain mounting context sensing in automotive domain


Inside the vehicle it is essential that the instructions are displayed both on the display integrated into the dashboard (Figure 23, left) and on the portable device (Figure 23, right) used for outdoor procedures (e.g. mounting snow chains). The user can interact with the dashboard display via the steering wheel buttons and the vehicle voice interface. Alternatively he/she can use the interfaces of the portable device. To enable the user to use either interaction channel, the two displays are synchronised and show the same current status of the procedure. To achieve this, the very first step, in the settings mode, is to pair the two objects (the pairing procedure is described in [D9.2.1]).
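The synchronisation of the two displays can be sketched as a simple observer pattern. This is an illustrative sketch only (the actual pairing procedure is described in [D9.2.1]; the class names are assumptions): once paired, a step change triggered through either interaction channel is pushed to both displays.

```python
# Sketch of keeping the dashboard display and the paired portable
# device on the same procedure step.

class Display:
    def __init__(self):
        self.current = None

    def show(self, step):
        self.current = step

class SynchronisedProcedure:
    def __init__(self):
        self.displays = []   # displays paired in the settings mode
        self.step = 0

    def pair(self, display):
        self.displays.append(display)
        display.show(self.step)   # new display immediately mirrors the state

    def advance(self):
        # a step change from either channel updates all paired displays
        self.step += 1
        for d in self.displays:
            d.show(self.step)
```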

Figure 23: Dual visualisation and synchronisation of displays in automotive domain

Figure 24 presents another mock-up of the guiding strategy in the automotive domain. This mock-up was developed for studying differences in user perception of various interaction aspects between the cooking domain and the car domain, and it has the same UI as the cooking assistant (see Figure 19) because otherwise differences in UI appearance could have affected user opinions. This mock-up guides the user through the wiper-changing task. Like the cooking assistant mock-up for a small screen, the car assistant mock-up is implemented for the Nokia N900 phone.


Figure 24: Guiding for replace wiper task in the car assistant

The guiding mock-up for the aircraft assembly domain is presented in Figure 25. This mock-up runs on a nomadic device (an ultra-portable computer equipped with a camera on its back). Unlike the cooking and car servicing mock-ups, here the overview of the task steps is not shown on the main screen, but it can be accessed via a separate screen (see Figure 26). Navigation between steps is possible via the next and back buttons (buttons (10) and (3) in Figure 25). Because of the task complexity, the guiding instructions are split up and presented on top of an image captured by the camera. Unlike the cooking and car domain mock-ups, in the aircraft domain the sensor-based recognition of user actions can be configured by the operator, because different screws require different torques and the smart wrench is capable of recognising torque (in the cooking and car domain mock-ups, sensor-based recognition of user actions is configured by the application developers because configuring it may be too tiring for ordinary users). Interaction with the smart wrench is done via coloured spherical icons (such as (4)) located on the image of the screws used to fix the electrical harness: the user sets up the wrench by clicking on the icons, and the smart wrench response is presented by changing the colour of these icons or by erasing them. The other parts of the instructions are delivered via GUI or audio: text, sounds and displayed status information. The mock-up is explained in detail below.


Figure 25: Guiding in aircraft assembly

(1) Main Menu: This button, displayed in semi-transparency, allows interrupting the assembly procedure in augmented reality and going back to the main menu (options, preferences, procedure loading, etc.), that is, accessing the default functionality.
(2) Fixing Material: This tooltip displays the list of parts involved in the fixing.
(3) Previous: This button, displayed in semi-transparency, allows replaying the previous step of the current procedure (for training purposes). It is disabled during an effective procedure.
(4) Selected fixing (green blinking spot): If the assembly procedure has not yet started, a green blinking spot marks the location of a screw concerned by the procedure, so that the operator can notice its location and frame the camera according to the working area.
If the assembly procedure is already running (an instruction (7) is displayed), a green blinking spot marks the location of a screw concerned by the current step and whose corresponding torque is currently loaded into the smart tool.
(5) Reverse selection (red blinking spot): A red blinking spot marks the location of a defective fixing potentially concerned by an unscrew action. This implies that the smart tool is set to the Reverse mode.
(6) Defective fixing (red spot): A red spot marks the location of a defective fixing (feedback from the smart tool).
(7) Elementary instruction: The text of the current procedural step is displayed in augmented reality.
(8) View: This toggle button allows switching between the Global view, the Locator view, the Zone view and the view in Augmented Reality.


(9) Freeze: This toggle button allows freezing/un-freezing the camera view displayed in Augmented Reality.
(10) Next: This button, displayed in semi-transparency, allows accessing the next step of the current procedure.
(11) Non-selected fixing: A green spot marks the location of a screw involved in the current step that has not yet been processed.
(12) Partial fixings counter (R x/y): This counter shows the total number of screws (y) concerned in the current step by the torque program currently loaded into the Smart Tool, as well as the number of remaining screws (x) that still have to be processed with this torque program in this step.
(13) Total fixings counter (T x/y): This counter shows the total number of screws (y) concerned by the current step, as well as the number of remaining screws (x) that still have to be processed in this step.
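The two counters (12) and (13) can be sketched from a simple data model. This is an illustrative sketch under an assumed representation (each screw of the step carries its required torque program and a done flag); it is not the project's data model.

```python
# Sketch of the partial (R x/y) and total (T x/y) fixings counters.

def fixings_counters(screws, loaded_program):
    """screws: list of (torque_program, done) tuples for the current step.
    Returns ((R_x, R_y), (T_x, T_y)): remaining/total for the torque
    program currently loaded into the tool, and for the whole step."""
    t_y = len(screws)                                     # total in step
    t_x = sum(1 for _, done in screws if not done)        # remaining in step
    prog = [(p, done) for p, done in screws if p == loaded_program]
    r_y = len(prog)                                       # total for program
    r_x = sum(1 for _, done in prog if not done)          # remaining for program
    return (r_x, r_y), (t_x, t_y)
```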

Figure 26: Steps overview in aircraft assembly


Figure 27 presents guiding for origami folds: in this mock-up all instructions are shown via the GUI only. Many steps are presented at once because, first, this decreases the user effort for checking previous and next steps and, second, it is not necessary to know which step the user is at (unlike the car and aircraft assembly domains, where moving to the next step without completing the previous one may cause a disaster, and unlike the cooking domain, where food may burn if the user gets distracted). The scrollbar on the right allows moving to the next steps. Figure 28 shows, however, that when guiding for origami tasks is based on animation, the instructions are shown step by step, because it does not make sense to view many videos simultaneously.

Figure 27: Guiding via images and text in origami application

Figure 28: Guiding via videos in origami application


All mock-ups are capable of interacting with the users via different modalities and at different levels of detail. Typically, the level of presentation detail depends on user expertise (novice users require more detailed instructions), but the users should be allowed to choose the level of detail in the same way as they should be allowed to choose modalities. Customisation of the mock-ups will be presented in more detail below. However, the aircraft assembly domain presents an exception: novice operators cannot be allowed to choose non-detailed instructions because of the high cost of mistakes in this domain.
Aircraft assembly instructions are displayed at two levels of granularity: one for novice operators and another for experts. In expert mode, the Next/Previous buttons give access to the sequence of step titles (however, experts can request the list of attached instructions at step level). In beginner mode, the Next/Previous buttons give access to the sequence of instructions. This mechanism is illustrated in Figure 29.

Figure 29: Beginner / Expert process sequence in aircraft assembly

Each instruction may contain sub-instructions whose sequence is not fixed. This is the case for a screwing instruction that refers to several screws: the operator is free to choose the order of the screws to fix by selecting each of them directly on the screen. For example, an instruction may be presented in the form fix element X, where X is supposed to be fixed with two screws Y and Z. The operator is free to start with screw Y or Z.
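The free ordering of sub-instructions can be sketched as a set of pending screws. This is a minimal illustration (class and method names are assumptions): the operator fixes screws in any order, and the instruction completes once all of them are done.

```python
# Sketch of an instruction whose sub-instructions (screws) may be
# completed in any order chosen by the operator.

class ScrewingInstruction:
    def __init__(self, screws):
        self.pending = set(screws)   # screws not yet fixed

    def fix(self, screw):
        # operator selects any pending screw directly on the screen
        self.pending.discard(screw)

    def complete(self):
        # instruction is done only when every screw has been fixed
        return not self.pending
```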


Figure 30: Instructions step Beginner mode in aircraft assembly

3.3 Ask the user for confirmation strategy

This strategy can be used for several purposes. First, a smart product may need to ask the user to confirm that the smart product is allowed to perform a certain action, for example to update guiding instructions (for example, user confirmation for updating eLUM contents is required for security purposes, see [D9.2.1]). Second, when some important user action (such as the collection of tools in aircraft assembly or reducing the heat during cooking) cannot be detected via analysis of sensor data, or when analysis of the sensor data shows that the user did not perform an important action (for example, a smart wrench reports a wrong torque; no motion of a stirring spoon indicates that the user forgot to stir a meal), it may be feasible to ask the user about the status of this important action and/or further user actions. Third, smart products may also ask the users to confirm the outcome of machine learning algorithms, for example that the smart product has correctly learnt the user's preferences.
Ask the user for confirmation messages are usually delivered on top of the guiding interface, in the form of message boxes in a GUI and/or via audio. The visibility of these messages should depend on the importance and time-criticality of the missed action; for example, the origami application does not need to remind users about anything. The visibility of these messages can change over time, depending on the user's reactions and on whether the message importance changes with time. As humans appreciate polite behaviour of computers [Bickmore-2007], this strategy should be implemented in a polite way.
Figure 31 presents the interface for asking the user for confirmation before including new contents in the eLUM (i.e., the mounting procedure specific to the make and type of snow chains), which is needed for security purposes (see [D9.2.1]).


Figure 31: Notification regarding detection of new snow chains on-board and asking for
eLUM update confirmation

The cooking and car assistant mock-ups ask the users to confirm that a certain action took place. Figure 32 shows how the car servicing mock-up asks a novice user to confirm that a wiper is properly fixed.

Figure 32: Asking the user for confirmation in the car assistant

Similarly, the Cocktail Companion asks the user to confirm that the cup is placed in the coffee machine: the Done button could be used if the platform did not recognise the user's action,

e.g. because a sensor was malfunctioning or disconnected. Figure 33, Figure 34 and Figure 35 show the UI of an activity where the user is told to use the coffee machine to make an espresso, which finally has to be added to the cocktail.

Figure 33: Instructing the user to place a cup at the coffee dispenser in the Cocktail
Companion


Figure 34: Details of instructing the user to place a coffee cup in the Cocktail
Companion

Figure 35: Cocktail Companion actively trying to get feedback from the user


In aircraft assembly it is also necessary to ask the users to confirm that they performed a certain action. The car servicing and cooking mock-ups illustrate the case when confirmation is required in the middle or at the end of a step, while the aircraft assembly mock-up illustrates the case when the user is asked for confirmation at the beginning of a step. For example, during the preparation stage, the operator is invited to check the list of tools and materials needed for the operation and to confirm their availability, as shown in Figure 36. Figure 36 shows that in this case the message to the user is also polite: please collect tools.

Figure 36: Tools and material collection in aircraft assembly

In case of trouble during task fulfilment, the aircraft assembly mock-up shows which screw caused the trouble and asks the operator whether he wants to fix the problem or to ignore it (see Figure 37).

Figure 37: Smart Tool problem report in aircraft assembly

For example, if a wrong torque was applied to a screw because of human error (the trigger of the wrench was released too soon) or a screw was damaged, the operator can choose to retry the operation or to ignore the faulty fixing. If the operator chooses to ignore the faulty fixing (the case of large panels fixed with several dozens of screws, where one or two faulty fixings have no


consequence), the fixing (spherical icon) becomes red. This change of colour serves as a not-so-visible reminder that something is wrong.

Figure 38: Abort procedure in aircraft assembly

If the operator decides to retry the operation, the unscrew program is automatically selected. The fixing (spherical icon) starts blinking red. When the operator no longer needs the reverse program, he must click on the blinking icon again to re-select the right screwing program. The icon then blinks green again and the operator can proceed.
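The retry/ignore behaviour of a fixing icon can be sketched as a small state machine. The state names below are assumptions derived from the description (green blinking = selected with the screwing program loaded, red = ignored fault, red blinking = reverse/unscrew mode); this is an illustration, not the project code.

```python
# Sketch of the fixing-icon states for the retry/ignore procedure.

class FixingIcon:
    def __init__(self):
        self.state = "green_blinking"   # selected, screwing program loaded

    def report_fault(self, ignore):
        if ignore:
            self.state = "red"          # not-so-visible reminder of the fault
        else:
            self.state = "red_blinking" # retry: unscrew program auto-selected

    def toggle_program(self):
        # operator clicks the blinking icon to leave the reverse program
        if self.state == "red_blinking":
            self.state = "green_blinking"
```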

Figure 39: Retry procedure in aircraft assembly

Figure 38 shows that in aircraft assembly the reminder is more visible when it is delivered the first time than after the operator decides to ignore a wrong fixing.
On the other hand, reminders in cooking often refer to time-critical actions, and thus subsequent reminders should be more visible than the first ones if the user does not respond. The default behaviour of the cooking mock-ups is to first ask the user for confirmation via the GUI only, in the form of a fairly unobtrusive message (Figure 40 shows this message for a large-screen GUI). The mock-ups use the interrogative form (reminding the users by asking Have you done?) instead of the instructive form please do it!. This inquiring form allows the users to reply immediately that they had not forgotten to do this action, which is important because the sensor data analysis may be wrong.


Figure 40: First reminder in the cooking assistant

If the user does not react, the message is delivered in a more visible form: a brighter colour and a blinking message box (this kind of message is shown in Figure 41 for a large screen). If the user does not react even then, the mock-ups deliver the audio message Sorry, may I remind? Have you done? and/or a beep. The messages are repeated at user-configured intervals until the user reacts.
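The escalating reminder policy can be sketched as follows. This is a minimal sketch under stated assumptions (the channel names and the mapping of attempt numbers to channels are illustrative; the real mock-ups repeat at user-configured intervals): each repetition becomes more visible until the user reacts.

```python
# Sketch of an escalating reminder: unobtrusive GUI message first,
# then a brighter blinking message box, then audio/beep in addition.

def reminder_channels(attempt):
    """Map the attempt number to increasingly obtrusive output channels."""
    if attempt == 0:
        return ["gui_plain"]                    # unobtrusive message box
    if attempt == 1:
        return ["gui_bright_blinking"]          # brighter colour, blinking
    return ["gui_bright_blinking", "audio"]     # add speech and/or beep

def remind_until_reaction(user_reacts, max_attempts=5):
    """user_reacts(attempt) -> bool; returns the channels used per attempt."""
    log = []
    for attempt in range(max_attempts):
        log.append(reminder_channels(attempt))
        if user_reacts(attempt):
            break   # user replied: stop reminding
    return log
```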

Figure 41: Repeated reminder in the cooking assistant


Figure 42 illustrates the case when a smart product asks the user to confirm its conclusions regarding user preferences: in the origami application the user initially declared her preference for presenting instructions via text and images, but in practice viewed video instructions. If automatic learning-based profile updates are undesirable (for example, some persons always want to stay in control), the smart product asks the user for permission to update her profile.
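The confirmation of a learning-based profile update can be sketched as follows. All names are illustrative assumptions: if the observed behaviour contradicts the declared preference, the profile is either updated silently (when automatic updates are allowed) or only after the user confirms.

```python
# Sketch of asking the user to confirm a learning-based profile update.

def maybe_update_profile(declared, observed, auto_update, ask_user):
    """ask_user: callable returning True if the user confirms the update.
    Returns the preference value to store in the profile."""
    if observed == declared:
        return declared                 # behaviour matches the profile
    if auto_update:
        return observed                 # silent learning-based update
    # user wants to stay in control: ask for permission first
    return observed if ask_user() else declared
```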

Figure 42: Asking the user to confirm profile update in the origami application

3.4 Advice/ Notify strategy

In some cases it may be feasible to deliver certain information to the users, but not necessarily to wait for the users' response. Delivery of advice and notifications may require dedicated UI elements, such as text boxes, audio messages, beeps, status bars, etc. It may also be achieved via modifying the appearance of existing UI elements, for example colour changes or changes in the states of control buttons. Information delivery may serve several goals. First of all, when several persons work together on the same task, it may be necessary to inform them about the choices and actions of the others. Second, single and multiple users may want to be notified about the application state, e.g. that audio output is disabled or how soon a meal will be ready. Third, single and multiple users may need advice on different subjects, for example which lubricant has the best reputation for cars operating in cold weather.
For example, when multiple users work on aircraft assembly and each one uses his/her own
device, each operator is able to choose on that device a task from a list of tasks directly
assigned to him/her, or filtered by skill level at team level. If an operator selects a task that
requires several operators, all the other available tasks are no longer selectable on the Nomadic
Devices of the other operators until the required number of operators is reached for this task.
Non-selectable tasks are shown in grey, as Figure 43 illustrates.
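The locking rule described above can be sketched as follows. This is a minimal illustration, not the actual mock-up implementation; the class and attribute names are assumptions:

```python
class Task:
    def __init__(self, name, required_operators=1):
        self.name = name
        self.required = required_operators
        self.joined = set()  # operators who have taken this task


class TaskBoard:
    """Tracks which tasks are selectable on the operators' Nomadic Devices."""

    def __init__(self, tasks):
        self.tasks = tasks
        self.pending = None  # multi-operator task still waiting for operators

    def selectable(self, task):
        # While a multi-operator task waits for staff, all other tasks are greyed out.
        return self.pending is None or task is self.pending

    def select(self, operator, task):
        if not self.selectable(task):
            return False  # task is shown in grey on this device
        task.joined.add(operator)
        # Grey out the rest until the required number of operators is reached.
        self.pending = task if len(task.joined) < task.required else None
        return True
```

Once enough operators have joined the pending task, `selectable` returns True for the other tasks again, mirroring the grey-to-black colour change described for Figure 43.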


Figure 43: Work assignment in aircraft assembly

In this example a response from just one user is needed, not from all users: when the required
number of operators is reached for this task, the greyed tasks change colour back to black, so
that other free operators can select them. In many other cases a user response to notifications is
not needed at all. For example, if an aircraft assembly operation is performed by several
operators, the status of the fixings displayed on Nomadic Device 1, used by one person, may
change depending on the actions of the operator using Nomadic Device 2 (and vice versa):
Situation 1: The user of Nomadic Device 1 selects a fixing, i.e., takes a task by clicking
on a green spherical icon above a screw. If previously displayed on Nomadic Device 2,
the spherical icon should disappear from Nomadic Device 2.
Situation 2: The user of Nomadic Device 1 fails to fix a screw and chooses to ignore it.
If previously in the view field of the Nomadic Device 2 camera, a red spherical icon should
also appear on Nomadic Device 2 at the location of the faulty screw.
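The two situations above amount to propagating icon-state changes between devices. A hedged sketch, with hypothetical class and method names:

```python
class NomadicDevice:
    def __init__(self, name):
        self.name = name
        self.icons = {}  # screw id -> icon colour ("green" or "red")


class FixingBroker:
    """Propagates fixing-status changes between the operators' devices."""

    def __init__(self, devices):
        self.devices = devices

    def fixing_taken(self, screw, by):
        # Situation 1: the green icon disappears from every other device.
        for device in self.devices:
            if device is not by:
                device.icons.pop(screw, None)

    def fixing_failed(self, screw):
        # Situation 2: a red icon marks the faulty screw on every device.
        for device in self.devices:
            device.icons[screw] = "red"
```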
Similarly, when interaction modalities and other functionality are automatically adapted to the
preferences of a single user or multiple users, smart products notify the users by displaying the
corresponding states of control buttons: Figure 17 illustrates such a notification for a single user
(buttons at the top) and for multiple users (buttons at the bottom).
Advice and notifications may be delivered in the single-user case too. For example, smart
products may advise users how to adjust a recipe to one's diet or recommend buying certain
tools for common car servicing tasks. In the cooking and car servicing mock-ups, health and
tool tips were tested: Figure 44 presents a tip for a car owner, and Figure 45 presents a health tip for
a hypertensive user on the small screen of the cooking guide, and Figure 46 presents advice to eat
garlic for a user who wants to control her weight.

Figure 44: Car servicing advice for a cold climate in the car assistant

Figure 45: Cooking advice for a hypertensive user in the cooking assistant


Figure 46: Cooking advice for weight watchers in the cooking assistant

The cooking timer illustrates the notification strategy too: if some tasks are time-critical (for
example, a meal will burn if it is cooked too long), users may be notified by showing them for
how long a meal has been cooking and whether the time limit was exceeded. Figure 46 presents
a timer which shows how much time is left until the meal is ready, and Figure 47 shows how
the interface looks when the meal is ready: the timer then shows for how long the meal has
been overcooked. The mock-up may also notify the users that the meal is ready via beeping.
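The timer behaviour just described can be expressed as a small sketch; the function names are illustrative, not taken from the mock-up source:

```python
def timer_status(limit_s, elapsed_s):
    """Return a (label, seconds) pair for the timer display:
    the time left before the limit, or the time overcooked after it."""
    if elapsed_s < limit_s:
        return ("time left", limit_s - elapsed_s)
    return ("overcooked for", elapsed_s - limit_s)


def should_beep(limit_s, elapsed_s, beep_enabled=True):
    # Optional audio notification once the meal is ready
    # (i.e., the time limit from the recipe is reached).
    return beep_enabled and elapsed_s >= limit_s
```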

Figure 47: GUI-based notification that the meal is ready in the cooking assistant


3.5 Response to the user's request strategy

This strategy is present when smart products respond to user commands, and its
implementation requires UI elements for user input and corresponding handlers. First of all,
users' requests may initiate workflow execution. For example, after the recipe selection shown
in Figure 16, the corresponding recipe guiding workflow will start. Second, in the smart
products mock-ups users may request more detailed information via corresponding buttons.
This functionality is especially important for small screens, because detailed information must
be delivered either in a smaller font or with extra scrolling compared with low-detail
information, and thus starting from low-detail information may be feasible if a user is not an
absolute novice. Figure 48 shows how a request for more detailed car servicing instructions,
and disabling images, affects the visibility of the instructions.

Figure 48: Response to the user's request for detailed information in the car assistant

In the aircraft assembly mock-ups the users are also allowed to request more detailed
information: an expert operator may request the beginner's view (the difference between these
views is presented in Figure 29). The operators can also switch between a global view, a
locator view, a zone view and the Augmented Reality view presented in Figure 49, by
clicking on the toggle eye button (button (8) in Figure 25).


Figure 49: Different views on the task in aircraft assembly

In the Origami Task Selector the end user may ask to view more information when prompted to
update their profile, as shown in Figure 42. If the user opts to View Log, they are able to
review their past actions (see Figure 50) and confirm or reject the profile update. The
highlighted rows in the log browser show the metadata for the last five tasks selected by the
user, all of which were Video Only. The user had, however, previously selected Text Only
and Text Accompanied by Images, but in the more distant past. The user therefore accepts the
profile update.


Figure 50: Highlighted entries in the History Log browser in the origami application, based
on a user request to View Log, helping the user decide whether or not to accept the system
prompt to update their profile

The next request for a recommendation, after the profile update, is shown in Figure 51;
compare this with the recommendation when the user's preferred modality was set to Text
Accompanied by Images, shown in Figure 71.

Figure 51: Recommendation for the user after changing the presentation modality
preference to Video Only in the origami application


A more sophisticated example of the response to the user's request strategy is implemented in
WP3: a question answering tool allowing end users to ask various questions in natural language
[D3.5.1]. A screenshot of this application is presented in Figure 52. In this example the user
asks for recipe ingredients in natural language and gets a list of ingredients. If the user clicks
on any of the terms, he/she gets more detailed information. For example, if an ingredient is some
sauce, the detailed information may list the sauce ingredients, e.g., whether this sauce contains
nuts or not.

Figure 52: Question answering tool interface

3.6 Explain product actions strategy

It has been shown that understanding the logic of applications increases their acceptance
[Cheverst2005]. The smart products mock-ups can explain the reasons and results of automatic
interface adaptation to detected events, as well as indicate the reasons for recommendations.
The way to deliver explanations depends on what needs to be explained. For example, if smart
products' behaviour is triggered by some event, the explanation is fairly short and can be
delivered via a text box, as text boxes are the least obtrusive, or via a corresponding audio
message. If smart products' behaviour is caused by analysis of application logs, conclusions
from these logs can also be delivered via fairly short messages. If it is not easy to explain the
log analysis in a few words (for example, if logs are processed using neural networks or other
machine learning methods, the
resulting models cannot easily be explained to humans), or if there is a chance that the logs
themselves may be incorrect, explanations may be provided via visualisations of the log files.
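The rule of thumb above for choosing an explanation channel can be sketched as a small decision function; the trigger and channel names are illustrative assumptions:

```python
def explanation_medium(trigger, model_opaque=False, log_may_be_wrong=False):
    """Pick a delivery channel for an explanation, per the rules above."""
    if trigger == "event":
        # Event-triggered behaviour: short explanation, least obtrusive channel
        # (a text box, or alternatively a corresponding audio message).
        return "text_box"
    if trigger == "log_analysis":
        if model_opaque or log_may_be_wrong:
            # Hard to summarise in a few words: show the logs themselves.
            return "log_visualisation"
        return "short_message"
    raise ValueError("unknown trigger: %s" % trigger)
```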
It is desirable to provide, together with the explanations, also a means to correct system
behaviour, usually via the ask the user for confirmation and customisation strategies. This
requires adding dedicated buttons and corresponding handlers to the interface, which are
usually shown only together with the explanation. Naturally, experienced users of smart
products do not need explanations as detailed as those for novice users, but they may be even
more willing to correct system behaviour.
Explanations may refer to various aspects of smart products' capabilities. First of all, smart
products are capable of cooperating with each other and with augmented objects, and it may be
necessary to explain these capabilities and their consequences to a novice user (Figure 53 shows
a message box explaining that a certain stirring spoon is capable of detecting user actions, and
that this spoon may blink to introduce itself). Second, analysis of data sent by other smart
products, augmented objects or environmental sensors may trigger an automatic transition to
the next step, a choice of modality or reminders, and the users may need explanations regarding
the triggers. Figure 54 explains to the user that the system has detected a change of context and
intends to engage in an interaction with the user, i.e., presenting the snow chains mounting
procedure. Figure 55 presents a message box explaining that recognition of a tilt of the
sensor-augmented cutting board served as an indication that the first recipe step was completed,
and Figure 56 presents an explanation regarding the triggering of a transition to the next step on
a small screen for a car servicing task. Figure 57 presents an explanation regarding interaction
adaptation to user activity (listening to music): audio presentation of instructions was disabled
in order not to interrupt the user. Figure 58 presents a message box explaining that movement
of a salt shaker triggers a reminder not to add too much salt. Third, the interface can be adapted
to the preferences of a single user or multiple users, and the users may want to know what
affected the result and what the consequences of the adaptation are. Figure 59 presents an
explanation that reminders were disabled based on user preferences, but that this setting causes
a danger of overcooking. Figure 60 and Figure 61 present short and more detailed explanations
regarding the way preferences of multiple users are combined on a large screen. Figure 62
presents a detailed explanation on a small screen. This explanation is needed mainly for novice
users, because the multi-user mode can also be selected at any time via the GUI: see the
rightmost control in the large screen GUI in the figures below, and the rightmost control in the
bottom part of Figure 17 for a small screen.


Figure 53: Introducing a sensor-augmented spoon in the cooking assistant

Figure 54: Explaining and reminding the availability of context-based support in the
automotive domain


Figure 55: Explanation regarding transition to the next step of instructions on a large
screen in the cooking assistant

Figure 56: Explanation regarding transition to the next step of instructions on a small
screen in the car assistant


Figure 57: Explanation regarding disabling of audio output in the cooking assistant

Figure 58: Explanation regarding message triggering in the cooking assistant


Figure 59: Explanation regarding the danger of disabling reminders in the cooking
assistant

Figure 60: Explanation regarding a way to combine preferences of multiple users for a
large screen in the cooking assistant


Figure 61: Detailed explanation regarding a way to combine preferences of multiple
users for a large screen in the cooking assistant

Figure 62: Detailed explanation regarding a way to combine preferences of multiple
users for a small screen in the cooking assistant

Figure 42 illustrates that when the user is asked to confirm a profile update, the user is also
provided with the possibility to view the history log and thus to understand the reason for the
system's question (for example, the user may realise that her manually set preferences are
incorrect, or find out that a learning error occurred because somebody else was too lazy to log
in under their own name and used the smart product under a different name). Figure 63 shows
that the history log can also be accessed at any time via the main menu, for example, if the user
wants to understand the reasons for recommendations.
Figure 64 presents the origami application log. Figure 42 illustrates the case when the smart
product updates interaction preferences. An update of task preferences can be performed in the
same way: for example, the origami application may update the preferred level of task
difficulty, while the cooking assistant may update recipe or diet preferences.


Figure 63: The Menu showing the sub-menu for browsing the history log in the origami
application

Figure 64: Browsing a user's history; the detail is shown for the initial (bottom row) and
two subsequent entries in the user's profile in the origami application

Figure 65 illustrates the results of requesting a recommendation based on the system settings
only, where the users can see the reason for recommendations, while Figure 71 illustrates the
result based on the user's profile.


Figure 65: Recommendation in the origami application, based on system settings only


3.7 Acknowledge task strategy

When the interface of a smart product is not immediately affected by starting a new task, the
user may start to wonder about the status of the task. The acknowledge task strategy is applied,
for example, to demonstrate that a smart product has activated communication with augmented
everyday objects and has thus become capable of recognising user actions. In aircraft assembly,
when the operator selects a fixing (via the green spherical icon), the right program is loaded into
the wrench. When the smart wrench is ready, the green spherical icon starts blinking.

Figure 67: Acknowledging that the smart wrench is ready for fixing in aircraft
assembly

Figure 68 illustrates how the smart product acknowledges the eLUM update: the
instructions related to the snow chains will be included in the eLUM.

Figure 68: Acknowledgment of the eLUM update performed in the automotive domain


3.8 Short-term customisation strategy

Short-term customisation requires UI elements allowing users to quickly adapt interaction to
their current wishes. This adapted configuration is not necessarily applied when the users start
the next task: during the user study some users expressed a desire to launch the applications
with default settings every time.
First, the short-term customisation strategy is required to allow users to quickly disable/enable
the potentially most annoying functionality, such as notifications or health tips: for example, a
stressed user may want to ignore her diet for a moment, but she will again be interested in
health tips during the next cooking session. Second, the short-term customisation strategy is
required for potentially privacy-threatening information: for example, if somebody applies the
dictator strategy to combine the preferences of multiple users, he/she automatically reveals
his/her own preferences. If this person does not want to reveal those preferences in some
company, he/she may choose the default strategy instead. Third, user preferences regarding
interaction modalities may be task-dependent or situation-dependent (e.g., audio interaction is
unreliable in noisy conditions), and thus it may be feasible to allow short-term customisation of
interaction modalities too. For example, the aircraft assembly mock-up allows choosing
modalities via the main menu.
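The two preference-combination strategies mentioned above can be sketched as follows; the default values and profile keys are illustrative assumptions, not the mock-up's actual settings:

```python
SYSTEM_DEFAULTS = {"audio": True, "images": True, "detail": "low"}


def combine_preferences(profiles, strategy, dictator=None):
    """profiles: {user_name: preference_dict}. Returns the combined profile."""
    if strategy == "default":
        # Privacy-preserving: nobody's personal profile is revealed.
        return dict(SYSTEM_DEFAULTS)
    if strategy == "dictator":
        # One user's profile wins -- and is thereby exposed to the group.
        return dict(profiles[dictator])
    raise ValueError("unknown strategy: %s" % strategy)
```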
Choosing presentation modalities for each task is also possible in the origami application: when
users select origami folds from the menu, they can choose between animated instructions for a
bird box (as selected in Figure 15) and bird box instructions represented by text and images
(the next row). Selecting a modality that does not match the user's profile amounts to
short-term customisation; it will not (immediately) influence the next recommendation or
modify the user's profile. However, it will contribute to long-term customisation, as the system
may suggest amending the user's profile if the alternative modality is repeatedly selected.
The cooking and car servicing mock-ups can be quickly customised via GUI controls: Figure 18
shows controls for a large screen, while Figure 19 and Figure 20 show controls for a small
screen for the cooking and car servicing mock-ups respectively.
In Figure 18, the buttons in the instructions tab allow the user to disable/enable audio output
for cooking instructions, disable/enable the showing of images for instructions, and set a
high/low level of detail for GUI text and audio output together, and for audio output separately.
The reminders tab allows customising the realisation of the ask the user for confirmation
strategy. Figure 41 shows how the reminder icon (the upper button in the reminders tab) and
the repeated reminder look in the GUI when reminders are fully enabled. In this case the
reminders are repeated after a certain time interval until the user reacts. The lower left button in
the reminders tab allows disabling/enabling reminding via audio text, and the lower right
button via beep. If both buttons look as in Figure 41, reminders will also be delivered via
audio messages and beeping. If the reminders tab looks as in Figure 46 (both audio text and
beep are
disabled, but reminders are not disabled), reminders will be delivered via the GUI only. If audio
text for reminders is disabled but beep is enabled, reminders will be delivered via beeping
only, and vice versa. As computer-initiated interaction can be annoying, the upper button in the
reminders tab allows disabling reminders completely; in this case the GUI will look as in
Figure 47, and audio reminders will be disabled as well. It is also possible to enable a just
once reminder (it will not be repeated even if the user did not react to it); in this case the GUI
will look as in Figure 40. If audio messages and/or beeping for reminders are enabled, the just
once reminder will be delivered via audio too.
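The reminder configuration above reduces to two small decisions: which channels a firing uses, and whether it repeats. A minimal sketch with assumed mode names:

```python
def reminder_channels(reminders_on, audio_text_on, beep_on):
    """Channels one reminder firing is delivered on (the GUI is always used
    whenever reminders are enabled at all)."""
    if not reminders_on:
        return []
    channels = ["gui"]
    if audio_text_on:
        channels.append("audio_text")
    if beep_on:
        channels.append("beep")
    return channels


def should_repeat(mode, user_reacted):
    # mode is "off", "once" or "repeat"; repeated reminders stop once the
    # user reacts, while a "just once" reminder never repeats.
    return mode == "repeat" and not user_reacted
```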
The upper button in the sensors tab allows disabling/enabling speech commands, and the
lower button sensor-based activity recognition. (The desire to have such a button was expressed
by the test subjects during the first user study: the subjects stated that they do not always want
to be spied on by everyday objects.) The upper button in the health tips tab allows completely
disabling/enabling the delivery of health-related information, and the lower button its delivery
via audio. The upper button in the explanations tab allows disabling/enabling the delivery of
explanations of the mock-up functionality, and the lower button their delivery via audio (the
default is to disable audio explanations).
Figure 18 also shows that the users can explicitly choose the languages of interaction if they are
not satisfied with the automatic adaptation.
The rightmost part of Figure 46 shows the timer. It is not always visible: it appears only when
the current cooking step has a time limit (baking time, for example). The time limit is taken
from the recipe, but the users can easily correct it using the scroll bar below the timer.
The multi-user mode control allows changing the way the preferences of multiple users
interacting simultaneously with the mock-up are combined. Figure 16 and Figure 18 show that
the configuration of the mock-up was automatically adapted to the preferences of multiple
users: the configuration for the first user (Katya) is shown in Figure 16, while the resulting
configuration for three users is shown in Figure 18. Similarly, Figure 17 shows how the
configuration was adapted to the preferences of multiple users in the car assistant.
Figure 17 and Figure 19 show that the GUI for a small screen has fewer immediately accessible
controls for configuring the mock-up functionality: the controls for quickly disabling/enabling
the potentially most annoying functionality (such as audio output and application-initiated
interaction, i.e., reminders and health/tool tips) are always visible, but, for example, changing
the multi-user mode requires opening another screen (shown in Figure 17). Disabling/enabling
audio separately for instructions, reminders and tips would also require opening another
screen, unlike the GUI for a large screen, which allows quick customisation of these aspects of
functionality separately.


3.9 Long-term customisation strategy

Smart products adapt interaction to the users based on user profiles. The long-term
customisation strategy requires interaction elements for
explicit editing of the user's preferences by end users
prompting the users to provide explicit feedback or to correct the system behaviour
setting system defaults and stereotypes by application developers.
If user preferences are unknown, interaction adaptation is based on stereotypes (e.g., young
children may not be able to read well and thus audio interaction should be used if possible) and
other system defaults (e.g., in the UK the default language is English). The adaptation
methodology is described in [D5.2.1].
3.9.1 Manual acquisition of user profile

When a user account is first set up, the user is required to set their initial preferences before
making use of the application. Figure 69 illustrates how user preferences can be accessed in
the origami application, and Figure 70 presents an interface for the explicit acquisition of user
preferences. Preferences for other applications can be acquired in a similar way. The initially
acquired profile may subsequently be updated based on interaction with the system; while the
system may prompt an update of the profile, it must be edited by the user.

Figure 69: The Menu showing the sub-menu for setting user and system options in the
origami application


Figure 70: Eliciting the user's interaction preferences in the origami application

In the origami application the user name is the default way to create the user account; however,
another user ID can be provided if the user prefers a more personal method of address.
Language, level of detail (which maps to user expertise) and presentation modality, in
that order, influence the customisation of interaction for the current version of the origami
application. The presentation medium contributes to system customisation too.
The default system behaviour recommends the next task to carry out based on the user's
settings: the recommendation shown in Figure 71 results from the user's profile shown at the
top left. Contrast this with the recommendation in Figure 65, where the system default is set to
Expertise = Novice.


Figure 71: Recommendation in the origami application, based on the user profile


3.9.2 Learning

Learning is based on logs of the interaction between users and smart products. Figure 64
illustrates what the system log for the origami application looks like: it contains the user's
actions, specifying the log time, the type of action and the result of the action taken. Logs for
other applications may contain other information (e.g., the cooking assistant's logs contain
information regarding the enabling/disabling of reminders and health tips for each task and
context, such as the time of day). Learning can be based on implicit user feedback only, i.e.,
on choices made by the user, and/or utilise explicit feedback: the user's evaluation of the
success of automatic adaptation.
From a technical point of view, the history log allows the refinement of the proactive behaviour
of the system. Timestamping each action allows the influence of the user's activity to be
weighted by recency. The weighting parameters assume that a user's recent activity is more
indicative of their actual preferences, as this takes into account the learning of the system and
changes in the user's environment (e.g., access to specialised equipment), in addition to their
perception of their own preferences. This also aims to account for the probability that users
may not have accurately captured their preferences, or updated them as they evolved.
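One plausible realisation of the recency weighting just described is to score each logged action by an exponentially decaying function of its age, so that recent selections dominate the learnt preference. The half-life value below is an assumption for illustration, not a parameter taken from the deliverable:

```python
import math
from collections import defaultdict


def learnt_preference(log, now_s, half_life_s=7 * 24 * 3600):
    """log: list of (timestamp_s, modality) pairs.
    Returns the modality with the highest recency-weighted score."""
    decay = math.log(2) / half_life_s
    scores = defaultdict(float)
    for ts, modality in log:
        # Each action counts with weight 2^(-age / half_life).
        scores[modality] += math.exp(-decay * (now_s - ts))
    return max(scores, key=scores.get)
```

With a one-week half-life, three text selections two months ago are easily outweighed by two video selections in the last couple of days.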
The result of the long-term customisation strategy is communicated, in this prototype, to the
user via the recommendation of tasks to carry out (see for example Figure 71 where, based on

the user's profile, the folding task for Dragon has been recommended). The task attributes can
be seen in the entry at the top of the history browser in Figure 64, which shows the expertise
level to be Advanced-Intermediate with modality Text with Images. Before the user's
history is factored into system recommendations, however, a minimum threshold must be
reached for the amount of information held in the log.
Default thresholds are set by the smart products developers, but may be edited by the user. For
example, if the history size threshold is set to one, user preferences will be updated after
each task, which was considered convenient by some persons in our study: several test subjects
stated that smart products should launch applications with the last used settings.
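The threshold rule can be sketched as a simple check: a profile update is only proposed once the log is long enough and the most recent selections all conflict with the declared preference (as with the five Video Only tasks highlighted in Figure 50). The function name and log representation are illustrative:

```python
def profile_update_due(log, declared_modality, history_threshold):
    """True when the log holds at least `history_threshold` entries and the
    last `history_threshold` selections all conflict with the declared
    preference. A threshold of one means updating after every task."""
    if len(log) < history_threshold:
        return False
    recent = log[-history_threshold:]
    return all(choice != declared_modality for choice in recent)
```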
As Figure 42 shows, the user's permission is required for updating the user profile in cases
where the user's actions conflict with their specified preferences. While these prompts are
useful for communicating system actions to the user, the option to disable them is important for
reducing annoyance, especially for repeated prompts. Alternatively, the option to set thresholds
allows a finite number of prompts to be displayed before allowing the system to autonomously
carry out repeated tasks based on the user's history. Figure 72 and Figure 73 show the interface
for customising learning and system prompts.

Figure 72: Dialog for updating system defaults, showing options for long-term
customisation of user interaction in the origami application


Figure 73: Dialog for updating system defaults in the origami application, showing options
for influencing long-term customisation of user interaction by setting how the system
is to log user actions

When automatic adaptation of interaction is based on proactive knowledge, different types of
knowledge may have different importance. For example, explicitly collected user preferences,
preferences learnt from history logs, stereotypical preferences and system defaults may be
weighted differently in the ranking of available interaction options. As these weights may be
user-dependent, it is feasible to allow the user to edit the default settings, for example, to increase
the weighting of their previous activity. The users may also request the system to alert them
about each conflict between their profile and their actions; in this case the user controls
how the conflict is resolved. Further, the user must specify whether they wish this to feed into
long-term customisation or to apply only to the current case, i.e., short-term customisation.
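The weighted ranking described above can be sketched as follows; the knowledge sources, weights and suitability scores are hypothetical examples, not values from the project:

```python
# Illustrative weighted ranking of interaction options across several
# knowledge sources (explicit preferences, learnt history, stereotypes,
# system defaults). All weights and scores below are made up.

WEIGHTS = {"explicit": 0.5, "learnt": 0.3, "stereotype": 0.1, "default": 0.1}

def rank_options(options, scores, weights=WEIGHTS):
    """scores[source][option] is a suitability value in [0, 1];
    options are returned best-first by the weighted sum over sources."""
    def combined(option):
        return sum(w * scores.get(source, {}).get(option, 0.0)
                   for source, w in weights.items())
    return sorted(options, key=combined, reverse=True)

options = ["Audio", "Text with Images"]
scores = {
    "explicit": {"Audio": 1.0},              # stated preference
    "learnt":   {"Text with Images": 1.0},   # what the history suggests
    "default":  {"Text with Images": 1.0},   # system default
}
rank_options(options, scores)  # ["Audio", "Text with Images"]
```

With these weights the explicit preference wins (0.5 vs. 0.3 + 0.1 = 0.4); a user who raises the weight of their previous activity above 0.4 would flip the ranking, which is exactly the kind of editable default discussed above.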
3.9.3

Ask for the user's feedback strategy

User profiles may be used for adapting informational content (e.g., to tailor health tips to users'
diets) as well as its form (e.g., modality and level of detail). In both cases learning can be
more accurate if the user's opinion regarding the adaptation is known. Figure 74 illustrates how the
origami application asks for user feedback: it requests feedback from the user
after presenting each recommendation. In Figure 74 the user is asked about the
appropriateness of the presentation modality and the level of detail (an inverse mapping to expertise) for
the sub-class of the instruction type. Applications capable of interacting with users in different
languages may also ask for feedback on presentation language and presentation medium.
Applications capable of delivering reminders and health/tool tips may ask for user feedback on the
modality and content of this information. Requests for feedback may also depend on context,
e.g., the system should not suggest editing the user's preference for audio if the user switches it
off because of environmental noise.

Figure 74: Recommendation feedback form in origami application

Figure 74 shows that users may choose not to enter feedback (by selecting cancel). Figure
72 shows that users can also disable the prompt for feedback on recommendations completely.
The "ask for the user's feedback" strategy can be used in combination with explanations: as the
"explain product actions" strategy section shows, many explanation-related messages are
delivered along with buttons that allow changing the current settings. This approach is used
when settings refer to potentially very annoying functionality (a reminder not to add too much
salt, for example, as in Figure 45); when automatic adaptation causes risks (for example, when
it is dangerous to disable all reminders, as in Figure 59); or when automatic adaptation is
applied to multiple configuration settings (for example, the states of all controls in Figure 17
depend on the applied multi-user adaptation mode). Another approach is not to suggest that the user
change long-term settings immediately, because configuring the smart products may distract
users from their primary tasks. For example, the explanation regarding audio output adaptation
in Figure 57 is not accompanied by any button for changing the adaptation rule, because
if the users do not like the adaptation results, they only need to press the always-available button
for enabling audio output. However, the smart products observe the users' behaviour and gather
implicit feedback: if a user indeed enables audio output by pressing the button, at the end of the
current cooking session the product asks the user whether he/she wants to modify the
adaptation rule. Figure 75 illustrates how this is done.
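The implicit-feedback loop above can be sketched as follows; the class, event and prompt text are illustrative assumptions, not the project implementation:

```python
# Sketch of implicit feedback gathering: if the user manually re-enables
# audio output that the product had muted, ask at the end of the session
# whether the adaptation rule should change. All names are hypothetical.

class AudioAdaptationObserver:
    def __init__(self):
        self.user_overrode_mute = False

    def on_user_enables_audio(self, product_had_muted):
        # Count the button press as feedback only when it reverses
        # an automatic adaptation decision.
        if product_had_muted:
            self.user_overrode_mute = True

    def session_end_prompt(self):
        if self.user_overrode_mute:
            return "You re-enabled audio output. Modify the adaptation rule?"
        return None  # no conflict observed, so no prompt is shown

observer = AudioAdaptationObserver()
observer.on_user_enables_audio(product_had_muted=True)
observer.session_end_prompt()  # returns the configuration question
```

Deferring the question to the end of the session is the design choice discussed above: it avoids distracting the user from the primary cooking task.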


Figure 75: Asking for the users feedback and consequent configuration in the cooking
assistant


4 Requirements
The requirements relevant for the components described in this deliverable have been
explained in [D5.1.1]. Table 4 shows how these requirements are addressed by the mock-ups.
The following keys are used:
X: demonstrated
-: not demonstrated
In the comments we also give examples of screenshots illustrating the fulfilment of
requirements. These examples are not exhaustive: we do not list all figures related to the
corresponding requirement, just one or two examples.
Many requirements listed in Table 4 describe not only what an end user shall see, but also
internal functionality of smart products. For example, fulfilment of the requirement
SP.WP5.ACT.2 means 1) that a smart product shall communicate its ID and capabilities in
machine-readable form to another smart product, so that the latter product may invoke the
former one in a workflow if the former product has the required capabilities; and 2) that a smart
product shall communicate its capabilities to an end user in a human-friendly form. The mock-ups
presented in this deliverable only aim at demonstrating what an end user will see. In the case
of the requirement SP.WP5.ACT.2, the result of communication between smart products is
demonstrated by the Cocktail Companion, which invokes a coffee machine in a workflow and
shows it in the interface of the Cocktail Companion, while communication with end users can
be handled by the question answering tool. Fulfilment of other cooperation-related
requirements was not demonstrated in the presented mock-ups because the resulting UI would be
similar to that of the Cocktail Companion using the coffee machine. Fulfilment of some
requirements, for example SP.WP5.PERS.9, was not demonstrated to the test subjects because
this kind of interaction is fairly trivial.

Name | Description | Mock-ups | Comments
SP.WP5.ACT.1 | A smart product shall be able to interact with a user. | X | All mock-ups
SP.WP5.ACT.2 | A smart product shall be able to state information about itself (its ID, its tasks etc.). | X | Communication with the users can utilise the question answering tool developed in WP3 and illustrated in Figure 52
SP.WP5.ACT.3 | A smart product should be able to initiate interaction. | X | All mock-ups except for the origami application, which is only reactive
SP.WP5.ACT.4 | A smart product should be able to draw a user's attention to another SP or a place. | X | E.g., cooking mock-ups draw attention to the coffee machine (Figure 34) and to the smart spoon (Figure 53)
SP.WP5.ACT.5 | A smart product should be able to advise a user. | X | E.g., cooking and car servicing mock-ups deliver tips (Figure 45 and Figure 44) and reminders (Figure 41 and Figure 32)
SP.WP5.ACT.6 | A smart product should be able to draw a user's attention to another SP or a place. | X | E.g., cooking mock-ups draw attention to the coffee machine (Figure 33 and Figure 34) and to the smart spoon (Figure 53)
SP.WP5.ACT.7 | When a user is starting a known process / workflow, a smart product should consider whether to start interaction or not. | X | We demonstrated how smart products enable or disable product-initiated interaction (e.g., Figure 47 illustrates the case when all reminders are disabled, but product-initiated delivery of health tips is enabled)
SP.WP5.ACT.8 | A smart product should stop interaction when the user is stopping the process. | X | All mock-ups
SP.WP5.ACT.9 | A smart product should interrupt interaction when the user is interrupted, e.g., by a phone call. | X | Demonstrated with the cooking assistant mock-up, which disables audio output when the user is listening to music (Figure 57) and delays audio output if the user talks; for more details see [Vildjiounaite-2011]
SP.WP5.ID.1 | A smart product should be able to authenticate, i.e. check a claimed identity. | X | E.g., RFID-based authentication is illustrated in Figure 7
SP.WP5.INT.1 | Smart products should be able to use a TextToSpeech service in the environment. | - | Mainly refers to cooperation between smart products; mock-ups used own TTS
SP.WP5.INT.2 | Smart products shall be able to draw the user's attention to it (e.g., beep). | X | E.g., reminders in the cooking assistant mock-up can be delivered via beeping (Figure 41), and the smart spoon can blink (Figure 53)
SP.WP5.INT.2b | Smart products shall be able to provide minimal feedback on their own (e.g., beep). | X | For example, the smart spoon can blink (Figure 53) and the smart wrench can report wrong torque by itself (not illustrated in this document) or via other devices, as in Figure 37
SP.WP5.INT.3 | Smart products shall proactively approach the user to provide information or ask for action. | X | All mock-ups, see for example the "ask the user for confirmation" chapter
SP.WP5.INT.4 | Smart products should be able to ask the user for input data. | X | E.g., the Cocktail Companion asks to log in (Figure 6) and the car assistant mock-up asks to confirm that an action was performed (Figure 32)
SP.WP5.INT.4a | Smart products should be able to ask the user for confirmation. | X | All mock-ups, see for example the "ask the user for confirmation" chapter
SP.WP5.INT.5 | The user has to be able to explicitly address a smart product (also via another smart product, e.g. if the target smart product has no input capabilities). | X | E.g., users can address the cooking assistant via audio interface and the smart wrench via the display of a nomadic device (Figure 39)
SP.WP5.INT.7 | Smart products should be able to use displays in the environment. | X | E.g., the smart wrench can interact via the display of a nomadic device (Figure 37)
SP.WP5.INT.8 | Smart products should be able to adapt the interaction to available devices (e.g., screen size). | X | E.g., the cooking assistant can deliver instructions via large (Figure 18) and small (Figure 19) screens
SP.WP5.INT.9 | Smart products shall be able to interpret input from input devices in the environment (voice, gesture, pointing device, remote control, touch screen buttons). | X | Gesture and pointing device interfaces were not demonstrated, but not needed either. Touch screen was demonstrated for example in the car and aircraft assembly mock-ups, speech input in the cooking assistant, and RFID input in the Cocktail Companion
SP.WP5.INT.10 | Smart products should be able to proactively approach the user to ask for missing data if it could not be automatically derived. | X | E.g., the car assistant mock-up asks to confirm that an action was performed correctly (Figure 32), and the cooking assistant asks whether the user switched off the heat (Figure 40)
SP.WP5.INT.11 | Smart products should be able to provide a GUI. | X | All mock-ups
SP.WP5.INT.12 | The user should be able to ask for and invoke available workflows on the smart product. | X | Task selection functionality is illustrated for example in Figure 9, Figure 15 and Figure 43 (the latter also shows currently unavailable workflows)
SP.WP5.INT.13 | Smart products should be able to guide the user through the workflow using step-by-step instructions. | X | All mock-ups, see the presentation of the guiding strategy
SP.WP5.LOC.1 | When users are approaching smart products, smart products should consider whether to initiate interaction with the approaching users or not. | - | Mainly refers to internal reasoning capabilities
SP.WP5.LOC.2 | Smart products should use input and output devices which are near to the user for interaction. | - | Mainly refers to cooperation between smart products; mock-ups used own input/output capabilities
SP.WP5.LOC.3 | Smart products should be able to include nearby SPs in interaction with the user. | - | Mainly refers to cooperation between smart products; mock-ups used own interaction capabilities
SP.WP5.LOC.4 | A smart product should be able to give directions to users looking for nearby devices, places or items. | - | Mainly refers to internal reasoning capabilities
SP.WP5.LOC.5 | A smart product should provide location-based recommendations to the users, for example, to find the nearest fuel station that has a required type of fuel. | - | Mainly refers to internal reasoning capabilities
SP.WP5.LOC.6 | A smart product should interrupt or stop interaction when the user is moving away. | - | Mainly refers to internal reasoning capabilities
SP.WP5.MISC.1 | Smart products should be able to automatically execute workflows. | X | All mock-ups, see the presentation of the guiding strategy
SP.WP5.MISC.3 | Smart products can determine and provide to the users new workflows when several smart products collaborate. | - | Mainly refers to cooperation between smart products and internal reasoning functionality, but e.g. the aircraft assembly mock-up can present a limited choice of tasks to the assembly worker (Figure 43)
SP.WP5.PERS.1 | Smart products shall be able to behave differently depending on users' roles (for example, workshop worker vs. owner). | X | Different ways to combine multi-user preferences can be used in the cooking (Figure 61) and car (Figure 62) assistant mock-ups
SP.WP5.PERS.2 | Smart products shall be able to choose an appropriate moment of interaction based on long-term user characteristics (e.g., tolerance to interruptions). | X | Mainly refers to internal reasoning functionality, but changes in timing and form of reminders were demonstrated (Figure 40, Figure 41)
SP.WP5.PERS.3 | Smart products shall be able to choose an appropriate modality of interaction based on long-term user characteristics (e.g., tolerance to interruptions). | X | Mainly refers to internal reasoning functionality, but e.g. Figure 17 illustrates how the car assistant adapts settings to preferences of single and multiple users
SP.WP5.PERS.4 | Smart products shall be able to adapt content of interaction to long-term user characteristics (e.g., allergies). | X | E.g., demonstrated via health tips delivery in the cooking assistant (Figure 45)
SP.WP5.PERS.5 | Smart products shall be able to adapt form of interaction (such as choice of modality or device) to user experience/skills. | X | Mainly refers to internal reasoning functionality, but e.g. the car assistant can deliver reminders (Figure 32) via audio to inexperienced users, via GUI only to more experienced users, and not at all to experts
SP.WP5.PERS.6 | Smart products shall be able to adapt content of interaction (such as level of details) to user experience/skills. | X | E.g., the aircraft assembly mock-up presents instructions differently depending on the worker expertise (Figure 29)
SP.WP5.PERS.7 | Smart products shall be able to adapt modality of interaction to user preferences (e.g., audio vs. graphical message). | X | Mainly refers to internal reasoning functionality, but all mock-ups do it. E.g., Figure 51 and Figure 71 illustrate two modality recommendations in the origami application
SP.WP5.PERS.8 | Smart products shall be able to adapt content of interaction to user preferences (e.g., to suggest a diet including as many favourite food products as possible). | X | E.g., health tips delivery in the cooking assistant (Figure 45) can be based on diets or favourite food products; in the first user study [D5.2.1.Annex] we demonstrated diet-based health tips
SP.WP5.PERS.9 | Smart products shall be able to adapt moment of interaction to user preferences (e.g., to notify the user well in advance or just before the event). | - | Well-in-advance notifications are not included in the chosen test scenarios, but they are nothing fancy
SP.WP5.PERS.10 | Smart products shall be able to adapt to context-dependency of user preferences (e.g., audio message in one situation vs. graphical message in another situation). | X | Mainly refers to internal reasoning functionality, but cooking and car mock-ups use context-dependent preferences in adaptation, for example, when they adapt states of controls (Figure 16 and Figure 17)
SP.WP5.PERS.11 | Smart products shall be able to adapt modality of interaction to roles, skills, capabilities and preferences of multiple users simultaneously. | X | E.g., demonstrated with the car assistant and cooking assistant mock-ups (Figure 60 and Figure 17)
SP.WP5.PERS.12 | Smart products shall be able to adapt content of interaction to roles, skills, capabilities and preferences of multiple users simultaneously. | X | Mainly refers to internal reasoning functionality, but e.g. the cooking and car assistant mock-ups adapt level of details to preferences of multiple users; the cooking assistant can deliver generic health tips for multiple users instead of personal ones
SP.WP5.PERS.13 | Smart products shall be able to protect privacy of group members. | X | E.g., demonstrated via different ways to combine preferences of multiple users (the default option does not reveal any preferences) and delivery of generic health tips
SP.WP5.PERS.14 | Configuration of interaction with smart products shall be convenient for application developers and for end users. | X | Test subjects approved the provided configuration functionality, see [D5.5.1]
SP.WP5.PERS.15 | The reason for interaction shall be provided to users. | X | See the description of the explanations strategy
SP.WP5.PK.1 | A smart product should be able to use interaction capabilities of other smart products that are currently nearby and provide suitable functionality or knowledge for a workflow / task it wants to execute. | - | Mainly refers to cooperation between smart products; mock-ups had sufficient interaction capabilities
SP.WP5.RL.2 | Smart products should be able to propose tasks or information to users (e.g., closest place to stop, trouble shooting procedures). | X | Mainly refers to internal reasoning functionality; however, health/tool tips demonstrated how some of this kind of information can be delivered
SP.WP5.RL.3 | A smart product should be able to learn workflows (or extensions to existing workflows), also concerning several smart products, by observing the users' interaction with the smart products. | - | Requires sophisticated reasoning functionality
SP.WP5.RL.4 | Smart products should provide alternative workflows, e.g., to recover from an error case. | X | E.g., the aircraft assembly mock-up demonstrates a recovery procedure (Figure 39)

Table 4: Fulfilment of requirements from [D5.1.1]


5 Conclusion and Outlook


This document presented the main interaction strategies and examples of realising these
strategies in different application domains: cooking, car servicing, entertainment and aircraft
assembly. The work presented in this document was aligned with the work on the implementation
of smart products architecture tools for implementing multimodal UIs in WP5, and was
done in cooperation with WP2 and WP3, as well as with the application work packages WP8, WP9
and WP10. The work on designing mock-ups and the findings from the user studies
[D5.2.1.Annex, D5.5.1, Vildjiounaite-2011] with the mock-ups resulted in refining the interaction
implementation approach and the user model (the latest version is described in [D2.1.3]),
and in further development of the SmartProducts Monitor (namely, the functionality for helping
end users to better understand smart products).
Some of the interaction strategies presented above constitute the core functionality of smart
products, such as guiding through task steps and product-initiated interaction (for example, asking
the users to confirm that time-critical actions are performed in time, if these actions cannot be
recognised from sensor data). Other strategies refer to functionality which may be called
optional, such as customisation or explanations regarding product actions. However, as the user
studies have shown, implementation of these strategies is very important for end user
acceptance, especially in the domains of personal applications, such as cooking, entertainment
and servicing of a personal car; in these domains it is also required to adapt smart products
to the preferences of multiple users interacting with the same smart product. Compared to these
domains, in the aircraft assembly domain customisation functionality may be more limited, but
it is nevertheless desirable. The results of the mock-up development and user studies have also
confirmed the importance of providing the users with means to understand the internal logic of
smart products, and the importance of politeness of smart products, especially in audio interaction.
The demonstrators based on the mock-ups and interaction strategies described here will serve
as a basis for further research on interaction with smart products and adaptation of interaction
to users and contexts in multi-user, multi-device environments.


Annex


A Glossary
Context
Context characterizes the actual circumstances in which the application is used. It comprises
all information that distinguishes the actual usage from others, in particular characteristics of
the user (her location, task at hand, etc.) and interfering physical or virtual objects (noise
level, nearby resources, etc.). We thereby only refer to information as context that can actually
be processed by an application (relevant information), but that is not mandatory for its normal
functionality (auxiliary information).

Environment
An environment is an identifiable container with a clear border that may contain smart
products and other, non-smart-product entities. Entities inside the container can influence each
other but they are not influenced by anything outside the container.

Event
Any phenomenon in the real world or any kind of state change inside an information system
can be an event. However, it must be observable and some component in the information
system must observe it in order to notify parties interested in the event.

Lifecycle
The lifecycle considered in the SmartProducts project consists of the following four stages:
design, manufacturing, usage and maintenance.

Proactive Knowledge
The Proactive Knowledge of a smart product is defined as the ensemble of data and formal
knowledge representations which directly or indirectly facilitate its proactive behaviour.
Proactive behaviour in turn denotes mixed-initiative communication, interaction, and action
where the actual situation and goals affect the turn-taking between a smart product and its
environment, i.e. users and other smart products. In particular, proactive knowledge may
trigger human-product interaction and product-environment communication based on
perceived needs (interaction needs may be computed by the product as part of its smartness,
e.g., based on context changes).

Proactivity
Proactivity is defined as a capability to initiate actions and exhibit goal-driven behaviour
without an explicit request or pre-defined schedule.

Situation
Situations are interpretations of context data. Thus, they can also refer to the states of relevant
entities.

Smart Products
A smart product is an autonomous object designed for self-organized embedding into different
environments in the course of its lifecycle, supporting natural and purposeful product-to-human
interaction. Smart products proactively approach the user, leveraging sensing, input, and
output capabilities of the environment: they are self-aware and context-aware. The related
knowledge and functionality is shared by and distributed among multiple smart products and
emerges over time.

User
A user of a smart product is a person who uses the functionality and/or the supporting tools of
smart products. Thereby we distinguish between smart products developers (end-users of the
SmartProducts platform, technically skilled), support service workers (end-users of the
SmartProducts platform, some technical skills required) and smart products end-users
(end-users of the functionality provided by smart products, no technical skills required), which
differ in their level of expertise.


B List of Acronyms
eLUM: electronic Libretto Uso Manutenzione (vehicle manual)
GUI: Graphical User Interface
IAM: Interaction Manager
ICT: International Conference on Telecommunications
IS: Interaction Strategy
IAT: Interaction Type
LED: Light Emitting Diode
MUI: Master User Interface
RFID: Radio Frequency Identification (Tags)
SUI: Slave User Interface
UI: User Interface
W.l.o.g.: Without limitation of generality
WP: Work Package


References
[D1.1.1] D1.1.1 Requirements for Smart Products and Proactive. SmartProducts, 2009.
[D2.1.3] D2.1.3 Final Version of the Conceptual Framework. SmartProducts, 2011.
[D3.4.1] D3.4.1 Design of Knowledge Management Methodologies and Technologies Update. SmartProducts, 2010.
[D3.5.1] D3.5.1 Evaluation of Initial Version of Technologies. SmartProducts, 2010.
[D5.1.1] D5.1.1 Requirements and Concept of Interaction with Smart Products & Proactive Knowledge. SmartProducts, 2009.
[D5.2.1] D5.1.2 Initial Methodology for Smart Products Usage Modelling and Personalisation. SmartProducts, 2010.
[D5.2.1.Annex] D5.1.2 Annex: Description of User Tests on Methodology for Smart Products Usage Modelling and Personalisation. SmartProducts, 2010.
[D5.1.2] D5.2.1 Initial Description of Interaction Strategies and Mock-Up UIs for Smart Products. SmartProducts, 2010.
[D5.4.1] D5.4.1 Initial Architecture and Specification of Services and UIs to Interact with Smart Products and Proactive Knowledge. SmartProducts, 2009.
[D5.4.2] D5.4.2 Initial Implementation of MMUI to Interact with Smart Products & Proactive Knowledge. SmartProducts, 2010.
[D5.5.1] D5.5.1 Evaluation Report for Initial Implementation. SmartProducts, 2010.
[D9.2.1] D9.2.1 System Design for Vehicle Product Lifecycle Management Application. SmartProducts, 2010.
[Aitenbichler-2007] Aitenbichler, E., Lyardet, F., Austaller, G., Kangasharju, J., Mühlhäuser, M.: Engineering intuitive and self-explanatory smart products. In: Proceedings of the 2007 ACM Symposium on Applied Computing, Seoul, Korea, ACM (2007) 1632-1637.
[Austin-2000] Austin, J.L.: How to Do Things with Words. Harvard University Press (2000).
[Bickmore-2007] Bickmore, T., Mauer, D., Crespo, F., Brown, T.: Persuasion, Task Interruption and Health Regimen Adherence. Persuasive Technology '07 (2007) 1-11.
[Cheverst-2005] Cheverst, K., et al.: Exploring Issues of User Model Transparency and Proactive Behaviour in an Office Environment Control System. User Modeling and User-Adapted Interaction 15 (2005) 235-273.
[FIPA-2000] FIPA: FIPA Communicative Act Library Specification (2000).
[ICT] The 17th International Conference on Telecommunications 2010, http://ec.europa.eu/information_society/events/ict/2010, last accessed 20.01.2011.
[iTEC] http://www.itec10.eu, last accessed 20.01.2011.
[Spiekermann-2007] Spiekermann, S.: User Control in Ubiquitous Computing: Design Alternatives and User Acceptance. September (2007).
[Ständer-2010] Ständer, M.: Towards Interactionflows for Smart Products. In: ACM Symposium on Applied Computing 2010.
[Ständer-2011] Ständer, M., Hartmann, M., Mühlhäuser, M.: Flexible and Context-Aware Workflow Execution using Ontologies. Submitted to: ACM SIGCHI Symposium on Engineering Interactive Computing Systems 2011.
[Vildjiounaite-2011] Vildjiounaite, E., Kantorovitch, J., Kyllönen, V., Niskanen, I., Hillukkala, M., Virtanen, K., Vuorinen, O., Mäkelä, S.-M., Keränen, T., Peltola, J., Mäntyjärvi, J., Tokmakoff, A.: Designing Socially Acceptable Multimodal Interaction in Cooking Assistants. IUI 2011.
[WfMC-1999] Workflow Management Coalition: Terminology & Glossary. 1999, pp. 1-65.
