

Using Home Automation to propose new automatic scenarios to disabled users


The increasing cost of population ageing and dependency is an unquestionable and worrying trend. However, the constant progress of information technologies also provides real opportunities for the healthcare and assistance of dependent people. In this context, this article proposes an original proactive solution for home monitoring based on classification methods. The existing home automation and multimedia services are used as built-in sensors, whose outputs are the input data for the analysis of user habits. From this analysis, assisted living services are automatically proposed to the user. First, the architecture of the ambient assisted living system is presented. Then an original clustering procedure is performed for activity recognition. From this clustering, we aim to automate scenario identification based on an unsupervised classification method.


It is widely admitted that in developed countries the number of aged people (over 80 years old) is increasing, raising the number of fragile people. One question is: can technology help fragile people gain more autonomy? This is important since, for aged people, it means staying at home longer, thus keeping more comfort and costing less to society. A lot of research has been carried out in the last decades under the Ambient Assisted Living and Smart Homes thematics; it is very well summed up in [3], [17]. This talk presents what we are doing in this field at the Lab-STICC Laboratory. Lab-STICC has been investigating this field, working tightly with the Kerpape Rehabilitation Center. These investigations mainly seek to enhance the use of home automation in the rehabilitation center itself. A first project named QuatrA (Aide Ambiante Ajustée Automatiquement) [1] gave a solution for controlling electrical devices through the KNX bus and multimedia through an infra-red controller, as well as helping disabled people navigate their environment with a wheelchair. The aim of this paper is to present a global solution for proposing automatic scenarios to users based on their use of the home automation system, without adding sensors dedicated to measuring activity. This solution is composed of a set of probabilistic tools to analyze activities, as well as the AAL architecture supporting these tools.

A. Related work

Population ageing induces increasing needs for places in hospitals and specialized institutions. Improving the quality of life of elderly and disabled people staying at home, which reduces the duration of hospitalization, is thus a strong motivation for telemonitoring technology. For instance, in [21] a method to measure the circadian variability of activities using location sensors (infra-red sensors and magnetic door contacts) has been developed. As a result, the system can alert the medical provider of changes in user activity or of unusual user behavior. In [23] the authors construct a video monitoring framework, fed by a network of cameras and contact sensors, in order to automatically recognize specified elderly activities. The authors in [20] construct a remote monitoring system with moving sensors in order to detect user behaviors. One main principle of new solutions based on information technology is activity recognition, which is deduced from different kinds of sensors. This activity recognition is then used for different purposes, for example fall detection, anomaly detection, person localization, etc. For instance, the authors in [5] focus on the use of Support Vector Machines (SVM) to learn and classify the activities of daily living performed by the elderly, fusing a large number of sensors distributed inside the flat, whereas in [16], based on information collected from all home devices equipped with sensors, the authors use the functional relations (i.e. optional or mandatory relations) between different tasks within the activities of daily life for activity recognition. To limit the investment cost, the authors in [19] use simple sensors that detect changes in the states of objects and devices to recognize the daily services in a complex home; more than 70 state-change sensors are used in this approach. Besides, different types of assistance for the elderly are presented in proposals from ambient intelligence, aiming to help people with disabilities to integrate autonomously into society.
In order to assist the user in simultaneously activating the devices corresponding to the user's preferences, an intelligent UT-AGENT design for smart home environments is presented in [10]. In another proposal [6], the authors consider a potentially dangerous situation as a consequence of a series of temporal events. From this hypothesis, a simulation study was designed to evaluate the danger levels generated by the TempCRM (Temporal Context Reasoning Model) model. To improve people's quality of life, several designs of ambient intelligence environments have been proposed in [7], [4]. In the field of robotic assistance for disabled people, different robotic approaches have been presented, such as in [15].


However, these current approaches rely heavily on the use of sensors, cameras and telemonitoring systems to collect information related to the user's situation, which implies equipment and installation costs that may be prohibitive in usual cases. Furthermore, input from users and professionals, including occupational therapists (OT), indicates that such intrusive methods can make people uncomfortable and therefore may not be easily accepted.

B. Methodology for services proposition

Based on the QuatrA results, which proved the importance of automatic scenarios for people with disabilities, our contribution in this paper deals with the automatic identification of new scenarios adapted to the user's habits. The global methodology is shown in Fig. 1. This work relies on the user's daily activities via home living devices to provide intelligent services for disabled people and the elderly. So a first step, service modeling, is performed to formulate the database and basic terms from the HAS architecture in the user environment. After defining the user database of service use, we need to


Fig. 1.
Scheme of our approach

learn the user's habits to propose new scenarios adapted to the user's needs. For this, an observation of the user's use of the HAS is first realized. Then a service identification step is performed, clustering each service into different activities according to its occurrence times. Based on this step, a process to determine assisted service execution through the use of scenarios is proposed in the scenario identification step. Finally, the resulting scenarios are proposed to the user or the OT, who makes the final decision.

This methodology is supported by the use of an ambient assisted living architecture, which is presented in section II. Then service identification is introduced, which helps to obtain the activities of the user. Constructing scenarios from the observations of service use is presented in section IV. Finally, results are presented in section V.


II. AAL ARCHITECTURE

AAL architecture is about providing services to the users. These services are mainly realized on electrical devices (shutters, lights, bed, etc.) activated through the home automation system, or on multimedia devices (TV, DVD, phone, etc.) controlled through an infrared remote or realized on a computer.

A. Services

To give some flexibility to the execution, we consider that a service is the realization of one or many operations, an operation being the realization of a function (what) of the system on a given resource (where). This provides reconfiguration capabilities in case of the unavailability of a resource. One of the facts highlighted by the occupational therapists (OT) from the QuatrA project is the need for scenarios to shorten the delivery time of services and to reduce the energy needed for the activation of devices (adapted interfaces are generally difficult to handle). A scenario is the execution of multiple services together and is itself a service. Each service is associated with a Quality of Service (QoS), to assess whether the delivery to the user was good or not. Management of scenarios is performed by the system; it can be seen as a service, but we will call it a manager since it does not directly deliver its service to the user. Other managers can be encountered in an AAL system, such as the alert management system, which checks whether the user's behavior deviates from what it used to be, or scenario identification, presented in this paper, which monitors the usage of devices to propose new scenarios to the user. In the QuatrA project, we also supported the user with navigation capabilities.

B. Hardware Architecture

The execution of the services is performed on a resource, but a service is triggered by some program code. A hardware architecture has been defined to support the execution of the services across different technologies and at the scale of a rehabilitation center such as Kerpape. Here are the devices we may find in such an architecture:

User terminals, embedded with the wheelchair-bound user, deal with interactions between the user and his environment. They are responsible for service publishing and for sending commands to the central server via a mobile terminal (Bluetooth connection).

Mobile terminals, also embedded in the wheelchair, act as a bridge between the user terminal and the central server via a stationary terminal (Bluetooth connection). They are also used for path calculation and wheelchair control using embedded sensors.

Stationary terminals are not only bridges, as mentioned above, but also control units for the local environment (infrared and KNX/EIB 1 connections).

A central server performs all centralized tasks such as service and scenario identification, alert management, and route management for wheelchairs.

Mobile and stationary terminals run on low-cost and low-power embedded boards (IGEPv2) with an ARM processor running a lightweight version of Ubuntu Linux; they can access the KNX bus through Ethernet and trigger IR commands through a USB dongle. The central server runs on a standard PC running a full desktop Linux, while the user terminals are smartphones or tablets associated with an adapted interface.

C. Software Architecture

A middleware called DANAH [12] has been developed to support the execution of services on the hardware architecture. Its role is to control the environment and activate the devices according to the user's wishes. DANAH is composed of two parts: a server and a client. The client runs on the PDAs and the server on the IGEP board. User wishes are expressed on the client and executed on the server, which triggers the devices that can either be on the KNX bus or controlled by infra-red. Scenario facilities are provided.

1 The KNX/EIB (European Installation Bus), born of a consortium, is an open standard for home and building control.



Fig. 2.

Hardware architecture

D. Formalization of the service architecture

From the methodology presented before, the database is collected from the HAS use. In this section, we propose two levels of modeling for the global system. The first level defines the services proposed by the home automation and multimedia architecture, and is called the service architecture. Based on basic services, the scenario level is built up within the scenario identification step.

1) Services modelling: The term service is central to this approach. There is a need for a precise definition of the underlying concepts, to be able to properly handle them and use them in the data structures. First we need to define the basic blocks:

- Operations are elementary tasks performed by the home automation system. These tasks are the realization of a function (what) by a resource (how), and they are divided into two classes: activation operations trigger a service while action operations do the job. A service can then intuitively be defined as follows:

- A service is an operation or a set of mutually dependent operations carried out by the system for the user.

As stated in the definition, a service is related to the user who requests it. For instance, the “open door” service is composed of a “launch open door from PDA” activation operation and an “open door” action operation, which is automatically performed, whereas the “watching TV” service consists of a “launch TV from PDA” activation operation and several action operations: “turn on TV”, “change channel”, “+/- volume”. However, a service can also be seen from the point of view of the HAS. Given the distinction between activation and action operations, a service is triggered by an activation operation; it then executes some action operations that provide an added value to the user. A service is called elementary if only one effect is perceived by the user. For instance, the “listen to webradio” service is considered elementary: it contains a “launch webradio from PDA” operation, realizing through the “PDA” activation resource all the steps necessary to set up the webradio, i.e. ordered functions performed on the PC: “wake up PC”, “set up network”, “launch application” and finally “select channel”. The definition of the services in the HAS architecture is deliberately not complete: to decrease the number of distinct services, the activation resource associated with a service is not defined until the service is proposed to the user.

2) Scenario modelling: Services are created and composed from the HAS architecture. Elementary services are defined in the architecture itself, though their activation resources are not yet determined. More complex services are built up from elementary services, defining scenarios. The definition of scenarios can become complex; moreover, the pool of scenarios evolves in time and is different for each user, so we propose to define scenarios at a higher level, as an association of services. Defining a scenario consists in combining existing services together and allocating an activation resource to each of these services within the scenario. Since each service is launched by the scenario manager, the manager becomes the activation resource. To enable the launch of a newly created scenario, an activation operation must be added to the scenario itself. From this definition, the execution of a scenario means the execution of all services within the scenario. In the case of automatic execution, these sets of services are automatically executed by the scenario management. We distinguish two kinds of scenario activation.

Automatic. In this kind, no user activation is required.

Semi-automatic. In this kind, we consider three ways to activate a scenario: parallel, where one activation simultaneously executes all services within the scenario; sequential, where one activation consecutively executes all services within the scenario; step by step, where a simple click on the HMI interface executes each service within the scenario.

We note that both kinds of scenario activation allow the user to save energy by reducing the effort and time needed to manually find each service on his HMI interface. Moreover, according to the earlier QuatrA project findings, different users need different scenarios. Given the interest of new scenarios for the improvement of user autonomy, and the variation of scenarios across users, a method to automatically find new scenarios adapted to the user's habits is essential; we present such a method in the next sections.
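As an illustration, the service and scenario concepts above can be sketched as a small data model. All class and field names here are hypothetical, chosen for the example; they are not taken from the DANAH implementation.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List

class Activation(Enum):
    AUTOMATIC = "automatic"        # no user activation required
    PARALLEL = "parallel"          # one activation runs all services at once
    SEQUENTIAL = "sequential"      # one activation runs services one after another
    STEP_BY_STEP = "step_by_step"  # one click per service on the HMI

@dataclass
class Operation:
    function: str   # what the system does, e.g. "open"
    resource: str   # where it is done, e.g. "door"

@dataclass
class Service:
    name: str
    operations: List[Operation] = field(default_factory=list)

@dataclass
class Scenario:
    """A scenario is an ordered association of services, itself a service."""
    name: str
    services: List[Service] = field(default_factory=list)
    activation: Activation = Activation.STEP_BY_STEP

# a hypothetical "wake up" scenario combining two elementary services
wake_up = Scenario(
    name="wake up",
    services=[Service("open shutters", [Operation("open", "shutters")]),
              Service("turn on light", [Operation("switch_on", "bedroom light")])],
    activation=Activation.SEQUENTIAL,
)
print(wake_up.activation.value, [s.name for s in wake_up.services])
```

The activation mode is carried by the scenario itself, so the scenario manager can decide at launch time how the member services are triggered.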


III. SERVICE IDENTIFICATION

We rely on the testimonies of OTs from the Kerpape center, who stress the usually regular habits of the disabled patients they assist. Moreover, studies in geriatric contexts [18] show the same regularity and structuring role for certain categories of elderly people living alone. Therefore, we consider that the regular daily services, modeled previously, are likely to represent the habitual user daily services. Actually, each activation of a service gives a timed value, which can be considered as a kind of virtual sensor. But a service can be performed several times per day, and these distinct occurrences can be associated with different scenarios. For instance, the “turn on light” service can be associated with the “wake up” scenario in the morning and with other scenarios in the evening. Therefore, the first question is how to distinguish different groups of service occurrence times, when there are no preexisting tags identifying specific daily use periods?


Answering this question amounts to discovering how to split the population into smaller clusters such that data points in a cluster are more similar to each other than to data points in different clusters. Such clusters express use periods of the related service, called activities. This is the main principle of service identification.

A. Service subset selection

From the above description, the database for the analysis is characterized by the names and occurrence times of the services. They correspond to the use of identified HAS devices and the call times respectively, which produces temporal relationships between different services. After a period of observation, a set of occurrence times for each service is obtained. Besides this temporal characteristic, a service can be classified into different types according to its use frequency. Typically, we have different kinds of services: a daily service is requested each day; a weekly service is requested each week; a k-days service is requested every k days. These data represent different types of services.

In this work, we focus on the combination of different daily services to propose daily scenarios. Note that this is just a case study of the general method. The daily attribute of a service is measured by the number of days on which this service occurs. A service performed once a day for 10 days is more frequent than a service performed 10 times in a single day and absent for the nine other days. From this idea, a service is considered to be a daily service if it is observed on more than a given ratio of the N observed days, for instance 50% (i.e. threshold = 0.5). The value of this threshold depends on the user context and user habits, as well as user capabilities. Consequently, it is considered as a user parameter to determine the daily services corresponding to the user context, and will be advised by the OT or the user. This parameter allows us to classify two types of services: daily services and other services.
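The daily-service filter described above can be sketched as follows; the service names, observation log and threshold value are illustrative.

```python
def is_daily(occurrence_days: set, n_observed_days: int, threshold: float = 0.5) -> bool:
    # Count distinct days of use, not raw call count: a service used 10 times
    # in a single day still counts as a single occurrence day.
    return len(occurrence_days) > threshold * n_observed_days

N = 10  # observed days
log = {
    "turn on light": {1, 2, 3, 4, 5, 6, 7, 8, 9, 10},  # used every day
    "watch TV":      {1, 3, 5, 7, 8, 9},               # 6 of 10 days
    "open window":   {4},                               # used once, many times that day
}
daily = [name for name, days in log.items() if is_daily(days, N)]
print(daily)  # -> ['turn on light', 'watch TV']
```

Only the day count matters here; the per-day occurrence times are kept for the clustering step that follows.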

B. Clustering procedure

The clustering procedure consists of three steps. First, data are grouped into as many clusters as wished by means of a clustering method. Performing this task for a range of integers allows us, in a second step, to choose the most appropriate number of clusters according to a stability criterion. We finally perform an outlier detection within the resulting clusters.

1) Clustering: The collected data consist of the occurrence times of each service during an observation period. We intend to group these data into clusters corresponding to the different activities of this service. Since we usually have no a priori knowledge of the actual activities, we opted for the most popular unsupervised clustering technique, the K-means method [16].

For a given integer k, K-means clustering gives a partition of the input data into k sets {C_1, ..., C_k}, so as to minimize the following objective function:

E(k) = \sum_{r=1}^{k} \sum_{x \in C_r} \| x - c_r \|^2

where c_r is the center of cluster C_r and \| \cdot \| stands for the Euclidean norm.


After an initialization step, the K-means algorithm proceeds by alternating between assignment and update steps until convergence:

Initialization step: We randomly choose k cluster centers {c_1, ..., c_k} in the data set.

Assignment step: We assign each observation in the data set to the cluster with the closest center:

C_r = \{ x_i : \| x_i - c_r \| \le \| x_i - c_j \|, \; \forall j = 1, \ldots, k \}

Update step: We calculate the new cluster centers to be the means of the data in the clusters:

c_r = \frac{1}{n_r} \sum_{x_i \in C_r} x_i

where n_r = card(C_r).


Optimization step: Because of the monotonically non-increasing property of the objective function, convergence is always assured. However, there is no guarantee that the global optimum is reached. Furthermore, the result may depend on the initialization step. A heuristic is to run the algorithm from different starting points and to keep the best local optimum, i.e. the clustering which achieves the minimum of the objective function.
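A minimal sketch of the K-means steps above, with random restarts, applied to one-dimensional occurrence times (minutes since midnight). This is an illustration, not the authors' implementation, and the data are made up.

```python
import random

def kmeans(data, k, restarts=10, seed=0):
    rng = random.Random(seed)
    best = None
    for _ in range(restarts):
        centers = rng.sample(data, k)                    # initialization step
        while True:
            clusters = [[] for _ in range(k)]            # assignment step
            for x in data:
                j = min(range(k), key=lambda r: (x - centers[r]) ** 2)
                clusters[j].append(x)
            new = [sum(c) / len(c) if c else centers[r]  # update step
                   for r, c in enumerate(clusters)]
            if new == centers:                           # converged
                break
            centers = new
        cost = sum((x - centers[r]) ** 2                 # objective E(k)
                   for r, c in enumerate(clusters) for x in c)
        if best is None or cost < best[0]:               # keep best restart
            best = (cost, centers, clusters)
    return best

# e.g. a "turn on light" service used around 7:00 and around 21:00
times = [418, 425, 431, 440, 1255, 1262, 1270]
cost, centers, clusters = kmeans(times, k=2)
print(sorted(len(c) for c in clusters))  # -> [3, 4]
```

The restart loop implements the heuristic mentioned in the optimization step: several random initializations are tried and the partition with the lowest objective value is kept.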

2) Determining the number of clusters: We generate a Gap curve for the input data by running the K-means algorithm for all values of k between 1 and k_max; for each k, we compute the Gap between the resulting clustering and the clusterings of data following a null reference distribution. The largest jump in the Gap curve represents the best choice of k.

We chose the uniform distribution as the null reference distribution, and we combined the weighted Gap and weighted DDGap methods [22] to analyze the Gap between clusterings.

The distortion of a clustering {C_1, ..., C_k} is defined as:

W_k = \sum_{r=1}^{k} \frac{1}{2 n_r (n_r - 1)} D_r

with D_r = \sum_{i,j \in C_r} \| x_i - x_j \|^2.

The Gap between a clustering in k clusters of the data set and the corresponding clustering of uniformly distributed data is given by:

Gap(k) = E(\log(W_k)) - \log(W_k)

where E(\log(W_k)) is the expected log-distortion of uniformly distributed data sets. The weighted Gap method compares successive values of Gap(k), and the best choice of k corresponds to a maximum of Gap(k).

On the other hand, the weighted DDGap method determines the best jump in the Gap curve by comparing the successive differences:

DDGap(k) = DGap(k) - DGap(k+1)

with DGap(k) = Gap(k) - Gap(k-1). The best choice of k corresponds to a maximum of DDGap(k). The combination of these two Gap measures is motivated by the fact that DGap is more likely than DDGap to


Hence, we propose the following strategy to determine the number of clusters:

1) Run the weighted Gap method.
2) If the best choice of k is 1, check the range of this unique cluster: a cluster with occurrence times separated by too long a lapse of time (greater than a fixed threshold) may not be useful to learn habits. In this case, it is better to split the cluster and go to the next step.
3) If k >= 2, use the DDGap method to estimate the number of clusters and check the ranges of all resulting clusters. Large-range clusters are split by iterating steps 1 to 3.
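The DGap/DDGap computation used in this strategy can be sketched as follows; the Gap values are illustrative, not measured.

```python
def best_k_ddgap(gap):
    # gap[0] = Gap(1), gap[1] = Gap(2), ...
    # DGap(k) = Gap(k) - Gap(k-1), defined for k = 2..kmax
    dgap = [gap[i] - gap[i - 1] for i in range(1, len(gap))]
    # DDGap(k) = DGap(k) - DGap(k+1), defined for k = 2..kmax-1
    ddgap = [dgap[i] - dgap[i + 1] for i in range(len(dgap) - 1)]
    # the retained k maximizes DDGap (k starts at 2)
    return max(range(len(ddgap)), key=ddgap.__getitem__) + 2

gap_curve = [0.10, 0.85, 0.90, 0.91]  # largest jump between k=1 and k=2
print(best_k_ddgap(gap_curve))  # -> 2
```

A maximum of DDGap marks the point where the Gap curve stops rising sharply, i.e. the largest jump.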

This procedure builds a top-down hierarchical tree of the data set, and the resulting clusters correspond to different activities of the same service. Among the obtained clusters, some may correspond to rare activities and others may include rare events. In the next step, we intend to detect such abnormalities.

3) Outlier detection: An outlier is an observation which markedly deviates from the other members of the sample in which it occurs [2]. Outliers often contain useful information about underlying abnormal behaviors, and mining for outliers has a great number of applications in a wide variety of domains. In this section, however, we consider outliers as impediments to the learning process, which as such must be identified and eliminated. At this point, a service is clustered into a set of activities, and each activity is a set of occurrence times of the service. We are concerned with two types of outliers: outlying observations and outlying clusters. An outlying observation is an observation considerably dissimilar from the remainder of the data in its cluster; it corresponds to an abnormal event. An outlying cluster is a small cluster; it corresponds to a rare activity. The outlier mining method consists in:

1) Detecting small clusters: Following [13], a small cluster is defined as a cluster with fewer points than half the average number of points in the k clusters.

2) Detecting outliers in the rest of the clusters (if any): We compute the median absolute deviation (MAD) of the current cluster, which is the mean of the distances between the median of the cluster and each one of its points. On the other hand, for each point x_k, we compute the median absolute deviation of the current cluster deprived of x_k, termed MAD_k. If the ratio MAD_k/MAD is greater than a given threshold, x_k is declared an outlier.
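A sketch of the outlier-mining steps above. MAD follows the text's description (mean distance between the cluster median and each point). The direction of the MAD_k/MAD test is ambiguous in the reconstructed source, so this sketch flags x_k when the cluster's dispersion shrinks markedly once x_k is removed; the threshold and data are illustrative.

```python
import statistics

def mad(points):
    # Mean distance between the cluster median and each point, as in the text.
    med = statistics.median(points)
    return sum(abs(p - med) for p in points) / len(points)

def small_clusters(clusters):
    # Following [13]: fewer points than half the average cluster size.
    avg = sum(len(c) for c in clusters) / len(clusters)
    return [c for c in clusters if len(c) < avg / 2]

def mad_outliers(cluster, threshold=3.0):
    base = mad(cluster)
    flagged = []
    for k, x in enumerate(cluster):
        rest = cluster[:k] + cluster[k + 1:]   # cluster deprived of x_k
        mad_k = mad(rest)
        # x_k is an outlier if removing it shrinks the dispersion strongly
        if mad_k == 0 or base / mad_k > threshold:
            flagged.append(x)
    return flagged

# occurrence times: one abnormal event (600) and one rare activity ([1255])
clusters = [[418, 425, 431, 440, 600], [1255], [1262, 1270]]
print(small_clusters(clusters))   # -> [[1255]]
print(mad_outliers(clusters[0]))  # -> [600]
```

Both the small cluster and the flagged point would be discarded before scenario construction.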

Both small clusters and outliers are eliminated before the construction of scenarios. The clustering procedure is illustrated in Fig. 3. Three clusters are identified according to the occurrence times. After removing outlier clusters and outliers, which are represented by red circles, two clusters are acquired. Each acquired cluster represents a daily activity of the related service. In other words, a daily activity is defined as the service at a point of time, which allows us to distinguish the same service at different occurrence times. These activities are then used for the identification of new scenarios, which is presented hereafter.


Fig. 3.
Clustering results

IV. SCENARIO CONSTRUCTION


Carrying out everyday life activities, even the most harmless ones, may be very difficult or even impossible for certain elderly and disabled people. Enabling the automatic management of home facilities may provide them valuable support and contribute to giving them more autonomy and a better quality of life (entertainment, communication, self-esteem...). To go further, we aim to exploit automatic learning to provide a responsive domotic system which offers the user personalized scenarios tailored to his habits, able to anticipate his needs and to adapt to changes in his habits. Let us first focus on the construction of personalized and anticipatory scenarios.

A. Activity clustering

The basic idea of this activity clustering is to group close activities, frequently requested together, into sets of activities which propose new scenarios. First, we intend to turn the idea of time-closeness into a measure of similarity usable by clustering methods. In the literature, many methods for data analysis have been adapted to use solely dissimilarities between data, such as K-means clustering, Kohonen's Self-Organizing Maps, and HAC (Hierarchical Ascendant Classification). In our context, to additionally account for the neighborhood relation between activities in scenario construction, we chose the unsupervised DSOM (Dissimilarity Self-Organizing Maps) algorithm [9] for grouping close activities into clusters, each one being considered as a new scenario. The data for this algorithm are the dissimilarity values between different activities; a dissimilarity measure is therefore required. Illustrating the dissimilarity between different activities by different shapes, the method is shown in Fig. 4: through the DSOM algorithm, similar shapes are grouped together. From this figure, we see that there are two problems to solve before using the DSOM algorithm: i) the definition of the dissimilarity between activities; ii) the choice of a relevant number of clusters, also called neurons.


Fig. 4.
Illustration of Kohonen algorithm



1) Dissimilarity measure: We define the cooccurrence frequency f_{i,j} of two activities i and j as:

f_{\{i,j\}} = f_{ij} + f_{ji}

where f_{ij} is the frequency of the sequence (activity i -> activity j) during an observation period of N days. f_{\{i,j\}} measures the frequency of the pair of activities {i, j} independently of their occurrence order.

The relative lapse of time between the pair of activities {i, j} is defined as:

\delta_{ij} = \frac{2 \times \Delta_{ij}}{\Delta_i + \Delta_j}

where \Delta_{ij} is the mean lapse of time between activities i and j, and \Delta_i is the mean lapse of time between activity i and all other existing activities.

The dissimilarity between activities i and j is measured by:

d(i, j) = \delta_{ij} \times (1 - f_{\{i,j\}}) if i \neq j, and d(i, i) = 0.




We built d so as to take into account both the lapse of time and the cooccurrence frequency, in such a way that when we fix the relative lapse of time \delta_{ij}, the similarity between the pair of activities {i, j} increases with the frequency f_{\{i,j\}}. On the other hand, for a fixed frequency, the smaller the relative lapse of time is, the higher the similarity between the activities.
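A sketch of the dissimilarity computation, under the assumption (not fixed by the text) that the cooccurrence frequency is normalized to [0, 1] by the number of observed days N; the mean lapses are taken as given, in minutes, and all numeric values are illustrative.

```python
def dissimilarity(f_ij, f_ji, delta_ij, delta_i, delta_j, n_days):
    f_pair = (f_ij + f_ji) / n_days                   # f_{i,j}, order-independent
    rel_lapse = 2 * delta_ij / (delta_i + delta_j)    # relative lapse delta_ij
    return rel_lapse * (1 - f_pair)                   # d(i, j); d(i, i) = 0

# "open shutters" vs "turn on light": 5 min apart on average,
# seen together on 9 of 10 days -> very similar (small d)
close = dissimilarity(f_ij=6, f_ji=3, delta_ij=5,
                      delta_i=120, delta_j=100, n_days=10)
# "open shutters" vs "watch TV": far apart in time, rarely together
far = dissimilarity(f_ij=1, f_ji=0, delta_ij=400,
                    delta_i=120, delta_j=300, n_days=10)
print(close < far)  # -> True
```

A frequently co-occurring, time-close pair thus gets a dissimilarity near 0, so DSOM will tend to place both activities in the same cell.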

2) Clustering by the DSOM algorithm: The SOM algorithm [8] is a nonlinear projection technique for visualizing the underlying structure of high-dimensional vectorial data. Input vectors are represented by prototypes arranged along a regular low-dimensional map (usually 1D or 2D), in such a way that similar vectors in input space become spatially close in the map. However, in several real applications, no vector description of the data is available; instead, the data are pairwise compared according to a dissimilarity measure. An adaptation of Kohonen's SOM to dissimilarity data, called DSOM, was proposed in [9]. The DSOM algorithm proceeds as follows:

Initialization step: Assign a prototype to each cell (or cluster) of the grid. The prototypes may be chosen randomly in the data set, but other heuristics are possible.

Learning step: Alternate between affectation and representation phases until there is no change in the grid. At iteration l:

Affectation phase: Assign each observation in the data set to the cluster with the closest prototype with regard to the dissimilarity measure.

Representation phase: The new prototypes are calculated as the solutions of the minimization problems:

c_j = \arg\min_{y} \sum_{x \in C} h(i, j, l) \, d(x, y)

where d(., .) is the dissimilarity measure, c_j is the prototype of the cell C_j at iteration l, i is the index of the cell to which observation x is affected, and h is a neighborhood function which decreases with time. A common choice for h is a Gaussian kernel.
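The DSOM loop above can be sketched on a 1-D grid of cells, with prototypes restricted to data points and a Gaussian neighborhood kernel. The neighborhood schedule and grid size below are illustrative choices, not the authors' settings.

```python
import math
import random

def dsom(D, n_cells, n_iter=20, seed=0):
    n = len(D)
    rng = random.Random(seed)
    protos = rng.sample(range(n), n_cells)  # initialization: prototypes = data indices
    for l in range(n_iter):
        sigma = max(n_cells / 2.0 * (1 - l / n_iter), 0.5)  # shrinking neighborhood
        h = lambda i, j: math.exp(-((i - j) ** 2) / (2 * sigma ** 2))
        # affectation phase: closest prototype w.r.t. the dissimilarity
        assign = [min(range(n_cells), key=lambda j: D[x][protos[j]])
                  for x in range(n)]
        # representation phase: c_j = argmin_y sum_x h(cell(x), j, l) d(x, y)
        protos = [min(range(n),
                      key=lambda y: sum(h(assign[x], j) * D[x][y]
                                        for x in range(n)))
                  for j in range(n_cells)]
    # final affectation with the stabilized prototypes
    assign = [min(range(n_cells), key=lambda j: D[x][protos[j]])
              for x in range(n)]
    return assign, protos

# two groups of activities, pairwise compared by a toy dissimilarity
vals = [0, 1, 2, 10, 11, 12]
D = [[abs(a - b) for b in vals] for a in vals]
assign, protos = dsom(D, n_cells=2)
print(assign[:3], assign[3:])
```

Only the dissimilarity matrix D is used, so the same loop applies directly to the activity dissimilarities d(i, j) defined earlier.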



For a given grid size, several initializations of DSOM are tried, and the map with the least quantization error is kept.

We choose the best size of the grid according to the variation of information (VI) criterion proposed in [14]. It is a criterion for comparing partitions, measuring the distance between two partitions of the same data set. Applying this variation of information criterion to the evaluation of the DSOM algorithm, the idea is to assess the stability of a clustering through the variation of information between the clustering of the complete data set and the clustering of a reduced data set. The best estimate of the number of clusters is the one whose clustering is the most stable under the removal of some data points.

The obtained clusters are sets of close activities called scenarios. The services within a scenario are automatically activated, and we will see later the different possible activation modes. For the moment, we need to order the activities within the scenarios in order to allow any automatic activation.

B. Scenario ordering

A scenario can be described as a weighted directed acyclic graph whose vertices are the activities within the scenario and whose arcs are ordered pairs of activities. The arcs express the direct dependence between activities, and as such the weight assigned to an arc (i, j) is the frequency of the sequence (activity i -> activity j), namely f_{ij}. A path is a sequence of vertices connected by arcs, and its length is set to be the product of the weights of all traversed arcs:

l_{ij} = \prod f_{kl}

over all the arcs (k, l) of the current path.

A Hamiltonian path is a path which visits all vertices of the graph exactly once.

We order the activities within a scenario by first choosing the starting and ending activities and then choosing the Hamiltonian path joining them:

Step 1: Given two activities i and j, we set the weight L_ij of the event "i is the starting activity and j is the ending activity" to be:

L_ij = max l_ij over all Hamiltonian paths joining i to j.

Step 2: The winning activities are the vertices which maximize L_ij.

Step 3: Once the starting and ending vertices are selected, we choose among all Hamiltonian paths joining them the one (or possibly the ones) with maximal length l_ij.

The scenarios we deal with consist of a small number of activities; thus the task of searching Hamiltonian paths remains of very reasonable cost.
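Since each scenario contains only a few activities, the three steps above can be collapsed into a brute-force search over all permutations, each permutation being a candidate Hamiltonian path whose length is the product of the arc frequencies f_ij. The frequency-matrix representation below is an illustrative assumption:

```python
from itertools import permutations

def order_scenario(activities, freq):
    """Return the ordering of a scenario's activities with maximal path length,
    where the length of a path is the product of the pairwise frequencies
    freq[(i, j)] of consecutive activities (a Hamiltonian path in the graph)."""
    best_path, best_len = None, -1.0
    for path in permutations(activities):
        length = 1.0
        for i, j in zip(path, path[1:]):
            length *= freq.get((i, j), 0.0)    # absent arcs have frequency 0
        if length > best_len:
            best_path, best_len = list(path), length
    return best_path, best_len
```

Maximizing directly over all permutations subsumes the choice of the starting and ending activities (Steps 1 and 2), since the winning pair is the one carrying the overall longest path.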

Once the scenarios are ordered, they are proposed to the user and the OT for validation. Indeed, the user may be uninterested in a given scenario or, more problematic, a scenario may go against a therapeutic purpose. The expertise of both the user and the OT is compulsory to make this solution an appropriate and useful answer to the automatic control of the HAS environment.

C. Dynamic adaptation

The behavior of the user may change over time due to changes in his health, private life or seasonality, for example. For instance, a change of user habits for the service "turn on light" between winter and summer is illustrated in Fig. 5. To prevent the calculated scenarios from deteriorating over time, we have to refresh them periodically. There are two strategies for achieving the updates: the first is to adapt the scenarios regularly without considering whether changes have really happened; the second is to first detect changes and then adapt the scenarios. We adopted the first strategy, as illustrated in Fig. 6. It consists, for a fixed time step of k days, in constructing scenarios after an observation period of N days and then constructing new scenarios based on the last N days. The choice of k is left to the OT.
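The first strategy amounts to a sliding-window rebuild. A minimal sketch, where build_scenarios stands in for the whole clustering-and-ordering pipeline (a hypothetical callable):

```python
def sliding_window_scenarios(events_by_day, build_scenarios, N=25, k=2):
    """Rebuild the scenarios every k days from the last N days of observations.

    events_by_day: chronological list of per-day observations.
    build_scenarios: the clustering-and-ordering pipeline (hypothetical callable).
    Yields (day_index, scenarios) at each refresh.
    """
    for day in range(N, len(events_by_day) + 1, k):
        window = events_by_day[day - N:day]   # only the last N days are used
        yield day, build_scenarios(window)
```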


In the experiments, one real-world data set and several artificial data sets were used. The artificial data sets were generated to analyze the robustness of the approach in handling irregular habits and noise in the data.

A. Simulation design

We developed a simulator with Scilab, a free numerical computation software, to generate artificial data sets. To make them as realistic as possible, we followed the Quatra project results and the OT's advice to create the profile of a fictitious patient at the Kerpape center.

This patient was given three typical timetables: one for working days, one for Saturdays and one for Sundays. A typical timetable is a set of services sorted by their typical call times, as shown in Tab. I. Predefined scenarios underlie the choice of the above activities. For working days, for example, we have the following predefined scenarios:


Fig. 5. Example of user habit change ("turn on light" service in winter, in summer, and after 8 days of changes)

1) "Wake-up" scenario [1-2-3-4]: switch on light, open shutter, turn on TV, turn on hot water.
2) "Go-out" scenario in the morning [5-6-7-8]: turn off TV, open the door, switch off light, close the door.
3) "Go-in" scenario at noon [9-10-11]: open the door, setup bed, close the door.
4) "Go-to-bed" scenario in the evening [23-24-25-26]: setup bed, turn off TV, switch off out light, switch off light.

A data set of N days is generated by introducing noise into the typical timetables by adding:

variations around the typical call times,

rare events such as non-calls or abnormal calls of an activity.


Fig. 6. Principle of dynamic adaptation

TABLE I
WORKING DAY ACTIVITIES

Label   Time    Daily activity
E1      08:00   Switch on light
E2              Open shutter
E3              Turn on TV
E4              Turn on hot water *
E5              Turn off TV
E6              Open door
E7              Switch off light
E8              Close door
E9              Open door
E10             Setup bed
E11             Close door
E12             Unset up bed
E13             Turn on computer
E14     17:00   Turn off computer
E15             Switch on light
E16             Turn on TV
E17             Watch DVD
E18             Turn on light ext
E19             Hang on telephone
E20             Hang up telephone
E21             Close shutter
E22             Turn off DVD
E23             Setup bed
E24             Turn off TV
E25             Switch off out light
E26             Switch off light

* wished service, but not yet available

We used a Gaussian distribution to create variations around the typical call times, a uniform distribution to generate activities occurring randomly in an interval of time, and a Poisson distribution to generate rare events. For instance, the patient, who usually goes to the rehabilitation room at 9.30 am on working days, is used to waking up around 8 am. His "turn on light" times for working days were generated via the Gaussian distribution with mean 8 am and a variance expressing the regularity of the patient. The higher the variance, the more irregular his habits are for this particular activity. Sickness and other impediments may cause exceptional calls or non-calls of an activity. These events are generated via the Poisson distribution, whose parameter characterizes the frequency of such exceptions.
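The original simulator was written in Scilab; an equivalent sketch for a single service, with illustrative parameter values (typical call time, variance, rare-event rate all assumed, not taken from the paper), could read:

```python
import numpy as np

def simulate_call_times(n_days, mean_hour=8.0, std_hour=0.25, skip_rate=0.02, seed=0):
    """Generate daily call times (in hours) for one regular service.

    Gaussian noise models the variation around the typical call time; a
    Poisson variable with small rate models rare non-calls (returned as NaN).
    """
    rng = np.random.default_rng(seed)
    times = rng.normal(mean_hour, std_hour, size=n_days)
    skips = rng.poisson(skip_rate, size=n_days) > 0   # rare-event indicator
    times[skips] = np.nan                             # the service was not called
    return times
```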


B. Simulation results

We present here the results of a data set simulating 100 days of the fictitious patient. The lowest coefficient of variation (standard deviation over mean) is 15 minutes and is associated with the activity "open door" (standard deviation = 4). The highest coefficient of variation is 70 minutes and is associated with the activity "turn on light" (mean = 7 pm, standard deviation = 4). The number of abnormal events represents 0.5% of the data set and 5% of the least dense activity. We are concerned with four matters:

the recognition of the profiles

the recognition of the daily activities

the recognition of the scenarios

the dynamic adaptation

1) Recognition of the profiles: all three profiles were identified.
2) Recognition of daily activities: all activities were correctly identified. Figure 7 shows the partition of the service "turn on light" into two activities: one in the morning around 8 am and one in the evening around 7 pm.

Fig. 7. Occurrence of the "turn on light" service (activity 1 and activity 2)

3) Scenario identification: DSOM clustering coupled with the variation of information criterion led to a stable clustering of the data in a 3 × 3 grid. We ordered the activities within the clusters as presented in Section IV-B. The resulting scenarios are presented in Tab. II.




TABLE II
RESULTING SCENARIOS

[E23 → E24 → E25 → E26]
[E20 → E21 → E22]
[E17 → E18 → E19]
[E13 → E14 → E15]
[E12]
[E9 → E10 → E11]
[E5 → E6 → E7 → E8]
[E1 → E2 → E4 → E3]

4) Dynamic adaptation: to follow seasonal changes of the user, for instance, we chose a fixed step of 2 days for the dynamic adaptation. Observing the user habit changes over 25 days, after 8 days of changes the content of the "wake-up" scenario changed to [2-3-4]: open shutter, turn on TV, turn on hot water. It thus adapts to the new user habit from winter to summer for the activity "turn on light" in the morning (cf. Fig. 5).

The proposed method showed its robustness by successfully partitioning the data despite the presence of noise and variability in the habits. It is also capable of coping with a certain amount of outliers, thus avoiding the need for extensive preprocessing of the data.

C. Real experimentation

1) Data acquisition: The experimentation was performed in the room of an aged patient with reduced physical abilities at the Kerpape center, based on preexisting home living devices and IR control. An IR receptor connected to a PC allowed us to record the patient's daily activities through his use of HAS devices. The equipment controlled by the IR remote control was: a television, a central light, a bed light, two shutters, a telephone and a nurse call switch. The first issue we faced with the system was its inability to signal changes in the on/off states of the bed light and the shutters. Another problem was the central light, which could be activated by switches other than the one used by the patient, and that was often the case. That made us consider the activation of the bed light ("ON bed light") and the activation of the shutters ("ON shutter") as whole services irrespective of their on/off states. The activation of the central light was too questionable to take into account. On the other hand, since the services "turn on television" and "turn off television" are related, we decided to only consider the service "turn on television" and to collect the durations during which the television was on. We ended up with seven services:

ON bed light

ON left shutter

ON right shutter

Turn on television

Call medical staff

Hang on phone

Hang up phone

We collected data for 21 days and, in a first step, used them to label the services according to their regularity. Regular services are services requested by the user on a fraction of days greater than a predefined threshold.

Following the OT's advice, we set the threshold to 0.5, meaning that a regular service is a service required at least every two days. It happened that the user requested the phone-related services and the staff call frequently on some days but, over the observation period, he did not request them regularly, so they were eliminated from the rest of the study. The activity recognition, coupled with an activity pruning, gave us 14 activities, listed in Tab. III. The activities have been labeled according to their mean occurrence times. However, their order of presentation does not presume their occurrence order: activity E3, for example, does not necessarily occur after activity E2. They may not happen together, or activity E3 may sometimes occur before E2.
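The regularity labeling can be sketched as follows, where each day is represented by the set of services the user requested (this data layout is assumed for illustration):

```python
def regular_services(calls_by_day, threshold=0.5):
    """Label services as regular when they are requested on a fraction of the
    observed days at least equal to threshold (0.5 = at least every two days)."""
    n_days = len(calls_by_day)
    counts = {}
    for day in calls_by_day:              # each day is the set of services used
        for service in set(day):
            counts[service] = counts.get(service, 0) + 1
    return {s for s, c in counts.items() if c / n_days >= threshold}
```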



TABLE III
RECOGNIZED DAILY ACTIVITIES

Label   Habitual time (lower - upper quartile)   Daily activity
E1      [5h01 - 6h25]                            ON bed light
E2      [5h36 - 7h14]                            ON left shutter
E3      [5h44 - 7h43]                            ON right shutter
E4      [5h50 - 8h52]                            ON television
E5      [9h49 - 12h33]                           ON television
E6      [11h53 - 13h12]                          ON bed light
E7      [13h12 - 15h35]                          ON television
E8      [15h07 - 16h59]                          ON bed light
E9      [16h02 - 18h31]                          ON television
E10     [19h28 - 20h29]                          ON bed light
E11     [19h13 - 23h21]                          ON television
E12     [21h58 - 22h31]                          ON bed light
E13     [22h16 - 22h42]                          ON right shutter
E14     [22h17 - 22h57]                          ON left shutter

The clustering procedure led to the 2 × 2 grid presented in Tab. IV. This grid corresponds to the most stable clustering of the data according to the variation of information criterion. The four scenarios in the map have been ordered following the ordering procedure presented earlier. The scenarios received further validation from the patient, who found them relevant, especially:

the “wake-up” scenario : switch on light, open left shutter, open right shutter, turn on television

the “go-to-bed” scenario : switch off light, turn off television, close left shutter, close right shutter




TABLE IV
OBTAINED SCENARIOS

[E5 → E7 → E6]
[E8 → E9 → E10]
[E1 → E3 → E2 → E4]
[E12 → E11 → E14 → E13]

Far from anecdotal, these scenarios showed that the patient is used to opening and closing the shutters in a given order, which turned out to be dictated by the layout of the room. Pragmatically, these scenarios can be useful for this patient, who has great difficulty activating a service on his own through the HMI. Indeed, if we design the HMI according to the scenarios, activating a whole scenario can be done with a single click. For scenarios requiring step-by-step activations, we still save the user's efforts since, at each step, the most likely activity is proposed for activation. To conclude, despite poor quality data and limited services, we managed to bring out relevant scenarios and preferences of the patient. The simplification of control over HAS devices can improve his comfort while easing access to the services. Moreover, the order between activities within scenarios, which is based on the use frequency of these activities, provides useful information for the design of an HMI adapted to the user, by placing activities close to each other in the HMI according to their order. In consequence, the user saves effort when requesting sub-scenarios.

D. Discussions

For the simulated data, we considered that a complete HAS was available in the user context. We obtained new interesting scenarios adapted to the user habits and corresponding to the Quatra results. Moreover, the automatic generation of user daily activities allows a large number of user situations to be represented, in order to test the strategies of the proposed approach. With real data, although the current HAS has some limits, we obtained an interesting scenario that the user frequently requested, which is quite useful for the patient. The positive opinion of the patient at the Kerpape center emphasizes the usefulness of automatic scenarios in improving the user's autonomy in his own environment. These first automatic scenarios show the reality of a flexible approach for proposing assisted services adapted to the user's habits. From the results on simulated data and the first results of the current experimentation, the use of HAS services as built-in sensors has the double advantage of transparently collecting user data and proposing added value for the user. On the other hand, since we consider highly regular HAS use as trustworthy user habits, we can identify deviations as anomalies indicative for alert detection, which is not presented in this paper. Although we have constructed a realistic simulator, for which the proposed solutions are approximate, only real data enable the validation of the proposed approach. This validation consists in checking that the real threshold values are close to the chosen values used in the simulator. We also noted that different users call for different threshold values, even in the same environment. So, considering that no tool can fully self-adapt to human factors and specificities, we came to the conclusion that an efficient approach consists of a comprehensive methodology sustained by a flexible software framework. Therefore, the validation by the OT and the user is integrated into the system to adapt the solution to each user's context. In this sense, the proposed method cannot be applied to a user without any regular habits, and the installation of a complete HAS is needed for an effective implementation of the proposed approach in a real context. Considering the important role of user and OT feedback, HMI interfaces adapted to the user and the OT are necessary to update the system parameters, making the system more dynamically adapted to the user context, hence a more flexible software framework.


This talk describes the set-up of an AAL architecture for providing services to the user. We then show how this architecture can be used to propose scenarios. These scenarios are important to the user because they shorten the time that would be needed to perform each action separately while fitting their exact needs. Our originality lies in an activity recognition technique that does not rely on data collected by dedicated sensors, as we want to use information that is already present in a home automation system. To cope with the information we had, we set up probabilistic tools. Two main contributions have been proposed in this sense. The first contribution deals with the learning phase of user habits through a clustering procedure. This phase aims to recognize different activities of the same service according to the observed occurrence times. Based on this first contribution, the second contribution is the identification of new scenarios for the user with disabilities in an automated environment. The combination of activity clustering and an ordering of activities within scenarios allows automatic scenarios to be obtained that improve the user autonomy while facilitating the use of daily services with a single control. These two contributions allow elderly and disabled people to be more autonomous, while ethical concerns are lessened with built-in sensors. However, the current experimentation cannot completely fit our needs. So, in future work, we intend to install the complete embedded HAS architecture in the Kerpape center to validate the solutions. The design of adapted HMI interfaces is also essential to deliver a full and flexible system for home healthcare.


[1] F. de Lamotte et al. Quatra: Final report (technical report, in french). Lab-STICC. See demo here:, June 2008. [2] V. Barnett and J. Lewis. Outliers in statistical data. Eds. TECHNIP, 1994. [3] M. Chan, D. Esteve, C. Escriba, and E. Campo. A review of smart homes - present state and future challenges. Computer Methods and Programs in Biomedecine, pages 55–81, 2008. [4] F. Doctor and H. Hagras. A fuzzy embedded agent-based approach for realizing ambient intelligence in intelligent inhabited environnements. IEEE Trans. Systems, Man, and Cybernetics, Part A, 35(1):55–65, January 2005. [5] A. Fleury, M. Vacher, and N. Noury. SVM-Based multimodal classification of activities of daily living in health smart homes: Sensors, algorithms, and first experimental results. IEEE Transactions on Information Technology in Biomedicine, 14(2):274–283, 2010. [6] L. Hsien-Chou and T. Chien-Chih. A RDF and OWL-Based temporal context reasoning model for smart home. J. Information Technology,


[7] H. Jian, D. Yumin, Z. Yong, and H. Zhangqin. Creating an Ambient-Intelligence environment using Multi-Agent system. In Proc. Int.

Conf. Embedded Software and Systems Symposia, pages 253–258, 2008.


[8] T. Kohonen. Self Organizing Maps. Springer, 3 edition, 2001. [9] T. Kohonen and P.J. Somervuo. Self-organizing maps of symbol strings. Neurocomputing, 21:19–30, 1998. [10] N. Kushwaha, M. Kim, D.Y. Kim, and W-D. Cho. An intelligent agent for ubiquitous computing environments: smart home UT-AGENT. In Second IEEE Workshop on Software Technologies for Future Embedded and Ubiquitous Systems, pages 157–159, 2004.

[11] H. Kwee, M. Thonninsen, G. Cremers, J. Duimel, and R. Westgeest. Configuring the MANUS system. Proc. RESNA Int., pages 584–587,


[12] S. Lankri, P. Berruet, A. Rossi, and J.L. Philippe. Arch itecture and models of the danah assistive system. In Proc. of the 3rd Workshop on Services Integration in Pervasive Environments (SIPE 2008), 2008. [13] A. Loureiro, L. Torgo, and C. Soares. Outlier detection using clustering methods: a data cleaning application. In In proceedings of the Data Mining for Business Workshop, 2004. [14] M. Meila. Comparing clusterings - an information based distance. J. Multivariate Analysis, 98(5):873–895, May 2007. [15] M.J. Topping and J.K. Smith. The development of HANDY 1. A robotic system to assist the severely disabled. Technology and Disability, 10(2):95–105, 1999. [16] U. Naeem, J. Bigham, and J. Wang. Recognising activities of daily life using hierarchical plans. Lecture Notes in Computer Science, 4793:175, 2007. [17] N. Noury, G. Virone, J. Ye, V. Riall, and J. Demongeot. New trends in health smart homes. ITBM-RBM, 24:122–135, 2003. [18] B. Ostlund. Watching television in later life: a deeper understanding of TV viewing in the homes of old people and in geriatric care contexts. Scandinavian J. of Caring Sciences, dec 2009. [19] E. Tapia, S. Intille, and K. Larson. Activity recognition in the home using simple and ubiquitous sensors. In Pervasive Computing, pages 175–158. 2004. [20] T.S. Barger, D.E. Brown, and M. Alwan. Health-status monitoring through analysis of behavioral patterns. IEEE Trans. Systems, Man, and Cybernetics, Part A, 35(1):22–27, 2005. [21] G. Virone, N. Noury, and J. Demongeot. System for automatic measurement of circadian activity in telemedicine. IEEE Transactions on Biomedical Engineering, 49:1463–1469, 2002. [22] M. Yan and K. Ye. Determining the number of clusters using the weighted gap statistic. Biometrics, 63(4):1031–1037, 2007. [23] N. Zouba, F. Bremond, M. Thonnat, and V.T. Vu. Multi-sensors analysis for everyday activity monitoring. Int. Conf. Science of Electronic, Technololgy of Information and Telecommunication, 4, March 2007.