David Jordi Vallet Weadon

david.vallet@uam.es
Advanced Studies Diploma – EPS – UAM 2007
Advisor: Pablo Castells Azpilicueta



1. Table of contents
1.  Table of contents ..................................................................................................... 1 
1.  Abstract ................................................................................................................... 1 
2.  Introduction .............................................................................................................. 2 
3.  State of the art ......................................................................................................... 4 
3.1.  Personalized Information Retrieval .................................................................. 4 
3.1.1.  Query Operations ......................................................................................... 7 
3.1.2.  Link-Based personalization ......................................................................... 10 
3.1.3.  Document Weighting .................................................................................. 11 
3.1.3.1.  Result Reorder ....................................................................................... 12 
3.1.3.2.  Result Clustering ..................................................................................... 14 
3.1.3.3.  Navigation Support .................................................................................. 14 
3.1.4.  Personalization in Working Applications ..................................................... 15 
3.2.  Context Modeling for Information retrieval ..................................................... 17 
3.2.1.  Clickthrough data ........................................................................................ 20 
3.2.2.  Desktop ....................................................................................................... 21 
3.2.3.  Surrounding text ......................................................................................... 21 
4.  Ontology-Based Personalized Information Retrieval ............................................. 23 
4.1.  User Profile Representation ........................................................................... 24 
4.2.  User Profile Exploitation ................................................................................. 26 
4.3.  Personalized Information Retrieval ................................................................ 30 
4.3.1.  Personalized Search ................................................................................... 30 
4.3.2.  Personalized Browsing ............................................................................... 32 
5.  Personalization in Context ..................................................................................... 32 
5.1.  Notation .......................................................................................................... 35 
5.2.  Preliminaries ................................................................................................... 36 
5.3.  Semantic context for personalized content retrieval ....................................... 38 
5.4.  Capturing the context ..................................................................................... 39 
5.5.  Semantic extension of context and preferences ............................................ 40 
5.5.1.  Spreading activation algorithm ................................................................... 41 
5.5.2.  Comparison to classic CSA ........................................................................ 46 
5.6.  Semantic preference expansion ..................................................................... 47 



5.6.1.  Stand-alone preference expansion ............................................................. 48 
5.7.  Contextual activation of preferences .............................................................. 48 
5.8.  Personalization in context .............................................................................. 49 
5.9.  An example use case ..................................................................................... 50 
6.  Experimental work ................................................................................................. 55 
6.1.  Evaluation of Interactive Information Retrieval Systems: An Overview ......... 55 
6.1.1.  User centered evaluation ............................................................................ 56 
6.1.2.  Data driven evaluation ................................................................................ 59 
6.1.3.  Evaluation metrics ...................................................................................... 59 
6.1.4.  Evaluation corpus ....................................................................................... 61 
6.2.  Our Experimental Setup ................................................................................. 62 
6.3.  Scenario Based Testing: a Data Driven Evaluation ....................................... 64 
7.  Conclusions and Future Work ............................................................................... 70 
8.  References ............................................................................................................ 73 



Personalized Information Retrieval in Context Using Ontological Knowledge 1

1. Abstract
Personalization in information retrieval aims at improving the user's experience by
incorporating user subjectivity into the retrieval process. The exploitation of implicit
user interests and preferences has been identified as an important direction to
overcome the potential stagnation of current mainstream retrieval technologies as
worldwide content keeps growing, and user expectations keep rising. Without requiring
further effort from users, personalization aims to compensate for the limitations of user
need representation formalisms (such as the dominant keyword-based paradigm) and to help
handle the scale of search spaces and answer sets, under which a user query alone is
often not enough to provide effective results. However, the general set of user interests
that a retrieval system can learn over a period of time, and bring to bear in a specific
retrieval session, can be fairly vast, diverse, and to a large extent unrelated to the
particular user search in progress. Rather than introducing all user preferences en bloc,
an optimal search adaptation could be achieved if the personalization system were
able to select only those preferences which are pertinent to the ongoing user actions.
In other words, an optimal personalization of search results should take into account
user interests in the context of the current search.
Context modeling has been long acknowledged as a key aspect in a wide variety of
problem domains, among which Information Retrieval is a prominent one. In this work,
we focus on the representation and exploitation of both the persistent and the live retrieval
user context. We claim that, although personalization alone is a key aspect of modern
retrieval systems, it is the conjunction of personalization and context awareness that
can really produce a step forward in future retrieval applications. This work is based on
the hypothesis that not all user preferences are relevant all the time, and only those
that are semantically close to the current context should be used, disregarding those
preferences that are out of context. The notion of context considered here is defined as
the set of themes under which retrieval user activities occur within a unit of time. The
use of ontology-driven representations of the domain of discourse, as a common,
enriched representational ground for content meaning, user interests, and contextual
conditions, is proposed as a key enabler of effective means for a) a rich user model
representation, b) context capture at runtime and c) the analysis of the semantic
connections between the context and concepts of user interest, in order to filter those



preferences that are likely to be intrusive within the current course of user
activities.
2. Introduction
The size and the pace of growth of the world-wide body of available information in
digital format (text and audiovisual) constitute a permanent challenge for content
retrieval technologies. People have instant access to unprecedented inventories of
multimedia content world-wide, readily available from their office, their living room, or
the palm of their hand. In such environments, users would be helpless without the
assistance of powerful searching and browsing tools to find their way through. In
environments lacking a strong global organization (such as the open WWW), with
decentralized content provision, dynamic networks, etc., query-based and browsing
technologies often find their limits. Personalized content access aims to alleviate the
information overload problem with an improved Information Retrieval (IR) process, by
using implicit user preferences to complement explicit user requests, to better meet
individual user needs [58, 61, 69, 75]. Personalization is currently envisioned as
a major research trend, since classic IR tends to select the same content for
different users issuing the same query, much of which is barely related to each user's wish
[40]. Major online services such as Google [19, 133], Amazon [104] or Yahoo! [77] are
nowadays conducting research on personalization, in particular to improve their content
retrieval systems.
One important source of inaccuracy of these personalization systems is that their
algorithms are typically applied out of context. In other words, although users may have
stable and recurrent overall preferences, not all of their interests are relevant all the
time. Instead, usually only a subset is active in the user’s mind at a given situation, and
the rest can be considered as “noise” preferences. The combination of long-term and
short-term user interests that takes place in a personalized interaction is delicate and
must be handled with great care in order to preserve the effectiveness of the global
retrieval support system, bringing to bear the differential aspects of individual users
while avoiding distracting them away from their current specific goals. Furthermore,
many personalized systems do not distinguish between long-term and
short-term preferences, applying either the former or the latter, or treating both as the
same. What we propose in this work is to draw a clear distinction between these,



identifying the long-term interests with the user preferences and the short-term interests with
the user's context, and finally to find a way in which both complement each other in order to
maximize the performance of search results through the incorporation of context-awareness
into personalization.
It is common knowledge that several forms of context exist in the area [87]. This work
is concerned with exploiting semantic, ontology-based contextual information,
specifically aimed towards its use in personalization for content access and retrieval of
documents (any multimedia or text content) associated with a semantic index, where
content is expressed by means of ontological concepts. Among the possible knowledge
representation formalisms, ontologies present a number of advantages [108], as they
provide a formal framework for supporting explicit, machine-processable semantics
definitions, and facilitate inference and derivation of new knowledge based on already
existing knowledge. Our approach is based on an ontology-driven representation of the
domain of discourse, providing enriched descriptions of the semantics involved in
retrieval actions and preferences, and enabling the definition of effective means to
relate preferences and context. The models and techniques proposed here address the
exploitation of persistent, concept-based user preferences, as well as the automatic
extraction of live ad-hoc user interests, in such a way that the combination of both
produces contextualized user models, which are then applied to gain accuracy in the
personalization of retrieval results. The extraction and inclusion of real-time contextual
information as a means to enhance the effectiveness and reliability of long-term
personalization enables a more realistic approximation to the highly dynamic and
contextual nature of user preferences, in a novel approach with respect to prior work.
We propose a method to build a dynamic representation of the semantic context of
ongoing retrieval tasks, which is used to activate different subsets of user interests at
runtime, in such a way that out-of-context preferences are discarded. The gain in
accuracy and expressiveness obtained from the ontology-based approach brings
additional improvements in terms of retrieval performance.
The presented work will thus have the following main objectives:
• Study the state of the art in personalized IR and context modeling for IR.
• The presentation of a personalization technique for information retrieval,
applicable to any semantically indexed search space.



• The proposition of an approach for personalization in context, including an
approach for live user context acquisition and representation.
• The development and testing of the personalization system, in order to assess
the effect of context-awareness in personalization.
The rest of the document is organized as follows: section 3 presents a brief state of the
art of work related or similar to our approach. Section 4 presents the approach for
ontology-based retrieval personalization. Section 5 describes the contextualization techniques,
including the acquisition and exploitation of the context representation, and the
semantic expansion technique used for the contextualization of the user profile. Section
6 shows some early experiments on the implementation of the whole personalization
system and section 7 concludes and outlines future work to be addressed.
3. State of the art
The aim of this section is to gather and evaluate existing techniques, approaches,
ideas, and standards from the field of user modeling and personalization, and related to
information retrieval.
3.1. Personalized Information Retrieval
The earliest work in the field of user modeling and adaptive systems can be traced
back to the late 70’s (see e.g. [89, 94]). Personalization technologies gained
significance in the 90’s, with the boost of large-scale computing networks which
enabled the deployment of services to massive, heterogeneous, and less predictable
end-consumer audiences [64]. Significant work has been produced since the early
times in terms of both academic achievements and commercial products (see [28, 56,
75, 86] for recent reviews).
The goal of personalization is to endow software systems with the capability to change
(adapt) any aspect of their functionality and/or appearance at runtime to the
particularities of users, to better suit their needs. To do so, the system must have an
internal representation (model) of the user. It is common in the user modeling discipline
to distinguish between user model representation, user model learning/update, and
adaptation effects or user model exploitation. Aspects of software that have been
subject to personalization include, among others, content filtering [84], sequencing
[28], content presentation [44], recommendation [102], search [68, 80], user interfaces



[50, 60, 85], task sequencing [119], or online help [51]. Typical application domains for
user modeling and adaptive systems include education [28, 44, 115, 119], e-commerce
[1, 2, 15, 54], news [24, 102, 129], digital libraries [31, 103], cultural heritage [16],
tourism [55], etc. The field of user modeling and personalization is considerably broad.
The aim of this section is not to provide a full overview of the field, but to report the
state of the art on the area related to this work, i.e. personalized content search,
retrieval and filtering.
Due to the massive amount of information that is nowadays available, the process of
information retrieval tends to select numerous and heterogeneous documents as the result
of a single query; this is known as information overload. The reason is that the system
cannot acquire adequate information concerning the user’s wish. Traditionally,
Information Retrieval Systems (IRSs) allow the users to provide a small set of
keywords describing their wishes, and attempt to select the documents that best match
these keywords. The majority of these queries are short (85% of users search with no
more than 3 keywords [66]) and ambiguous [78], and often fail to represent the
information need, let alone the implicit interests of the
user. Although the information contained in these keywords rarely suffices for the exact
determination of user wishes, this is a simple way of interfacing that users are
accustomed to; therefore, there is a need to investigate ways to enhance information
retrieval without altering the way users specify their requests. Consequently, information
about the users' wishes needs to be found in other sources.
Personalization of retrieval is the approach that uses user profiles, in addition to
the query, in order to estimate the users' wishes and select the set of relevant
documents [40]. In this process, the query describes the user’s current search, which is
the local interest [21], while the user profile describes the user’s preferences over a
long period of time; we refer to the latter as global interest. The method for preference
representation and extraction, as well as the estimation of the degree to which local or
global interests should dominate in the selection of the set of relevant documents, are
still open research issues [122].
The next sub-sections summarize approaches for retrieval personalization,
classified by where the personalization algorithm is applied within the search engine.
Table 1 presents an overall classification of the most important proposals studied.



REFERENCE | REPRESENTATION | LEARNING | EXPLOITATION
[132] | Terms | Implicit | Classification
[134] | Terms | Implicit/Explicit | Document weighting, result reorder
[114] | Terms | Implicit | Query operations
[68] | Bookmarks | Implicit | Link-based
[80] | Terms/categories | Implicit | Query operations
[40] | Term correlation | Implicit | Query expansion
[41] | Topics | Explicit | Document weighting, result reorder
[101] | Terms | Implicit | Query expansion
[58, 106] | Taxonomy (ODP) | Implicit | Document weighting, result reorder
[122] | Property values | Implicit | Clustering
[18] | Term occurrence | Explicit | Document weighting, navigation
[84] | Stereotypes (term frequency) | Explicit | Document weighting, result reorder
[72] | Taxonomy | Explicit | Document weighting, result reorder
[110] | Terms, weighted | Implicit | Query operations
[39] | Topics | Explicit | Filtering, link-based
[61] | Past queries | Implicit | Link-based
[90] | Taxonomy | Implicit | Query operations
[46] | Query logs | Implicit | Document weighting, collaborative filtering
[112] | Query logs | Implicit | Query operations
[53] | Topics/clusters | Explicit | Document weighting, clustering
[111] | Query logs | Implicit | Document weighting, collaborative filtering
[113] | Taxonomy (ODP) | Explicit | Link-based
[106] | Taxonomy (ODP) | Implicit | Document weighting, result reorder
Table 1. Overview of personalized information retrieval systems




3.1.1. Query Operations
Query operation techniques focus on refining the information need expressed by the
user's input query: implicit information, such as user preferences, is used to reformulate
the query. Users express their current information need through a query; query
operations try to "modify" this local interest so that the information need is somehow
altered by the interests of the user (i.e. the global interests).
Queries in information retrieval are often seen as vectors in a vector space model (see
Figure 1), where the axes are formed by the terms of the search space, and the values of
the query vector are given by the level of importance each term has for representing the
information need. The idea of personalization through query operations is to add
information from the user profile so as to bring the query, viewed as a vector, closer to
documents that are not only relevant to the query but also relevant to the user's
preferences. Figure 1 shows a graphical representation of this concept. Depending on
which user the query is going to be "personalized" to, the modified query "comes closer"
geometrically to that user's representation. In summary, the final query is the
combination of the local interest of the user (i.e. the query q) and the global interests
of either user u1 or u2.



[Figure 1 plot: a two-dimensional projection with query q = (0.7, 0.3) and user profiles
u1 = (0.2, 0.8) and u2 = (0.9, 0.1); the modified queries are computed as
qm1 = (u1 + q)/2 = (0.45, 0.55) and qm2 = (u2 + q)/2 = (0.80, 0.20).]
Figure 1. Query operation example in a two-dimensional projection
Depending on how the query is modified, query operations are classified into term
reweighting or query expansion operations. The example above shows a modification
of the query term weights; this is what is called term reweighting. Query expansion
adds new terms or information to the query, complementing the query representation
with information not directly explicit in the query. Take this example: a user searches
for the term "Jaguar"; normally the system would not be able to disambiguate
"Jaguar the car brand" from "Jaguar the animal". But if the system has some
sort of query expansion technique, and the user profile contains the term "animal", the
system would be likely to return documents with the correct disambiguation by adding
this term and finally using the query "Jaguar animal" as input instead of "Jaguar" alone.
Finally, a system can combine the two techniques, changing the term importance
weights and also adding new terms to the query.
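The two operations can be sketched as follows (a minimal vector-space version; the
four-term vocabulary and the weight values are hypothetical, not taken from any of the
cited systems):

```python
import numpy as np

# Hypothetical 4-term vocabulary; vectors are weights over these terms.
vocab = ["jaguar", "animal", "car", "engine"]
query = np.array([1.0, 0.0, 0.0, 0.0])    # user typed "jaguar"
profile = np.array([0.0, 0.8, 0.0, 0.0])  # learned interest in "animal"

# Term reweighting: move the query towards the user profile,
# here an equally weighted average as in the Figure 1 example.
reweighted = (query + profile) / 2.0

# Query expansion: add the top profile terms absent from the query.
expansion_terms = [vocab[i] for i in np.argsort(-profile)
                   if query[i] == 0 and profile[i] > 0][:2]

print(reweighted)        # [0.5 0.4 0.  0. ]
print(expansion_terms)   # ['animal']
```

Note that both variants leave the original query terms in place; they only shift weight
towards, or append terms from, the user profile.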
Query expansion is often used in personalized meta-search engines. These search
systems redirect an input query to one or more external search engines, performing a
merge or aggregation of the returned search result lists. Terms from the user profile
can be added to the original query and sent to each search engine [6]; term
reweighting, on the other hand, normally needs access to the internal functions of the
search engine (although some engines, though not the most popular ones, accept term
weights as optional parameters).
Relevance feedback [95, 97] is a particular case where query reformulation takes
place. This technique takes explicit relevance judgments from users, who decide which
of the documents returned by a query are relevant and which are not (see section 3.2
for a more in-depth description). In personalization, rather than extracting the relevant
information from interactions with the user, the search system uses the representation
of the user profile. It is important that the query adaptation to the user profile does not
make the profile predominant, which could lead to a result set that, although relevant
to the user's preferences, is not relevant to the original query. A generalization of both
term reweighting and query expansion techniques can be found in [20].
An example of a pure query expansion system, presented in [40], computes a term-
correlation matrix for each user, using clickthrough data from past queries alone (i.e.
implicit feedback techniques). Clickthrough data stands for the information of which
documents the users opened after a query was processed. The correlation matrix is
then used for query expansion, selecting the terms most correlated to the query.
[90, 101] added past queries to the clickthrough information in order to construct a
user model based on a weighted term vector, expanding the user's query with the most
important terms in the user model. In the desktop search engine presented in [114],
the system implicitly builds a "personal index" from the user's interactions within the
desktop environment: visited documents, emails, web pages or created documents. A
lighter index is also maintained from the user's interactions with the search system,
mostly past query terms and visited URLs. Terms and related weights are then
extracted from this index and used for query term reweighting and expansion at query
time, enabling personalized desktop search. [110] builds the user profile from the
user's browsing history, differentiating long-term preferences, such as past queries or
past session browsing data, from short-term preferences, such as the current session's
history. Once the user profile is collected, the query expansion is an application of
relevance feedback techniques, in this case using the classic Rocchio query
reformulation [95].
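The classic Rocchio reformulation mentioned above can be sketched as follows (a
minimal vector-space version; the alpha, beta, gamma weights and the example vectors
are illustrative, not those of [95] or [110]):

```python
import numpy as np

def rocchio(query, relevant, non_relevant, alpha=1.0, beta=0.75, gamma=0.15):
    """Rocchio query reformulation: move the query vector towards the
    centroid of relevant documents and away from non-relevant ones."""
    q = alpha * query
    if len(relevant):
        q = q + beta * np.mean(relevant, axis=0)
    if len(non_relevant):
        q = q - gamma * np.mean(non_relevant, axis=0)
    return np.clip(q, 0.0, None)  # negative term weights are usually dropped

query = np.array([1.0, 0.0, 0.0])
relevant = np.array([[0.8, 0.4, 0.0], [0.6, 0.2, 0.0]])
non_relevant = np.array([[0.0, 0.0, 0.9]])
print(rocchio(query, relevant, non_relevant))
```

In a personalized setting, the "relevant" set would be drawn from the user profile
rather than from explicit judgments on the current result list.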
There are other approaches that, although they share the idea of modifying the query
with the user model, rely on other types of retrieval models. One example is [112],
from the language-model IR area, which mines the query log information of the user
(past queries plus clickthrough data) and then uses this information to add context
information to the language model of the query. Although the system proposed in [80]
does not alter the terms of the query, it does change the information in the sense that
the query is biased towards one topic or another, by selecting topic-specific search
engines. The user model is a term-topic correlation matrix that is used to relate the
user's query to a list of topics. For instance, a user fond of computers and hardware
will be likely to have a higher similarity between the query "apple" and the topic
Computers than with the topic Food. The query is then submitted several times: in a
first mode, no category is indicated, and in subsequent modes the top inferred
categories are indicated. Finally, the results are merged using a voting algorithm that
takes into consideration the ranking of the categories produced by the user model.
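The term-topic correlation idea of [80] can be sketched roughly as follows (toy matrix,
topics and vocabulary; the real system learns these correlation values from user
behaviour rather than fixing them by hand):

```python
import numpy as np

topics = ["Computers", "Food"]
vocab = ["apple", "hardware", "recipe"]

# Hypothetical user-specific term-topic correlation matrix:
# rows = terms, columns = topics.
M = np.array([[0.7, 0.3],   # "apple" leans towards Computers for this user
              [0.9, 0.0],   # "hardware"
              [0.0, 0.8]])  # "recipe"

def rank_topics(query_terms):
    # Binary query vector over the vocabulary.
    q = np.array([1.0 if t in query_terms else 0.0 for t in vocab])
    scores = q @ M  # aggregate correlation of query terms with each topic
    order = np.argsort(-scores)
    return [(topics[i], float(scores[i])) for i in order]

print(rank_topics(["apple"]))  # Computers ranks above Food for this user
```

The top-ranked topics would then select which topic-specific engines receive the query,
and the topic scores would feed the final voting step.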
3.1.2. Link-Based personalization
Together with those of the previous section, these techniques are the ones most often
seen in commercial personalized search engines. Query operations are often applied
because they fit well in personalized meta-search engines; link-based personalization
is used because it follows the trend of link-based ranking algorithms, which have been
a huge success in the past years, beginning with Google's PageRank. Link-based
personalization directly affects the document ranking techniques. These are based
on the idea that "a page has a high rank if the sum of the ranks of its backlinks is high";
the rank of a document makes it climb in the result set, so pages that are
considered "important" by the page rank algorithm are considered more relevant to a
user. One main advantage of these approaches is that the system does not have to
take into account the content of the document, only the hyperlinks inherent in any Web
page.
Page rank values are often computed by web crawlers that start from an initial page
and perform a random walk through the links of the page and the subsequent links of
the pages they point to. In general, link-based personalized algorithms are
modifications of Google's PageRank [61, 68] or the HITS authority and hub algorithm
[42, 113]. However, there are many ways to introduce personalization in page rank
algorithms:
• Topic-sensitive page rank. A different page rank value is computed for every
topic, to capture more accurately the notion of importance within each category,
making it possible to personalize the final results towards the user's preferred
topics by combining the topic-biased page rank values. The topic information can
be extracted from a category hierarchy [61, 113], using hard relations with already
existing Web categories such as ODP1, or starting from a set of documents
considered representative of the user's interests [39], using a classifier to relate
the user-representative documents, the crawled documents, and the query to the
set of predefined topics.
• Relevant documents. A set of relevant documents is used to alter the normal
page rank algorithm and give a higher rank value to documents related (through
links) to this set [42, 68]. This set of relevant documents is normally extracted
from the user's bookmarks, which are considered a good source of interest.
Personalized alterations of the page rank algorithm are mostly easy to develop, but
there is still a tradeoff in scalability: computing these values requires high
computational resources, and it is nowadays impossible to compute a full personal page
rank value for every user (which would be, without doubt, the ideal case), as originally
suggested by Page and Brin. Some solutions to this have been the calculation of
values for only a small set of topics [39, 61], or more efficient algorithms where partial
page rank vectors are computed, allowing their combination into a final personalized
vector [68].
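The combination of topic-biased page rank vectors can be sketched as a simple linear
mixture (toy scores for four pages; in the real topic-sensitive scheme of [61] each
vector is computed offline with a topic-biased teleportation set):

```python
import numpy as np

# Precomputed topic-sensitive PageRank vectors for 4 pages (toy values),
# one vector per topic, computed offline with a topic-biased random walk.
pagerank = {
    "Computers": np.array([0.40, 0.30, 0.20, 0.10]),
    "Food":      np.array([0.10, 0.15, 0.25, 0.50]),
}

# User profile: weight of each topic for this user.
user_topics = {"Computers": 0.8, "Food": 0.2}

# Personalized score: convex combination of the topic-biased vectors.
personalized = sum(w * pagerank[t] for t, w in user_topics.items())
print(personalized)  # pages favoured by the Computers vector rank first
```

Because the per-topic vectors are precomputed, only this cheap combination has to be
performed at query time, which is precisely what makes the approach scale to many users.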
3.1.3. Document Weighting
Document weighting algorithms modify the final ranking of the result-set documents.
Most search engines compute a ranking value of relevance between the document and
the information need (i.e. the query); note that this ranking can be the combination of
several scores, but all of them are user independent. A personalized search engine can
then compute a personalized ranking value for every document in the result set. The
benefit of this approach is that this value only has to be computed for the top returned
documents; the drawback is that it has to be computed at query time. This approach is
also suitable for meta-search engines [72], as the user-dependent algorithm can
commonly focus on a small quantity of the top returned documents,

1 Open Directory Project: http://dmoz.org/



being able to compute a personalized score in real time by accessing the document's
content or even just using the provided summaries.
The user-dependent score usually comes from a document-user similarity function,
based on term-frequency similarity [84], classification and clustering algorithms [53],
Bayesian functions [134], etc. Figure 2 shows the typical flow chart of this type of
personalized Information Retrieval system.
[Figure 2 diagram: in the standalone setting, a query is processed by a single search
engine over its search space; in the meta-search setting, the query is dispatched to
search engines 1..n over search spaces 1..n and their results are merged. In both
cases, the search results are matched against the user profile by a profile similarity
module to produce the personalized results.]
Figure 2. Topology of Document weighting personalized search engines
In the following, we give a classification by the effect of the document weighting, i.e.
how the computed similarity value is finally used by the retrieval system.
3.1.3.1. Result Reorder
The top n documents relevant to the query are reordered according to the relevance of
these documents to the user profile [58, 106]. The idea is to improve the ranking of
documents relevant to the user that are also relevant to the query. Unlike query
operations (see 3.1.1 above), result reordering does not change the query information,
thus maintaining the query relevance.
An example [84] of result reordering filters the results of a query obtained from a
popular search engine. For each document in the result set, it computes a score
considering only the document itself and the user profile, presenting first the highest
ranked documents. Each user profile contains a set of slots, each representing a domain
of interest. Each domain of interest has an associated topic and a set of semantic links,
which are in fact a set of terms related to the domain. In addition, a value represents
the degree of interest or disinterest of the user in the domain. A document ranks
higher the more of its terms belong to the user profile, to the query or to the general
term knowledge base.
[134] uses a hierarchical Bayesian network with implicit and explicit feedback from
the user; content is reordered according to the user model represented in the
hierarchical Gaussian network. The main advantage is that models from other users
can be used to alleviate the cold-start problem, where the system does not yet have any
information about a user who is new to the system.
Examples of document weighting using topics can be found in [41, 90]. User profiles are
represented as a set of taxonomic topics, explicitly expressed by the user. The result
set is finally reordered using a distance measure (e.g. taxonomic distance) between the
topics associated with the documents and the topics in the user profile.
Vector similarity between the user representation and the document representation is
one of the most common algorithms for computing a personal relevance measure (i.e.
the value that represents the similarity between the document and the user’s interests).
Gauch et al. represent the user model by means of a weighted vector of concepts (in
this case, ODP taxonomy concepts) [58, 106]; this vector is matched against the top
four concepts representing a document to obtain the personal relevance measure. The
system presented in [84] has a set of predefined stereotypes (e.g. Java programmer,
scientist, Machine Learning researcher), each defined by a set of weighted terms. Users
manually select the stereotypes they think they belong to. The user profile vector is then
extracted from the set of terms of every stereotype of the user, and the final personalized
score is the similarity between this vector and the weighted term vectors of the result
set’s documents.
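As an illustrative sketch of such vector-similarity reordering (the profile and document vectors below are made-up toy data, not taken from the cited systems), the personal relevance measure can be computed as the cosine similarity between a weighted term vector for the user and one for each returned document:

```python
import math

def cosine(u: dict, v: dict) -> float:
    """Cosine similarity between two sparse weighted vectors."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def rerank(results, profile):
    """Reorder a ranked result list by similarity to the user profile.

    `results` is a list of (doc_id, term_vector) pairs returned by the
    user-independent engine; the personal relevance measure is computed
    only over this top result set, at query time.
    """
    return sorted(results, key=lambda r: cosine(profile, r[1]), reverse=True)

profile = {"java": 0.9, "programming": 0.7}
results = [("d1", {"coffee": 1.0, "java": 0.2}),
           ("d2", {"java": 0.8, "programming": 0.5})]
reordered = rerank(results, profile)
# "d2" matches the profile better than "d1", so it is presented first
```

Note that only the score changes: the candidate set still comes from the user-independent query, which is what preserves query relevance.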
Collaborative filtering methods commonly perform a result reorder, combining the user
profile with other user profiles (usually with a user-user similarity measure). [46, 111]
mine query-log clickthrough information to perform a collaborative personalization of
the result set, ranking higher the documents that similar users clicked previously for
similar queries.
The combination methods of meta-search engines can be personalized by different
criteria. In [72] the users can express their preference for a given search engine, for a
set of concepts, or for the desired popularity of the search results. The final relevance
measure is the combination of these personal ratings applied to each of the listings of
the search engines.
3.1.3.2. Result Clustering
Query results are clustered into a set of categories, presenting first the categories most
relevant to the user [18, 80, 122]. The algorithm 1) takes the general result set for a
query, 2) obtains the set of categories related to the documents in the result set, 3)
reorders the set of categories according to the user profile, and 4) presents the top n
documents for each category. Usually, presenting the top three categories on each page
with four or five documents per category gives good performance. The GUI has to
allow the user to select a concrete category to see all the documents of the result set
related to that category. The system presented in [122] represents the user interests in
terms of relations and values (e.g. love story films, films from director x); the results
are classified in terms of properties and ordered by those which are relevant to the
user. A meta-search engine with clustering techniques is presented in [53]; the results
are clustered hierarchically by title and summary. The user is then able to filter
the results by explicitly stating an interest in one or several clusters.
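The four steps above can be sketched as follows (the category assignments and profile weights are hypothetical placeholders; real systems obtain them from a classifier or a Web directory):

```python
from collections import defaultdict

def cluster_and_rank(results, doc_categories, profile_weights, top_n=5):
    """Group a ranked result set by category and order the categories
    by the user's interest in them.

    results: ranked doc ids (step 1); doc_categories: doc id -> category
    (step 2); profile_weights: category -> interest weight from the user
    profile (step 3). Returns (category, top-n docs) pairs (step 4).
    """
    groups = defaultdict(list)
    for doc in results:
        groups[doc_categories.get(doc, "other")].append(doc)
    ordered = sorted(groups, key=lambda c: profile_weights.get(c, 0.0),
                     reverse=True)
    return [(c, groups[c][:top_n]) for c in ordered]

ranked = cluster_and_rank(["d1", "d2", "d3"],
                          {"d1": "sports", "d2": "movies", "d3": "movies"},
                          {"movies": 0.8, "sports": 0.1})
# the "movies" category (d2, d3) is presented before "sports" (d1)
```

Within each category the original query ranking is preserved; only the order of the categories is personalized.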
3.1.3.3. Navigation Support
Navigation support techniques suggest to the user a set of links that are most related to
her preferences, computing a user relevance measure for every link in the currently
browsed document. The relevance of each link in the document is computed according
to the relevance of the pointed document to the user. The relevance of its linked
documents could also be taken into consideration, yielding an iterative algorithm
resembling a local personalized web crawler. Other parameters could be used, such as
the relevance to the user of the previously accessed documents on the path that
ends in the current document [18].
3.1.4. Personalization in Working Applications
The number of search engines with personalization capabilities has grown enormously
in recent years, from social search engines, where users can collaboratively suggest
the best results for a given query [3], to vertical search engines [6, 7], where
users can customize a domain-specific search engine. There is a growing interest from
commercial search engine companies such as Yahoo, Microsoft and Google, the
latter being the first to show true personalization capabilities. The following is a list
of the systems that have the most properties in common with our proposed approach.
• Google Personal
Google’s personalized search [8] (currently discontinued) was based on topic Web
categories (from the Open Directory Project), manually selected by the user. The
personalization only affected the search results related to a category selected by
the user. The user could change the degree of personalization by interacting with a
slider, which dynamically reordered the first ten results.
• Google Co-op
Google Co-op [7] allows the creation of shared and personalized search engines in
the sense that users are able to tag web pages and filter results with this new
metadata. Tags are not meant to be a full description of the content of the
annotated Web pages. It is more oriented to what could be called “functionality
tags” (e.g. tagging a page as a review for the custom search engine of digital
cameras). Domains and keywords can also be added to modify search ranking and
expand the user’s query.
• iGoogle
Recently, Google changed the name of the personalized homepage to iGoogle [9],
stressing its personalization capabilities. Although we cannot be really sure which
concrete techniques are applied specifically in the Google search engine, and these
technologies are still incipient, two US patents on personalized search have
been filed by Google in recent years [19, 133]. These patents describe techniques
for personalized search results and rankings, using search history, bookmarks,
ratings, annotations, and interactions with returned documents as a source of
evidence of user interests. The most recent patent specifically mentions "user
search query history, documents returned in the search results, documents visited
in the search results, anchor text of the documents, topics of the documents,
outbound links of the documents, click through rate, format of documents, time
spent looking at document, time spent scrolling a document, whether a document is
printed/bookmarked/saved, repeat visits, browsing pattern, groups of individuals
with similar profile, and user submitted information". The Google patents consider
explicit user profiles, including a list of weighted terms, a list of weighted categories,
and a list of weighted URLs, obtained through the analysis of the aforementioned
information. Techniques for sharing interests among users, and community building
based on common interests, are also described. As an optional part of user
profiles, the patent mentions "demographic and geographic information associated
with the user, such as the user's age or age range, educational level or range,
income level or range, language preferences, marital status, geographic location
(e.g., the city, state and country in which the user resides, and possibly also
including additional information such as street address, zip code, and telephone
area code), cultural background or preferences, or any subset of these".
• Eurekster
Although it is mostly oriented to “search groups”, this search engine includes the
ability to explicitly build a user profile by means of terms, documents and domains
[6]. It sits on top of the Yahoo! search engine, so only query expansion and domain-focused
searches can be performed. Users can also mark which search results they think are
the most relevant for a given query, so that similar queries can make use of this
information.
• Entopia Knowledge Bus
Entopia [5] is a Knowledge Management company which sold a search engine
named k-bus, which received many awards and was selected as the best search engine
technology in 2003 by the Software & Information Industry Association. This search
engine is promoted as providing highly personalized information retrieval. In order to
rank the answers to a query, the engine takes into account the expertise level of
the authors of the contents returned by the search, and the expertise level of the
user who sent the query. Those expertise levels are computed by taking into
account previous interactions of different kinds between the author and the user on
some contents.
• Verity K2
The latest version of the K2 Enterprise Solution of Verity, one of the leading
companies in the business search engine market, includes many
personalization features to sort and rank the answers to a query. To build user
profiles, K2 tracks all the viewing, searching, and browsing activities of users within
the system. Profiles can be bootstrapped from different sources of information,
including authored documents, public e-mail forums in the organization, CRM
systems, and Web server logs. A user can provide feedback not only on documents
but also on a recommendation coming from a specific user, thus reinforcing the
value of a document and also the relationship between both users [11].
• MyYahoo
The personalization features of Yahoo’s personal search engine [10] are still rather
simple. Users are able to “ban” URLs from appearing in search results, or to save pages
to a “personal Web”, which gives a higher priority to these pages once they appear
in a search result set.
3.2. Context Modeling for Information retrieval
One of the key drivers towards creating personalized solutions that support proactive
and context-sensitive systems has been the results of research work on personalization
systems. The main lesson derived from these results is that it is very difficult to create
generic personalization solutions without having, in general, extensive knowledge about
the particular problem being solved. This tends to result in either a very specialized
solution or a rather generic one that provides very limited personalization capabilities.
In order to address some of the limitations of classic personalization systems,
researchers have looked to the emerging area defined by the so-called context-aware
applications and systems [12, 27].
The notion of context-awareness has long been acknowledged as being of key
importance in a wide variety of fields, such as mobile and pervasive computing [62],
computational linguistics [57], automatic image analysis [43], and information retrieval
[23, 61, 73], to name a few. The definitions of context are varied, from the surrounding
objects within an image to the physical location of the system's user. The definition
and treatment of context varies significantly depending on the application of study [49].
Context in information retrieval also has a wide meaning, ranging from surrounding
elements in an XML retrieval application [17], or recent selected items or purchases in
proactive information systems [1, 25], broadcast news text for query-less systems [63],
recently accessed documents [22], visited Web pages [110], past queries and
clickthrough data [23, 46, 101, 110], text surrounding a query [57, 76], to text highlighted
by a user [57]. Context-aware systems can be classified by 1) the concept the system
has of context, 2) how the context is acquired, 3) how the context information is
represented, and 4) how the context representation is used to adapt the system.
One of the most important parts of any context-aware system is context acquisition.
Note that this is conceptually different from profile learning techniques: context
acquisition aims to discover the short-term (or local) interests of the user [46, 101, 110],
and the short-term profile information is usually discarded once the user's session
ends. User profile learning techniques, on the other hand, have a much greater
impact on the overall performance of the retrieval system, as the mined preferences
are intended to remain part of the user profile across multiple sessions.
One simple solution for context acquisition is the application of explicit feedback
techniques, such as relevance feedback [95, 97]. Relevance feedback builds up a context
representation through an explicit interaction with the user. In a relevance feedback
session:
1) The user issues a query.
2) The IR system launches the query and shows the result set of documents.
3) The user selects the results that she considers relevant from the top n documents
of the result set.
4) The IR system obtains information from the relevant documents, operates
on the query and returns to 2).
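A classic formulation of step 4) is the Rocchio method; the sketch below is a simplified, positive-feedback-only variant (the alpha and beta values are conventional defaults, not taken from the cited works):

```python
def rocchio(query: dict, relevant_docs: list, alpha=1.0, beta=0.75) -> dict:
    """Positive-feedback Rocchio update:
    q' = alpha * q + (beta / |Dr|) * sum of relevant document vectors."""
    new_q = {t: alpha * w for t, w in query.items()}
    for doc in relevant_docs:
        for t, w in doc.items():
            new_q[t] = new_q.get(t, 0.0) + beta * w / len(relevant_docs)
    return new_q

expanded = rocchio({"jaguar": 1.0}, [{"jaguar": 0.5, "car": 0.8}])
# "car" enters the query (weight 0.6) and "jaguar" is reinforced (1.375),
# steering the next retrieval round towards the documents marked relevant
```

The expanded query is then re-issued, which is exactly the loop back to step 2).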
Relevance feedback has been proven to improve retrieval performance. However, its
effectiveness is considered to be limited in real systems, basically because users are
often reluctant to provide such information [101]: the system needs this information in
every search session, demanding a greater effort from the user than the explicit
feedback techniques used in personalization. For this reason, implicit feedback is widely
chosen among context-aware retrieval systems [32, 71, 110, 124, 125].
The following is a summary of approaches in the information retrieval area, classified
by acquisition technique. A complete classification can be found in Table 2.
REFERENCE | CONCEPT | ACQUISITION | REPRESENTATION | EXPLOITATION
[46] | Clickthrough | Term extraction | Term vector | Linear combination with user profile
[125] | Clickthrough | Term extraction | Terms | Query expansion
[110] | Clickthrough, browsing data | Term extraction | Term vector | Result reordering
[101] | Clickthrough, past queries | Term extraction, query similarity | Term vector | Result reordering
[29] | Implicit | Text mining | Terms | Query expansion
[83] | Correlation data | None | Vector base | Change of base
[13] | Query | Term thesaurus | Terms | Query expansion
[100] | Clickthrough, past queries | Bayesian network | Language model | Query expansion
[120] | Open documents, browsing history | Text extraction | Topic vector | Result reordering
[57] | Selected text | NLP processing | Terms | Query expansion, search engine selection
[22] | Accessed documents | Term extraction | Terms | Recommendation
[23] | Clickthrough, past queries | History storage | Queries, documents | Context revisit
[76] | Surrounding text, opened documents | Term extraction | Terms | Query expansion, reweighting, meta-search
[47] | Desktop interaction | Term frequency | Index | Retrieval
[61] | Surrounding text and past queries | Language modeling | Topic vector | Link-based topic bias
[30] | Desktop activity, web navigation | Term extraction | Term frequency vector | Proactive queries
Table 2. Overview of contextual information retrieval systems
3.2.1. Clickthrough data
Clickthrough data is one of the most used sources for context acquisition, as there are
studies suggesting that it can be a good implicit estimator of explicit feedback [116].
[46] constructs a short-term profile based on a statistical combination of the documents
accessed in past queries within the current user’s session. This short-term profile is
then combined with the overall profile of the user by means of a simple linear
combination. Thus, the user profile is somehow contextualized by the current actions of
the user, which can be seen as a “contextualization” of the personalization algorithm.
This is similar to the work presented in [110], where the authors also distinguish
between long-term and short-term preferences, and the final context information is
combined with the user profile. The clickthrough data collected in [125] is the time the
user spent viewing a document. A threshold value then determines whether a document
was relevant or not, using the terms of implicitly relevant documents to expand the
query and retrieve refined results, in an iterative fashion. Interestingly, threshold values
set according to the task that the user had to perform proved to work better than
threshold values set according to each user, which somehow proves that the task
information (i.e. the context) can be even more valuable than the overall information
about the user. [101] adds past queries within the session to the clickthrough
information. The system extracts the most relevant terms from the accessed documents
and the past queries, using the information of past queries (both the query terms and
the clickthrough information), but only if the past query is similar enough to the current
query. The novelty of this work is that the contextualization effects are not only applied
to query search results; the contextualization effect is rather more interactive, as the
results are reordered, or more results are added through query expansion, whenever
the user returns to the result list after clicking through a result item. Similar to the
latter, [100] uses past queries and clickthrough data as context information; this
information is represented in the form of statistical language models, combining past
queries and document models into one context model, and finally adding this context
model to the user's query model.
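The linear combination described above for [46] can be sketched as follows (the mixing weight `lam` and the example vectors are illustrative assumptions, not values from the paper):

```python
def contextualize(long_term: dict, short_term: dict, lam=0.5) -> dict:
    """Blend a long-term user profile with the session's short-term
    profile: p' = (1 - lam) * long_term + lam * short_term."""
    terms = set(long_term) | set(short_term)
    return {t: (1 - lam) * long_term.get(t, 0.0)
               + lam * short_term.get(t, 0.0)
            for t in terms}

working_profile = contextualize({"music": 0.8, "sports": 0.4},
                                {"jazz": 1.0, "music": 0.6})
# session evidence pulls "jazz" into the working profile; the blend is
# discarded when the session ends, leaving the long-term profile intact
```

The resulting working profile is what gets matched against the result set, so the personalization is biased towards the current session without permanently altering the user model.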
Another use of clickthrough data is presented in SearchPad [23]. This system implicitly
collects the clickthrough and past-query data and lets the users revisit this information.
The author claims that this makes it possible to keep track of the search context,
allowing the user to recover past useful queries and important resource links.
3.2.2. Desktop
Another concept of context is the information from the user’s desktop: for instance,
opened windows and documents, sent emails, etc. One of the main restrictions of
desktop-based contexts is that, unlike clickthrough data, which can be harvested
from any Web search application, desktop actions or information have to be obtained
from a local application, which hinders the evaluation of the framework, restricting
the type of evaluation that can be applied (see section 6.1).
In [29] the information from opened applications, such as document editors or web
browsers, is used to construct queries that can disambiguate the queries entered
by the users through query expansion techniques (see 3.1.1). The system
presented in [120] obtains the context information from opened Word documents, web
pages and messaging applications. The context is finally a weighted vector of ODP
topics [4] obtained through a classifier. The documents returned by the search engine are
related to topic vectors, and the result set is finally re-ranked using a vector similarity
function. The Stuff I’ve Seen application [47] builds up an index with information on
emails, web navigation and created or opened documents. The user can then easily
revisit web pages, sent emails, or created documents. The system uses an inverted
index, with different plug-ins that create a token stream from the different web, mail or
document formats.
The proactive information agent presented in [30] monitors the user’s web browsing
activity and opened-document information. Term frequency information is extracted and
used to push contextual queries and to offer contextual menus, such as showing a map
when an address is highlighted. The WordSieve system [22] monitors the user’s
sequence of accesses to documents, building up a task representation of the user’s context.
The task representation is then evaluated by an information agent, capable of returning
resources that were used in past similar contexts, or of proactively retrieving Web
results.
3.2.3. Surrounding text
The IntelliZap system [57] allows the user to send a query over a highlighted term,
taking the surrounding text as the context of the query. Terms are extracted from the
text with NLP techniques; these terms are then expanded through WordNet lexical
relations, and finally the resulting terms are used for query expansion. If a topic can
be inferred from the extracted context, the system then chooses results from a
specialized search engine. Surrounding text is also used in [76], but there it is implicitly
obtained through open documents. A context term vector is generated, using entity
recognition algorithms on the text. This vector is exploited in 1) a query expansion
algorithm, 2) a term reweighting algorithm and 3) an Iterative Filtering Meta-search
(IFM), which basically is the generation of multiple queries from the context vector and
a final fusion of each of the search results. The system presented by Haveliwala [61]
also gives importance to the surrounding text of the query, along with possible past
queries. The context is represented as a weighted vector of topics, extracted with
language modeling techniques. The system has 16 pre-processed PageRank values, one
for each of the root topics of the ODP Web directory [4]; the contextual vector of topics
determines which PageRank values have more impact on the final ranking.
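Haveliwala's topic-biased ranking can be sketched as a weighted combination of the precomputed per-topic PageRank scores (the topic names and numbers below are invented for illustration):

```python
def biased_rank(doc: str, topic_weights: dict, pagerank_by_topic: dict) -> float:
    """Final score = sum over topics of P(topic | context) * PR_topic(doc)."""
    return sum(w * pagerank_by_topic[t].get(doc, 0.0)
               for t, w in topic_weights.items())

pagerank_by_topic = {          # precomputed offline, one vector per root topic
    "computers": {"d1": 0.9, "d2": 0.1},
    "arts":      {"d1": 0.2, "d2": 0.7},
}
context = {"computers": 0.8, "arts": 0.2}   # topic distribution of the context
score_d1 = biased_rank("d1", context, pagerank_by_topic)
score_d2 = biased_rank("d2", context, pagerank_by_topic)
# a context leaning towards "computers" favours d1 over d2
```

The expensive PageRank computations happen offline, once per topic; at query time only the cheap convex combination is evaluated.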
In the case of [13] the context is represented by the query itself. The authors claim that
each term of a query can be disambiguated by analyzing the rest of the query terms;
for instance, disambiguating the term 'element' with the chemistry meaning if it appears
among other chemistry terms, or assigning it an XML meaning if it appears with terms
related to the XML field. It is unclear that this technique will always be successful, as a
large percentage of users' queries contain no more than one or two terms [66].


2 http://wordnet.princeton.edu/
4. Ontology-Based Personalized Information Retrieval
Personalized retrieval widens the notion of information need to comprise implicit user
needs, not directly conveyed by the user in terms of explicit information requests [84].
Again, this involves modeling and capturing such user interests, and relating them to
content semantics in order to predict the relevance of content objects, considering not
only a specific user request but the overall needs of the user. When it comes to the
representation of semantics (to describe content, user interest, or user requests),
ontologies provide a highly expressive ground for describing units of meaning and a
rich variety of interrelations among them. Ontologies achieve a reduction of ambiguity,
and bring powerful inference schemes for reasoning and querying. Not surprisingly,
there is a growing body of literature in the last few years that studies the use of
ontologies to improve the effectiveness of information retrieval [59, 74, 109, 117] and
personalized search [58, 106]. However, past work that claims the use of ontologies
[58, 72, 106] for user profile representation does not exploit the variety of
interrelations between concepts, but only the taxonomic relations, losing the inference
capabilities, which will prove critical for the approach proposed here. Our proposed
personalization framework is set up in such a way that the models wholly benefit from
the ontology-based grounding. In particular, the formal semantics are exploited to
improve the reliability of personalization.
Personalization can indeed enhance the subjective performance of retrieval, as
perceived by users, and is therefore a desirable feature in many situations, but it can
easily be perceived as erratic and obtrusive if not handled adequately. Two key
aspects to avoid such pitfalls are a) to appropriately manage the inevitable risk of error
derived from the uncertainty of a formal representation of users’ interests, and b) to
correctly identify the situations where it is, or is not, appropriate to personalize, and to
what extent.
As discussed in section 3.1, personalized IR systems usually comprise three
different parts: 1) the user profile representation, which depicts the long-term preferences
and interests of the user, 2) the user profile acquisition, which infers and obtains the user
preferences or interests, and 3) the user profile exploitation, which adapts the retrieval
system to the user’s interests. Broadly speaking, information retrieval deals with
modeling information needs, content semantics, and the relation between them [99].
The personalization system here presented builds and exploits an explicit awareness of
(meta)information about the user. The acquisition of this information is out of the scope
of this work; the user profile could be either directly provided by the user or implicitly
evidenced along the history of his/her actions and feedback.
4.1. User Profile Representation
The personalization system makes use of conceptual user profiles (as opposed to e.g.
sets of preferred documents or keywords), where user preferences are represented as
a vector of weights (numbers from -1 to 1) corresponding to the intensity of user
interest for each concept in the ontology, negative values indicating a dislike
for that concept. Comparing the metadata of items and the preferred concepts in a
user profile, the system predicts how the user may like an item, measured as a value in
[-1,1]. Although, as stated, negative values (i.e. allowing the representation of dislikes)
are supported by the presented system, they have to be treated cautiously, especially
when they have been implicitly generated by the profile acquisition module of
the system. Many personalized retrieval systems benefit from the implicit or explicit
feedback of the user to readjust the user profile [71, 95]; a negative weight for a
concept could cause the system to rank low every content item that contains that
concept, removing the possibility of obtaining real feedback from the user for that
concept, as users tend to investigate only a few content items within a result set.
We find many advantages in this representation, compared to the common
keyword-based approaches:
• Richness. Concept preferences are more precise and carry more semantics
than simple keywords. For instance, if a user states an interest in the keyword
‘Jaguar’, the system has no further information to distinguish Jaguar
the wild animal from Jaguar the car brand. A preference stated as
‘WildAnimal:Jaguar’ (read as “the instance Jaguar of the wild animal
class”) lets the system know the user’s preference unambiguously, and also
allows the use of more appropriate related semantics (e.g. synonyms,
hypernyms, etc.). This, together with disambiguation techniques, leads to the
effective personalization of text-based content.
• Hierarchical representation. Concepts within an ontology are represented in
a hierarchical way, through different hierarchy properties (e.g. subClassOf,
instanceOf, partOf, etc.). The parents, ancestors, children and descendants of a
concept give valuable information about the concept’s semantics. For instance,
the concept animal is highly enriched by the semantics of every animal class that
the ontology contains.
• Inference. Ontology standards, such as RDF and OWL, support inference
mechanisms that can be used in the system to further enhance personalization,
so that, for instance, a user interested in animals (a superclass of cat) is also
recommended items about cats. Conversely, a user interested in lizards, snakes,
and chameleons can be inferred to be interested in reptiles with a certain
confidence. Also, a user keen on Sicily can be assumed to like Palermo, through
the transitive locatedIn relation.
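A minimal sketch of such relation-based inference (the triples, the propagation rule and the 0.5 attenuation factor are illustrative assumptions; a real system would query the RDF/OWL store):

```python
def propagate(preferences: dict, triples: list, decay=0.5) -> dict:
    """Infer preferences through a relation such as locatedIn.

    triples: (subject, property, object); interest in a region (the
    object of locatedIn) spills over to the places it contains (the
    subject), attenuated by `decay`. Explicitly stated preferences
    always prevail over inferred ones.
    """
    inferred = dict(preferences)
    for subj, _prop, obj in triples:
        if obj in preferences and subj not in preferences:
            inferred[subj] = decay * preferences[obj]
    return inferred

prefs = propagate({"Sicily": 0.8}, [("Palermo", "locatedIn", "Sicily")])
# Palermo receives an inferred interest of 0.4; Sicily keeps its explicit 0.8
```

Attenuating rather than copying the weight reflects the lower confidence of inferred preferences compared to explicitly stated ones.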
The ontology-based representation of user interests is richer, more precise, and less
ambiguous than a keyword-based or item-based model. It provides an adequate
grounding for the representation of coarse to fine-grained user interests (e.g. interest
for broad topics, such as football, sci-fi movies, or the NASDAQ stock market, vs.
preference for individual items such as a sports team, an actor, a stock value), and can
be a key enabler to deal with the subtleties of user preferences, such as their dynamic,
context-dependent relevance. An ontology provides further formal, computer-processable
meaning on the concepts (who is coaching a team, an actor's filmography,
financial data on a stock), and makes it available for the personalization system to take
advantage of.
[Figure: an ontology fragment with a topic hierarchy (Politics, Sports, Leisure; under
Leisure: Travel, Movies, Music; under Music: Pop, Modern Music, Techno, Electronic;
under Travel: Island Travel) and a region hierarchy (Region, Geographical Region,
Political Region, Islands; America, NorthAmerica, USA, Canada, Hawaii, Florida,
USA Islands, Spanish Islands), linked by 'visit' and 'locatedIn' relations.]
Figure 3. User preferences as concepts in an ontology
Figure 3 presents an example of conceptualized user preferences. Let us suppose a
user indicates an interest in the topic ‘Leisure’. The system is then able to infer
preferences for ‘Leisure’ sub-topics, obtaining finer-grained details about the user
preference. Note that original and more specific preferences prevail over the
system’s inferences: in this case the user is not keen on modern music, and this
prevails over the inference from the higher-level topic.
Not only hierarchy properties can be exploited for preference inference. Supposing that
the user has a preference for the USA, the properties ‘visit’ and ‘locatedIn’ could thus
be used by the system to guess a preference for the Hawaii islands; in this case a
Hawaii tourist guide would have a positive preference value for the user (more
details on preference expansion can be found in section 5.6).
4.2. User Profile Exploitation
Exploiting user profiles involves using the information contained in the profiles to adapt
the information retrieval system. The goals addressed so far have been focused on
delivering preference-based improvements for content filtering and retrieval, in a way
that can be very easily introduced to support retrieval functionalities such as
searching, browsing, and recommending. Automatic personalization is not appropriate
in all situations; therefore it is considered an optional feature that users can turn on
and off at any time.
The personalization system assumes that the items in a retrieval space D are
annotated with weighted semantic metadata which describe the meaning carried by the
item, in terms of a domain ontology O. That is, each item d ∈ D is associated with a
vector M(d) ∈ [0,1]^|O| of domain concept weights, where for each x ∈ O the weight M(d)_x
indicates the degree to which the concept x is important in the meaning of d. Thus, as
shown in Figure 4, there is a fuzzy relationship between users and the indexed content
of the system, through the ontology layer. Although the use of this ontology layer is
transparent to the user, the system can take advantage of an ontological
representation of user preferences: higher precision and inference capabilities (see
4.1). Based on preference weights, measures of user interest for content units can be
computed, with which it is possible to discriminate, prioritize, filter and rank contents (a
collection, a catalog section, a search result) in a personal way.

[Figure content: user preferences linked to the annotated search space through the
ontology layer.]
Figure 4. Link between user preferences and search space




A fundamental notion for this purpose is the definition of a measure of content
relevance for the interests of a particular user. Building on this measure, specific
algorithms can then be developed for filtering and sorting a list of content items, and re-
ranking search results, according to user preferences.
The basis for the personalization of content retrieval is the definition of a matching
algorithm that provides a personal relevance measure (PRM) of a document for a user,
according to his/her semantic preferences. The measure is calculated as a function of
the semantic preferences of the user, and the semantic metadata of the document. In
this calculation, user preferences and content metadata are seen as two vectors in an
N-dimensional vector space, where N is the number of elements in the universe O of
ontology terms, and the coordinates of the vectors are the weights assigned to
ontology terms in user preferences and document annotations, representing,
respectively, the intensity of preference and the degree of importance of the concept
in the document. Semantic preferences also include inferred preferences, obtained for
example by deductive inference: if a user expresses a preference for the 'Animal'
concept, a preference for each subclass of animal (e.g. the 'Bird' concept) would be
inferred (for more information see section 5.6).
The procedure for matching these vectors has been primarily based on a cosine
function for vector similarity computation, as follows:
PRM = cos(SP, M) = (SP · M) / (|SP| × |M|)
    = ( Σ_{i=1..N} SP_i × M_i ) / ( √(Σ_{i=1..N} SP_i²) × √(Σ_{i=1..N} M_i²) )

Equation 1. Personal Relevance Measure, SP= Semantic Preferences, M=Content metadata
where SP stands for the semantic preferences P(u) of the user u and M is the
metadata M(d) of the document d.
The following figure is a visual representation of similarity between vectors,
considering only a three-dimensional space. As in the classic Information Retrieval
vector model [93], the information expressed by two vectors is more alike the closer
the vectors are in the finite-dimensional space. In classic IR, one vector represents
the query and the matching vectors represent the documents. In our representation,
the first vector is the user preference, whereas the second vectors represent the
content in the system's search space.
[Figure content: two vectors in the three-dimensional space spanned by
{x_1, x_2, x_3} = domain ontology O, with angles α_1 and α_2 between them.]
Figure 5. Visual representation of metadata and preference's vector similarity
The PRM algorithm thus matches two concept-weighted vectors and produces a value
between -1 and 1. Values near -1 indicate that the preferences of the user do not
match the content metadata (i.e. the two vectors are dissimilar), whereas values near 1
indicate that the user interests do match the content. Note that the system may not
always have weighted annotations attached to the documents, or analysis tools that
produce weighted metadata; in that case, the PRM function assigns a default weight of
1 to all metadata. Even so, it is interesting to keep the ability to support weighted
annotations, for reusability in systems that do provide these values (see e.g. [117]).
For instance, Figure 6 shows a setting where O = {Flower, Dog, Sea, Surf, Beach,
Industry} is the set of all domain ontology terms (classes and instances). According to
her profile, the user is interested in the concepts 'Flower', 'Surf', and 'Dog', with
different intensity, and has a negative preference for 'Industry'. The preference vector
for this user is thus SP = (0.7, 1.0, 0.0, 0.8, 0.0, -0.7). A still image is annotated with
the concepts 'Dog', 'Sea', 'Surf' and 'Beach'; the corresponding metadata vector is
therefore M = (0.0, 0.8, 0.6, 0.8, 0.2, 0.0).




O = {Flower, Dog, Sea, Surf, Beach, Industry}

Semantic interests (class : weight)      Content metadata (class : weight)
Flower   :  0.7                          Dog    :  0.8
Dog      :  1.0                          Sea    :  0.6
Surf     :  0.8                          Surf   :  0.8
Industry : -0.7                          Beach  :  0.2

SP = (0.7, 1.0, 0.0, 0.8, 0.0, -0.7)     M = (0.0, 0.8, 0.6, 0.8, 0.2, 0.0)

Figure 6. Construction of two concept-weighted vectors
The PRM of the still image for this user shall therefore be:
PRM = [ (0.7×0.0) + (1.0×0.8) + (0.0×0.6) + (0.8×0.8) + (0.0×0.2) + (-0.7×0.0) ]
      / ( √(0.7² + 1.0² + 0.0² + 0.8² + 0.0² + (-0.7)²) × √(0.0² + 0.8² + 0.6² + 0.8² + 0.2² + 0.0²) )
    = 0.69
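The worked example is easy to reproduce programmatically. Below is a minimal sketch of the PRM cosine computation (Equation 1), applied to the SP and M vectors of Figure 6; the zero-norm guard is an added safety check, not part of the original formulation.

```python
import math

def prm(sp, m):
    """Personal Relevance Measure: cosine between the semantic-preference
    vector SP and the content-metadata vector M (Equation 1)."""
    dot = sum(s * w for s, w in zip(sp, m))
    norm_sp = math.sqrt(sum(s * s for s in sp))
    norm_m = math.sqrt(sum(w * w for w in m))
    if norm_sp == 0 or norm_m == 0:
        return 0.0  # guard added for empty profiles/annotations
    return dot / (norm_sp * norm_m)

# Vectors over O = {Flower, Dog, Sea, Surf, Beach, Industry} (Figure 6).
SP = [0.7, 1.0, 0.0, 0.8, 0.0, -0.7]
M = [0.0, 0.8, 0.6, 0.8, 0.2, 0.0]
print(round(prm(SP, M), 2))  # → 0.69
```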
This measure can be combined with the relevance measures computed by the user-
neutral algorithms, producing a personalized bias on the ranking of search results, as
explained in the following section.
4.3. Personalized Information Retrieval
Search personalization is achieved in our system by manipulating search result lists in
order to bias the answer to search queries towards user interests. This manipulation
may consist of cutting down search results, reordering the results, or providing an initial
content set for a relevance feedback session. All of these operations are based on the
PRM measure described in the preceding section.
4.3.1. Personalized Search
Personalization of search must be handled carefully. An excessive personal bias may
drive results too far from the actual query. This is why we have taken the decision to
discard query reformulation techniques, and to use personalized filtering and result
reordering as a post-process to the execution of queries by the intelligent retrieval
module. Still, the personalized ranking defined by the PRM values should be combined
with the query-dependent rank (QDR) values returned by the intelligent retrieval
modules. That is, the final combined rank (CR) of a document d, given a user u and her
query q, is defined as a function of both values:
CR(d, q, u) = f( PRM(M_d, SP_u), QDR(d, q) )

Equation 2. Final personalized Combined Rank

The question remains as to how both values should be combined and balanced. As an
initial solution, we use a linear combination of both:

CR(d, q, u) = λ · PRM(M_d, SP_u) + (1 – λ) · QDR(d, q)

Equation 3. Linear combination of PRM and QDR

where the value of λ, between 0 and 1, determines the degree of personalization of the
subsequent search ranking.
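As a sketch, the linear combination is a one-liner; the document scores below are invented solely to show how personalization can reorder two documents that an external query-dependent ranker scores similarly.

```python
def combined_rank(prm_value, qdr_value, lam):
    """Linear combination of the personal relevance measure (PRM) and the
    query-dependent rank (QDR); lam in [0,1] sets the degree of
    personalization (lam = 0 disables personalization entirely)."""
    return lam * prm_value + (1.0 - lam) * qdr_value

# Hypothetical scores: d1 matches the query slightly better, but d2 matches
# the user's preferences much better, so with lam = 0.3 it is ranked first.
d1 = combined_rank(prm_value=0.1, qdr_value=0.8, lam=0.3)  # 0.59
d2 = combined_rank(prm_value=0.9, qdr_value=0.7, lam=0.3)  # 0.76
```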
What an appropriate value for λ is, how it should be set, and whether functions other
than a linear combination would perform better, are work in progress in this task, but
some initial solutions have been outlined [34]. Explicit user requests, queries and
indications should always take precedence over system-learned user preferences.
Personalization should only be used to "fill the gaps" left by the user in the information
she provides, and only when the user is willing to be helped this way. Therefore, the
larger the gap, the more room for personalization. In other words, the degree of
personalization λ can be proportional to the size of this gap. One possible criterion to
estimate this gap is to measure the specificity of the query. This can be estimated by
measuring the generality of the query terms (e.g. by the depth and width of the concept
tree under the terms in the ontology), the number of results, or the closeness of rank
values. For instance, the topic of 'Sports' is rather high in the hierarchy, has a large
number of subtopics, a large number of concepts belong to it, and a query for 'Sports'
would probably return contents by the thousands (depending, of course, on the
repository). It therefore leaves quite some room for personalization, which would be a
reason for raising λ in this case.
Ultimately, personalized ranking, as supported by the adapted IR system, should leave
the degree of personalization as an optional parameter, so it can be set by the user
herself, as in Google personalized web search [8]. See also [48, 79, 81, 92, 121] for
state of the art on combining rank sources.
state of the art on combining rank sources.
Building on the combined relevance measure described above, a personalized ranking
is defined, which will be used as the similarity measure for the result reordering.
4.3.2. Personalized Browsing
The personal relevance measure can also be used to filter and order lists of documents
while browsing. In this case the room for personalization is higher, in general, when
compared to search, since browsing requests are usually more unspecific than search
queries. Moreover, browsing requests, viewed as light queries, typically consist of
boolean filtering conditions (e.g. filter by date or category), and strict orderings (by title,
author, date, etc.). If any fuzzy filters are defined (e.g. when browsing by category,
contents might have fuzzy degrees of membership to category), the personalization
control issues described above would also apply here. Otherwise, personalization can
take over ranking all by itself (again, if requested by the user).
On the other hand, the PRM measure, combined with the advanced browsing
techniques, provides the basis for powerful personalized visual clues. Any content
highlighting technique can be put to the service of personalization, such as the size
of visual representations (bigger means more relevant), color scale (e.g. closer to red
means more interesting), position in 3D space (foreground vs. background), automatic
hyperlinks (to interesting contents), etc.
5. Personalization in Context
Specific, advanced mechanisms need to be developed in order to ensure that
personalization is used at the right time, in the appropriate direction, and in the right
amount. Users seem inclined to rely on personalized features when they need to save
time, wish to spare efforts, have vague needs, have limited knowledge of what can be
queried for (e.g. for lack of familiarity with a repository, or with the querying system
itself), or are not aware of recent content updates. Personalization is clearly not
appropriate, for instance, when the user is looking for a specific, known content item, or
when the user is willing to provide detailed relevance feedback, engaging in a more
conscientious interactive search session. Even when personalization is appropriate,
user preferences are heterogeneous, variable, and context-dependent. Furthermore,
there is inherent uncertainty in the system when automatic preference learning is used.
To be accurate, personalization needs to combine long-term predictive capabilities,
based on past usage history, with shorter-term prediction, based on current user
activity, as well as reaction to (implicit or explicit) user feedback to personalized output,
in order to correct the system's assumptions when needed. The few proposals that
mention the concept of having a profile based on both long-term interests (i.e. the user
profile) and short-term interests (i.e. the context) do not draw a clear distinction
between the two representations, either by not differentiating the acquisition
techniques [46, 110], or by not differentiating the exploitation techniques [73].
The idea of contextual personalization, proposed and developed here, responds to the
fact that human preferences are multiple, heterogeneous, changing, and even
contradictory, and should be understood in context with the user goals and tasks at
hand. Indeed, not all user preferences are relevant in all situations. For instance, if a
user is consistently looking for some contents in the Formula 1 domain, it would not
make much sense that the system prioritizes some Formula 1 picture with a helicopter
in the background, as more relevant than others, just because the user happens to
have a general interest in aircraft. In the semantic realm of Formula 1, aircraft are
out of (or at least far from) context. Taking into account further contextual information,
available from prior sets of user actions, the system can provide an undisturbed, clear
view of the actual user’s history and preferences, cleaned from extraordinary
anomalies, distractions or “noise” preferences. We refer to this surrounding information
as contextual knowledge or just context, offering significant aid in the personalization
process. The effect and utility of the proposed techniques consists of endowing a
personalized retrieval system with the capability to filter and focus its knowledge about
user preferences on the semantic context of ongoing user activities, so as to achieve
coherence with the thematic scope of user actions at runtime.
As already discussed in previous sections of this document, context is a difficult notion
to grasp and capture in a software system. In our approach, we focus our efforts on this
major topic of retrieval systems, by restricting it to the notion of semantic runtime
context [118]. The latter forms a part of general context, suitable for analysis in
personalization and can be defined as the background themes under which user
activities occur within a given unit of time. From now on we shall refer to semantic
runtime context as the information related to personalization tasks, and we shall use
the simplified term context for it. The problems to be addressed include how to represent the
the context, how to determine it at runtime, and how to use it to influence the activation
of user preferences, "contextualize" them and predict or take into account the drift of
preferences over time (short and long-term).
As will be described in section 5.3, in our current solution to these problems, a runtime
context is represented as (is approximated by) a set of weighted concepts from the
domain ontology. How this set is determined, updated, and interpreted, will be
explained in section 5.4. Our approach to the contextual activation of preferences is
then based on a computation of the semantic similarity between each user preference
and the set of concepts in the context, as will be shown in section 5.5.1. In spirit, the
approach consists of finding semantic paths linking preferences to context. The
considered paths are made of existing semantic relations between concepts in the
domain ontology. The shorter, stronger, and more numerous such connecting paths,
the more in context a preference shall be considered.
The proposed techniques to find these paths take advantage of a form of Constraint
Spreading Activation (CSA) strategy [36], as will be explained in section 5.5. In the
proposed approach, a semantic expansion of both user preferences and the context
takes place, during which the involved concepts are assigned preference weights and
contextual weights, which decay as the expansion grows farther from the initial sets.
This process can also be understood as finding a sort of fuzzy semantic intersection
between user preferences and the semantic runtime context, where the final computed
weight of each concept represents the degree to which it belongs to each set.
Finally, the perceived effect of contextualization should be that user interests that are
out of focus, under a given context, shall be disregarded, and only those that are in the
semantic scope of the ongoing user activity (the "intersection" of user preferences and
runtime context) will be considered for personalization. As suggested above, the
inclusion or exclusion of preferences need not be binary, but may range on a
continuous scale instead, where the contextual weight of a preference decreases
monotonically with the semantic distance between the preference and the context.




5.1. Notation
Before continuing, we provide a few details on the mathematical notation that will be
used in the sequel. It will be explained again in most cases when it is introduced, but
we gather it all here, in a single place, for the reader's convenience.
O                     The domain ontology.
R                     The set of all relations in O.
D                     The set of all documents or content items in the search space.
M : D → [0,1]^|O|     A mapping between documents and their semantic annotations,
                      i.e. M(d) ∈ [0,1]^|O| is the concept-vector metadata of a
                      document d ∈ D.
U                     The set of all users.
P                     The set of all possible user preferences.
C                     The set of all possible contexts.
P_O, C_O              Instantiations of P and C for the domain O, where P_O is
                      represented by the vector space [-1,1]^|O| and C_O by [0,1]^|O|.
P : U → P             A mapping between users and preferences, i.e. P(u) ∈ P is the
                      preference of user u ∈ U.
C : U × N → C         A mapping between users and contexts over time, i.e. C(u,t) ∈ C
                      is the context of a user u ∈ U at an instant t ∈ N.
EP : U → P            Extended user preferences.
EC : U × N → C        Extended context.
CP : U × N → P        Contextualized user preferences, also denoted Φ(P(u),C(u,t)).
v_x, v ∈ [-1,1]^|O|   Coordinate notation for concept-vector spaces, where the
                      concepts of the ontology O are the axes of the vector space.
                      For a vector v ∈ [-1,1]^|O|, v_x ∈ [-1,1] is the coordinate of
                      v corresponding to the concept x ∈ O. This notation is used
                      for all elements ranging in the [-1,1]^|O| space, such as
                      document metadata M_x(d), user preferences P_x(u), and runtime
                      context C_x(u,t).
Q                     The set of all possible user requests, such as queries, viewing
                      documents, or browsing actions.
prm : D × U × N → [-1,1]
                      prm(d,u,t) is the estimated contextual interest of user u for
                      the document d at instant t.
sim : D × Q → [0,1]   sim(d,q) is the relevance score computed for the document d
                      for a request q by a retrieval system external to the
                      personalization system.
score : D × Q × U × N → [-1,1]
                      score(d,q,u,t) is the final personalized relevance score
                      computed by a combination of sim and prm.
5.2. Preliminaries
Our strategies for the dynamic contextualization of user preference are based on three
basic principles: a) the representation of context as a set of domain ontology concepts
that the user has “touched” or followed in some manner during a session, b) the
extension of this representation of context by using explicit semantic relations among
concepts represented in the ontology, and c) the extension of user preferences by a
similar principle. Roughly speaking, the “intersection” of these two sets of concepts,
with combined weights, will be taken as the user preferences of interest under the
current focus of user action. The ontology-based extension mechanisms will be
formalized on the basis of an approximation to conditional probabilities, derived from
the existence of relations between concepts. Before the models and mechanisms are
explained in detail, some preliminary ground for the calculation of combined
probabilities will be provided and shall be used in the sequel for our computations.
Given a finite set Ω, and a ∈ Ω, let P(a) be the probability that a holds some condition.
It can be shown that the probability that a holds some condition, and it is not the only
element in Ω that holds the condition, can be written as:
P( a ∧ ⋁_{x ∈ Ω∖{a}} x ) = ∑_{∅ ≠ S ⊆ Ω∖{a}} [ (-1)^{|S|+1} ∏_{x ∈ S} P(x) · P(a|x) ]

Equation 4. Probability of holding condition a inside a finite set Ω
provided that the events a ∩ x are mutually independent for all x ∈ Ω (the right-hand
side of the above formula is based on applying the inclusion-exclusion principle to
probability [128]). Furthermore, if we can assume that the probability that a is the only
element in Ω that holds the condition is zero, then the previous expression is equal to
P(a).




We shall use this form of estimating “the probability that a holds some condition” with
two purposes: a) to extend user preferences for ontology concepts, and b) to determine
what parts of user preferences are relevant for a given runtime context, and should
therefore be activated to personalize the results (the ranking) of semantic retrieval, as
part of the process described in detail in [36]. In the former case, the condition will be
“the user is interested in concept a”, that is, P(a) will be interpreted as the probability
that the user is interested in concept a of the ontology. In the other case, the condition
will be “concept a is relevant in the current context”. In both cases, the universe Ω will
correspond to a domain ontology O (the universe of all concepts).
In both cases, Equation 4 provides a basis for estimating P(a) for all a∈O from an initial
set of concepts x for which we know (or we have an estimation of) P(x). In the case of
preferences, this set is the initial set of weighted user preferences for ontology
concepts, where concept weights are interpreted as the probability that the user is
interested in a concept. In the other case, the initial set is a weighted set of concepts
found in elements (links, documents, queries) involved in user actions in the span of a
session with the system. Here this set is taken as a representation of the semantic
runtime context, where weights represent the estimated probability that such concepts
are important in user goals. In both cases, Equation 4 will be used to implement an
expansion algorithm that will compute probabilities (weights) for all concepts starting
from the initially known (or assumed) probabilities for the initial set. In the second case,
the algorithm will compute a context relevance probability for preferred concepts that
will be used as the degree of activation that each preference shall have (put in rough
terms, the (fuzzy) intersection of context and preferences will be found this way).
Equation 4 has some interesting properties with regard to the design of algorithms
based on it. In general, for X = {x_i}_{i=0..n}, where x_i ∈ [-1,1], let us define

R(X) = ∑_{∅ ≠ S ⊆ X} (-1)^{|S|+1} ∏_{x_i ∈ S} x_i

It is easy to see that this function has the following properties:
• R(X) ∈ [-1,1].
• R(X) ≥ x_i for all i (in particular, R(X) = 1 if x_i = 1 for some i).
• R(X) increases monotonically with respect to the value of x_i, for all i.
• Since R(X) = x_0 + (1 – x_0) · R({x_i}_{i=1..n}), R(X) can be computed efficiently.
  Note also that R(X) does not vary if we reorder X.
These properties will be useful for the definition of algorithms with computational
purposes.
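The last property yields a simple linear-time implementation. The sketch below folds the recurrence R(X) = x_0 + (1 – x_0) · R({x_i}_{i=1..n}) over the list; since R is invariant under reordering, the fold direction does not matter.

```python
def R(xs):
    """Combination function derived from Equation 4, over values in [-1,1],
    computed via the recurrence R(X) = x0 + (1 - x0) * R(X[1:])."""
    result = 0.0  # R of the empty set
    for x in xs:  # order does not affect the result
        result = x + (1.0 - x) * result
    return result

# For two elements this reduces to x0 + x1 - x0*x1, matching the
# inclusion-exclusion expansion; any value of 1 saturates the result.
```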




5.3. Semantic context for personalized content retrieval
Our model for context-based personalization can be formalized in an abstract way as
follows, without any assumption on how preferences and context are represented. Let
U be the set of all users, let C be the set of all contexts, and P the set of all possible
user preferences. Since each user will have different preferences, let P : U → P map
each user to his/her preference. Similarly, each user is related to a different context at
each step in a session with the system, which we shall represent by a mapping
C : U × N → C, since we assume that the context evolves over time. Thus we shall
often refer to the elements from P and C in the form P(u) and C(u,t) respectively,
where u ∈ U and t ∈ N.
Definition 1. Let C be the set of all contexts, and let P be the set of all possible user
preferences. We define the contextualization of preferences as a mapping
Φ : P × C → P so that for all p ∈ P and c ∈ C, p |= Φ(p,c).
In this context the entailment p |= q means that any consequence that could be inferred
from q could also be inferred from p. For instance, given a user u ∈ U, if P(u) = q implies
that u "likes x" (whatever this means), then u would also "like x" if her preference was p.
Now we can particularize the above definition for a specific representation of
preference and context. In our model, we consider user preferences as the weighted
set of domain ontology concepts for which the user has an interest, where the intensity
of interest can range from -1 to 1.
Definition 2. Given a domain ontology O, we define the set of all preferences over O as
P_O = [-1,1]^|O|, where given p ∈ P_O, the value p_x represents the preference
intensity for a concept x ∈ O in the ontology.
Definition 3. Under the above definitions, we particularize |=_O as follows: given
p, q ∈ P_O, p |=_O q ⇔ ∀x ∈ O, either q_x ≤ p_x, or q_x can be deduced from p using
consistent preference extension rules over O.




Now, our particular notion of context is that of the semantic runtime context, which we
define as the background themes under which user activities occur within a given unit
of time.
Definition 4. Given a domain ontology O, we define the set of all semantic runtime
contexts as C_O = [0,1]^|O|.
With this definition, a context is represented as a vector of weights denoting the degree
to which a concept is related to the current activities (tasks, goals, short-term needs) of
the user.
In the next sections, we define a method to build the values of C(u,t) during a user
session, a model to define Φ, and the techniques to compute it. Once we define this,
the activated user preferences in a given context will be given by Φ (P(u),C(u, t)).
5.4. Capturing the context
The implementation-level representation of semantic runtime context analyzed
previously is defined as a set of concepts that have been involved, directly or
indirectly, in the interaction of a user u with the system during a retrieval session.
Therefore, at each point t in time, the context can be represented as a vector
C(u,t) ∈ [0,1]^|O| of concept weights, where each x ∈ O is assigned a weight
C_x(u,t) ∈ [0,1]. This context value may be interpreted as the probability that x is
relevant for the current semantic context.
Additionally, time is measured by the number of user requests within a session. Since it
is clear that the context is relative to a user, in the following we shall often omit this
variable and use C(t), or even C for short, as long as the meaning is clear.
In our approach, C(t) is built as a cumulative combination of the concepts involved in
successive user requests, in such a way that the importance of concepts fades away
with time. This simulates a drift of concepts over time; a general approach towards
achieving this follows. This notion of context extraction is drawn from the implicit
feedback area [127]; concretely, our model belongs to the wpq family of implicit
feedback models, as one that uses a time variable and gives more importance to items
occurring close in time [32].
Right after each user request, a request vector Req(t) ∈ C_O is defined. This vector
may be:




• The query concept-vector, if the request is a query.
• A concept-vector containing the topmost relevant concepts in a document, if the
  request is a "view document" request.
• The average concept-vector corresponding to a set of documents marked as
  relevant by the user, if the request is a relevance feedback step.
• If the request is a "browse the documents under a category c" request, Req(t) is
  the sum of the vector representation of the topic c (in the [0,1]^|O| concept
  vector-space), plus the normalized sum of the metadata vectors of all the content
  items belonging to this category.
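As an illustration of the second case, a request vector for a "view document" request could be built by keeping the k topmost-weighted concepts of the viewed document's metadata. The cutoff k and the metadata values below are hypothetical; the model itself does not fix how "topmost relevant" is chosen.

```python
def request_vector_view(doc_metadata, k=3):
    """Req(t) sketch for a 'view document' request: keep the k
    topmost-weighted concepts of the document's metadata vector,
    implicitly zero elsewhere. The cutoff k is an illustrative choice."""
    top = sorted(doc_metadata, key=doc_metadata.get, reverse=True)[:k]
    return {x: doc_metadata[x] for x in top if doc_metadata[x] > 0}

# Hypothetical metadata for a viewed still image (cf. Figure 6).
req = request_vector_view({"Dog": 0.8, "Sea": 0.6, "Surf": 0.8, "Beach": 0.2}, k=3)
```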
As the next step, an initial context vector C(t) is defined by combining the newly
constructed request vector Req(t) from the previous step with the context C(t–1),
where the context weights computed in the previous step are automatically reduced by
a decay factor ξ, a real value in [0,1], where ξ may be the same for all x, or a function of
the concept or its state. Consequently, at a given time t, we update C_x(t) as

C_x(t) = ξ · C_x(t – 1) + (1 – ξ) · Req_x(t)
Equation 5. Runtime semantic context
The decay factor defines for how many action units a concept will be considered, and
how fast a concept will be "forgotten" by the system. This may seem similar to
pseudo-relevance feedback, but the vector is not used to reformulate the query; rather,
it is used to focus the preference vector, as shown next.
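The update of Equation 5 can be sketched directly over sparse concept-weight dictionaries, where absent concepts have weight 0. The concept names and the choice ξ = 0.5 are illustrative only.

```python
def update_context(context, req, xi=0.5):
    """One step of the cumulative context update: for every concept x,
    C_x(t) = xi * C_x(t-1) + (1 - xi) * Req_x(t). Old concepts fade
    geometrically while newly requested concepts enter the context."""
    concepts = set(context) | set(req)
    return {x: xi * context.get(x, 0.0) + (1.0 - xi) * req.get(x, 0.0)
            for x in concepts}

c1 = update_context({}, {"Formula 1": 1.0})   # first request of the session
c2 = update_context(c1, {"Monaco": 1.0})      # second request: F1 decays
```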
5.5. Semantic extension of context and preferences
The selective activation of user preferences is based on an approximation to
conditional probabilities: given x ∈ O with P_x(u) > 0 for some u ∈ U, i.e. a concept in
which a user u has some interest, the probability that x is relevant for the context can
be expressed in terms of the probability that x and each concept y directly related to x
in the ontology belong to the same topic, and the probability that y is relevant for the
context. With this formulation, the relevance of x for the context can be computed by a
constrained spreading activation algorithm, starting with the initial set of context
concepts defined by C.




Our strategy is based on weighting each semantic relation r in the ontology with a
measure w(r,x,y) that represents the probability that, given the fact that r(x,y) holds, x
and y belong to the same topic. As explained above, we will use this as a criterion for
estimating the certainty that y is relevant for the context if x is relevant for the context,
i.e. w(r,x,y) will be interpreted as the probability that a concept y is relevant for the
current context if we know that a concept x is in the context and r(x,y) holds. Based on
this measure, we define an algorithm to expand the set of context concepts through
semantic relations in the ontology, using a constrained spreading activation strategy
over the semantic network defined by these relations. As a result of this strategy, the
initial context C(t) is expanded to a larger context vector EC(t), where of course
EC_x(t) ≥ C_x(t) for all x ∈ O.
Let R be the set of all relations in O, let R′ = R ∪ R⁻¹ = R ∪ {r⁻¹ | r ∈ R}, and let
w : R′ → [0,1]. The extended context vector EC(t) is computed by:


EC_x(t) = C_x(t)                                                        if C_x(t) > 0
EC_x(t) = R( { EC_y(t) · w(r,x,y) · power(x) | y ∈ O, r ∈ R′, r(x,y) } )   otherwise

Equation 6. Expanded context vector
where R is the set of all concept relations in the ontology O and R⁻¹ is the set of all
inverse relations of R, i.e. a concept x has an inverse relation r⁻¹(x,y) ⇔ there exists
r(y,x) with r ∈ R. Finally, power(x) ∈ [0,1] is a propagation power assigned to each
concept x (by default, power(x) = 1). Note that we explicitly exclude propagation
between concepts in the input context (i.e. these remain unchanged after propagation).
5.5.1. Spreading activation algorithm
The algorithms for expanding preferences and context are based on the so-called
constrained spreading activation (CSA) strategy (see e.g. [36-38]). [98] is the first
publication on CSA. Another relevant reference is [96], where CSA is used to improve
the recall of a retrieval system using domain ontologies.




Based on definition 2, EC(t) can be computed as follows, where C⁰(t) = { x∈O | C_x(t) > 0 }
is the set of concepts with a non-zero value in the input context after the current request.
Given x∈O, we define the semantic neighborhood of x as N[x] = { (r,y) ∈ R′×O | r(x,y) }.
This algorithm can also be used as a standalone method for expanding preferences
(i.e. computing the EP vector from the initial P), except that time is not a variable, and a
different measure w is used. Figure 7 shows a simple pseudocode of the algorithm.


To exemplify the expansion process, Figure 8 shows a simple preference expansion
involving three concepts. The user has preferences for two of these concepts, which
are related to a third through two different ontology relations. The expansion process
shows how a third preference can be inferred, "accumulating" the evidence of
preference from the original two preferences.
expand (C, EC, w)
  for x∈O do
    EC_x ← C_x
    in_path[x] ← false
  for x∈C⁰ do expand (x, w, 0)

expand (x, w, prev_cx)
  in_path[x] ← true
  for (r,y) ∈ N[x] do                    // Optimization: choose (r,y) in decreasing order of EC_y
    if not in_path[y] and C_y = 0 and EC_y < 1 then   // The latter condition to save some work
      prev_cy ← EC_y
      // Undo last update from x
      EC_y ← (EC_y − w(r,x,y) · power(x) · prev_cx) / (1 − w(r,x,y) · power(x) · prev_cx)
      EC_y ← EC_y + (1 − EC_y) · w(r,x,y) · power(x) · EC_x
      if EC_y > ε then expand (y, w, prev_cy)
  in_path[x] ← false

Figure 7. Simple version of spreading activation algorithm
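For concreteness, the recursive algorithm of Figure 7 can be sketched as runnable Python. This is an illustrative sketch, not the thesis implementation: the function name `expand_context` and the data structures are assumptions, and the example ontology transcribes the Beach/Sea/Boat scenario of Figure 8.

```python
# A minimal sketch of the recursive spreading activation of Figure 7.

EPSILON = 0.1  # expansion threshold (the epsilon of the text)

def expand_context(O, N, C, w, power, eps=EPSILON):
    """O: concepts; N[x]: list of (relation, neighbor) pairs; C: initial
    context values; w[(r, x, y)]: propagation weight; power[x]: propagation
    power of x. Returns the expanded context vector EC as a dict."""
    EC = {x: C.get(x, 0.0) for x in O}
    in_path = {x: False for x in O}

    def visit(x, prev_cx):
        in_path[x] = True
        for r, y in N.get(x, []):
            if not in_path[y] and C.get(y, 0.0) == 0 and EC[y] < 1:
                prev_cy = EC[y]
                k = w[(r, x, y)] * power[x]
                # Undo the last update contributed by x (inverse of the update rule)
                EC[y] = (EC[y] - k * prev_cx) / (1 - k * prev_cx)
                # Re-apply x's contribution with its current activation
                EC[y] = EC[y] + (1 - EC[y]) * k * EC[x]
                if EC[y] > eps:
                    visit(y, prev_cy)
        in_path[x] = False

    for x in (c for c in O if C.get(c, 0.0) > 0):
        visit(x, 0.0)
    return EC

# Figure 8 data: preferences for Beach (0.8) and Boat (0.6) propagate to Sea
O = ['Beach', 'Sea', 'Boat']
N = {'Beach': [('nextTo', 'Sea')], 'Boat': [('r2', 'Sea')]}
C = {'Beach': 0.8, 'Boat': 0.6}
w = {('nextTo', 'Beach', 'Sea'): 0.5, ('r2', 'Boat', 'Sea'): 0.9}
power = {x: 1.0 for x in O}
EC = expand_context(O, N, C, w, power)  # EC['Sea'] accumulates to 0.724
```

Running the sketch on the Figure 8 data reproduces the two accumulation steps of the figure: 0.8 × 0.5 = 0.4 from Beach, then 0.4 + (1 − 0.4) × 0.9 × 0.6 = 0.724 after Boat's contribution.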




[Figure: a fragment of a domain ontology with concepts Beach (x), Sea (y) and Boat (z),
related by r1 = nextTo(x,y) with w(r1) = 0.5, and by r2(z,y) with w(r2) = 0.9. Initial
preferences: p_x = 0.8 (Beach), p_z = 0.6 (Boat). Preference expansion:
 1) preference for y: p_y¹ = p_x · w(r1,x,y) = 0.8 × 0.5 = 0.4
 2) preference for y: p_y² = p_y¹ + (1 − p_y¹) · p_z · w(r2,z,y) = 0.4 + (1 − 0.4) × 0.9 × 0.6 = 0.724]
Figure 8. Example of preference expansion with the CSA algorithm
The simple expansion algorithm can be optimized by using a priority queue (a heap H)
ordered by EC_x, popping and propagating concepts to their immediate neighborhood
(i.e. without recursion). This way the expansion may get close to O(M log N) time
(provided that elements are not often pushed back into H once they are popped out of
it), where N = |O| and M = |R′|. The algorithm would thus be:









With the suggested optimizations, here M log N should be closer to M log |C⁰|.
There are many optimizations in the CSA state of the art that try to prune the
expansion tree; the most common ones were also adapted into the algorithm:
• Do not expand a node more than n_j jumps away. This is the basic "stop condition" in
  CSA algorithms; the idea is not to expand to concepts that are "meaningfully"
  far away from the original concept. For instance, expanding the interest for cats
  to 'LiveEntity' does not seem adequate.
• Do not expand a node (or expand it with a reduction degree of w_e) that has a fan-out
  greater than n_e. The goal is to reduce the effect of "hub" nodes that have
  many relations with other concepts. For instance, if a user is interested in a
  group of companies that trade on the Nasdaq stock exchange and belong to the
  Computer and Hardware sector, a correct inference is that the user could like
  other companies with these two features, but an inference could be considered
  incorrect if it propagates the preference to the class 'Company' and expands it to a
  thousand other companies that have nothing to do with the original ones.
expand (C, EC, w)
  for x∈O do EC_x ← C_x        // Optimize: insert elements x with C_x > 0, and copy the rest at the end
  H ← build_heap (O × {0})
  while H ≠ ∅ do
    (x, prev_cx) ← pop(H)      // Optimize here: make heapify stop at the first y with EC_y = 0
    if EC_x < ε then stop      // Because remaining nodes are also below ε (a fair saving)
    for (r,y) ∈ N[x] do
      if C_y = 0 and EC_y < 1 then   // Note that it is possible that y ∉ H and yet EC_y be updated
        prev_cy ← EC_y
        // Undo last update from x
        EC_y ← (EC_y − w(r,x,y) · power(x) · prev_cx) / (1 − w(r,x,y) · power(x) · prev_cx)
        // Optimize: heapify stops as soon as EC_z = 0
        EC_y ← EC_y + (1 − EC_y) · w(r,x,y) · power(x) · EC_x
        push (H, y, prev_cy)   // Optimize again: push starts from the first z with EC_z > 0
Figure 9. Priority queue variation of spreading activation algorithm
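The priority-queue variant of Figure 9 can likewise be sketched in Python using the standard `heapq` min-heap (activations negated). As an implementation variant, instead of carrying prev_cx inside heap entries, this sketch records the last contribution sent over each (relation, source, target) edge, which makes the "undo last update from x" step robust when a node is pushed several times; stale heap entries are skipped lazily. Names and data structures are illustrative assumptions.

```python
import heapq

def expand_context_pq(O, N, C, w, power, eps=0.1):
    """Priority-queue spreading activation: pop the concept with the highest
    activation, propagate to its immediate neighborhood, stop below eps."""
    EC = {x: C.get(x, 0.0) for x in O}
    sent = {}  # sent[(r, x, y)]: the contribution x last propagated to y over r
    H = [(-EC[x], x) for x in O]
    heapq.heapify(H)
    while H:
        neg, x = heapq.heappop(H)
        if -neg != EC[x]:
            continue           # stale entry: a fresher one was pushed later
        if EC[x] < eps:
            break              # remaining entries have even lower activation
        for r, y in N.get(x, []):
            if C.get(y, 0.0) == 0 and EC[y] < 1:
                # Undo x's previous contribution to y (inverse of the update rule)
                prev = sent.get((r, x, y), 0.0)
                if prev:
                    EC[y] = (EC[y] - prev) / (1 - prev)
                contrib = w[(r, x, y)] * power[x] * EC[x]
                EC[y] = EC[y] + (1 - EC[y]) * contrib
                sent[(r, x, y)] = contrib
                heapq.heappush(H, (-EC[y], y))
    return EC
```

On the Figure 8 data this variant yields the same final activation for Sea (0.724) as the recursive version, while processing concepts in decreasing order of activation.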




• Once a node has been expanded up through n_h hierarchical properties, do not
  expand it any further through hierarchical properties. The intention is not to
  generalize a preference (semantically) more than once, as this is a risky
  assumption to make from the original user preferences. For instance, in
  the example of section 4.1, where the user likes snakes, lizards, and
  chameleons, the system can quite safely infer that the user is likely to
  like reptiles in general, but it seems less straightforward to infer a
  preference for any kind of animal in general.
Figure 10 shows a final version of the algorithm with priority queue and optimization
parameters:


expand (C, EC, w)
  for x∈O do EC_x ← C_x        // Optimize: insert elements x with C_x > 0, and copy the rest at the end
  H ← build_heap (O × {0}, hierarchy_level = 0, expansion_level = 0)
  while H ≠ ∅ do
    // Optimize here: make heapify stop at the first y with EC_y = 0
    (x, prev_cx, hierarchy_level, expansion_level) ← pop(H)
    if EC_x < ε then stop      // Because remaining nodes are also below ε (a fair saving)
    // Jump limit stop condition
    if expansion_level < n_j do
      for (r,y) ∈ N[x] do
        // Hierarchy level stop condition
        if C_y = 0 and EC_y < 1 and (r ∉ hierarchy(O) or hierarchy_level < n_h) then
          prev_cy ← EC_y
          // Undo last update from x
          // Use fan-out expansion decrease factor
          EC_y ← (EC_y − (w(r,x,y) · w_e(x,n_e)) · power(x) · prev_cx) /
                 (1 − (w(r,x,y) · w_e(x,n_e)) · power(x) · prev_cx)
          EC_y ← EC_y + (1 − EC_y) · (w(r,x,y) · w_e(x,n_e)) · power(x) · EC_x
          // Update condition variables
          expansion_level++
          if r ∈ hierarchy(O) then hierarchy_level++
          // Optimize again: push starts from the first z with EC_z > 0
          push (H, y, prev_cy, hierarchy_level, expansion_level)
Figure 10. Parameter optimized variation with priority queue




The spreading activation algorithm is rich in parameters, which normally have to be
set according to the ontology or ontologies used for the preference expansion.
Ontologies vary in structure and definition: specialized ontologies usually have a
high depth, while general ontologies usually have a high number of topic concepts,
with a high fan-out for every topic. A summary of these parameters can be found in
Table 3.
w(r,x,y) / w(r)
  Probability that a concept y is relevant for the current context if we know that a
  concept x is in the context or user profile, and r(x,y) holds. It can also be seen as
  the power of preference/context propagation that the relation r has for concepts x
  and y. This is perhaps the most important parameter of the CSA algorithm, and also
  the most difficult one to set. In our experiments (see section 6) these values were
  empirically fixed for every property in the ontology, without taking into account the
  concepts involved in the relation; this can be expressed as w(r). Future work will
  study the propagation power as a function of the involved concepts, applying
  techniques of semantic relatedness between two concepts of the same ontology.
power(x)
  The power of preference/context propagation that a single concept x has. By default
  equal to 1.
ε
  The threshold of Extended Context (EC) or Extended Preference (EP) value that a
  concept must reach in order to be expanded to related concepts. This value manages
  the trade-off between the performance of the algorithm and having meaningful values;
  small absolute values have a smaller impact on the personalization of the retrieval
  system.
n_j
  Maximum number of expansions that the algorithm performs from a single concept.
w_e(x,n_e)
  Reduction factor w_e of the extended context/preference of x applied to a node with
  a fan-out of n_e.
n_h
  Maximum number of times that a concept can be generalized.
Table 3. Spreading activation algorithm parameters
5.5.2. Comparison to classic CSA
The proposed algorithm is a variation of the constrained spreading activation (CSA)
technique originally proposed by Salton [99]. Here EC corresponds to I, and C
corresponds to the initial values of I when the propagation starts. The output function f
is here a threshold function, i.e. O_j = I_j if I_j > ε, and 0 otherwise. The activation
threshold k_j, here ε, is the same for all nodes. In practice, we use a single property,
EC, for both O and I.




Instead of making I_j = Σ_i O_i · w_{i,j}, we compute I_j = R_i { O_i · w_{i,j} }, whereby
I_j is normalized to [0,1], and corresponds to the probability that node j is activated,
in terms of the probability of activation of each node i connected to j. Here w_{i,j} is
w(r) · power(i), where r(i,j) for some r∈R′, and the power property of i is a
node-dependent propagation factor. Since the graph is a multigraph, because there may
be more than one relation between any two nodes, w_{i,j} has to be counted as many times
as there are such relations between i and j.
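The R-combination above is the probabilistic sum. A few lines of illustrative Python (the function names are ours, not from the thesis) show that the closed form 1 − Π(1 − v_i) and the incremental update I ← I + (1 − I)·v applied by the expansion algorithm coincide, and stay within [0,1] where a plain weighted sum would not:

```python
from functools import reduce

def prob_sum(values):
    """R-combination of activations: 1 - prod(1 - v), always in [0, 1]."""
    return 1.0 - reduce(lambda acc, v: acc * (1.0 - v), values, 1.0)

def prob_sum_incremental(values):
    """The same combination, written as the algorithm applies it."""
    I = 0.0
    for v in values:
        I = I + (1.0 - I) * v  # each contribution fills part of the remaining gap
    return I
```

For example, contributions 0.8 and 0.6 give a plain sum of 1.4, outside [0,1], but a probabilistic sum of 0.92 under either form.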
The termination condition, excluding the optimization parameters, is that no node with
I_j > ε remains that has never been expanded.
Finally, the propagation to (and therefore through) the set of initial nodes (the ones
that have a non-zero initial input value) is inhibited.
Regarding the spreading constraints, in the current version of our algorithm we use:
• Distance constraint.
• Fan-out constraint.
• Path constraint: links with high weights are favored, i.e. traversed first.
• Activation constraint.
5.6. Semantic preference expansion
In real scenarios, user profiles tend to be very scattered, especially in those
applications where user profiles have to be manually defined, since users are usually
not willing to spend time describing their detailed preferences to the system, even less
to assign weights to them, especially if they do not have a clear understanding of the
effects and results of this input. Even when an automatic preference learning algorithm
is applied, usually only the main characteristics of user preferences are recognized,
thus yielding profiles that may lack expressivity.
In order to address this limitation, two approaches may be followed: a) the extension
of preferences through ontology relations, following the same approach, and even the
same algorithm, that is used to expand the runtime context; and b) a context-sensitive
expansion, based on the representation and exploitation of the initial set of user
preferences. Both approaches are described here as alternatives towards a common goal.
Further insights drawn from theoretical analysis, as well as the observations
concerning performance and trade-offs from the experimental results of future
implementation work and testing, shall provide further grounds for the analysis and
evaluation of the two approaches.
5.6.1. Stand-alone preference expansion
The first proposed approach for preference extension follows the same mechanism
that was used for the extension of the semantic context, as described in section 5.5.
The main difference is that here relations are assigned different weights w′(r,x,y) for
propagation, since the inferences one can make on user preferences, based on the
semantic relations between concepts, are not necessarily the same as one would make
for contextual relevance.
For instance, if a user is interested in the sport of Sumo, and the concept Japan is in
the context, we assume it is appropriate to activate the specific user interest for Sumo,
to which the country has a strong link: we have nationalSport(Japan, Sumo), and
w(nationalSport, Japan, Sumo) is high. However, if a user is interested in Japan, we
believe it is not necessarily correct to assume that her interest in Sumo is high, and
therefore w′(nationalSport, Japan, Sumo) should be low. In general, it is expected that
w′(r,x,y) ≤ w(r,x,y), i.e. user preferences are expected to have a shorter expansion than
the context.
Given an initial user preference P, the extended preference vector EP is defined by:
  EP_y = P_y,                                            if P_y > 0
  EP_y = R_{x∈O, r∈R′, r(x,y)} ( EP_x · w′(r,x,y) · power(x) ),  otherwise

Equation 7. Expanded preference vector
This is equivalent to Equation 6, where EC, C and w have been replaced by EP, P and
w′, and the variable t has been removed, since long-term preferences are taken to be
stable along a single session.
5.7. Contextual activation of preferences
After the context is expanded, only the preferred concepts with a non-zero context
value will count for personalization. This is done by computing a contextual
preference vector CP, defined by CP_x = EP_x · EC_x for each x∈O, where EP is the vector
of extended user preferences. Now CP_x can be interpreted as a combined measure of
the likelihood that concept x is preferred and how relevant the concept is to the current
context. Note that this vector is in fact dependent on user and time, i.e. CP(u, t). This
can be seen as the intersection of expanded preferences and context (see Figure 11).
[Figure: diagram over the space of domain concepts, showing the initial runtime
context inside the extended context, the initial user preferences inside the extended
semantic user preferences, and the contextualized user preferences as the intersection
of extended context and extended preferences at time t.]
Figure 11. Semantic intersection between preferences and context
Note also that at this point we have achieved a contextual preference mapping Φ as
defined in Section 5.3, namely Φ(P(u), C(u,t)) = CP(u,t), where P(u) |= Φ(P(u), C(u,t)),
since CP_x(u,t) > P_x(u,t) only when EP_x(u) has been derived from P(u) through the
constrained spreading expansion mechanism, and CP_x(u,t) < EP_x(u).
5.8. Personalization in context
Finally, given a document d∈D, the personal relevance measure PRM of d for a user u
at a given instant t in a session is computed as prm(d, u, t) = cos (CP(u, t − 1), M(d)),
where M(d) ∈ [0,1]^|O| is the semantic metadata concept-vector of the document³. This
prm function is combined with the query-dependent, user-neutral rank value to produce
the final rank value for the document:
  score (d, q, u, t) = f (prm (d, u, t), sim (d, q))
The similarity measure sim(d, q) stands for any ranking technique that ranks documents
with respect to a query or request. In general, the combination above can be used to
introduce a personalized bias into any ranking technique that computes sim(d, q), such
as common query- and content-based algorithms: keyword search, query by example,
metadata search, etc.
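The combination function f is left unspecified in the text; as one simple illustrative choice (not prescribed by the thesis), the sketch below uses a linear blend with a hypothetical parameter `lam`, and computes prm as the cosine between sparse concept vectors represented as dicts:

```python
import math

def cosine(u, v):
    """Cosine between two sparse concept vectors (dicts: concept -> weight)."""
    dot = sum(u[c] * v.get(c, 0.0) for c in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def score(d_meta, sim_dq, CP, lam=0.5):
    """Final rank value: f(prm, sim) as a linear blend (lam is an assumption;
    the thesis does not fix f). d_meta is the concept-vector M(d) of the
    document, sim_dq the query-dependent score sim(d, q)."""
    prm = cosine(CP, d_meta)          # personal relevance measure
    return lam * prm + (1 - lam) * sim_dq
```

For instance, a document whose metadata vector exactly matches the contextualized preferences (prm = 1) and has sim(d,q) = 0.5 would score 0.75 with lam = 0.5.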
5.9. An example use case
As an illustration of the application of the contextual personalization techniques,
consider the following scenario: Sue is subscribed to an economic news content
provider. She works for a major food company, so she has preferences for news
related to companies in this sector, but she also tries to stay up to date on
technology companies, as her company is trying to apply the latest technologies to
optimize its food production chain. Sue is planning a trip to Tokyo and Kyoto,
in Japan. Her goal is to take ideas from the production chains of several Japanese
partner companies. She has to gather documentation about different companies in Japan,
so she accesses the content provider and begins a search session.
Let us assume that the proposed framework has learned some of Sue's preferences
over time, or that Sue has manually added some preferences to the system, i.e. Sue's
profile includes weighted semantic interests for domain concepts of the ontology. These
include several companies from the food, beverage and tobacco sector, and also
several technology companies. Only the relevant concepts have been included, and
all the weights have been set to 1.0 to simplify the example. This defines the
P(u_Sue) vector, shown in Table 4.



³ Note that in order to contextualize user preferences at instant t, the context is taken one
step earlier, i.e. at t − 1. This means that the last request (the one currently being responded
to by the system) is not included in the context. Otherwise, the last request would contribute
twice to the ranking: once through sim(d, q), and again via prm(d, u, t).




Table 4. Sue's preferences

  P(u_Sue)
  Yamazaki Baking Co.    1.0
  Japan Tobacco Inc.     1.0
  McDonald's Corp.       1.0
  Apple Computers Inc.   1.0
  Microsoft Corp.        1.0
In our approach, these concepts are defined in a domain ontology containing other
concepts and the relations between them, a subset of which is exemplified in Figure
12.
[Figure: ontology fragment with the regions Japan, Tokyo and Kyoto (Tokyo and Kyoto
subRegionOf Japan); the companies Yamazaki Baking and Japan Tobacco, locatedIn Japan;
the person Makoto Tajima, withinOrganization Japan Tobacco; Apple Inc., McDonald's
Corp. and Microsoft Corp. with their brands Apple, McDonald's and Microsoft
(ownsBrand); and the sectors Food, Beverage & Tobacco and Technology, linked to the
companies through activeInSector.]
Figure 12. A subset of a domain ontology concepts involved in the use case.
The propagation weight manually assigned to each semantic relation is shown in Table
5. The weights were initially set by manually analyzing and checking the effect of
propagation on a list of use cases for each relation, and were tuned empirically
afterwards. Investigating methods for automatically learning these weights is an open
research direction for our future work.






Table 5. Propagation weights of semantic relations

  Relation r           w(r)   w(r⁻¹)
  locatedIn            0.0    0.6
  withinOrganization   0.5    0.5
  subRegionOf          0.6    0.9
  activeInSector       0.3    0.8
  ownsBrand            0.5    0.5
When Sue enters a query q_1 (the query-based search engine can be seen essentially
as a black box for our technique), the personalization system adapts the result ranking
to Sue's preferences by combining the query-based sim(d,q) and the preference-based
prm(d,u_Sue) scores for each document d that matches the query, as described in Section
4.2. At this point, the adaptation is not contextualized, since Sue has just started the
search session and the runtime context is still empty (i.e. at t = 0, C(u, 0) = ∅).
But now suppose that the information need expressed in q_1 is somehow related to
the concepts Tokyo and Kyoto, as Sue wants to find information about the cities she is
visiting. She opens and saves some general information documents about the lifestyle
and economy of these two cities.
As a result, the system builds a runtime context out of the metadata of the selected
documents and the executed query, including the elements shown in Table 6. This
corresponds to the C vector (which for t = 1 is equal to Req(t)), as defined in section 5.4.
Table 6. Context vector

  C(u_Sue, 1)
  Tokyo   1.0
  Kyoto   1.0
Now, Sue wants to see some general information about Japanese companies. The
contextualization mechanism comes into play, as follows.
1. First, the context set is expanded through semantic relations from the initial
   context, adding more weighted concepts, shown in bold in Table 7. This
   makes up the EC vector, following the notation used in Section 5.5.

Table 7. Expanded context

  EC(u_Sue, 1)
  Tokyo                                        1.0
  Kyoto                                        1.0
  Japan                                        0.88
  Japan Tobacco Inc.                           0.55
  Yamazaki Baking Co.                          0.55
  Food, Beverage & Tobacco (F, B & T) Sector   0.68
  Person:'Makoto Tajima'                       0.23
2. Similarly, Sue's preferences are extended through semantic relations from
   her initial ones, as shown in section 5.6. The expanded preferences, stored in
   the EP vector, are shown in Table 8, where the new concepts are in bold.
Table 8. Extended user preferences

  EP(u_Sue)
  Yamazaki Baking Co.    1.0    Brand:'Microsoft'        0.5
  Japan Tobacco Inc.     1.0    Brand:'McDonald's'       0.5
  McDonald's Corp.       1.0    Person:'Makoto Tajima'   0.5
  Apple Computers Inc.   1.0    Brand:'Apple'            0.5
  Microsoft Corp.        1.0    F, B & T Sector          0.70
3. The contextualized preferences are computed as described in section 5.7,
   by multiplying the coordinates of the EC and the EP vectors, yielding the CP
   vector shown in Table 9 (concepts with weight 0 are omitted).
Table 9. Contextualized user preferences

  CP(u_Sue)
  Japan Tobacco Inc.    0.55
  Yamazaki Baking Co.   0.55
  F, B & T Sector       0.48
  Makoto Tajima         0.12
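The CP vector in Table 9 is simply the coordinate-wise product of Tables 7 and 8, which can be checked in a few lines of Python (the dictionaries transcribe those tables; Table 9 shows the products rounded to two decimals):

```python
EC = {  # Table 7: expanded context EC(u_Sue, 1)
    'Tokyo': 1.0, 'Kyoto': 1.0, 'Japan': 0.88,
    'Japan Tobacco Inc.': 0.55, 'Yamazaki Baking Co.': 0.55,
    'F, B & T Sector': 0.68, 'Makoto Tajima': 0.23,
}
EP = {  # Table 8: extended preferences EP(u_Sue)
    'Yamazaki Baking Co.': 1.0, 'Japan Tobacco Inc.': 1.0,
    "McDonald's Corp.": 1.0, 'Apple Computers Inc.': 1.0,
    'Microsoft Corp.': 1.0, "Brand:'Microsoft'": 0.5,
    "Brand:'McDonald's'": 0.5, 'Makoto Tajima': 0.5,
    "Brand:'Apple'": 0.5, 'F, B & T Sector': 0.70,
}
# CP_x = EP_x * EC_x; preferred concepts out of context (EC_x = 0) drop out
CP = {c: EP[c] * EC[c] for c in EP if EC.get(c, 0.0) > 0}
```

This yields exactly the four concepts of Table 9: 0.55 and 0.55 for the two Japanese companies, 0.70 × 0.68 = 0.476 (≈ 0.48) for the sector, and 0.5 × 0.23 = 0.115 (≈ 0.12) for Makoto Tajima.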
Comparing this to the initial preferences in Sue's profile, we can see that Microsoft,
Apple and McDonald's are disregarded as out-of-context preferences, whereas Japan
Tobacco Inc. and Yamazaki Baking Co. have been added because they are strongly
semantically related both to Sue's initial preferences (food sector) and to the
current context (Japan). Figure 13 visually shows the whole expansion and preference
contextualization process for the presented use case.




[Figure: the same ontology fragment as Figure 12, annotated with the initial runtime
context (Tokyo, Kyoto), the extended context, the initial user preferences, the
extended user preferences, and the resulting contextualized user preferences at their
intersection.]

Figure 13. Visual representation of the preference contextualization
4. Using the contextualized preferences shown in Table 9, a different
   personalized ranking is computed in response to the current user query q_2,
   based on the CP(u_Sue,1) vector instead of the basic P(u_Sue) preference vector,
   as defined in section 5.7.
This example illustrates how our method can be used to contextualize the
personalization in a query-based content search system, where the queries could be of
any kind: visual, keyword-based, or natural language queries. The approach could
similarly be applied to other types of content access services, such as personalized
browsing of multimedia repositories, automatic generation of a personalized slideshow,
or generation of personalized video summaries (where video frames and sequences would
be treated as retrieval units).




6. Experimental work
Evaluating interactive retrieval engines is known to be a difficult and expensive task
[130, 131]. The classic IR evaluation model, known as the Cranfield-style evaluation
framework [35], specifies 1) a document collection, 2) topic descriptions that express
an information need, and 3) explicit relevance assessments made by those who generated
the topic descriptions. The only user information in the classic model is the topic
itself, so the evaluation framework needed an extension, adding an interaction model
between the user and the retrieval system [26, 116, 123, 125]. The evaluation is then
made following the interaction model given a topic, which is called the user's task.
This interaction model is generally obtained from real user interaction (explicit) or
from query log analysis (implicit).
6.1. Evaluation of Interactive Information Retrieval Systems:
An Overview
We can broadly group user evaluation approaches into those that use real users, which
we will call user centered approaches, and those that do not interact directly with
users, or data driven approaches. Within these two groups there is a broad spectrum of
evaluation techniques. User centered approaches include user questionnaires [47, 82,
126], side-by-side comparisons [116], and explicit relevance assessments [57]. Data
driven approaches normally exploit query log analysis [46, 116] or test collections
[100]. The following subsections summarize the main characteristics of interactive
retrieval evaluation. Table 10 gives a brief classification of the studied techniques.



















Table 10. Summary of complete evaluation systems

  Reference   Corpus    Task    Assessments         Metric
  [110]       Web       Fixed   Explicit            R-Precision
  [46]        Web       Free    Implicit            Rank scoring
  [29, 30]    Web       Fixed   Explicit            P@10
  [101]       Web       Fixed   Explicit            P@n
  [100]       Corpus    Fixed   Fixed, corpus       MAP
  [120]       Web       Fixed   Users               Non-standard
  [82]        Web       Free    None                Questionnaire
  [116]       Web       Free    Explicit/Implicit   Side-by-side
  [57]        Web       Fixed   Explicit            Non-standard
  [76]        Web       Fixed   Explicit            P@n
  [47]        Desktop   Free    None                Questionnaire

6.1.1. User centered evaluation
Borlund suggested an evaluation model for IIR (Interactive Information Retrieval)
centered on user evaluation [26]. The author proposed a set of changes to the classic
Cranfield evaluation model. First, the topics or information needs should be more
focused on the human evaluators, who should develop individual and subjective
interpretations of the information need. Second, the evaluation metrics should capture
the subjective relevance perceived by the users. For the first part, Borlund suggested
the simulated situation, which is composed of a simulated work task situation: a short
"cover story" that describes the situation (i.e. context) that leads to an individual
information need, and an indicative request, which is a short suggestion to the testers
on what to search for. A simulated situation tries to trigger a subjective, but
simulated, information need and to construct a platform against which relevance in
context can be judged. An example of a simulated situation can be found in Figure 14.
For the second part, evaluation metrics, Borlund suggests Retrieval Relevance (RR),
which adds an additional component of subjective relevance. The RR is composed of
two relevance values: one regarding the relevance of the document to the topic (which
can be calculated over a prefixed corpus), and a subjective relevance, given by the
user, taking into consideration the simulated situation. Borlund states that the classic
binary relevance judgment, where documents are evaluated as relevant or not
relevant, is not sufficient for IIR evaluation, where contextual relevance has to be
taken into account. Thus, the RR uses multiple-degree relevance assessments. This
tendency can also be observed in other publications [57, 67, 76] as well as in the
interactive text retrieval evaluation community [14].











Simulated situation
Simulated work task situation: After your graduation you will be looking for a job in
industry. You want information to help you focus your future job seeking. You know it
pays to know the market. You would like to find some information about employment
patterns in industry and what kind of qualifications employers will be looking for
from future employees.
Indicative request: Find, for instance, something about future employment trends in
industry, i.e., areas of growth and decline.

Figure 14. Example of a simulated situation/simulated work task situation [26]

Some user centered evaluations [47, 82] use test questionnaires and personal
interviews before and after the human tester interacts with the retrieval engine. For
instance, the evaluation of the Stuff I've Seen application [47] included pre- and
post-usage questionnaires, filled in by the users before and over a month after they
started interacting with the application. The questionnaires had both quantitative (e.g.
"How many times did you use the system yesterday?") and qualitative (e.g. "Do you
quickly find web pages that you have visited before?") questions. The post-usage
responses were then compared with the pre-usage ones. Questionnaires give more direct
feedback to the evaluators, but require considerable effort to collect and analyze the
results; interviews are also highly time consuming.
Another type of user centered evaluation uses the testers to create the interaction
model with the system. The interaction model can be based on given topics (or
simulated situations), or the users can interact freely with the system. In the
evaluations of [29, 76, 101, 110], the users have to perform a set of query topics (in
[101] the authors used the topics from the TREC interactive evaluation track⁴). For
each topic the users perform a free number of queries, in what is known as a search
session. At the end of the process the users explicitly judge the relevance of each
search result (results can also be mixed with ones from the baseline search system).
The result set from the final iteration is then evaluated and compared against a
baseline result set. The baseline is normally a Web search engine, since the system
usually acts as a proxy of a popular search engine. In [120] the users have to interact
with desktop applications, as this is how the system gets session-related information
from the user. The users perform tasks that stimulate interaction with the desktop
environment, like writing an essay. At the end of the experiment, the users try the
contextualization of a Web search engine and rank the relevance of each result.
Finally, the mean relevance metric is compared against the unordered results.
Regarding free interaction evaluation, Thomas and Hawking presented a side-by-side
evaluation schema [116] capable of collecting implicit feedback actions and of
evaluating interactive search systems. This evaluation schema consisted of a two-panel
search page with a single query input. The users substitute their favorite search engine
with the evaluation UI, and thus interact with both search engines under evaluation.
The system was capable of collecting implicit feedback measures, i.e. of creating an
interaction model for interactive and context-aware system evaluations. Finally, the
users have to provide relevance assessments for the results returned by the system
under evaluation. The relevance assessments can be collected explicitly or through
implicit feedback techniques. Explicit relevance assessments require a considerable
amount of effort from the testers. To alleviate this, some evaluations base the
assessments on implicit feedback techniques, where the relevance of a result is
inferred automatically by monitoring the user interaction: whether the user clicked a
result, the time the user spent viewing the content, the amount of scrolling, and so
on. The use of implicit feedback for the generation of relevance judgments has been
shown to be reasonably accurate: Thomas and Hawking found that implicit relevance
judgments were consistent with explicit feedback evaluations 68-87% of the time [116].

⁴ Evaluation track for interactive IR systems: http://trec.nist.gov/data/interactive.html




6.1.2. Data driven evaluation
Data driven evaluations construct the interaction model without the explicit
involvement of a human subject. The advantage of data driven evaluation is that, once
the dataset is prepared, the corpus can easily be reused for other evaluations. One of
the main problems evaluators face arises when the Web is used as the document corpus:
the Web is highly dynamic, so the evaluation has to take place as close as possible to
the time the interaction model was created, since the model strongly depends on the
search results that were displayed to the user.
To ease the construction of interactive evaluation corpora, White [123] presented a
model for the automatic creation of interaction models, which can take into account
parameters such as how many relevant documents are visited or the user's
"wandering" behavior. These models are optimized with real user data.
Another solution is to exploit the query logs of user interactions with a publicly
available search engine, by means of search proxies. The information in the query
logs can range from clickthrough data to the documents opened by the user, the time
spent viewing a document, or the applications opened. In [46], different personalization
and contextualization approaches are evaluated by analyzing a large query log
history. This log history, obtained from the MSN search engine, contains information
such as a cross-session user id (through cookie information), the users' queries and
the clickthrough information. The authors are then able to infer implicitly which results
are relevant to the users, by globally analyzing the query and clickthrough information
of a large set of users. In short, the authors compensate for the uncertainty of implicit
relevance assessments with a large base of users. Having extracted this relevance
criterion from the so-called training set, the authors can then evaluate the
effectiveness of the different approaches by computing the precision of the result set
reordered by each personalization algorithm.
6.1.3. Evaluation metrics
There are numerous metrics in the literature for classic IR evaluation, and IIR authors
normally apply these kinds of measures [29, 30, 100, 101, 110]. These metrics rely on
relevance assessments for the returned results. Whereas in classic evaluation the
relevance assessments are based on the relevance of the result to the query, authors
of IIR systems incorporate subjective human assessments, either implicitly, by
analyzing interaction logs, or explicitly, by asking users to rate the results (relevance-
based metrics) or to provide a "best" order (ranking position based metrics).
One of the best-known relevance-based metrics in the IR community is Precision and
Recall (PR) [20], which considers the proportion of retrieved documents that are
relevant (precision) and the proportion of relevant documents that are retrieved
(recall), computing a vector of precision values at 11 fixed recall points (from 0 to
100% of relevant documents retrieved, in increments of 10%).
Precision at n (P@n), or cut-off points, is another metric quite common in the IIR state
of the art [29, 76, 101]. P@n is the ratio of relevant documents among the first n
retrieved documents. Another de facto standard metric is mean average precision
(MAP), the average of the 11 fixed precision values of the PR metric, used for
instance in the evaluation of [100]. Other authors use non-standard (or uncommon)
metrics, such as the number of relevant documents in the first 10 results [57], or the
average of the users' explicit assessments, given as values between 1 and 5 [120]. In
ranking position metrics, users inspect the returned result set and indicate what they
consider the best subjective order; a distance measure (e.g. K-distance) is then used
to calculate the performance of the retrieval system.
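As a concrete illustration, P@n and the common non-interpolated form of average precision can be computed as follows; the ranked list and relevance judgments are invented, and note that the thesis uses the 11-point interpolated variant when reporting MAP:

```python
def precision_at_n(results, relevant, n):
    """Ratio of relevant documents among the first n retrieved."""
    return sum(1 for d in results[:n] if d in relevant) / n

def average_precision(results, relevant):
    """Mean of P@k taken at each rank k where a relevant document appears."""
    hits, score = 0, 0.0
    for k, d in enumerate(results, start=1):
        if d in relevant:
            hits += 1
            score += hits / k
    return score / len(relevant) if relevant else 0.0

# MAP is simply the mean of average_precision over a set of queries.
ranked = ["d3", "d1", "d7", "d2"]       # hypothetical ranked result list
rels = {"d1", "d2"}                     # hypothetical relevance judgments
print(precision_at_n(ranked, rels, 2))  # 0.5
print(average_precision(ranked, rels))  # 0.5
```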
Some authors have focused on new metrics for IIR evaluations. Borlund adapted
classic metrics to add the concepts of relevance to the context and multi-degree
relevance assessments [26]. Two of the metrics presented were the Relative
Relevance (RR) and the Half Life Relevance (HLR), which added multi-valued
relevance assessments and were based on vectors of two relevance values, one
concerning the algorithm itself (classic ranking) and the other the subjective relevance
given by the user. Järvelin & Kekäläinen created two new evaluation metrics, CG
(Cumulative Gain) and DCG (Discounted Cumulative Gain) [67]. These take into
consideration the position of each relevant document, giving more importance to
highly relevant documents (i.e. relevant to both topic and user context) that appear in
the first positions, as those are the most likely to be noticed by the user. In order to
validate their implicit evaluation methods, Dou et al. used rank scoring and average
rank metrics [46], which originate in personalized recommendation evaluations. These
metrics measure the quality of a result list presented by a recommendation system;
both are computed using the clickthrough data and the positions of the documents in
the result set.
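A minimal sketch of the cumulated-gain measures, with invented graded relevance values; the discount follows the common log-base-2 formulation, under which the first two ranks are left undiscounted:

```python
import math

def cumulative_gain(gains):
    """CG: sum of the graded relevance values of the ranked results."""
    return sum(gains)

def dcg(gains):
    """DCG with a log2 discount; ranks 1 and 2 are left undiscounted."""
    return sum(g / max(1.0, math.log2(i)) for i, g in enumerate(gains, start=1))

graded = [3, 2, 3, 0, 1]        # hypothetical graded relevance of top-5 results
print(cumulative_gain(graded))  # 9
print(round(dcg(graded), 3))    # 7.323
```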




6.1.4. Evaluation corpus
Evaluation corpora are extremely hard to prepare; this is true in IR evaluation and
even more so in IIR evaluation, as the interaction model has to be incorporated, either
fixed alongside the corpus [100] or as a topic description that can "guide" the user's
interaction with the system [26]. Most authors prefer either a free corpus (the Web, via
search proxies, or local repositories such as desktop files) or to use or adapt an
existing corpus from the IR community [100], such as the corpora of the TREC
conference.
The Text REtrieval Conference (TREC)5 is an annual international conference whose
purpose is to support large-scale evaluation within the Information Retrieval
community. Each year, the conference consists of a set of tracks, each focusing on a
specific area of information retrieval. The normal procedure for participating in TREC
is to select a track, obtain the corpus assigned to that track, run the list of topics (i.e.
search queries) on the retrieval engine, and send the results to the task committee,
where they are evaluated. The relevance judgments are obtained with a pooling
method: the first 100 results for every topic of every submitted search engine are
evaluated, rather than the whole collection, which, given the size of most of the
datasets (some exceeding 20M documents), is simply not feasible.
The TREC Interactive Track researches user interaction with retrieval engines. The
topics are executed by human users, and different interactions with the system are
performed, such as query reformulation or explicit feedback (depending on the
protocol of each year). The interactions, however, are not stored or added to the
evaluation framework; the results are thus less useful, as they lack the interaction
model data. There is also an interactive task for video retrieval, TRECVID, originally a
track of TREC and now an independent conference. In its interactive task, the
experimenter can refine queries and modify the ranked result list after the initial query.
Interactive search still performs significantly better than the automatic or manual
query tasks.
The HARD Track [14] investigates the use of knowledge about the user's context in
information retrieval systems. Topics are expressed similarly to other TREC tracks,
except that

5 http://trec.nist.gov/




additional metadata on the "context" of the user is given. This metadata includes the
desired genre of the returned results, the purpose of the search, or a specification of
the geographic focus of documents. The relevance judgments take the context
described in the metadata into consideration, judging a document as non-relevant (to
the topic), soft relevant (i.e. on topic but not relevant to the context), or hard relevant
(relevant to both topic and context).
In [100] the authors adapted a TREC dataset, taking advantage of the fact that
relevance assessments and topic definitions already existed for the document
database. As this corpus lacked an interaction model, the authors constructed one by
monitoring three human subjects searching for the evaluation topics.
6.2. Our Experimental Setup
The contextualization techniques described in the previous sections have been
implemented in an experimental prototype, and tested on a medium-scale corpus. A
formal evaluation of the contextualization techniques may require a significant amount
of extra feedback from users in order to measure how much better a retrieval system
can perform with the proposed techniques than without them. For this purpose, it would
be necessary to compare the performance of retrieval a) without personalization, b)
with simple personalization, and c) with contextual personalization. This requires
building a testbed consisting of: a document corpus, a set of task descriptions, the
relevance judgments for each task description, and the interaction model, either fixed
or provided by the users.
The document corpus consists of 145,316 documents (445MB) from the CNN web
site6, plus the KIM domain ontology and knowledge base publicly available as part of
the KIM Platform, developed by Ontotext Lab7, with minor extensions. The KB
contains a total of 281 RDF classes, 138 properties, 35,689 instances, and 465,848
sentences. The CNN documents are annotated with KB concepts, amounting to over
three million annotation links. The relation weights R and R⁻¹ were first set manually
on an intuitive basis, and tuned empirically afterwards by running a few trials.

6 http://dmoz.org/News/Online_Archives/CNN.com
7 http://www.ontotext.com/kim




Task descriptions will be similar to the simulated situations explained in section 6.1.1;
the goal of a task description is to provide sufficient query and contextual information
for the user to perform the relevance assessments and to stimulate interaction with
the evaluation system.
As in the TREC HARD track (see 6.1.4), we will use real users who will provide
explicit relevance assessments with respect to a) query relevance, b) query relevance
and general user preference (i.e. regardless of the task at hand), and c) query
relevance and specific user preference (constrained to the context of her task).
The selected metrics are precision and recall values for single-query evaluation, and
average precision and recall plus mean average precision for whole-system
performance evaluation. Average precision and recall is the average value of the PR
points over a set of n queries. PR values were chosen because they allow a finer
analysis of the results: the values can be represented in a graph and different levels of
performance can be compared at once. For instance, a retrieval engine can have
good precision, showing relevant results in the top 5 documents, while the same
search system lacks good recall performance, being unable to find a good proportion
of the relevant documents in the search corpus; see Figure 15 for a visual depiction of
the precision and recall areas.
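The 11-point curves plotted in such graphs can be computed as in the following sketch (the ranked list and relevance judgments are invented): for each recall level r in {0.0, 0.1, ..., 1.0}, the interpolated precision is the maximum precision observed at any recall greater than or equal to r.

```python
def eleven_point_precision(results, relevant):
    """Interpolated precision at recall levels 0.0, 0.1, ..., 1.0."""
    hits, points = 0, []  # (recall, precision) observed at each relevant hit
    for k, d in enumerate(results, start=1):
        if d in relevant:
            hits += 1
            points.append((hits / len(relevant), hits / k))
    return [max((p for r, p in points if r >= i / 10), default=0.0)
            for i in range(11)]

ranked = ["d1", "d5", "d2", "d9"]   # hypothetical ranked result list
curve = eleven_point_precision(ranked, {"d1", "d2"})
print(curve)  # precision 1.0 up to recall 0.5, then 2/3 up to recall 1.0
```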
Since the contextualization techniques are applied in the course of a session, one way
to evaluate them is to define a sequence of steps where the techniques are put to
work. This is the approach followed in the presented set of experiments, for which the
task descriptions consist of a fixed set of hypothetical context situations, detailed step
by step.




[Figure 15: precision vs. recall plot, both axes from 0.0 to 1.0, marking the areas of
good precision performance, good recall performance, and the optimal-results region
in the upper right.]
Figure 15. Different areas of performance of a precision and recall curve. The goal is to reach
the upper right area, which denotes that the search engine has good precision over all the
results and achieves a good recall of documents.
6.3. Scenario Based Testing: a Data Driven Evaluation
Although subjective, these experiments allow meaningful observations and test the
feasibility, soundness, and technical validity of the defined models and algorithms.
Since the interaction model is fixed, a finer evaluation of the results can be performed,
along with a more fine-grained understanding of the proposed system's behavior,
which is critical given the large number of parameters and factors that affect the final
ranking of search results.
The retrieval system used for this experiment is a semantic search engine co-
developed by the author [33], which did not itself implement any personalization
capability. The retrieval system was adapted to include: 1) a user profile editor, 2)
personalization capabilities, 3) the context monitor engine, 4) the semantic expansion
module, and 5) the contextual personalization algorithm.
The user is able to input a keyword search or what we call a semantic or ontology-
based search, using the Knowledge Base (KB) relations [33]. Figure 16 shows the
main window of the evaluation and retrieval system.




[Figure 16 screenshot: main window, with labeled components for the document
viewer, result summaries, keyword query input, semantic query input, personalization
slider, and semantic query dialog.]

Figure 16. Main window of the evaluation system
The profile editor allowed manually selecting single concepts as user preferences.
Exploiting the semantic descriptions of the KB, users were also able to create what
we called complex preferences, i.e. preferences based on restrictions, such as "I like
airline companies" or "I don't like tech companies that trade on the New York Stock
Exchange", thus avoiding having to search for and manually select those concepts as
preferences. Figure 17 shows the main components of the profile editor UI.




[Figure 17 screenshot: profile editor, with labeled components for the ontology
browser, concept preferences, and complex preferences.]

Figure 17. Profile editor UI
The context monitor engine is based on simple implicit feedback techniques,
extracting concepts from input queries (either keyword or semantic) and opened
documents. The extracted concepts are placed on a timeline of actions, as explained
in section 5.4, where a user input query is the unit of time. Finally, the semantic
expansion and user profile contextualization algorithms are applied, using the
contextualized preferences as input to the personalization algorithm (see section
4.3.1). Figure 18 shows the debug evaluation dialog where the contextual and user
profile information is displayed.
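As an illustrative sketch of such a timeline (the decay factor and update rule are our own assumptions, not the exact formulation of section 5.4), concept weights can be attenuated at each query step so that recently observed concepts dominate the context:

```python
DECAY = 0.7  # weight retained per query step (assumed value)

def update_context(context, observed):
    """Advance the timeline one query: decay old weights, merge new concepts."""
    updated = {c: w * DECAY for c, w in context.items()}
    for concept, weight in observed.items():
        updated[concept] = updated.get(concept, 0.0) + weight
    return updated

ctx = {}
ctx = update_context(ctx, {"Japan": 1.0})    # concepts from query 1
ctx = update_context(ctx, {"Company": 1.0})  # concepts from query 2
print(ctx)  # "Japan" decayed to 0.7, "Company" fresh at 1.0
```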





Figure 18. Contextual and personalization information dialog
The user preferences are fixed for the whole experiment, consisting of a set of
weighted concepts manually assigned by the evaluators, some of them chosen for
their relation to the task descriptions. The task descriptions consisted of a short topic,
indicating when a document is relevant to the task, a description indicating when a
document is relevant to the user preferences, and a step-by-step interaction model
(see Figure 19). The latter means considering sequences of user actions defined a
priori, which makes it harder to obtain realistic user assessments, since in principle
the user would need to consider a large set of artificial, complex and demanding
assumptions.

Figure 19. Example of a task description for the use case described in section 5.9, one of the
final set of 10 use cases
The final search results of the last query interaction are presented to the evaluator (a
role we played ourselves), who provides the explicit relevance assessments for the
whole result set. The document corpus, the task descriptions, the fixed interaction
model and the relevance assessments together form a full test collection for the
proposal. This collection was stored in such a way as to facilitate reproduction of the
evaluation and reuse of the test collection.
A final set of ten task descriptions was created as part of the evaluation setup. Figure
20 a) shows the results of this experimental approach for the use case described in
the previous section. This is a clear example where personalization alone would not
give better results, or would even perform worse than non-adaptive retrieval (see the
drop of precision for recall between 0.1 and 0.4 in Figure 20 a)), because irrelevant
long-term preferences (such as, in the example, technological companies not related
to the current user focus on Japan-based companies) would get in the user's way. The
experiment shows how our contextualization approach can avoid this effect and
significantly enhance personalization by removing such out-of-context user interests
and keeping the ones that are indeed relevant to the ongoing course of action.
Task X: Japan Based Companies
Relevancy to task
Relevant documents are those that mention a company that has or has had an
operation based in Japan. The document has to mention the company, it is not
mandatory that the article mentions the fact that the company is based in Japan.
Relevancy to preferences
Consider that the article adjusts to your preferences when one of the mentioned
companies has a positive value in your user profile.
Interaction model
1. Query input[keyword]: Tokio Kyoto Japan
2. Opened document: n=1, docId=345789
3. Opened document: n=3, docId=145623
4. Query input[semantic]: Japanese companies




[Figure 20: two precision vs. recall plots, panels a) and b), each comparing three
curves: Contextual Personalization, Simple Personalization, and Personalization Off.]

Figure 20. Comparative performance of personalized search with and without contextualization,
showing the precision vs. recall curves for a) one of the scenarios, and b) the average over 10
scenarios. The results in graph a) correspond to the query "Companies based in any Japanese
region", from the use case described in Section 5.9.
It can also be observed that the contextualization technique consistently results in
better performance with respect to simple personalization, as seen in the average
precision and recall depicted in Figure 20 b), which shows the average results over
the ten use cases, and in Figure 21, which depicts the MAP histogram comparing
contextualized vs. non-contextualized personalization at retrieval time.





[Figure 21: MAP histogram over the ten use cases; x-axis "Use case number" (1-10),
y-axis "Average Precision (MAP), Context/(Personalization On / Personalization Off)"
from -0.20 to 1.00, with bars for Personalization On and Personalization Off.]
Figure 21. Comparative mean average precision histogram of personalized search with and
without contextualization for the ten use cases. The light-gray bars compare personalized
retrieval in context vs. simple personalized retrieval without context, and the dark-gray ones
compare personalized retrieval in context vs. retrieval without personalization.
7. Conclusions and Future Work
This document introduces a novel technique for retrieval personalization in which
short-term preferences are taken into account not only as another source of
preference, but as a complement to the user profile that aids the selection of those
preferences that will produce reliable, "in context" results. The contributions of this
work can be summarized in four main points:
• A personalization framework based on an ontology-based representation of user
interests. Two of the three main parts of any personalization framework have been
addressed: user profile representation and user profile exploitation. The user profile
representation is based on ontology concepts, which are richer and more precise
than classic keyword-based approaches. The user profile exploitation relies on a
semantic index, where concepts are related to every document in the search
corpus, and exploits the concepts related to items or documents to perform a
semantic matching between the user profile and the document metadata. User
profile learning has been left out of the scope of this work, as it is in itself a wide
and complex area of research [15, 122].




• A context modeling approach for information retrieval. We have proposed an
approach for the automatic acquisition and exploitation of a live user context, by
means of the implicit monitoring of user actions in a search session. Rather than
exploiting context by itself, we introduced the idea of using context to improve
personalization in retrieval systems.
• A method for the combination of long-term and short-term preferences, through the
semantic combination of user preferences and the runtime context. The semantic
combination is performed by adapting a classic constrained spreading activation
algorithm to the semantic relations of the ontological KB.
• Experimental results for the final retrieval system, based on data-driven evaluation
techniques. The experiments were run over a semantic search engine co-developed
by the author [33], using a document corpus with metadata produced by simple
annotation techniques, as annotation itself is outside the scope of this work.
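The constrained spreading activation mentioned above can be sketched as follows; the toy knowledge base, relation weights, decay and pruning threshold are illustrative assumptions, not the thesis's actual KB or parameters:

```python
def spread(graph, seeds, decay=0.5, threshold=0.05, max_hops=3):
    """Constrained spreading activation: propagate the seed activations
    through weighted semantic relations, attenuating at each hop and
    pruning flows that fall below the threshold.
    graph: {concept: [(neighbor, relation_weight), ...]}"""
    activation = dict(seeds)  # initial activation from the runtime context
    frontier = dict(seeds)
    for _ in range(max_hops):
        nxt = {}
        for node, act in frontier.items():
            for neighbor, weight in graph.get(node, []):
                a = act * weight * decay
                if a >= threshold:  # constraint: cut off weak activation
                    activation[neighbor] = activation.get(neighbor, 0.0) + a
                    nxt[neighbor] = nxt.get(neighbor, 0.0) + a
        frontier = nxt
    return activation

# Toy KB: a context concept activates semantically related concepts.
kb = {"Japan": [("JapaneseCompany", 0.8)],
      "JapaneseCompany": [("Toyota", 0.9)]}
act = spread(kb, {"Japan": 1.0})
print(act)  # activation decreases with semantic distance from the seed
```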
Context, as presented here, is understood as the themes and concepts related to the
user's session and current interest focus. In order to obtain these related concepts,
the system first has to monitor the user's interactions with the system. However, when
the user starts a search session, the system does not yet have any contextual
information whatsoever. This problem is known as cold start. The initial solution
adopted is not to filter the user's preferences in the first interaction with the system,
but 1) this produces a decay in the system's performance for that interaction (as seen
in the early experiments), and 2) it assumes that useful context can only be obtained
from interactions with the system itself, which is easily refuted: consider, for instance,
the user's interactions with other applications [47], or implicit contextual information
such as the user's location or agenda. Another limitation of the contextualization
approach is the assumption that consecutive user queries tend to be related, which
does not hold when sudden changes of user focus occur. A desirable feature would
be to automatically detect a user's change of focus [65, 107], and to be able to select
which concepts (if not all) of the current context representation are no longer desirable
for preference filtering.
The experiments shown here are promising, and have been of great help in analyzing
in detail the behavior of the different sub-modules of the system. However, there is a
factor of subjectivity that has to be addressed in future work. An experimentation
approach involving real human subjects would complement the presented data-driven
experiment, giving a more general view of the performance of the retrieval system.
As for any algorithm that exploits semantic metadata, both the quality of the metadata
attached to the documents in the search space and the richness of the representation
of these concepts within the KB are critical to overall performance. The practical
problems involved in meeting these conditions are the object of a large body of
research on ontology construction [108], semantic annotation [45, 74, 91], semantic
integration [70, 88], and ontology alignment [52], and are not the focus of this work.
Since this kind of metadata is still not widespread in common corpora such as the
Web, one solution would be to adapt the technique to folksonomies [105], closely
related to the tag clouds that are now commonplace. Folksonomies are more limited
than ontologies, but their immediate applicability to today's web pages and services
would give valuable feedback on the usefulness of the proposed techniques. Although
the quality of the KB ontology is critical, the experiments were performed with an
ontology semi-automatically created and populated by Web scraping. We were able to
show that, even with this kind of ontology, there is still a significant improvement over
classic IR personalization, without the effort that manual ontology construction (with,
presumably, higher quality) would require.





8. References
1. Amazon. http://www.amazon.com.
2. BroadVision's One-To-One. http://www.broadvision.com.
3. Decipho. http://www.decipho.com.
4. DMOZ Open directory project. http://dmoz.org.
5. Entopia. http://www.entopia.com.
6. Eurekster. www.eurekster.com.
7. Google Co-op. http://www.google.com/coop.
8. Google Personalized Web Search. http://labs.google.com/personalized.
9. iGoogle. Google personalized search & home. http://www.google.com/ig.
10. My Yahoo! Personal Search engine: http://my.yahoo.com.
11. Verity K2. http://www.verity.com.
12. Abowd, G.D., Dey, A.K., Orr, R. and Brotherton, J., Context-awareness in
wearable and ubiquitous computing. in First International Symposium on
Wearable Computers. (ISWC 97), (Cambridge, MA, 1997), 179-180.
13. Akrivas, G., Wallace, M., Andreou, G., Stamou, G. and Kollias, S., Context-
Sensitive Semantic Query Expansion. in ICAIS '02: Proceedings of the 2002
IEEE International Conference on Artificial Intelligence Systems (ICAIS'02),
(Divnomorskoe, Russia, 2002), IEEE.
14. Allan, J., Overview of the TREC 2003 HARD track. in Proceedings of the 12th
Text REtrieval Conference (TREC 2003), (2003).
15. Ardissono, L. and Goy, A. Tailoring the Interaction with Users in Web Stores.
User Modeling and User-Adapted Interaction, 10 (4). 251-303.
16. Ardissono, L., Goy, A., Petrone, G., Segnan, M. and Torasso, P. INTRIGUE:
Personalized recommendation of tourist attractions for desktop and handset
devices Applied Artificial Intelligence, Special Issue on Artificial Intelligence for
Cultural Heritage and Digital Libraries, Taylor, 2003, 687-714.
17. Arvola, P., Junkkari, M. and Kekäläinen, J., Generalized contextualization
method for XML information retrieval. in CIKM '05: Proceedings of the 14th
ACM international conference on Information and knowledge management,
(Bremen, Germany, 2005), ACM, 20-27.
18. Asnicar, F. and Tasso, C., ifWeb: a Prototype of User Model-Based Intelligent
Agent for Document Filtering and Navigation in the World Wide Web. in Proc. of
6th International Conference on User Modelling, (Chia Laguna, Sardinia, Italy,
1997).
19. Badros, G.J. and Lawrence, S.R. Methods and systems for personalised
network searching, US Patent Application 20050131866, 2005.
20. Baeza-Yates, R. and Ribeiro-Neto, B. Modern Information Retrieval. Addison-
Wesley, 1999.
21. Barry, C. User-defined relevance criteria: an exploratory study. J. Am. Soc. Inf.
Sci., 45 (3). 149-159.
22. Bauer, T. and Leake, D., Real time user context modeling for information
retrieval agents. in CIKM '01: Proceedings of the tenth international conference
on Information and knowledge management, (Atlanta, Georgia, USA, 2001),
ACM, 568-570.
23. Bharat, K., SearchPad: explicit capture of search context to support Web
search. in Proceedings of the 9th international World Wide Web conference on




Computer networks: the international journal of computer and telecommunications
networking, (Amsterdam, The Netherlands, 2000), North-Holland, 493-501.
24. Bharat, K., Kamba, T. and Albers, M. Personalized, interactive news on the
Web. Multimedia Syst., 6 (5). 349-358.
25. Billsus, D., Hilbert, D. and Maynes-Aminzade, D., Improving proactive
information systems. in IUI '05: Proceedings of the 10th international
conference on Intelligent user interfaces, (San Diego, California, USA, 2005),
ACM, 159-166.
26. Borlund, P. The IIR evaluation model: a framework for evaluation of interactive
information retrieval systems. Information Research, 8 (3). paper no.152.
27. Brown, P.J., Bovey, J.D. and Chen, X. Context-aware applications: from the
laboratory to the marketplace. Personal Communications, IEEE [see also IEEE
Wireless Communications], 4 (5). 58-64.
28. Brusilovsky, P., Eklund, J. and Schwarz, E. Web-based education for all: a tool
for development adaptive courseware. Computer Networks and ISDN Systems,
30 (1--7). 291-300.
29. Budzik, J. and Hammond, K., User interactions with everyday applications as
context for just-in-time information access. in IUI '00: Proceedings of the 5th
international conference on Intelligent user interfaces, (New Orleans, Louisiana,
United States, 2000), ACM, 44-51.
30. Budzik, J. and Hammond, K., Watson: Anticipating and Contextualizing
Information Needs. in 62nd Annual Meeting of the American Society for
Information Science, (1999).
31. Callan, J., Smeaton, A., Beaulieu, M., Borlund, P., Brusilovsky, P., Chalmers,
M., Lynch, C., Riedl, J., Smyth, B., Straccia, U. and Toms, E. Personalisation
and Recommender Systems in Digital Libraries Joint NSF-EU DELOS Working
Group Report, 2003.
32. Campbell, I. and van Rijsbergen, C., The ostensive model of developing
information needs. in Proceedings of COLIS-96, 2nd International Conference
on Conceptions of Library Science, (1996), 251-268.
33. Castells, P., Fernandez, M. and Vallet, D. An Adaptation of the Vector-Space
Model for Ontology-Based Information Retrieval. IEEE Transactions on
Knowledge and Data Engineering, 19 (2). 261-272.
34. Castells, P., Fernández, M., Vallet, D., Mylonas, P. and Avrithis, Y. Self-Tuning
Personalized Information Retrieval in an Ontology-Based Framework. 3762.
977-986.
35. Cleverdon, C.W., Mills, J. and Keen, M. Factors determining the performance of
indexing systems. ASLIB Cranfield project, Cranfield.
36. Crestani, F. Application of Spreading Activation Techniques in
Information Retrieval. Artif. Intell. Rev., 11 (6). 453-482.
37. Crestani, F. and Lee, P. Searching the Web by constrained spreading
activation. Inf. Process. Manage., 36 (4). 585-605.
38. Crestani, F. and Lee, P., WebSCSA: Web search by constrained spreading
activation. in Research and Technology Advances in Digital Libraries, 1999.
ADL '99. Proceedings. IEEE Forum on, (1999), 163-170.
39. Chakrabarti, S., den Berg, M. and Dom, B. Focused crawling: a new approach
to topic-specific Web resource discovery. Computer Networks and ISDN
Systems, 31 (11-16). 1623-1640.




40. Chen, P.-M. and Kuo, F.-C. An information retrieval system based on a user
profile. J. Syst. Softw., 54 (1). 3-8.
41. Chirita, P., Nejdl, W., Paiu, R. and Kohlschütter, C., Using ODP metadata to
personalize search. in SIGIR '05: Proceedings of the 28th annual international
ACM SIGIR conference on Research and development in information retrieval,
(Salvador, Brazil, 2005), ACM, 178-185.
42. Chirita, P.A., Olmedilla, D. and Nejdl, W., Finding related hubs and authorities.
in Web Congress, 2003. Proceedings. First Latin American, (Santiago, Chile,
2003), 214-215.
43. Dasiopoulou, S., Mezaris, V., Kompatsiaris, I., Papastathis, V.K. and Strintzis,
M.G. Knowledge-assisted semantic video object detection. Circuits and
Systems for Video Technology, IEEE Transactions on, 15 (10). 1210-1224.
44. De Bra, P., Aerts, A., Berden, B., de Lange, B., Rousseau, B., Santic, T., Smits,
D. and Stash, N. AHA! The Adaptive Hypermedia Architecture. 4. 115-139.
45. Dill, S., Eiron, N., Gibson, D., Gruhl, D., Guha, R., Jhingran, A., Kanungo, T.,
McCurley, K.S., Rajagopalan, S., Tomkins, A., Tomlin, J.A. and Zien, J.Y. A
Case for Automated Large Scale Semantic Annotation. Journal of Web
Semantics, 1 (1). 115-132.
46. Dou, Z., Song, R. and Wen, J., A Large-scale Evaluation and Analysis of
Personalized Search Strategies. in Proceedings of the 16th international
World Wide Web conference (WWW2007), (Banff, Alberta, Canada, 2007),
572-581.
47. Dumais, S., Cutrell, E., Cadiz, J.J., Jancke, G., Sarin, R. and Robbins, D., Stuff
I've seen: a system for personal information retrieval and re-use. in SIGIR '03:
Proceedings of the 26th annual international ACM SIGIR conference on
Research and development in information retrieval, (Toronto, Canada, 2003),
ACM, 72-79.
48. Dwork, C., Kumar, R., Naor, M. and Sivakumar, D., Rank aggregation methods
for the Web. in World Wide Web, (Hong Kong, 2001), 613-622.
49. Edmonds, B., The Pragmatic Roots of Context. in CONTEXT '99: Proceedings
of the Second International and Interdisciplinary Conference on Modeling and
Using Context, (Trento, Italy, 1999), Springer-Verlag, 119-132.
50. Eisenstein, J., Vanderdonckt, J. and Puerta, A., Adapting to mobile contexts
with user-interface modeling. in WMCSA '00: Proceedings of the Third IEEE
Workshop on Mobile Computing Systems and Applications (WMCSA'00),
(Monterey, California, USA, 2000), IEEE.
51. Encarnação, M., Multi-level user support through adaptive hypermedia: a highly
application-independent help component. in IUI '97: Proceedings of the 2nd
international conference on Intelligent user interfaces, (Orlando, Florida, USA,
1997), ACM, 187-194.
52. Euzenat, J., Evaluating ontology alignment methods. in Proceedings of the
Dagstuhl Seminar on Semantic Interoperability and Integration, (Wadern, Germany, 2004), 47-50.
53. Ferragina, P. and Gulli, A., A personalized search engine based on web-snippet
hierarchical clustering. in WWW '05: Special interest tracks and posters of the
14th international conference on World Wide Web, (Chiba, Japan, 2005), ACM,
801-810.
54. Fink, J. and Kobsa, A. A Review and Analysis of Commercial User Modeling
Servers for Personalization on the World Wide Web. User Modeling and User-
Adapted Interaction, 10 (2-3). 209-249.




55. Fink, J. and Kobsa, A. User Modeling for Personalized City Tours. Artif. Intell.
Rev., 18 (1). 33-74.
56. Fink, J., Kobsa, A. and Nill, A., Adaptable and Adaptive Information Access for
All Users, Including the Disabled and the Elderly. in Proc. of the 6th
International Conference on User Modelling (UM’1997), (Chia Laguna,
Sardinia, Italy, 1997), Springer, 171-176.
57. Finkelstein, L., Gabrilovich, E., Matias, Y., Rivlin, E., Solan, Z., Wolfman, G. and
Ruppin, E., Placing search in context: the concept revisited. in World Wide
Web, (2001), 406-414.
58. Gauch, S., Chaffee, J. and Pretschner, A. Ontology-based personalized search
and browsing. Web Intelli. and Agent Sys., 1 (3-4). 219-234.
59. Guha, R., McCool, R. and Miller, E., Semantic search. in WWW '03: Proceedings of the 12th international conference on World Wide Web, (Budapest, Hungary, 2003), ACM, 700-709.
60. Hanumansetty, R. Model Based Approach for Context Aware and Adaptive User
Interface Generation, 2004.
61. Haveliwala, T., Topic-sensitive PageRank. in Proceedings of the Eleventh
International World Wide Web Conference, (Honolulu, Hawaii, USA, 2002).
62. Heer, J., Newberger, A., Beckmann, C. and Hong, J., liquid: Context-Aware
Distributed Queries. in Proceedings of the Fifth International Conference on Ubiquitous Computing (UbiComp 2003), (Seattle, Washington, USA, 2003), Springer.
63. Henzinger, M., Chang, B.-W., Milch, B. and Brin, S., Query-free news search. in
WWW '03: Proceedings of the 12th international conference on World Wide
Web, (Budapest, Hungary, 2003), ACM, 1-10.
64. Hirsh, H., Basu, C. and Davison, B. Enabling technologies: learning to
personalize. Communications of the ACM, 43 (8). 102-106.
65. Huang, X., Peng, F., An, A. and Schuurmans, D. Dynamic web log session
identification with statistical language models. Journal of the American Society
for Information Science and Technology, 55 (14). 1290-1303.
66. Jansen, B., Spink, A., Bateman, J. and Saracevic, T. Real life information
retrieval: a study of user queries on the Web. SIGIR Forum, 32 (1). 5-17.
67. Järvelin, K. and Kekäläinen, J., IR evaluation methods for retrieving highly
relevant documents. in SIGIR '00: Proceedings of the 23rd annual international
ACM SIGIR conference on Research and development in information retrieval,
(Athens, Greece, 2000), ACM, 41-48.
68. Jeh, G. and Widom, J., Scaling personalized web search. in WWW '03:
Proceedings of the 12th international conference on World Wide Web,
(Budapest, Hungary, 2003), ACM, 271-279.
69. Jose, J.M. and Urban, J. EGO: A Personalised Multimedia Management and
Retrieval Tool. International Journal of Intelligent Systems (Special issue on
"Intelligent Multimedia Retrieval"), 21 (7). 725-745.
70. Kalfoglou, Y. and Schorlemmer, M. Ontology mapping: the state of the art.
Knowl. Eng. Rev., 18 (1). 1-31.
71. Kelly, D. and Teevan, J. Implicit Feedback for Inferring User Preference: A
Bibliography. SIGIR Forum, 32 (2). 18-28.
72. Kerschberg, L., Kim, W. and Scime, A., A Semantic Taxonomy-Based
Personalizable Meta-Search Agent. in WISE '01: Proceedings of the Second
International Conference on Web Information Systems Engineering (WISE'01)
Volume 1, (Kyoto, Japan, 2001), IEEE.
73. Kim, H. and Chan, P., Learning implicit user interest hierarchy for context in
personalization. in IUI '03: Proceedings of the 8th international conference on
Intelligent user interfaces, (Miami, USA, 2003), ACM, 101-108.




74. Kiryakov, A., Popov, B., Terziev, I., Manov, D. and Ognyanoff, D. Semantic
annotation, indexing, and retrieval. Web Semantics: Science, Services and
Agents on the World Wide Web, 2 (1). 49-79.
75. Kobsa, A. Generic User Modeling Systems. User Modeling and User-Adapted
Interaction, 11 (1-2). 49-63.
76. Kraft, R., Chang, C., Maghoul, F. and Kumar, R., Searching with Context. in
WWW '06: Proceedings of the 15th international conference on World Wide
Web, (Edinburgh, Scotland, 2006), ACM, 367-376.
77. Kraft, R., Maghoul, F. and Chang, C., Y!Q: contextual search at the point of
inspiration. in CIKM '05: Proceedings of the 14th ACM international conference
on Information and knowledge management, (Bremen, Germany, 2005), ACM,
816-823.
78. Krovetz, R. and Croft, B. Lexical ambiguity and information retrieval. ACM
Trans. Inf. Syst., 10 (2). 115-141.
79. Lee, J., Analyses of multiple evidence combination. in SIGIR '97: Proceedings
of the 20th annual international ACM SIGIR conference on Research and
development in information retrieval, (New York, NY, USA, 1997), ACM, 267-
276.
80. Liu, F., Yu, C. and Meng, W. Personalized Web Search For Improving Retrieval
Effectiveness. IEEE Transactions on Knowledge and Data Engineering, 16 (1).
28-40.
81. Manmatha, R., Rath, T. and Feng, F., Modeling score distributions for
combining the outputs of search engines. in SIGIR '01: Proceedings of the 24th
annual international ACM SIGIR conference on Research and development in
information retrieval, (New York, NY, USA, 2001), ACM, 267-275.
82. Martin, I. and Jose, J., Fetch: A personalised information retrieval tool. in
Proceedings of the 8th Recherche d'Information Assistée par Ordinateur
(computer assisted information retrieval, RIAO 2004), (Avignon, France, 2004),
405-419.
83. Melucci, M., Context modeling and discovery using vector space bases. in
CIKM '05: Proceedings of the 14th ACM international conference on Information
and knowledge management, (Bremen, Germany, 2005), ACM, 808-815.
84. Micarelli, A. and Sciarrone, F. Anatomy and Empirical Evaluation of an Adaptive
Web-Based Information Filtering System. User Modeling and User-Adapted
Interaction, 14 (2-3). 159-200.
85. Mitrovic, N. and Mena, E., Adaptive User Interface for Mobile Devices. in DSV-
IS '02: Proceedings of the 9th International Workshop on Interactive Systems.
Design, Specification, and Verification, (Dublin, Ireland, 2002), Springer-Verlag,
29-43.
86. Montaner, M., López, B. and de la Rosa, J.L. A Taxonomy of Recommender Agents on
the Internet. Artificial Intelligence Review, 19 (4). 285-330.
87. Mylonas, P. and Avrithis, Y., Context modelling for multimedia analysis. in Proc.
of 5th International and Interdisciplinary Conference on Modeling and Using
Context (CONTEXT ’05), (Paris, France, 2005).
88. Noy, N. Semantic integration: a survey of ontology-based approaches.
SIGMOD Rec., 33 (4). 65-70.
89. Perrault, R., Allen, J. and Cohen, P., Speech acts as a basis for understanding
dialogue coherence. in Proceedings of the 1978 workshop on Theoretical
issues in natural language processing, (Urbana-Champaign, Illinois, United
States, 1978), Association for Computational Linguistics, 125-132.




90. Pitkow, J., Schütze, H., Cass, T., Cooley, R., Turnbull, D., Edmonds, A., Adar,
E. and Breuel, T. Personalized search. Commun. ACM, 45 (9). 50-55.
91. Popov, B., Kiryakov, A., Ognyanoff, D., Manov, D. and Kirilov, A. KIM - a
semantic platform for information extraction and retrieval. Journal of Natural
Language Engineering, 10 (3-4). 375-392.
92. Renda, E. and Straccia, U., Web metasearch: rank vs. score based rank
aggregation methods. in SAC '03: Proceedings of the 2003 ACM symposium on
Applied computing, (New York, NY, USA, 2003), ACM, 841-846.
93. Baeza-Yates, R. and Ribeiro-Neto, B. Modern Information Retrieval. ACM Press / Addison-Wesley, 1999.
94. Rich, E. User modeling via stereotypes. Cognitive Science, 3 (4). 329-354.
95. Rocchio, J. Relevance feedback in information retrieval. in Salton, G. (ed.), The
SMART Retrieval System: Experiments in Automatic Document Processing, Prentice-Hall, 1971.
96. Rocha, C., Schwabe, D. and de Aragao, M.P., A hybrid approach for searching in
the semantic web. in WWW '04: Proceedings of the 13th international conference on World Wide Web, (New York, NY, USA, 2004), ACM.
97. Salton, G. and Buckley, C. Improving retrieval performance by relevance
feedback. Journal of the American Society for Information Science, 41 (4). 288-
297.
98. Salton, G. and Buckley, C., On the use of spreading activation methods in
automatic information. in SIGIR '88: Proceedings of the 11th annual
international ACM SIGIR conference on Research and development in
information retrieval, (1988), ACM, 147-160.
99. Salton, G. and McGill, M. Introduction to Modern Information Retrieval.
McGraw-Hill, 1986.
100. Shen, X., Tan, B. and Zhai, C., Context-sensitive information retrieval using
implicit feedback. in SIGIR '05: Proceedings of the 28th annual international
ACM SIGIR conference on Research and development in information retrieval,
(Salvador, Brazil, 2005), ACM, 43-50.
101. Shen, X., Tan, B. and Zhai, C., Implicit user modeling for personalized search.
in CIKM '05: Proceedings of the 14th ACM international conference on
Information and knowledge management, (Bremen, Germany, 2005), ACM,
824-831.
102. Sheth, B. and Maes, P., Evolving agents for personalized information filtering. in
Artificial Intelligence for Applications, 1993. Proceedings., Ninth Conference on,
(1993), 345-352.
103. Smeaton, A. and Callan, J. Proceedings of the 2nd DELOS Network of
Excellence Workshop on Personalisation and Recommender Systems in Digital
Libraries ERCIM Workshop Proceedings - No. 01/W03. European Research
Consortium on Informatics and Mathematics, Dublin, Ireland, 2001.
104. Smith, B.R., Linden, G.D. and Zada, N.K. Content personalisation based on
actions performed during a current browsing session, US Patent Application
6853983B2, 2005.
105. Specia, L. and Motta, E., Integrating Folksonomies with the Semantic Web. in
Proceedings of 4th European Semantic Web Conference site. ESWC 2007,
(Innsbruck, Austria, 2007).
106. Speretta, M. and Gauch, S., Personalized search based on user search
histories. in Web Intelligence, 2005. Proceedings. The 2005 IEEE/WIC/ACM
International Conference on, (2005), 622-628.
107. Sriram, S., Shen, X. and Zhai, C., A session-based search engine (Poster). in
Proceedings of SIGIR 2004, (2004).




108. Staab, S. and Studer, R. Handbook on Ontologies. Springer-Verlag, Berlin
Heidelberg New York, 2004.
109. Stojanovic, N., Studer, R. and Stojanovic, L., An Approach for the Ranking of
Query results in the Semantic Web. in The Semantic Web, (2003), Springer,
500-516.
110. Sugiyama, K., Hatano, K. and Yoshikawa, M., Adaptive web search based on
user profile constructed without any effort from users. in Proceedings of the
13th international conference on World Wide Web, (New York, NY, USA, 2004),
ACM, 675-684.
111. Sun, J.-T., Zeng, H.-J., Liu, H., Lu, Y. and Chen, Z., CubeSVD: a novel
approach to personalized Web search. in WWW '05: Proceedings of the 14th
international conference on World Wide Web, (Chiba, Japan, 2005), ACM, 382-
390.
112. Tan, B., Shen, X. and Zhai, C., Mining long-term search history to improve
search accuracy. in KDD '06: Proceedings of the 12th ACM SIGKDD
international conference on Knowledge discovery and data mining,
(Philadelphia, PA, USA, 2006), ACM, 718-723.
113. Tanudjaja, F. and Mui, L., Persona: a contextualized and personalized web
search. in System Sciences, 2002. HICSS. Proceedings of the 35th Annual
Hawaii International Conference on, (Hawaii, US, 2002), 1232-1240.
114. Teevan, J., Dumais, S. and Horvitz, E., Personalizing search via automated
analysis of interests and activities. in SIGIR '05: Proceedings of the 28th annual
international ACM SIGIR conference on Research and development in
information retrieval, (Salvador, Brazil, 2005), ACM, 449-456.
115. Terveen, L. and Hill, W. Beyond Recommender Systems: Helping People Help
Each Other. in Carroll, J.M. (ed.), HCI in the New Millennium, Addison-Wesley, 2001.
116. Thomas, P. and Hawking, D., Evaluation by comparing result sets in context. in
CIKM '06: Proceedings of the 15th ACM international conference on Information
and knowledge management, (Arlington, Virginia, USA, 2006), ACM, 94-101.
117. Vallet, D., Fernández, M. and Castells, P., An Ontology-Based Information
Retrieval Model. in The Semantic Web: Research and Applications, 2nd European Semantic Web Conference (ESWC 2005), (Heraklion, Greece, 2005), Springer, 455-470.
118. Vallet, D., Fernández, M., Castells, P., Mylonas, P. and Avrithis, Y., A
contextual personalization approach based on ontological knowledge. in Proc. of the 2nd International Workshop on Contexts and Ontologies at the 17th European Conference on Artificial Intelligence (ECAI 2006), (Riva del Garda, Italy, 2006).
119. Vassileva, J. Dynamic Course Generation on the WWW. British Journal of
Educational Technologies, 29 (1). 5-14.
120. Vishnu, K. Contextual Information Retrieval Using Ontology Based User
Profiles, 2005.
121. Vogt, C. and Cottrell, G. Fusion Via a Linear Combination of Scores. Inf. Retr.,
1 (3). 151-173.
122. Wallace, M. and Stamou, G., Towards a context aware mining of user interests
for consumption of multimedia documents. in Multimedia and Expo, 2002. ICME
'02. Proceedings. 2002 IEEE International Conference on, (Lausanne,
Switzerland, 2002), 733-736.
123. White, R., Contextual Simulations for Information Retrieval Evaluation. in SIGIR
2004: Information Retrieval in Context Workshop, (Sheffield, UK, 2004).
124. White, R., Jose, J. and Ruthven, I. An implicit feedback approach for interactive
information retrieval. Inf. Process. Manage., 42 (1). 166-190.
125. White, R. and Kelly, D., A study on the effects of personalization and task
information on implicit feedback performance. in CIKM '06: Proceedings of the




15th ACM international conference on Information and knowledge
management, (Arlington, Virginia, USA, 2006), ACM, 297-306.
126. White, R., Ruthven, I. and Jose, J., A study of factors affecting the utility of
implicit relevance feedback. in SIGIR '05: Proceedings of the 28th annual
international ACM SIGIR conference on Research and development in
information retrieval, (Salvador, Brazil, 2005), ACM, 35-42.
127. White, R., Ruthven, I., Jose, J. and Van Rijsbergen, C.J. Evaluating implicit
feedback models using searcher simulations. ACM Trans. Inf. Syst., 23 (3).
325-361.
128. Whitworth, W. Choice and Chance, with One Thousand Exercises. Hafner,
1965.
129. Widyantoro, D., Ioerger, T. and Yen, J., An adaptive algorithm for learning
changes in user interests. in CIKM '99: Proceedings of the eighth international
conference on Information and knowledge management, (Kansas City,
Missouri, United States, 1999), ACM, 405-412.
130. Wilkinson, R. and Wu, M., Evaluation Experiments and Experience from the
Perspective of Interactive Information Retrieval. in Proceedings of the 3rd
Workshop on Empirical Evaluation of Adaptive Systems, in conjunction with the
2nd International Conference on Adaptive Hypermedia and Adaptive Web-
Based Systems, (Eindhoven, The Netherlands, 2004), 221-230.
131. Yang, Y. and Padmanabhan, B. Evaluation of Online Personalization Systems:
A Survey of Evaluation Schemes and A Knowledge-Based Approach. Journal of
Electronic Commerce Research, 6 (2). 112-122.
132. Yuen, L., Chang, M., Lai, Y.K. and Poon, C., Excalibur: a personalized meta
search engine. in Computer Software and Applications Conference, 2004.
COMPSAC 2004. Proceedings of the 28th Annual International, (2004), 49-50.
133. Zamir, O.E., Korn, J.L., Fikes, A.B. and Lawrence, S.R. Personalization of
Placed Content Ordering in Search Results, US Patent Application
20050240580, 2005.
134. Zigoris, P. and Zhang, Y., Bayesian adaptive user profiling with explicit &
implicit feedback. in CIKM '06: Proceedings of the 15th ACM international
conference on Information and knowledge management, (Arlington, Virginia,
USA, 2006), ACM, 397-404.
