
JOURNAL OF OPERATIONS RESEARCH SOCIETY 1

Modelling Subjective Utility through Entropy

Abstract— We introduce a novel entropy framework for the computation of utility on the basis of an agent's subjective evaluation of the granularized information source values. A concept of the evaluating agent as an information gain function of this entropy framework is presented, which takes as its arguments both an information source value and the agent's evaluation of the same. A method to model the agent's perceived utility values is proposed. Based on these values, several new measures are designed for the evaluation of the information source values, the perceived utilities, and the evaluating agent. A real application is included.

Index Terms—Expected utility; information sets; agent; Shannon transforms; utility measures; fuzzy sets; Hanman-Anirban entropy function; multi-attribute decision making.

I. Introduction

The area of decision making has always gained a lot of attention, with different approaches appearing in the literature (Busemeyer and Townsend, 1992, 1993; Townsend and Busemeyer, 2010; Diederich, 1997; Roe et al., 2001; von Neumann and Morgenstern, 1944). The expected utility theory (EUT) (von Neumann and Morgenstern, 1944) is one of the most popular of these. It postulates that the decision maker (DM) chooses among risky (uncertain) prospects by comparing their expected utility values. The expected utility is the sum of the individual utilities of the uncertain outcomes, weighted by the corresponding probability values. For example, consider two options (decisions): picnic outdoors or picnic indoors, depending upon the possible states of nature: rain or no rain. The choice rests upon the likelihood of rain and the relative quality of the picnic indoors/outdoors with or without the rain. An agent prefers the option with the higher expected utility.

In the real world, any decision making process inevitably involves uncertainties (Liu, 2007), both due to randomness (probability of rain or no rain) and vagueness (intensity of rain). The expected utility theory is focussed primarily on probabilistic uncertainty, and it has no provision to consider the possibilistic uncertainty (or vagueness). For instance, in the rain example, we encounter the following difficulties while applying the expected utility theory:
• It is difficult to crisply (numerically) assess the utilities associated with the uncertain outcomes.
• The utility derived from each of the possible choices is specific to an individual agent.
• The decision outcomes may vary significantly with the intensity of rain, which does not get a representation in EUT.

A few studies (Savage, 1954; Green and Srivastava, 1986; Kubler et al., 2014; Echenique and Saito, 2015; Yang and Qiu, 2005a; Izhakian, 2016) are based on the notion of subjective expected utility theory, but these works focus more on the subjective probabilities of uncertain events, and less on representing the subjective utility.

These difficulties are well addressed by possibility theory (Zadeh, 1999; Dubois and Prade, 1980a,b), which deals with the vagueness and imprecision due to non-specificity, or confusion, of the agent. Possibility theory is based on fuzzy set theory (Zadeh, 1965). A fuzzy set F defined for a set of points (objects) X = {x_1, x_2, ..., x_n} is characterized by a membership function (MF) µ_F(x_i) that associates with each x_i a real number in the interval [0, 1]. F is a set of ordered pairs {(x_1, µ_F(x_1)), (x_2, µ_F(x_2)), ..., (x_n, µ_F(x_n))}. The value of µ_F(x_i) gives the degree of association, referred to as the "membership grade", of x_i in the vague concept represented by the fuzzy set F. Since the membership grade lies in the interval [0, 1], the other, non-membership component is obvious.

The application of fuzzy sets for utility representation has gained attention in recent times (Martinez-Cruz et al., 2015; Porcel et al., 2015; Chamodrakas and Martakos, 2012). A related concept of possibilistic uncertainty in decision making has also been studied in the literature. One of the first studies in this direction is the introduction of the interval-valued expectation of a fuzzy number in (Dubois and Prade, 1987). In (Dubois et al., 1998, 2001), the qualitative possibilistic decision theory is developed based on a DM's behavior (pessimistic or optimistic). The notions of the possibilistic mean value and variance of a fuzzy number are proposed in (Carlsson and Fuller, 2001, 2011). The weighted possibilistic mean and weighted possibilistic variance are proposed in (Fuller and Majlender, 2003). These indicators have been used in decision making applications in (Carlsson and Fuller, 2002, 2011; Zhang and Li, 2005; Zhang et al., 2007, 2009; Georgescu, 2009; Georgescu and Kinnunen, 2011c,a).

The elaboration of expected utility theory in the possibilistic domain has received some attention recently. The transition from probabilistic to possibilistic models is shown in (Georgescu and Kinnunen, 2011b) by replacing the probability distribution (random variable) with a possibilistic distribution (fuzzy set) and by substituting the probabilistic indicators (expected value, variance) with the possibilistic mean and variance. The possibilistic models in the context of risk aversion are developed in (Georgescu, 2009; Georgescu and Kinnunen, 2011b). In (Georgescu and Kinnunen, 2011a), the possibilistic model of (Georgescu

and Kinnunen, 2011b) is extended with a multidimensional utility function to deal with multiple risk parameters.

The notion of possibilistic utility rests upon fuzzy sets that are chosen to represent vague utilities specific to an agent. This representation, however, suffers from the following drawbacks:
• It treats the membership grades separately from the information source values.
• It has no provision to connect the two together as a single entity, leading to interpretation difficulties.
• The overall extent of fuzziness in a vague utility concept remains unconsidered. A fuzzy set itself could be less or more fuzzy. For example, a fuzzy set 'middle income group' is much less fuzzy in the case of the population of a small island (with people in the same occupation of, say, agriculture) than in the case of the population of a city or a country with wide diversities. The gradualness in the membership grades will be much higher in the latter than in the former. Consequently, the area under the curve (representing the MF) will be higher for the latter than for the former.
• The pre-defined shape of a fuzzy set is restrictive for uncertainty evaluation.

Due to these difficulties, the actual worth that an information source value has for an agent is not reflected by the possibilistic utility. At best, it only gives, relatively, an agent's satisfaction from different information source values, but the issue of utility representation remains unexplained. In this regard, we see a huge potential in the information set theory (Aggarwal and Hanmandlu, 2016) for addressing these shortcomings. The information set theory gives a linked representation of uncertainty in terms of "information values" by combining the actual information source values with the agent's evaluation of the same through an entropy framework. The information set theory also advances the state-of-the-art of uncertainty evaluation by introducing the concept of an "agent" that can be seen as an empowered MF.

This inspires us to cross-fertilize the expected utility theory with the information set theory. The expected utility value is synonymous with the expected return value that takes into account the rewards (enjoyment by the agent) and the uncertainties associated with the individual outcomes. The information value yielded by the information set theory is also the expected value of the information (uncertainty). This interesting commonality between the two motivates us to combine them. Specifically, our core idea is to better model the subjective utility with the aid of the information set theory.

The utility value, in general, refers to the units of reward (or satisfaction) corresponding to a given information source value. However, this unit may have a different definition as well as interpretation for each agent and onlooker, respectively. This has been attempted to be addressed by the possibilistic utility, but the issues discussed above remain. Motivated by the same, in this study we develop the concept of the utility information value using the recent formalism of information sets. The proposed utility information value is a representation of both the information source value and its individualistic evaluation, which depends on the agent's background, individual circumstances, values, and psychology. Besides, it is based on the entire range of the information source values at the disposal of the agent. We also present several uncertainty measures to evaluate the utility information values provided by an agent, and the agent himself.

We briefly summarize the contributions made in the paper:
• The state-of-the-art is discussed in Section II.
• The concept of information source values is formalized in Section III.
• The proposed utility information theory is built based on an information theoretic entropy function in Section IV.
• The utility information theory and the related concepts are introduced in Section V.
• Section VI delves into different forms of the proposed utility information sets.
• A few measures for subjective utility are presented in Section VII.
• We show the usefulness of the proposed concepts through a real application in Section VIII.
• Section IX gives the conclusions with an outlook on the future work.

II. Background

In this section, we briefly describe the concept of utility, followed by a brief overview of state-of-the-art fuzzy set and possibility theory based methods for the representation of the subjective utility.

Let X = {x_1, ..., x_n} be a non-empty set. A preference relation ⪯ defined on X is a binary, reflexive, transitive, and total relation on X. Therefore, ⪯ leads to a total preorder relation (Candeal et al., 2001). A real-valued function u defined on (X, ⪯) is said to be a utility function if x_1 ⪯ x_2 ⟺ u(x_1) ≤ u(x_2). Therefore, if u is a utility function on (X, ⪯), then u also represents the underlying preference relation ⪯.

We now discuss the fuzzy set based representation of u in the context of multi-attribute decision-making. Let us consider a set of alternatives, each of which is described by multiple attributes. Let X = {x_1, ..., x_n} denote a set of alternatives defined by a set of attributes A. For a collection of the different values that an attribute X from A assumes for the given alternatives in X, different soft concepts such as high satisfaction, moderate satisfaction and low satisfaction can be conceived by the agent. Each of these concepts is represented through a fuzzy set, say F^k_X, characterized by a membership function µ^k_X that is specific to the agent, such that µ^k_X(x) ∈ [0, 1] denotes the degree of membership for x in F^k_X. For instance, in the selection of a house, a buyer (agent) assesses

the membership grade µ³_X(x) for each of the attributes X of an available house (alternative), say x, in a concept, say high satisfaction F³_X, as shown in Fig. 1. This membership degree µ³_X(x) would give the agent's desirability (or utility) for the particular attribute value.

F_X is a normal fuzzy set if there exists x ∈ X such that µ_X(x) = 1. The support of F_X is defined by s(F_X) = {x ∈ X | µ_X(x) > 0}. For any γ ∈ [0, 1], the γ-level set of a fuzzy set F_X in R is defined by

[F_X]^γ = {x ∈ X | µ_X(x) ≥ γ} if γ > 0, and [F_X]^γ = cl(s(F_X)) if γ = 0 ,  (1)

where cl(s(F_X)) is the topological closure of the set s(F_X) ⊆ R.

A fuzzy set of the real line R with a normal, fuzzy convex and upper-semicontinuous MF of bounded support is termed a fuzzy number in (Dubois and Prade, 1987). Let A be a fuzzy number and γ ∈ [0, 1]. Then [A]^γ is a closed and convex subset of R. We denote a_1(γ) = min[A]^γ and a_2(γ) = max[A]^γ. Hence [A]^γ = [a_1(γ), a_2(γ)] for all γ ∈ [0, 1]. A function f : [0, 1] → R is said to be a weighting function if f is non-negative, monotone increasing and satisfies the normalization condition ∫_0^1 f(γ) dγ = 1.

III. Foundation for Utility Information Theory

A. Information Source Values

We consider a choice situation in which an agent faces a choice among different alternatives, each of which is characterized by multiple attributes. We use the term information source values to refer to the various values that an attribute takes for the given alternatives that form the choice set. For instance, while choosing a house, a buyer evaluates multiple alternatives in the given choice set X against a set of desired attributes, say {X: area, Y: location, Z: sunlight}. The collection of the actual area (X) values for X constitutes a set I_X, and the actual area values for the various alternatives x_i ∈ X are referred to as the information source values. The actual information source value that x_i points to is denoted by I_X(x_i). Mathematically, the set of information source values is expressed as

I_X = {I_X(x_i)} ∀ x_i ∈ X ,  (2)

where I_X(x_i) refers to an individual information source value. In the continuous domain, we consider a continuous stream of normalized information source values generated by a function f : [0, 1] → R, and a continuous point information source value is denoted as {f(x)}. The change in notation between the set of information source values and the individual information source values may be noted.

The information source values for attribute X, by their own virtue, do not give a measure of the agent's desirability for the same. The fuzzy set and possibilistic methods, as shown in Section II, have been used to this end. For example, in Fig. 1, the membership grades {µ^k_X(x_i)}, k = {1, 2, 3} (on the y-axis) in F¹_X, F²_X and F³_X indicate the relative satisfaction degrees of the agent for the different information source values {I_X(x_i)}, shown on the x-axis.

[Fig. 1: Sample Membership Functions for I_X]

However, these methods have their own limitations, which form the main motivation for this study: to develop an approach to model the unique evaluation rationale of an information source value by an agent, which is best known to the agent himself.

B. Motivation

The modelling of subjective utility holds significance in many domains. Many interesting studies have appeared in the literature for its measurement (Aumann, 1962; Becker et al., 1964; Keeney and Raiffa, 1976; Weber, 1987; Miyamoto, 1988; Mármol et al., 1998; Grable and Lytton, 1999; Clemen and Reilly, 2001; Bertsimas and Hair, 2013). However, they do not give a structured approach to modelling the subjective utility that can be applied across domains. Most of these approaches require an extensive interaction with the DM, which restricts their applicability.

Besides, the extant approaches do not consider the whole range of the available information source values in the determination of the utility values. That is, the utility values remain unaltered with the expansion or contraction of the choice set, and thereby of the set of information source values. This, however, might not give a true modelling of the utility assessment process. In the real world, our appreciation for an information source value very much depends on the alternative options we have at our disposal.

That is, our choices vary with the emergence (or omission) of alternatives in the choice set. It is not uncommon to see an entrant information source value disintermediating all the other alternatives completely. For instance, with the emergence of a new technology, as the speeds of our toys, computers etc. significantly improve, the extant specifications rapidly become obsolete. This real world aspect is very difficult to implement with the extant utility modelling methods.

In practice, the subjective utility corresponding to an information source value I_X(x_i) depends on both I_X(x_i) and its individualistic evaluation by the agent, who considers the range of the information source values, i.e. I_X, in his evaluation. For example, in a house buying scenario, different buyers have different aspirations for each of the attributes.
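The choice-set dependence described above lends itself to a small numerical sketch. The snippet below is only an illustration, not the paper's implementation (the names `normalize`, `gain` and `areas` are ours): it min-max normalizes house-area values as in (5), evaluates them with the mean/standard-deviation based form of the information gain introduced later in (8), and then recomputes the evaluation of the same 2100 sq. ft. house after larger houses enter the choice set.

```python
import math
from statistics import mean, pstdev

def normalize(values):
    """Min-max normalize the information source values, as in (5)."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def gain(values, alpha=2.0):
    """Mean/std-based information gain, as in (8):
    g(x_i) = exp(-(|I(x_i) - mu| / sigma) ** alpha),
    where mu and sigma are the mean and standard deviation
    of the normalized information source values."""
    nv = normalize(values)
    mu, sigma = mean(nv), pstdev(nv)
    return [math.exp(-((abs(v - mu) / sigma) ** alpha)) for v in nv]

# House areas (sq. ft.) forming a small choice set; index 2 is a
# 2100 sq. ft. house that the agent evaluates.
areas = [1800, 2000, 2100, 2400]
print(gain(areas)[2])

# After larger houses enter the choice set, mu and sigma change,
# and the SAME 2100 sq. ft. house receives a different evaluation.
areas_expanded = [1800, 2000, 2100, 2400, 3000, 3600]
print(gain(areas_expanded)[2])
```

Because µ_X and σ_X are recomputed from whichever choice set is in force, the evaluation of an unchanged attribute value shifts as alternatives are added or removed, which is precisely the behaviour the extant approaches leave unmodelled.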

Consider two potential buyers, say A and B. Let A strictly look for a house in the range of 2000 to 2200 sq. ft., and let B be flexible with regard to X, but still prefer X to be in the range of 2000-2200 sq. ft. In such a case, any house outside this range holds nil utility for A. In contrast, the subjective utility for B, with respect to X, is maximum when the actual attribute value lies in the preferred interval [2000, 2200], and it keeps diminishing gradually as X takes values outside this interval.

In this study, we are motivated to model this subjective utility that is specific to the DM, and which depends on the agent's experience, background, priorities, values, inertia, habits, and circumstances. We focus on the subjective association between the information source value and its corresponding utility for the agent. Little attention has been paid to formalizing this link between the two. For instance, utility modelling through fuzzy sets (Martinez-Cruz et al., 2015; Porcel et al., 2015; Chamodrakas and Martakos, 2012), or possibility theory (Georgescu and Kinnunen, 2011b), only maps an information source value to the degree of the agent's satisfaction. Hence, the agent's evaluation of an information source value remains delinked from its material support in such a representation. Besides, utility in this form may be interpreted differently by different onlookers.

We consider the concept of utility as the subjective appreciation of an information source value by the agent. Hence, utility is analogous to the value that an information source value holds for the agent. For instance, any house with X outside the interval [2000, 2200] bears no significance for A, whereas this may not be the case for B. More specifically, we represent the utility as a function of both the information source value and its corresponding evaluation by the agent. In this regard, the information set theory (Aggarwal and Hanmandlu, 2016) appears to be useful, as it fuses the actual information source value with its corresponding evaluation by the agent through an information-theoretic entropy function. The concept of entropy, in the context of a random variable, was first propounded in (Shannon, 1948), and it gives a measure of uncertainty about a discrete random variable having a probability mass function. This entropy value also measures the spread of a probability distribution, achieving its maximum value when the distribution assigns equal probabilities to all outcomes (uniform distribution). These interesting properties of the entropy function are imbibed in the information set theory. Concretely, we develop an information-based utility framework in this paper.

IV. Entropy Function and Agent as the Underpinnings of Utility Information

A. Entropy as Subjective Utility

Entropy is a measure of uncertainty widely used in computational intelligence applications. Recently, it has been explored in the areas of risk measures (Yang and Qiu, 2005b), operations management (Andersson et al., 2013), portfolio optimization (Glasserman and Xu, 2013), and discrete optimization (Nakagawa et al., 2013). All these studies make use of the conventional entropy functions, like Shannon (Shannon, 1948) and Pal & Pal (Pal and Pal, 1992), that deal with the randomness in the information source values. For instance, the Shannon entropy function E_Sh is given by

E_Sh = −∑_i p(x_i) log(p(x_i)) ,  (3)

where p denotes the probability mass function for the occurrences of x_i, p(x_i) is the probability of x_i, and −log(p(x_i)) gives the information quantity arising from an event with probability p(x_i). The quantity of information conveyed by a random event depends on the probability of the event: the smaller p(x_i) is, the larger −log(p(x_i)) is. This implies that the less probable an event is, the greater the uncertainty, or the greater the gain in information, with its occurrence. For this reason, −log(p(x_i)) is also referred to as the information gain.

The weighted sum of the information gains over all possible events with varying probabilities is termed entropy, where the probability values p(x_i) constitute the weight vector. Hence, entropy indicates the extent of disorderliness of a random system. A situation with a large number of possible events with varying probabilities is bound to have high disorderliness and a high value of entropy.

The conventional entropy functions, however, only give a measure of uncertainty in the probabilistic domain, and they have no provision to consider the possibilistic uncertainty associated with an event. On the other side, the fuzzy entropies (Luca and Termini, 1972; Kosko, 1986; Liu, 1992; Xie and Bedrosian, 1983, 1984; Sander, 1989; Pal and Bezdek, 1994) quantify the uncertainty in the membership grades, disregarding the associated actual information source values.

In this regard, the Hanman-Anirban entropy function (Hanmandlu and Das, 2011) comes in handy. The generalized non-normalized form of the Hanman-Anirban entropy function E_HA,G (see Appendix A for details) is given as

E_HA,G = ∑_i p(x_i) e^{−(a(p(x_i))³ + b(p(x_i))² + c(p(x_i)) + d)^α} .  (4)

This entropy function converts to Shannon's entropy function with a proper choice of parameters, as shown in Appendix C.

We now recast it in the information theoretic form to develop the proposed utility framework. To this end, we first normalize the information source values as:

I_X(x_i) = (I_X(x_i) − min{I_X}) / (max{I_X} − min{I_X}) ,  (5)

and replace p(x_i) in (4) with these normalized information

source values, as shown:

E_X = ∑_i I_X(x_i) e^{−(a_X(I_X(x_i))³ + b_X(I_X(x_i))² + c_X(I_X(x_i)) + d_X)^{α_X}} = ∑_i I_X(x_i) g_X(x_i) ,  (6)

where g_X(x_i) = e^{−(a_X(I_X(x_i))³ + b_X(I_X(x_i))² + c_X(I_X(x_i)) + d_X)^{α_X}} refers to the information gain corresponding to I_X(x_i); and a_X, b_X, c_X, d_X and α_X are real valued parameters, corresponding to X, specific to the agent. E_X gives the possibilistic uncertainty with respect to X.

The entropy value E_X computed through (6) is analogous to the expected value in the probabilistic domain, with the weights provided by the normalized information source values I_X(x_i) and the information gain by g_X(x_i). In essence, E_X gives the expected value for I_X, and the value I_X(x_i) g_X(x_i) gives the agent's perceived net worth, or subjective utility, for I_X(x_i). Hence, the two key determinants of the subjective utility are the information source value and the associated information gain, as assessed by the agent. We now look more closely at the role of the information gain in the determination of subjective utility.

B. Agent's Evaluation Function

We draw an interesting analogy between the entropic information gain g_X(x_i) and an agent's evaluation of the information source value I_X(x_i). Let us reproduce the information gain function from (6):

g_X(x_i) = e^{−(a_X(I_X(x_i))³ + b_X(I_X(x_i))² + c_X(I_X(x_i)) + d_X)^{α_X}} .  (7)

We shall also refer to this information gain function as the agent. In general, an agent is defined as anything that can be viewed as perceiving its environment. A human agent has eyes, ears and other organs to perceive information; a robotic agent has sensors performing the task of perceivers (Russell and Norvig, 2003). The parameters a_X, b_X, c_X, d_X and α_X impart adaptability to the information gain function, in the sense that they can be used to specify the unique evaluation behaviour of an agent for attribute X.

Let us consider the house example taken up in Section III. If I_X(x_i) < 2000 or I_X(x_i) > 2200, then g_X(x_i) = 0, and hence S_X(x_i) = 0 for A. In contrast, the attribute values outside the interval [2000, 2200] also hold some utility for B. This subjectivity in the utility behaviour of different agents towards an attribute is modelled through the function g_X(·), determined by its parameters a_X, b_X, c_X, d_X and α_X that are specific to the agent.

Besides other values, the parameters can be fixed to statistical moments such as the mean and variance of the distribution of the information source values. This leads to a generalized Gaussian function (Aggarwal and Hanmandlu, 2016) that gives different shapes, such as triangular, trapezoidal, Gaussian, etc., in accordance with the parameter α_X, as shown in Figure 2.

To this end, let µ_X = (1/|I_X|) ∑_i I_X(x_i) be the mean of the information source values in I_X, where |·| refers to the count of values in I_X, and let σ_X be the standard deviation of the values in I_X. We take a_X = 0, b_X = 0, c_X = 1/σ_X, d_X = −µ_X/σ_X. Accordingly, we get

g_X(x_i) = e^{−(|I_X(x_i) − µ_X| / σ_X)^{α_X}} ,  (8)

which corresponds to a generalized Gaussian function that has g_X(x_i) peaking at some value(s) of I_X.

[Fig. 2: The evaluation values, corresponding to the normalized information source values, as obtained with the information gain function given in (8). Panels (a)-(j) show α = 0.1, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0 and 5.0.]

In this form, g_X(x_i) gives the agent's evaluation of an information source value I_X(x_i) by virtue of not merely I_X(x_i) but the whole distribution of the information source values. The representation of g_X(x_i) in (8) portrays the agent's aspired (ideal) value for X as µ_X, and the

fact that the agent’s satisfaction (evaluation value) is evaluation function. This allows to better reflect the
maximum at IX (xi ) = µX , and it weans as the information interplay between an information source value and the
source values drift away from µX . This somewhat mimics agent’s satisfaction from the same. It also addresses
our evaluation behaviour in the real life. Our evaluation the issue of ambiguous interpretation of the evalua-
of an information source value is also dependent on the tion values.
range of attribute values at our disposal (represented by • The evaluation function is based on the entire distri-
IX ), and we also have an ideal or the most suitable value in bution of the information source values, making the
our mind for each attribute, while making a choice among evaluation values dependent on the given choice set.
different alternatives. • Information gain is not restrictive. In comparison, MF
Hence, the information gain function in (8) yields the is generally defined by a pre-defined shape (triangle,
gain in satisfaction (utility) gX (xi ), corresponding to an trapezoid etc.) that maps an information source value
information source value IX (xi ). Besides, gX (·) always to a membership degree. In comparison, the informa-
yields non-negative values, normalized to range between tion gain formalism is more flexible with wider repre-
0 and 1. These properties make evaluation values arrived sentation abilities, as it is defined by the parameter
at through gX (·) easily interpretable. For instance, we values in the Hanman entropy function.
see from Fig. 2(g) that gX (xi ) = 0.4 at IX (xi ) = 0.2, • The presence of the free parameters helps to represent
indicating that the agent with αX = 3 relatively obtains the complex individualistic evaluation behaviour of an
a satisfaction of 0.4 (on a scale of 0 to 1) from the agent.
normalized information source value of 0.2. By relatively,
we mean that given the entire range of values in IX , V. Utility Information Values
the agent evaluates a particular information source value
IX (xi ) = 0.2 as giving a relative satisfaction of 0.4 vis-a- In this section, we introduce the concepts of utility
vis other information source values. information values and utility information sets. Let us
That is, gX (·) considers the entire distribution of the reconsider (6) :
information source values (through statistical moments X
µX and σX ) in the evaluation of any information source EX = IX (xi )gX (xi ) ,
i
value. If IX changes, µX and σX also get modified, and
hence the evaluation function gX (·). This feature adds to where xi ∈ X . EX is obtained as the sum of IX (xi )gX (xi )
the usefulness of the proposed framework, as it brings values, i.e. the evaluations weighted by the corresponding
it closer to the real world decision making. Most often, normalized information source values. It gives the net
our satisfaction from an information source value quite utility, X holds for the agent by virtue of the different
depends on the available range of the information source values that X takes for the alternatives in the given choice
values. For instance, a decade ago, our appreciation for set.
a Pentium or Celeron processors used to be much more To have an indication of the overall utility values, irre-
than what it is now, with the availability of more advanced spective of the choice set, we could conceive the concept of
options. In a similar context, our appreciation for an item normalized subjective utility. The normalized utility EX,N
(vehicle, toy, clothing) often varies, as new alternatives associated with SX in the discrete domain is given by
(with different attribute values) get added to our consid-
1 X
eration, i.e. the choice set and the set of information source EX,N = IX (xi )gX (xi ), ∀xi ∈ X (9)
values IX expands. The same is true in the case of omission |X | i
of alternatives from the choice set. The continuous analog of the same is shown as:
The agent’s individualistic evaluation characteristics
Z b
(experience, background, priorities, values, inertia, habits, 1
and circumstances) are captured through αX . This mimics EX,N = R b SX (x)dx (10)
a
f (x)dx a
our process of evaluation of an information source value
in the real world, which inspires us to model the agent’s where SX (x) = fX (x)gX (x) gives the continuous subjec-
evaluation function for X through gX (·) that gives the tive utility with respect to f (x).
degree of enjoyment gX (xi ) realised by the agent from Normalized utility gives a quantified representation of
IX (xi ). In the sequel, we refer to gX (·) as the evaluation the subjective utility of an attribute for the agent. It helps
function, and gX (xi ) as the evaluation of the information to bring the subjective utility corresponding to different
source values IX (xi ) in IX . Based on the evaluation values, attributes on the same scale. Hence, it can be used to
we compute the subjective utility, as shown in the next gauge different attributes for their usefulness (utility) for
section. the agent. The more it’s value for an attribute, the more
We summarize the advantages of the proposed entropic desirable the attribute is for the agent. Similarly, it is
information gain based evaluation function as follows: useful to compare the broad utility values that different
• The information source value is bound together with agents obtain from a particular attribute.
its evaluation as a single entity, hence explicitly incor- The weighted evaluation value IX (xi )gX (xi ) gives the
porating the information source values in the agent’s perceived worth of IX (xi ) in the eyes of the agent. It is

It is through this value that we model the subjective utility of IX(xi) for the agent, shown as:

SX(xi) = IX(xi)gX(xi) ,    (11)

where SX(xi) denotes the subjective utility. We also refer to the same as the utility information value in the sequel, because it gives the utility information associated with an information source value, as perceived by the agent. The value SX(xi) is nothing but the entropy value with respect to a specific information source value IX(xi).

It is interesting to note that EX computed through (6) gives a measure of the average usefulness (or utility) X provides to the agent by virtue of its various values for the given choice set, in the same way as ESh in (6) (or any conventional probabilistic entropy function) gives a measure of uncertainty or information for a random variable.

Similarly, while p(xi) log(p(xi)) in (6) gives a measure of the uncertainty about the specific possible outcome xi before observing it, the entropy value SX(xi) gives a measure of the net worth IX(xi) holds for the agent in the information-theoretic domain. The set of such entropy (also termed utility information) values gives the utility information set, shown as:

SX = {SX(xi)} ,    (12)

and the sum of the utility information values in SX gives a measure of entropy, or total information EX, for IX in the eyes of the agent. We further demonstrate this analogy between the entropy and the subjective utility by means of the following example, adapted from (Friedman et al., 2007).

Example 5.1. Let X be a random variable that takes any value x from a finite set X. Let p(x) denote the probability that X = x. Consider a horse race with the winning horse denoted by a random variable X with possible states x ∈ X. An individual (agent) can place a bet of the form X = x, indicating a bid of winning on horse x. The payoff (gain) when indeed X = x is gX(x), and 0 otherwise. The net gain of the agent when the bet is won thus is:

SX(x) = IX(x)gX(x) ,    (13)

where IX(x) represents the units of wealth bid on X = x, and gX(x) denotes the associated gain from a single unit of the winning bid X = x. Hence,

SX(x) = IX(x)gX(x) if X = x, and 0 otherwise.    (14)

This scenario depicts a typical probabilistic setting, with nil or unit gain from a random occurrence. The proposed utility information value concept is a possibilistic counterpart of the same in the qualitative settings of decision analysis problems, in which gX(x) takes any value in the interval [0, 1], and SX(x) gives the gain conditioned by IX(x).

The entropy value computed in (13) stands true in the probabilistic as well as the possibilistic domains. The following properties of the proposed evaluation function g(·) help to accomplish this analogy between the probabilistic and qualitative domains:
• 0 ≤ gX(x) ≤ 1.
• It is free of any assumption about the function form.
• Easy interpretability.

If the agent places multiple bids, each characterized by bidding units IX(xi) and gain gX(xi), then the expected gain for the agent is:

EX = Σ_i IX(xi)gX(xi) ,    (15)

which is the same as the entropy expression shown in (6). This gives the entropy corresponding to the random variable X, with the probability p(xi) replaced by IX(xi). This could be seen as a subjective probability: the more the agent believes in the occurrence of an event, the more he is likely to bid on that particular occurrence. The same expression, in the qualitative settings, gives a measure of the net worth, or the subjective utility, of X for the agent in the context of the choice set X.

It is worthwhile to emphasize that the above utility framework is applicable when X is a discrete variable, i.e. X takes a countable number of values IX(xi), and gX(xi) and SX(xi) give the agent's evaluation and utility information value for a particular instance IX(xi), respectively. In this regard, SX(·) performs like a utility mass function that gives the subjective utility corresponding to an information source value. We now extend the proposed concept of subjective utility to a continuous case, in which X takes an uncountable number of values, shown as f(x). The continuous analog of (6) is shown as:

EX = ∫_a^b SX(x)dx ,    (16)

where EX gives the subjective utility when fX(x) ∈ [a, b]. EX gives the expected value of utility over a predefined interval [a, b], with IX(xi) and gX(xi) replacing p(xi) and −log(p(xi)) as the weight vector and the gain function, respectively.

The greater the value of SX(x), the more is the subjective utility that the agent associates with f(x). We term the continuous range of EX values the subjective utility distribution, and SX(x) the subjective utility density function, obtained as the derivative of the subjective utility distribution EX.

We briefly give the advantages of the proposed concept of utility information value over the extant methods for the representation of subjective utility as follows:
• The information source values get representation both in the individual assessment of the utilities (gX(xi)) by the agent as well as in the total utility (EX) derived from a set of information source values.
• The information source value and its associated enjoyment (for the agent) are combined as a single entity in the utility information value. These two values remain disconnected in the case of possibilistic utility that
mainly gives a quantification of the possibilities, while disregarding the actual information source values.
• The evaluations of the information source values are captured through the information gain function that imbibes the statistics (knowledge) of the distribution through the free parameters present in the Hanman-Anirban entropy function.

VI. Forms of Utility Information Set

The uniqueness of the proposed utility information set lies in the exponential gain function gX(·) that provides different forms of agent through a choice of parameters. With the proposed formalism, it is also possible to capture the uncertainty in the various evaluations of the agent, besides that in the information source values. We shall look at this aspect of the utility information set in Section VII, where we develop some uncertainty measures for the same.

The forms in (6) and (16) are linear. Many more forms of utility sets can be conceived, depending upon the way the quantification of utility values is performed. We now introduce a few forms of the proposed utility information set.

Definition 6.1. The non-linear variant of the proposed utility information set is termed the non-linear utility information set, and is defined as

SX,nl = {(IX(xi))^gX(xi)}, ∀xi ∈ X    (17)

The individual elements of SX,nl are shown as:

SX,nl(xi) = (IX(xi))^gX(xi)    (18)

The proposed non-linear utility information set combines the agent's evaluation of an information source value and the information source value itself through a non-linear relationship. It provides another structure to model the subjective utility of an attribute value for an agent, through the (non-linear) nature of the relationship. It is quite possible that the subjective utility of an attribute may be better represented through the non-linear utility than the linear utility, while the reverse may be true for another attribute or another agent.

The non-linear utility associated with SX,nl is given as:

EX,nl = Σ_i SX,nl(xi), ∀xi ∈ X    (19)

The non-linear utility in the continuous case is:

EX,nl = ∫_a^b SX,nl(x)dx ,    (20)

where SX,nl(x) = (f(x))^gX(x).

The normalized non-linear utility values are shown as:

EX,N,nl = (1/|X|) Σ_i SX,nl(xi), ∀xi ∈ X    (21)

EX,N,nl = (1 / ∫_a^b f(x)dx) ∫_a^b SX,nl(x)dx    (22)

The concept of non-linear utility quantifies the subjective utility for a non-linear utility information set.

We see from (11) that the agent's evaluation plays the primary role in the determination of the subjective utility corresponding to an information source value IX(xi). This leads to the fact that different agents would have different subjective utilities for the same information source value. We now present the counterpart of the proposed utility information set obtained by replacing the agent with its complement.

Definition 6.2. The complement utility information set gives the set of the complementary utilities as viewed by the complement agent, and is expressed as

S'X = {S'X(xi)} ,    (23)

where S'X(xi) = IX(xi)(1 − gX(xi)).

While the utility information set gives the subjective utility values assigned by an agent, the complement set is a collection of the non-utility values for the agent. Alternatively, it can be seen as the utility information set given by another agent with a diametrically opposite evaluation scheme.

The utility value of S'X is given as

E'X = Σ_i S'X(xi)    (24)

E'X,N = (1/|X|) Σ_i S'X(xi)    (25)

For the continuous domain, these values are as follows:

E'X = ∫_a^b S'X(x)dx    (26)

E'X,N = (1 / ∫_a^b f(x)dx) ∫_a^b S'X(x)dx    (27)

where S'X(x) = f(x)(1 − gX(x)). The complement utility information set gives the subjective utility values arrived at through the linear combinations of the information source values and the complements of their corresponding evaluations by the agent. Its utility value E'X indicates the overall utility an attribute holds for the complement agent, in the context of the given choice set.

The non-linear complement utility information set gives a collection of the complement utility values, computed through a non-linear relation. One of the forms of the same is shown as

S'X,nl = {S'X,nl(xi)} ,    (28)

where S'X,nl(xi) = (IX(xi))^(1−gX(xi)). The overall utility value for S'X,nl is shown as:

E'X,nl = Σ_i S'X,nl(xi),    (29)

and the normalized utility is obtained as

E'X,N,nl = (1/|X|) Σ_i S'X,nl(xi)    (30)
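The linear, non-linear, and complement set variants of Section VI can be sketched together. A minimal example with illustrative values; note the built-in invariant that the linear and complement utilities of a source value always sum to the value itself, since I·g + I·(1−g) = I:

```python
# Sketch of the set variants of Section VI, with illustrative numbers:
# linear SX(xi) = I*g (Eq. 11), non-linear SX,nl(xi) = I**g (Eq. 18),
# and complement S'X(xi) = I*(1-g) (Eq. 23), plus their summed utilities.

def utility_sets(I, g):
    linear     = [i * e for i, e in zip(I, g)]
    non_linear = [i ** e for i, e in zip(I, g)]        # (IX(xi))^gX(xi)
    complement = [i * (1 - e) for i, e in zip(I, g)]   # complement agent's view
    return linear, non_linear, complement

I = [0.4, 0.9]   # normalized information source values
g = [1.0, 0.6]   # the agent's evaluations
lin, nl, comp = utility_sets(I, g)
print(sum(lin))    # E_X: total linear utility
print(sum(comp))   # E'_X: total complement utility
```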
The utility values for the continuous case are as follows:

E'X,nl = ∫_a^b S'X,nl(x)dx    (31)

E'X,N,nl = (1 / ∫_a^b f(x)dx) ∫_a^b S'X,nl(x)dx ,    (32)

where S'X,nl(x) = (f(x))^(1−gX(x)). We consider the following example to show the mapping of the information source values to the utility information (entropy) values.

Example 6.1. In this example, we compare the proposed approach with a fuzzy set based utility approach. Consider two persons A and B whose salaries over a period of a few years are denoted by the sets {xi}_{i=1}^n and {yi}_{i=1}^m, respectively. Consider two of the normalized information source values, one from each of these sets, as IX(xi) = $0.4 and IX(yi) = $0.9, where X denotes the salary attribute. Let us first apply the fuzzy set based approach to represent the utility A and B obtain from their respective salaries. In this case, A and B assign a membership value to their own respective fuzzy sets for the concept high salary, denoted by, say, FXA and FXB. A, who has got an excellent hike on his earlier remuneration as a result of a change of job, evaluates his salary of $0.4 as relatively high (in comparison with his earlier salary), and hence assigns it a membership grade µXA(xi) = 1 in FXA. In comparison, B assesses his salary as not very high, and has µXB(yi) = 0.6 in FXB.

We note that in such a representation, it is difficult to compare µXA(xi) and µXB(yi), since they are defined on different fuzzy sets, and by different agents. Secondly, µX(·) may be interpreted differently by different onlookers, as it is only an evaluation of the degree of membership of an information source value, and not the utility value as perceived by the agent A or B. In contrast, let the evaluation of the information source values be done through the proposed evaluation function, as shown in (7) or (8). In such a case, the evaluation of IX(xi) is modelled in accordance with the set of parameters aX, bX, cX, dX and αX in the case of the former, and αX in the case of the latter, as shown in Fig. 2. These parameters are specific to the agent, and shape the evaluation scheme of the agent.

Unlike the case with the membership grades, both the evaluation and the subjective utility values are uniquely interpretable, and convey absolute information. While the evaluation value gX(xi) indicates the satisfaction the agent gets from an information source value IX(xi) on a scale of 0 to 1, the subjective utility value SX(xi) indicates the net worth IX(xi) has for the agent. In this example, if we equate the membership grades with the agent's evaluation gX(xi), then it would reflect that A evaluates the normalized information source value for salary, i.e. IX(xi) = 0.4, as giving A the maximum possible satisfaction. For B, IX(yi) = 0.9 gives an enjoyment degree of 0.6 on a scale of 0 to 1.

It may be worth noting the difference between µX(xi) = 0.4 and gX(xi) = 0.4. While µX(xi) = 0.4 does not say much on its own, gX(xi) = 0.4 specifies the relative evaluation of IX(xi) unequivocally. Taking gX,A(xi) = 1.0 and gX,B(yi) = 0.6, we compute the subjective utility by applying (11). The subjective utilities, so computed, for A and B are 0.4 × 1 = 0.4 and 0.9 × 0.6 = 0.54, respectively, establishing that B derives more utility from his salary than A.

These subjective utility values are absolute and comparable across agents, unlike the membership grades. The fuzzy set keeps the membership grade and the information source value delinked, leading to a loss of interpretability. In comparison, the proposed representation of subjective utility is simpler and more comprehensive.

VII. Utility Information Measures

We develop a few measures to evaluate the effect of the information source values or the agent's evaluations on the overall subjective utility. To this end, we adapt the Shannon transforms (Aggarwal and Hanmandlu, 2016) in the context of the proposed concepts. The proposed measures get their name from their similarity in nature to the Shannon entropy. These measures basically yield entropy values, like the utility information value, but with the difference that they have the utility information values as the evaluation values in the Shannon entropy framework. These measures can be seen as second order subjective utility. Depending on the type of the information source values and the utility information values, we present a few such information measures. The presented measures are not exhaustive, as many more can be conceived on the same lines.

Definition 7.1. The linear and non-linear information source transforms are expressed as

HX = −Σ_i IX(xi) log(SX(xi))    (33)

HX,nl = −Σ_i IX(xi) log(SX,nl(xi))    (34)

HX gives a measure of the average subjective utility of X for the agent by means of the utility information values gathered for X, weighted by the normalized information source values. In this sense, the transform can be seen as providing a secondary evaluation of the information source values, with the utility information values as the primary evaluation. Similarly, each of the IX(xi) log(SX(xi)) values gives the subjective utility corresponding to IX(xi), based on SX(xi). Just as the Shannon entropy gives the average uncertainty about a random variable, HX provides a higher order measure of the subjective utility.

The continuous linear and non-linear information source transforms are obtained as

HX = −∫_a^b f(x) log(SX(x))dx    (35)

HX,nl = −∫_a^b f(x) log(SX,nl(x))dx    (36)
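The arithmetic of Example 6.1 can be checked numerically, and the same values can be fed into the information source transform (33). Pooling A's and B's values into a single transform is done purely to exercise the formula, and the base of the logarithm (natural log here) is an assumption:

```python
# Numeric check of Example 6.1 together with Definition 7.1: subjective
# utilities via Eq. (11), then the linear information source transform
# H_X of Eq. (33) over the two salary values (natural log assumed).
import math

I = [0.4, 0.9]   # normalized salaries of A and B
g = [1.0, 0.6]   # their respective evaluations

S = [i * e for i, e in zip(I, g)]   # subjective utilities [0.4, 0.54]
assert S[1] > S[0]                  # B derives more utility than A

H = -sum(i * math.log(s) for i, s in zip(I, S))   # Eq. (33)
print(round(S[1], 2))   # -> 0.54
```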
The proposed information source transform HX gives an evaluation of the information source values for an attribute, based on the utility information values. Different utility information values given by different agents would lead to a different value of the transform for the same set of information source values. In this light, we present the complement information source and hetero information source transforms.

Definition 7.2. The complement information source transform evaluates the information source values IX on the basis of the utility information values given by the complement agent g'X. It is shown as

H'X = −Σ_i IX(xi) log(S'X(xi))    (37)

H'X,nl = −Σ_i IX(xi) log(S'X,nl(xi))    (38)

The variants of H'X and H'X,nl for the continuous domain are given as

H'X = −∫_a^b f(x) log(S'X(x))dx    (39)

H'X,nl = −∫_a^b f(x) log(S'X,nl(x))dx    (40)

This transform gives a secondary evaluation of the information source values from the perspective of the complement agent. A comparison of the information source and complement information source transforms would indicate the differences in the net worth of an attribute for two different sets of agents with completely different ideologies. For instance, how different indicators of an economy are viewed by a communist and a capitalist can be answered by this transform.

Note: In the sequel, we skip the expressions for the continuous transforms; these can be derived on lines similar to those shown for the information source transforms.

Definition 7.3. A hetero agent is an agent from a different domain. The hetero source transform gives a measure of the information source values in IX on the basis of the utility information values given by a hetero agent, and is expressed as

HX,hetero = −Σ_i IX(xi) log(TX(xi))    (41)

HX,nl,hetero = −Σ_i IX(xi) log(TX,nl(xi))    (42)

where TX(xi) = IX(xi)hX(xi), TX,nl(xi) = (IX(xi))^hX(xi), and hX(xi) is the evaluation given by the hetero agent from another domain.

This transform yields an evaluation of the information source values through the utility information values furnished by another agent from another domain. It is particularly useful in situations where it is desirable to have an external perspective in the evaluation of an attribute. For example, it finds application in assessing the evaluations of the commodity prices by an equity specialist (hetero agent). Similarly, it can be deployed to study the assessments of economic parameters by non-economists (biologists, physicists, etc.) that perform the role of hetero agents. In essence, the proposed hetero transform helps in the cross-fertilization of knowledge from two diverse disciplines, enriching the overall pool of knowledge and leading to better progeny (outcomes and results).

Definition 7.4. The relative source transform evaluates the information source values in IX based on the relative utility information values obtained from the agent with respect to the hetero agent:

HX,relative = Σ_i IX(xi) log(SX(xi)/TX(xi))    (43)

HX,nl,relative = Σ_i IX(xi) log(SX,nl(xi)/TX,nl(xi))    (44)

The evaluating factor in this measure is the ratio of the utility information values provided by the agent and the hetero agent. This ratio gives the evaluation of an information source value, and the resulting transform gives the second order subjective utility for the particular attribute. For example, commodity prices are evaluated by the relative information given by the commodities expert (evaluating agent) with respect to that provided by the equities specialist (hetero agent). This measure can also be applied to quantify the difference of opinions (or the agreement) between a pair of agents about a set of information source values.

Definition 7.5. The Kullback-Leibler transform is defined as

HX,KL(xj) = Σ_i IX(xj) log(SX(xj)/SX(xi)), ∀xi, xj ∈ X; j ≠ i    (45)

HX,nl,KL(xj) = Σ_i IX(xj) log(SX,nl(xj)/SX,nl(xi))    (46)

This measure is used to evaluate an information source value with respect to the other information source values from the same domain, based on the corresponding utility information values given by the same agent. For example, a politician in a country is evaluated by an individual with respect to the other politicians from the same country.

The transforms proposed so far are about the evaluation of the information source values, which could be the attribute values, probability values (useful in the probabilistic domain), or any set of values corresponding to a single entity. We now present the transforms to evaluate the evaluations of the agent. Such transforms have the (evaluating) agent values as the information source values. Different kinds of utility information values gathered by different agents lead to various such agent transforms. We now investigate these measures based on Shannon transforms.
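The source transforms of Definitions 7.1-7.4 can be sketched together under made-up values; the natural logarithm is an assumption, and h denotes the hetero agent's evaluations:

```python
# Illustrative sketch of the source transforms, using SX(xi) = I*g,
# S'X(xi) = I*(1-g), and TX(xi) = I*h. All values below are made up.
import math

I = [0.3, 0.6, 0.9]   # normalized information source values
g = [0.9, 0.7, 0.4]   # evaluating agent
h = [0.5, 0.5, 0.8]   # hetero agent

S = [i * e for i, e in zip(I, g)]
T = [i * e for i, e in zip(I, h)]

H          = -sum(i * math.log(s) for i, s in zip(I, S))            # Eq. (33)
H_comp     = -sum(i * math.log(i * (1 - e)) for i, e in zip(I, g))  # Eq. (37)
H_hetero   = -sum(i * math.log(t) for i, t in zip(I, T))            # Eq. (41)
H_relative =  sum(i * math.log(s / t) for i, s, t in zip(I, S, T))  # Eq. (43)

# By construction, the relative transform equals the hetero transform
# minus the information source transform.
assert abs(H_relative - (H_hetero - H)) < 1e-9
```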
Definition 7.6. The agent transform gives a measure of the evaluations by an agent with the utility information values provided by the same agent, and is as follows:

AX = −Σ_i gX(xi) log(SX(xi))    (47)

AX,nl = −Σ_i gX(xi) log(SX,nl(xi))    (48)

This measure promises a lot of potential in practical decision making situations that inevitably require the intervention of the agent, and hence often involve the associated discrepancies. It helps to evaluate the evaluations of the agent based on the utility information values provided by him. The information source values in this transform are the evaluations {gX(xi)}_i provided by the agent (to be evaluated). It is useful for comparing different agents who are evaluating the same information source values.

Definition 7.7. A complement agent can be defined as an agent with diametrically opposite views from those of the evaluating agent. The complement agent transform helps to evaluate the evaluations of an agent by making use of the complementary utility information values. It is given as

A'X = −Σ_i gX(xi) log(S'X(xi))    (49)

A'X,nl = −Σ_i gX(xi) log(S'X,nl(xi))    (50)

This measure is an offshoot of the agent transform. It evaluates the evaluations {gX(xi)}_i by means of the corresponding complementary utility information values {S'X(xi)}_i. Together with the agent transform, it holds good potential in real life decision making situations in multiple domains such as economic analysis, supplier selection (supply chain management), ranking models (machine learning), medical diagnosis, credit score analysis (finance), and auditing (accounting). For instance, when an alternative has equally compelling advantages and disadvantages, the combination of these two measures is useful in assessing the utility and disutility of such an alternative.

Definition 7.8. The hetero agent transform evaluates the agent with the utility information values given by a hetero agent. It is obtained as

AX,hetero = −Σ_i gX(xi) log(TX(xi))    (51)

AX,nl,hetero = −Σ_i gX(xi) log(TX,nl(xi))    (52)

The hetero agent transform evaluates the evaluations of the agent based on the utility information values provided by an external agent. This measure assumes significance in real world situations where knowledge from a diverse discipline is gathered to enrich the state of operation. For instance, this transform can be used to study how a physicist (hetero agent) perceives (through TX(xi)) the evaluations (gX(xi), about, say, the inflationary parameters) provided by an economist (evaluating agent). Given the utility information values by the physicist, corresponding to the evaluations by the economist, the economist gets new insights that may improve his evaluation scheme in the future. In general, this measure holds application wherever there is an intent to enrich the overall evaluation scheme (of the information source values) with diverse knowledge from other domains, which is otherwise governed primarily by the domain specific knowledge of the agent.

Definition 7.9. The relative agent transform provides a measure to evaluate an agent with respect to the ratio of the utility information values furnished by the agent and those by a hetero agent, and is expressed as

AX,relative = Σ_i gX(xi) log(SX(xi)/TX(xi))    (53)

AX,nl,relative = Σ_i gX(xi) log(SX,nl(xi)/TX,nl(xi))    (54)

By considering a ratio of the utility information values provided by both the evaluating and the hetero agents, this measure helps to achieve a balanced evaluation, without putting a greater emphasis on either of the agents. It can also be applied to study the extent of the differences (or the agreements) between the evaluations given by the evaluating agent and by the hetero agent.

Definition 7.10. The ratio of the difference between the information source transform and the complement information source transform, and the difference between the agent transform and the complement agent transform, is called the "measure of evaluations", given by

MX = (HX − H'X) / (AX − A'X)    (55)

MX,nl = (HX,nl − H'X,nl) / (AX,nl − A'X,nl)    (56)

This measure is meant to study the evaluation scheme of the information source values by an agent. The value of the measure indicates the attitudinal character of the agent. For example, a less demanding agent may give higher utilities to the same information source value than his more demanding counterpart.

VIII. Case-study

A. Application in Car Selection

We consider a car selection problem in which a buyer evaluates many alternatives (forming a choice set) to select the one that suits him the most. To this end, he assesses the desirable properties (attributes) of the available alternatives in the given choice set.

For the purpose of this application, we collected data regarding the key attributes of the recent car models that are available for buying in the Indian car market.
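Before turning to the case-study numbers, the agent transform, its complement, and the measure of evaluations from Section VII can be sketched with made-up values (natural logarithm assumed):

```python
# Sketch of the agent transform (Eq. 47), the complement agent transform
# (Eq. 49), and the measure of evaluations (Eq. 55),
# M_X = (H_X - H'_X) / (A_X - A'_X). All values are illustrative.
import math

I = [0.3, 0.6, 0.9]   # normalized information source values
g = [0.9, 0.7, 0.4]   # the agent's evaluations

S  = [i * e for i, e in zip(I, g)]         # utility information values
Sc = [i * (1 - e) for i, e in zip(I, g)]   # complement agent's values

H  = -sum(i * math.log(s) for i, s in zip(I, S))    # Eq. (33)
Hc = -sum(i * math.log(s) for i, s in zip(I, Sc))   # Eq. (37)
A  = -sum(e * math.log(s) for e, s in zip(g, S))    # Eq. (47)
Ac = -sum(e * math.log(s) for e, s in zip(g, Sc))   # Eq. (49)

M = (H - Hc) / (A - Ac)   # Eq. (55): attitudinal character of the agent
print(math.isfinite(M))   # -> True
```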
We consider the following attributes: X1: length (mm), X2: width (mm), X3: height (mm), X4: engine capacity (cc), X5: power (hp), X6: mileage (kmpl), and X7: cost (million rupees). The information source values corresponding to these attributes are populated in Table I.

We first normalize the information source values given in Table I by applying (5). The normalized information source values, so obtained, are shown in Table II. For the sake of confidentiality, the alternatives are designated as x1, . . . , x60 rather than by their original identities.

For each of the attributes Xi, a set of information source values can be conceived, denoted as Ii, i = 1, . . . , 7. We compute the mean µi and standard deviation σi for each Ii. The evaluation behavior of a buyer towards an attribute Xi is modelled through the evaluation function shown in (8), and is hence governed by αi.

We consider two buyers A and B, who may have different evaluation patterns for an attribute. To model the evaluation behavior of A towards the different attributes, we choose α1 = 2.0, α2 = 2.5, α3 = 3.5, α4 = 1.5, α5 = 2.8, α6 = 5.0, and α7 = 4.1. For B, we take these values as α1 = 4.1, α2 = 5.0, α3 = 2.8, α4 = 1.5, α5 = 3.5, α6 = 2.5, and α7 = 2.0.

We substitute αi, µi and σi into (8) to compute the evaluation values {gi(xi)}, ∀xi ∈ X, for A and B. The values, so computed, are given in Tables III-IV.

B. Results

We note from Tables III and IV that for the same information source value, the evaluation values vary for A and B. This variation is explained by the different αi values for A and B. We also notice that the evaluation value does not always directly relate to the subjective utility. For instance, consider the evaluation and subjective utility values of A corresponding to x2 and x4 for X1. It is observed that though the evaluation for x4 (g1(x4) = 0.9976) is more than that for x2 (g1(x2) = 0.9044), indicating a greater gain in utility from X1 in the case of x4 than x2, the overall subjective utility from x4 (S1(x4) = 0.5264) is less than that from x2 (S1(x2) = 0.5350). This is because of the greater information source value of x2 (4456 mm) than x4 (4324 mm), which plays an equally important role in the determination of the subjective utility. For the given case, this reflects that A finds the length 4324 mm of x4 quite suitable for his purpose, and therefore g1(x4) > g1(x2). However, since S1(x2) > S1(x4), A still prefers x2 over x4 with respect to X1, indicating his overall preference for the longer car.

Another interesting aspect here is that the agent's evaluation of an information source value is determined after taking into consideration the distribution of the information source values generated from the given choice set, i.e. X = {x1, . . . , x60}. If the choice set changes with the addition or omission of alternatives, then the set of information source values for each attribute changes, and therefore the evaluations and the subjective utility values for each of the information source values also get modified.

The case is similar in the real world: we too determine the utility of an information source value after considering the entire range of the information source values at our disposal, and our evaluations vary as the choice set, and hence the set of information source values, changes.

Given the subjective utility values corresponding to the different attributes of an alternative, the total utility for the alternative can be computed by aggregating the subjective utility values. The alternative with the greatest subjective utility gives the likely choice of the agent. However, we refrain from going deeper into this application here, as the focus of the study is on the modelling of the subjective utility values.
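The normalized values in Table II are consistent with min-max scaling, which is presumably what (5) denotes; a spot check on attribute X1 (length), where 3235 mm (x17) and 5299 mm (x7) are the column's minimum and maximum:

```python
# Min-max normalization, presumably the content of Eq. (5):
# I(x) = (x - min) / (max - min). The minimum maps to 0, the maximum to 1,
# and x1 = 4739 mm maps to ~0.7287, matching Table II up to rounding.

def min_max_normalize(values):
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

lengths = [4739, 4456, 5141, 3235, 5299]   # a subset of the X1 column
norm = min_max_normalize(lengths)
print(round(norm[0], 4))   # -> 0.7287
```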
TABLE I: Information Source Values TABLE II: Normalized Information Source Values

X1 X2 X3 X4 X5 X6 X7 X1 X2 X3 X4 X5 X6 X7
x1 4739 1940 1279 5198 600 6 43.70 x1 0.7286 0.5794 0.0924 0.6264 0.6039 0.0818 0.3574
x2 4456 1796 1416 1968 141 20.38 3.101 x2 0.5915 0.3655 0.2526 0.1947 0.1342 0.6699 0.0233
x3 5141 2223 1742 5950 600 10.90 39.41 x3 0.9234 1.0000 0.6339 0.7269 0.6039 0.2822 0.3221
x4 4324 1765 1421 1995 143 20.58 3.108 x4 0.5276 0.3194 0.2584 0.1983 0.1363 0.6781 0.0234
x5 5020 2140 1360 5935 552 10.9 41.00 x5 0.8648 0.8766 0.1871 0.7249 0.5548 0.2822 0.3352
x6 4421 1793 1409 1798 177 16.60 4.838 x6 0.5746 0.3610 0.2444 0.1720 0.1711 0.5153 0.0376
x7 5299 1984 1488 3993 500 6.38 32.15 x7 1.0000 0.6448 0.3368 0.4653 0.5016 0.0973 0.2623
x8 4633 1811 1429 1995 188 22.69 4.312 x8 0.6773 0.3878 0.2678 0.1983 0.1823 0.7644 0.0333
x9 4380 2025 1241 5935 380 6.0 38.00 x9 0.5547 0.7057 0.0479 0.7249 0.3788 0.0818 0.3105
x10 4933 1874 1455 1968 190 16.66 5.306 x10 0.8226 0.4814 0.2982 0.1947 0.1844 0.5177 0.0415
x11 4818 1947 1391 5998 626 6.45 45.12 x11 0.7669 0.5899 0.2233 0.7333 0.6305 0.1002 0.3691
x12 4899 2094 1464 2993 258 14.69 6.188 x12 0.8062 0.8083 0.3087 0.3317 0.2540 0.4372 0.0487
x13 4462 1998 1204 7993 987 4.0 121.80 x13 0.5944 0.6656 0.0046 1.0000 1.0000 0.0000 1.0000
x14 3640 1595 1520 936 57 25.44 0.5470 x14 0.1962 0.0668 0.3742 0.0568 0.0483 0.8768 0.0023
x15 3785 1635 1485 1198 67 20.63 0.3630 x15 0.2664 0.1263 0.3333 0.0918 0.0585 0.6801 0.0008
x16 4550 1965 1200 1998 250 10.00 3.709 x16 0.6371 0.6166 0.0000 0.1987 0.2458 0.2454 0.0283
x17 3235 1585 1856 511 9.78 28.45 0.2595 x17 0.0000 0.0520 0.7672 0.0000 0.0000 1.0000 0.0000
x18 4435 1680 1765 2499 80 13.72 0.8110 x18 0.5814 0.1931 0.6608 0.2657 0.0718 0.3975 0.0045
x19 3429 1560 1541 799 53 25.17 0.3573 x19 0.0939 0.0148 0.3988 0.0384 0.0442 0.8658 0.0008
x20 4571 1951 1203 4497 597 8.47 41.66 x20 0.6472 0.5958 0.0035 0.5327 0.6009 0.1828 0.3406
x21 3235 1585 1856 511 9.78 28.45 0.2595 x21 0.0000 0.0520 0.7672 0.0000 0.0000 1.0000 0.0000
x22 4435 1680 1765 2499 80 13.72 0.8110 x22 0.5814 0.1931 0.6608 0.2657 0.0718 0.3975 0.0045
x23 3429 1560 1541 799 53 25.17 0.3573 x23 0.0939 0.0148 0.3988 0.0384 0.0442 0.8658 0.0008
x24 4571 1951 1203 4497 597 8.47 41.66 x24 0.6472 0.5958 0.0035 0.5327 0.6009 0.1828 0.3406
x25 4570 1910 1322 3855 553 9.52 33.75 x25 0.6468 0.5349 0.1426 0.4469 0.5558 0.2257 0.2755
x26 4618 1942 1273 6262 731 6.49 48.33 x26 0.6700 0.5824 0.0853 0.7686 0.7380 0.1018 0.3955
x27 4907 1953 1379 6262 652 4.00 40.91 x27 0.8100 0.5988 0.2093 0.7686 0.6571 0.0000 0.3344
x28 4568 1952 1211 3902 660 8.77 42.45 x28 0.6458 0.5973 0.0128 0.4532 0.6653 0.1950 0.3471
x29 3657 1627 1485 1368 158 11.12 3.0231 x29 0.2044 0.1144 0.3333 0.1145 0.1516 0.2912 0.0227
x30 3989 1687 1505 1368 145 16.32 1.0414 x30 0.3653 0.2035 0.3567 0.1145 0.1383 0.5038 0.0064
x31 3989 1706 1542 1248 92 20.50 0.8651 x31 0.3653 0.2318 0.4000 0.0985 0.0841 0.6748 0.0049
x32 4596 1730 1494 1368 89 14.9 0.7971 x32 0.6594 0.2674 0.3438 0.1145 0.0810 0.4458 0.0044
x33 4860 1780 1885 2149 139 11.6 1.2551 x33 0.7873 0.3417 0.8011 0.2189 0.1322 0.3108 0.0081
x34 3992 1820 2055 2596 80 17 0.6382 x34 0.3667 0.4011 1.0000 0.2786 0.0718 0.5317 0.0031
x35 3995 1695 1525 1196 87 18.16 0.5382 x35 0.3682 0.2154 0.3801 0.0915 0.0790 0.5791 0.0022
x36 3999 1765 1708 1498 99 22.7 0.8551 x36 0.3701 0.3194 0.5941 0.1319 0.0913 0.7648 0.0049
x37 4892 1860 1837 2198 158 12.61 2.8011 x37 0.8028 0.4606 0.7450 0.2254 0.1516 0.3521 0.0209
x38 3886 1695 1525 1196 87 18.16 0.5353 x38 0.3154 0.2154 0.3801 0.0915 0.0790 0.5791 0.0022
x39 4784 1961 1391 4951 396 13.11 6.6685 x39 0.7504 0.6107 0.2233 0.5934 0.3952 0.3726 0.0527
x40 3886 1695 1525 1498 99 25.83 0.6021 x40 0.3154 0.2154 0.3801 0.1319 0.0913 0.8928 0.0028
x41 4933 1849 1464 1993 212 23.1 4.022 x41 0.8226 0.4442 0.3087 0.1980 0.2069 0.7811 0.0309
x42 3990 1680 1505 1498 99 25.8 0.788 x42 0.3657 0.1931 0.3567 0.1319 0.0913 0.8916 0.0043
x43 4456 1735 1686 1498 99 21.91 1.215 x43 0.5915 0.2748 0.5684 0.1319 0.0913 0.7325 0.0078
x44 4386 1683 1603 1498 99 24.2 1.0875 x44 0.5576 0.1976 0.4713 0.1319 0.0913 0.8261 0.0068
x45 4386 1683 1603 1498 99 24.2 1.146 x45 0.5576 0.1976 0.4713 0.1319 0.0913 0.8261 0.0072
x46 4440 1695 1495 1497 117 17.8 1.153 x46 0.5838 0.2154 0.3450 0.1317 0.1097 0.5644 0.0073
x47 4545 1820 1685 1997 154 13.7 2.258 x47 0.6346 0.4011 0.5672 0.1986 0.1475 0.3967 0.0164
x48 3955 1694 1544 1498 98 27.3 0.911 x48 0.3488 0.2139 0.4023 0.1319 0.0902 0.9529 0.0053
x49 4270 1780 1630 1396 89 21.38 1.024 x49 0.5014 0.3417 0.5029 0.1182 0.0810 0.7108 0.0062
x50 4570 1800 1465 1582 126 22.54 1.506 x50 0.6468 0.3714 0.3099 0.1431 0.1189 0.7582 0.0102
x51 3985 1734 1505 1396 89 23.14 0.862 x51 0.3633 0.2734 0.3567 0.1182 0.0810 0.7828 0.0049
x52 3495 1550 1500 814 55 21.1 0.430 x52 0.1259 0.0000 0.3508 0.0405 0.0462 0.6993 0.0014
x53 3765 1660 1520 1197 81 18.9 0.527 x53 0.2567 0.1634 0.3742 0.0916 0.0728 0.6094 0.0022
x54 3585 1595 1550 1086 68 19.81 0.4933 x54 0.1695 0.0668 0.4093 0.0768 0.0595 0.6466 0.0019
x55 3995 1760 1555 1396 89 21.19 0.976 x55 0.3682 0.3120 0.4152 0.1182 0.0810 0.7030 0.0059
x56 4690 1880 1690 2199 194 13.01 3.230 x56 0.7049 0.4903 0.5731 0.2256 0.1885 0.3685 0.0244
x57 3675 1715 1635 1198 82 18.15 0.536 x57 0.2131 0.2451 0.5087 0.0918 0.0739 0.5787 0.0022
x58 4475 1850 1660 1995 182 18.42 2.198 x58 0.6007 0.4457 0.5380 0.1983 0.1762 0.5897 0.0159
x59 4375 1700 1475 1591 121 17.4 1.079 x59 0.5523 0.2228 0.3216 0.1443 0.1138 0.5480 0.0067
x60 3995 1660 1520 1120 71 24.4 0.733 x60 0.3682 0.1634 0.3742 0.0814 0.0626 0.8343 0.0039
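In the data rows above, the second block of values for each alternative appears to be the min–max normalisation of the first block: the alternatives attaining a column's extremes (e.g. x17 with 511 and x13 with 7993 on the fourth attribute) map to 0.0000 and 1.0000. A minimal sketch of this reading follows; the normalisation rule is inferred from the visible extrema, not stated in the table itself.

```python
# Hedged reading of the table: the right-hand block looks like the
# min-max normalisation (x - min) / (max - min) of the raw attributes.
# Column extrema below are read off the visible rows of the table.

def min_max(value, lo, hi):
    """Map value into [0, 1] given the column minimum lo and maximum hi."""
    return (value - lo) / (hi - lo)

# Attribute 4: minimum 511 (x17) and maximum 7993 (x13).
print(round(min_max(5998, 511, 7993), 4))  # x11: tabulated as 0.7333

# Attribute 5: minimum 9.78 (x17) and maximum 987 (x13).
print(round(min_max(626, 9.78, 987), 4))   # x11: tabulated as 0.6305
```

On these samples the reconstruction agrees with the tabulated figures to about ±0.0001, consistent with rounding to four decimals.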
TABLE III: Evaluation gi(xi) and Subjective Utility Values Si(xi) for A

Alt.   Evaluation Values                                        Subjective Utility
       g1(xi)  g2(xi)  g3(xi)  g4(xi)  g5(xi)  g6(xi)  g7(xi)   S1(xi)  S2(xi)  S3(xi)  S4(xi)  S5(xi)  S6(xi)  S7(xi)
x1 0.4515 0.3873 0.0729 0.1474 0.0163 0.0000 0.0052 0.3290 0.2244 0.0067 0.0923 0.0098 0.0000 0.0019
x2 0.9044 0.9998 0.8714 0.8664 0.9335 0.9670 0.9790 0.5350 0.3654 0.2201 0.1687 0.1254 0.6478 0.0229
x3 0.0541 0.0000 0.1383 0.0637 0.0163 0.5491 0.0538 0.0499 0.0000 0.0877 0.0463 0.0098 0.155 0.0173
x4 0.9976 0.9885 0.8907 0.8766 0.9376 0.9563 0.9790 0.5264 0.3158 0.2302 0.1739 0.1278 0.6485 0.0229
x5 0.1178 0.0004 0.5362 0.0648 0.0610 0.5491 0.0256 0.1019 0.0003 0.1003 0.0470 0.0338 0.155 0.0086
x6 0.9413 1.0000 0.8412 0.7988 0.9849 1.0000 0.9918 0.5409 0.3611 0.2056 0.1374 0.1685 0.5153 0.0374
x7 0.0163 0.1639 0.9978 0.4477 0.1798 0.0001 0.4276 0.0163 0.1057 0.3361 0.2084 0.0902 0.0000 0.1122
x8 0.6328 0.9935 0.9175 0.8766 0.9924 0.6402 0.9888 0.4286 0.3853 0.2457 0.1739 0.1810 0.4894 0.0330
x9 0.9739 0.0537 0.0126 0.0648 0.7132 0.0000 0.0942 0.5403 0.0379 0.0006 0.0470 0.2702 0.0000 0.0293
x10 0.1915 0.8019 0.9739 0.8664 0.9934 1.0000 0.9939 0.1575 0.3861 0.2905 0.1687 0.1832 0.5178 0.0413
x11 0.3305 0.3452 0.7461 0.0602 0.0069 0.0001 0.0019 0.2535 0.2036 0.1667 0.0441 0.0044 0.0000 0.0007
x12 0.2276 0.0038 0.9845 0.8469 0.9963 0.9955 0.9968 0.1835 0.0031 0.3040 0.2809 0.2531 0.4353 0.0486
x13 0.8973 0.1160 0.0012 0.0041 0.0000 0.0000 0.0000 0.5334 0.0772 0.0000 0.0041 0.0000 0.0000 0.0000
x14 0.1660 0.1553 1.0000 0.4541 0.6310 0.0418 0.9388 0.0326 0.0104 0.3743 0.0258 0.0305 0.0367 0.0022
x15 0.3351 0.3495 0.997 0.5519 0.6779 0.9533 0.9346 0.0893 0.0441 0.3323 0.0507 0.0397 0.6484 0.0008
x16 0.7726 0.2474 0.0009 0.8777 0.9984 0.3022 0.9845 0.4923 0.1526 0.0000 0.1744 0.2454 0.0742 0.0279
x17 0.0093 0.1213 0.0002 0.3189 0.4012 0.0000 0.9322 0.0000 0.0063 0.0002 0.0000 0.0000 0.0000 0.0000
x18 0.9275 0.6390 0.0613 0.9962 0.7358 0.9740 0.9445 0.5392 0.1234 0.0405 0.2647 0.0529 0.3872 0.0043
x19 0.0438 0.0602 0.9995 0.4071 0.6118 0.0672 0.9345 0.0041 0.0009 0.3986 0.0157 0.0271 0.0582 0.0007
x20 0.7385 0.3221 0.0011 0.2924 0.0179 0.0396 0.0180 0.4780 0.1919 0.0000 0.1558 0.0107 0.0072 0.0061
x21 0.0093 0.1213 0.0002 0.3189 0.4012 0.0000 0.9322 0.0000 0.0063 0.0002 0.0000 0.0000 0.0000 0.0000
x22 0.9275 0.6390 0.0613 0.9962 0.7358 0.9740 0.9445 0.5392 0.1234 0.0405 0.2647 0.0529 0.3872 0.0043
x23 0.0438 0.0602 0.9995 0.4071 0.6118 0.0672 0.9345 0.0041 0.0009 0.3986 0.0157 0.0271 0.0582 0.0007
x24 0.7385 0.3221 0.0011 0.2924 0.0179 0.0396 0.0180 0.4780 0.1919 0.0000 0.1558 0.0107 0.0072 0.0061
x25 0.7402 0.5818 0.2684 0.4976 0.0595 0.1883 0.3155 0.4788 0.3112 0.0383 0.2224 0.0331 0.0425 0.0869
x26 0.6588 0.3751 0.0576 0.0437 0.0001 0.0001 0.0001 0.4414 0.2185 0.0049 0.0336 0.0001 0.0000 0.0000
x27 0.2187 0.3108 0.6702 0.0437 0.0026 0.0000 0.0268 0.1772 0.1861 0.1403 0.0336 0.0017 0.0000 0.0089
x28 0.7435 0.3164 0.0019 0.4803 0.0019 0.0674 0.0115 0.4802 0.1890 0.0000 0.2177 0.0013 0.0131 0.0040
x29 0.1819 0.3037 0.997 0.6200 0.9635 0.6075 0.9782 0.0372 0.0348 0.3323 0.0710 0.1461 0.1769 0.0222
x30 0.6712 0.6836 0.9999 0.6200 0.9415 1.0000 0.9491 0.2452 0.1392 0.3567 0.0710 0.1303 0.5039 0.0061
x31 0.6712 0.7951 0.9994 0.5716 0.7851 0.9609 0.9456 0.2452 0.1843 0.3997 0.0563 0.0661 0.6484 0.0047
x32 0.6966 0.9052 0.999 0.6200 0.7732 0.9972 0.9442 0.4593 0.2421 0.3435 0.0710 0.0627 0.4446 0.0042
x33 0.2742 0.9987 0.0000 0.9310 0.9293 0.7227 0.9531 0.2159 0.3413 0.0000 0.2038 0.1229 0.2246 0.0078
x34 0.6764 0.9839 0.0000 0.9780 0.7358 1.0000 0.9408 0.2481 0.3947 0.0000 0.2725 0.0529 0.5317 0.0029
x35 0.6816 0.7326 1.0000 0.5511 0.7651 0.9998 0.9386 0.2510 0.1578 0.3801 0.0505 0.0605 0.5790 0.0021
x36 0.6884 0.9885 0.3308 0.6738 0.8118 0.6377 0.9454 0.2548 0.3158 0.1965 0.0889 0.0741 0.4877 0.0046
x37 0.2356 0.8699 0.001 0.9464 0.9635 0.8917 0.9757 0.1891 0.4007 0.0008 0.2134 0.1461 0.3140 0.0204
x38 0.4934 0.7326 1.0000 0.5511 0.7651 0.9998 0.9386 0.1556 0.1578 0.3801 0.0505 0.0605 0.5790 0.0021
x39 0.3804 0.2677 0.7461 0.1898 0.6396 0.9396 0.9978 0.2855 0.1635 0.1667 0.1126 0.2528 0.3501 0.0526
x40 0.4934 0.7326 1.0000 0.6738 0.8118 0.0188 0.9400 0.1556 0.1578 0.3801 0.0889 0.0741 0.0168 0.0027
x41 0.1915 0.9134 0.9845 0.8759 0.9994 0.5324 0.9869 0.1575 0.4058 0.3040 0.1735 0.2068 0.4159 0.0306
x42 0.6729 0.6390 0.9999 0.6738 0.8118 0.0201 0.9440 0.2462 0.1234 0.3567 0.0889 0.0741 0.0179 0.0041
x43 0.9044 0.9228 0.4887 0.6738 0.8118 0.8071 0.9524 0.5350 0.2537 0.2778 0.0889 0.0741 0.5912 0.0075
x44 0.9699 0.6583 0.9384 0.6738 0.8118 0.2373 0.9500 0.5409 0.1301 0.4423 0.0889 0.0741 0.1960 0.0065
x45 0.9699 0.6583 0.9384 0.6738 0.8118 0.2373 0.9511 0.5409 0.1301 0.4423 0.0889 0.0741 0.1960 0.0069
x46 0.9222 0.7326 0.9991 0.6733 0.8723 1.0000 0.9512 0.5384 0.1578 0.3447 0.0887 0.0957 0.5644 0.0070
x47 0.7806 0.9839 0.496 0.8774 0.9574 0.9732 0.9690 0.4954 0.3947 0.2814 0.1743 0.1413 0.3861 0.0159
x48 0.6123 0.7266 0.9992 0.6738 0.8081 0.0002 0.9465 0.2136 0.1555 0.4020 0.0889 0.0730 0.0002 0.0051
x49 0.9963 0.9987 0.8436 0.6315 0.7732 0.8856 0.9488 0.4996 0.3413 0.4243 0.0747 0.0627 0.6295 0.0060
x50 0.7402 0.9991 0.9854 0.7089 0.8979 0.6768 0.9576 0.4788 0.3711 0.3054 0.1015 0.1068 0.5132 0.0098
x51 0.6643 0.9195 0.9999 0.6315 0.7732 0.5215 0.9455 0.2414 0.2514 0.3567 0.0747 0.0627 0.4082 0.0047
x52 0.0691 0.0439 0.9996 0.4121 0.6214 0.9162 0.9362 0.0087 0.0000 0.3507 0.0167 0.0288 0.6408 0.0013
x53 0.3073 0.5074 1.0000 0.5516 0.7400 0.9980 0.9384 0.0789 0.0829 0.3743 0.0506 0.0539 0.6082 0.0021
x54 0.1215 0.1553 0.9982 0.5089 0.6825 0.9866 0.9376 0.0206 0.0104 0.4086 0.0391 0.0407 0.6380 0.0018
x55 0.6816 0.9821 0.9969 0.6315 0.7732 0.9071 0.9478 0.2510 0.3065 0.4139 0.0747 0.0627 0.6378 0.0056
x56 0.5339 0.7690 0.4592 0.9467 0.9952 0.9316 0.9802 0.3764 0.3771 0.2632 0.2136 0.1876 0.3433 0.0240
x57 0.1998 0.8411 0.8199 0.5519 0.7443 0.9998 0.9386 0.0426 0.2062 0.4171 0.0507 0.0550 0.5786 0.0021
x58 0.8812 0.9098 0.6738 0.8766 0.9887 0.9995 0.9682 0.5294 0.4056 0.3625 0.1739 0.1742 0.5895 0.0154
x59 0.9770 0.7618 0.9928 0.7127 0.8841 1.0000 0.9498 0.5396 0.1698 0.3193 0.1029 0.1006 0.5481 0.0064
x60 0.6816 0.5074 1.0000 0.5218 0.6961 0.1923 0.9428 0.2510 0.0829 0.3743 0.0425 0.0436 0.1604 0.0037
TABLE IV: Evaluation gi(xi) and Subjective Utility Values Si(xi) for B

Alt.   Evaluation Values                                        Subjective Utility
       g1(xi)  g2(xi)  g3(xi)  g4(xi)  g5(xi)  g6(xi)  g7(xi)   S1(xi)  S2(xi)  S3(xi)  S4(xi)  S5(xi)  S6(xi)  S7(xi)
x1 0.5352 0.4067 0.1153 0.1474 0.0028 0.0334 0.1059 0.3900 0.2357 0.0107 0.0923 0.0017 0.0027 0.0378
x2 0.9910 1.0000 0.8150 0.8664 0.9654 0.8325 0.8583 0.5863 0.3655 0.2059 0.1687 0.1296 0.5577 0.0201
x3 0.0001 0.0000 0.1780 0.0637 0.0028 0.4611 0.1851 0.0001 0.0000 0.1128 0.0463 0.0017 0.1301 0.0596
x4 1.0000 0.9999 0.8368 0.8766 0.9681 0.8095 0.8585 0.5276 0.3194 0.2163 0.1739 0.1320 0.5489 0.0201
x5 0.0086 0.0000 0.5040 0.0648 0.0268 0.4611 0.1519 0.0075 0.0000 0.0943 0.0470 0.0149 0.1301 0.0509
x6 0.9968 1.0000 0.7822 0.7988 0.9947 0.9993 0.9083 0.5728 0.3611 0.1912 0.1374 0.1702 0.5150 0.0342
x7 0.0000 0.0380 0.9926 0.4477 0.1403 0.0445 0.3971 0.0000 0.0245 0.3343 0.2084 0.0704 0.0043 0.1042
x8 0.8176 1.0000 0.8688 0.8766 0.9977 0.5128 0.8941 0.5538 0.3878 0.2327 0.1739 0.1820 0.3920 0.0298
x9 0.9994 0.0002 0.0386 0.0648 0.7728 0.0334 0.2185 0.5544 0.0001 0.0018 0.0470 0.2928 0.0027 0.0679
x10 0.0607 0.9524 0.9468 0.8664 0.9981 0.9995 0.9202 0.0499 0.4585 0.2824 0.1687 0.1841 0.5176 0.0382
x11 0.2918 0.3226 0.6876 0.0602 0.0006 0.0468 0.0865 0.2238 0.1903 0.1536 0.0441 0.0004 0.0047 0.0319
x12 0.1071 0.0000 0.9647 0.8469 0.9991 0.9353 0.9407 0.0863 0.0000 0.2979 0.2809 0.2538 0.4089 0.0459
x13 0.9895 0.0097 0.0099 0.0041 0.0000 0.0057 0.0000 0.5883 0.0064 0.0000 0.0041 0.0000 0.0000 0.0000
x14 0.0361 0.0312 1.0000 0.4541 0.6843 0.1684 0.7712 0.0071 0.0021 0.3743 0.0258 0.0331 0.1476 0.0018
x15 0.3009 0.3312 0.9905 0.5519 0.7356 0.8035 0.7644 0.0802 0.0418 0.3302 0.0507 0.0431 0.5465 0.0006
x16 0.9397 0.1422 0.0084 0.8777 0.9997 0.3349 0.8768 0.5987 0.0877 0.0000 0.1744 0.2457 0.0822 0.0249
x17 0.0000 0.0117 0.0041 0.3189 0.4095 0.0221 0.7605 0.0000 0.0006 0.0031 0.0000 0.0000 0.0221 0.0000
x18 0.9950 0.8182 0.1029 0.9962 0.7958 0.8501 0.7807 0.5785 0.1580 0.0680 0.2647 0.0572 0.3380 0.0035
x19 0.0000 0.0004 0.9976 0.4071 0.6627 0.1934 0.7641 0.0000 0.0000 0.3979 0.0157 0.0293 0.1675 0.0006
x20 0.9171 0.2770 0.0095 0.2924 0.0033 0.1658 0.1395 0.5936 0.1651 0.0000 0.1558 0.0020 0.0303 0.0475
x21 0.0000 0.0117 0.0041 0.3189 0.4095 0.0221 0.7605 0.0000 0.0006 0.0031 0.0000 0.0000 0.0221 0.0000
x22 0.9950 0.8182 0.1029 0.9962 0.7958 0.8501 0.7807 0.5785 0.1580 0.0680 0.2647 0.0572 0.3380 0.0035
x23 0.0000 0.0004 0.9976 0.4071 0.6627 0.1934 0.7641 0.0000 0.0000 0.3979 0.0157 0.0293 0.1675 0.0006
x24 0.9171 0.2770 0.0095 0.2924 0.0033 0.1658 0.1395 0.5936 0.1651 0.0000 0.1558 0.0020 0.0303 0.0475
x25 0.9183 0.7458 0.2879 0.4976 0.0258 0.2747 0.3423 0.5940 0.3989 0.0411 0.2224 0.0143 0.0620 0.0943
x26 0.8464 0.3822 0.0989 0.0437 0.0000 0.0482 0.0530 0.5672 0.2226 0.0084 0.0336 0.0000 0.0049 0.0210
x27 0.0945 0.2552 0.6184 0.0437 0.0001 0.0057 0.1536 0.0765 0.1528 0.1295 0.0336 0.0001 0.0000 0.0514
x28 0.9206 0.2660 0.0131 0.4803 0.0001 0.1935 0.1256 0.5946 0.1589 0.0002 0.2177 0.0000 0.0378 0.0436
x29 0.0506 0.2417 0.9905 0.6200 0.9838 0.4936 0.8558 0.0103 0.0277 0.3302 0.0710 0.1492 0.1438 0.0195
x30 0.8592 0.8653 0.9991 0.6200 0.9706 0.9971 0.7890 0.3139 0.1762 0.3564 0.0710 0.1343 0.5024 0.0051
x31 0.8592 0.9488 0.9973 0.5716 0.8440 0.8189 0.7827 0.3139 0.2199 0.3989 0.0563 0.0710 0.5526 0.0039
x32 0.8832 0.9901 0.9959 0.6200 0.8326 0.9488 0.7802 0.5824 0.2648 0.3424 0.0710 0.0675 0.4230 0.0034
x33 0.1835 1.0000 0.0010 0.9310 0.9626 0.5656 0.7966 0.1445 0.3417 0.0008 0.2038 0.1273 0.1758 0.0065
x34 0.8643 0.9997 0.0000 0.9780 0.7958 1.0000 0.7745 0.3170 0.4011 0.0000 0.2725 0.0572 0.5317 0.0024
x35 0.8693 0.9077 1.0000 0.5511 0.8248 0.9869 0.7708 0.3201 0.1956 0.3801 0.0505 0.0652 0.5715 0.0018
x36 0.8757 0.9999 0.3382 0.6738 0.8686 0.5113 0.7823 0.3242 0.3194 0.2009 0.0889 0.0793 0.3911 0.0038
x37 0.1189 0.9808 0.0093 0.9464 0.9838 0.7128 0.8488 0.0955 0.4518 0.0069 0.2134 0.1492 0.2510 0.0177
x38 0.6123 0.9077 1.0000 0.5511 0.8248 0.9869 0.7707 0.1931 0.1956 0.3801 0.0505 0.0652 0.5715 0.0017
x39 0.3936 0.1760 0.6876 0.1898 0.6940 0.7791 0.9507 0.2954 0.1075 0.1536 0.1126 0.2743 0.2903 0.0501
x40 0.6123 0.9077 1.0000 0.6738 0.8686 0.1362 0.7732 0.1931 0.1956 0.3801 0.0889 0.0793 0.1216 0.0022
x41 0.0607 0.9918 0.9647 0.8759 0.9999 0.4521 0.8859 0.0499 0.4407 0.2979 0.1735 0.2069 0.3531 0.0274
x42 0.8609 0.8182 0.9991 0.6738 0.8686 0.1385 0.7799 0.3149 0.1580 0.3564 0.0889 0.0793 0.1235 0.0034
x43 0.9910 0.9936 0.4651 0.6738 0.8686 0.6294 0.7952 0.5863 0.2731 0.2644 0.0889 0.0793 0.4611 0.0063
x44 0.9992 0.8396 0.8955 0.6738 0.8686 0.3014 0.7907 0.5572 0.1659 0.4221 0.0889 0.0793 0.2490 0.0054
x45 0.9992 0.8396 0.8955 0.6738 0.8686 0.3014 0.7927 0.5572 0.1659 0.4221 0.0889 0.0793 0.2490 0.0058
x46 0.9942 0.9077 0.9963 0.6733 0.9203 0.9946 0.7930 0.5805 0.1956 0.3438 0.0887 0.1010 0.5614 0.0058
x47 0.9444 0.9997 0.4711 0.8774 0.9803 0.8480 0.8311 0.5994 0.4011 0.2672 0.1743 0.1447 0.3364 0.0137
x48 0.7928 0.9031 0.9965 0.6738 0.8652 0.0536 0.7843 0.2765 0.1932 0.4009 0.0889 0.0781 0.0511 0.0042
x49 1.0000 1.0000 0.7848 0.6315 0.8326 0.7057 0.7884 0.5014 0.3417 0.3947 0.0747 0.0675 0.5016 0.0050
x50 0.9183 1.0000 0.9664 0.7089 0.9402 0.5353 0.8054 0.5940 0.3715 0.2995 0.1015 0.1118 0.4059 0.0083
x51 0.8522 0.9930 0.9991 0.6315 0.8326 0.4462 0.7826 0.3097 0.2715 0.3564 0.0747 0.0675 0.3493 0.0039
x52 0.0006 0.0001 0.9980 0.4121 0.6736 0.7439 0.7668 0.0001 0.0000 0.3502 0.0167 0.0312 0.5203 0.0011
x53 0.2456 0.6311 1.0000 0.5516 0.8001 0.9568 0.7704 0.0631 0.1031 0.3743 0.0506 0.0583 0.5831 0.0017
x54 0.0099 0.0312 0.9935 0.5089 0.7406 0.8904 0.7692 0.0017 0.0021 0.4067 0.0391 0.0441 0.5758 0.0015
x55 0.8693 0.9997 0.9901 0.6315 0.8326 0.7318 0.7867 0.3201 0.3119 0.4111 0.0747 0.0675 0.5145 0.0046
x56 0.6806 0.9333 0.4412 0.9467 0.9987 0.7664 0.8623 0.4798 0.4577 0.2529 0.2136 0.1883 0.2824 0.0211
x57 0.0702 0.9705 0.7600 0.5519 0.8043 0.9872 0.7707 0.0150 0.2379 0.3867 0.0507 0.0594 0.5713 0.0017
x58 0.9857 0.9911 0.6216 0.8766 0.9963 0.9786 0.8291 0.5922 0.4418 0.3344 0.1739 0.1756 0.5772 0.0132
x59 0.9996 0.9287 0.9808 0.7127 0.9296 0.9990 0.7904 0.5521 0.2070 0.3154 0.1029 0.1058 0.5475 0.0053
x60 0.8693 0.6311 1.0000 0.5218 0.7550 0.2769 0.7779 0.3201 0.1031 0.3743 0.0425 0.0473 0.2310 0.0030
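Comparing Tables III and IV with the normalised source values tabulated earlier suggests that each subjective utility entry is the product of the normalised information source value and the corresponding evaluation, S_i(x) ≈ I_i(x) · g_i(x); for x11 under agent A, 0.7669 × 0.3305 ≈ 0.2535. This product relation is inferred from the printed numbers, not quoted from the text; a small check:

```python
# Inferred relation: subjective utility = normalised source value * evaluation.
# Each triple is (normalised I from the source-value table, g, tabulated S).
samples = [
    (0.7669, 0.3305, 0.2535),  # x11, attribute 1, agent A (Table III)
    (0.3317, 0.8469, 0.2809),  # x12, attribute 4, agent A (Table III)
    (0.7669, 0.2918, 0.2238),  # x11, attribute 1, agent B (Table IV)
]
for info, evaluation, s_tabulated in samples:
    s_hat = info * evaluation
    # Agreement to the printed precision supports the product reading.
    assert abs(s_hat - s_tabulated) < 1e-3, (info, evaluation, s_tabulated)
print("product relation holds on the sampled entries")
```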
IX. Conclusions

Pointing to the commonality and complementarity of the expected utility and the information set theories, we have developed a framework to give utility information values and the measures for the evaluation of the same, specifically against the background of the entropy concept. The notion of agent is devised through an information-theoretic gain function that considers both the agent's possibilistic utility and the distribution of the information source values. Combining the information gain with the information source values, we introduce utility information values to give the possibilistic uncertainty in the form of information (entropy). The utility information values and their evaluations through the proposed utility information measures together garner the essence of the utility information theory.

The proposed framework implements this real-world aspect by considering the entire distribution of the information source values in the form of its statistical moments for determining the agent's evaluation of an information source value. While the agent's individualism in the evaluation of I_X(x_i) is represented through α_X in g_X(·), the distribution of the information source values is assimilated through µ_X and σ_X.

We have shown linear and non-linear forms of the proposed utility sets. On similar lines, it is possible to devise many forms of the utility sets through the Hanman-Anirban entropy framework. Besides, higher forms of the utility transforms can be derived following the proposed approach. For instance, a transform taking the utility information values over a period of time can help an agent to learn from his past mistakes and improve his evaluation scheme. The proposed theory has a very interesting complementarity with risk theory, and the combination of the two can help to investigate the risk-averse or risk-loving attitude of an agent.

The study has shown potential for applications in several areas such as economic analysis, insurance, finance, consumer behavior, and decision support systems, which have been kept for a future study. We have focused upon the attribute values as the information source values in this introductory paper. However, the information source values can in fact assume multiple forms, say probabilistic, possibilistic, intuitionistic, or higher-order fuzzy memberships, to name a few. Several forms of the utility information sets and utility transforms can be developed depending upon the type of information source values and agent.

Appendix A
Generalized Hanman-Anirban Entropy Function

The fine details and the properties of the generalized Hanman-Anirban entropy function are reproduced here from (Aggarwal and Hanmandlu, 2016). The non-normalized Hanman-Anirban entropy function is expressed as

    E_HA = Σ_i p(x_i) g(p(x_i))                                        (A.1)

where the information gain g(p(x_i)) corresponding to the occurrence of the ith event is defined as

    g(p(x_i)) = e^{−(a p(x_i)³ + b p(x_i)² + c p(x_i) + d)}            (A.2)

for all probabilities p(x_i) ∈ [0, 1].

The entropy E_HA is an expectation of the information gain function. Some of its important properties are:
1) g(p(x_i)) = e^{−(a p(x_i)³ + b p(x_i)² + c p(x_i) + d)} is a continuous function for all p(x_i) ∈ [0, 1].
2) g(p(x_i)) is bounded.
3) With the increase in p(x_i), g(p(x_i)) decreases.
4) E_HA = Σ_i p(x_i) I_X(p(x_i)) is a continuous function for all p(x_i) ∈ [0, 1] and real-valued parameters a, b, c, and d.
5) If p(x_i) = 1/n, ∀x_i, then E_HA is an increasing function of n.

The generalized form of the Hanman-Anirban entropy function is given as

    E_HA,G = Σ_i p(x_i) e^{−(a p(x_i)³ + b p(x_i)² + c p(x_i) + d)^α}   (A.3)

In the generalized form, the presence of the free parameter α imparts more flexibility to the Hanman-Anirban entropy function in the representation of various information gain functions. The additional parameter α makes the Hanman-Anirban entropy function adaptive. The parameter-based adaptive gain function makes the proposed formalism amenable to yield different forms through a choice of parameter values.

The normalized Hanman-Anirban entropy function E_HA,N is defined as

    E_HA,N = (E_HA − e^{−(a+b+c+d)}) / λ                                (A.4)

where E_HA = Σ_i p(x_i) e^{−(a p(x_i)³ + b p(x_i)² + c p(x_i) + d)} and the constant λ = e^{−(a/n³ + b/n² + c/n + d)} − e^{−(a+b+c+d)}; n is the number of events in the probabilistic experiment (or the number of states in a system).

Further, the generalized normalized Hanman-Anirban entropy function is

    E_HA,GN = (E_HA,G − e^{−(a+b+c+d)^α}) / λ_G                          (A.5)

where E_HA,G = Σ_i p(x_i) e^{−(a p(x_i)³ + b p(x_i)² + c p(x_i) + d)^α}, and the constant λ_G = e^{−(a/n³ + b/n² + c/n + d)^α} − e^{−(a+b+c+d)^α}.

The parameters a, b, c, and d in the Hanman-Anirban entropy function can also be varying when they take a function as their value. In this form, the Hanman-Anirban entropy function becomes approximate.

Appendix B
Proofs for the Properties of the Generalized Hanman-Anirban Entropy Function

For notational simplicity, we replace p(x_i) by p_i in the following proofs.
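Before the formal proofs, the definitions in (A.2) and (A.3) can be exercised numerically. The sketch below fixes the illustrative parameter choice a = b = d = 0, c = 1, α = 1 (an assumption made only for the check, under which the gain reduces to e^{−p}) and confirms Properties 3 and 5 on sample distributions.

```python
import math

def gain(p, a=0.0, b=0.0, c=1.0, d=0.0, alpha=1.0):
    """Information gain g(p) = exp(-(a p^3 + b p^2 + c p + d)^alpha)."""
    return math.exp(-((a * p**3 + b * p**2 + c * p + d) ** alpha))

def hanman_anirban(ps, **params):
    """E_HA = sum_i p_i g(p_i): the expectation of the gain function."""
    return sum(p * gain(p, **params) for p in ps)

# Property 3: the gain decreases as the probability grows (a, b, c >= 0).
assert gain(0.2) > gain(0.5) > gain(0.8)

# Property 5: on the uniform distribution p_i = 1/n the entropy equals
# e^{-1/n} for this parameter choice, which increases with n.
assert hanman_anirban([1/2] * 2) < hanman_anirban([1/5] * 5) < hanman_anirban([1/50] * 50)
```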
Proof of Property 1: g(p_i) is a continuous curve and assumes a set of finite values even for discrete values of p_i.

Proof of Property 2: As p_i → 0, we obtain

    g(p_i) → e^{−d^α} = k_1                                            (B.1)

and as p_i → 1,

    g(p_i) → e^{−(a+b+c+d)^α} = k_2                                    (B.2)

Since a, b, c, d and α are real, k_1 and k_2 are finite. Hence g(p_i) is bounded.

Proof of Property 3: We have k_1 = e^{−d^α} and k_2 = e^{−(a+b+c+d)^α}, as defined in Property 2. The ratio is

    k_1/k_2 = e^{−d^α} / e^{−(a+b+c+d)^α} = e^{(a+b+c+d)^α − d^α} > 1 for (a + b + c) > 0   (B.3)

To prove that g(p) is a decreasing function, we need to show that the derivative of g(p) with respect to p is always negative or zero. The derivative is

    ∂g(p)/∂p = ∂/∂p e^{−(a p³ + b p² + c p + d)^α}
             = −e^{−(a p³ + b p² + c p + d)^α} α (a p³ + b p² + c p + d)^{α−1} (3a p² + 2b p + c)   (B.4)

Since e^{−(a p³ + b p² + c p + d)^α} > 0 for any 0 < p < 1, it follows that when a ≥ 0, b ≥ 0 and c ≥ 0, g(p) always decreases for 0 ≤ p ≤ 1.

Proof of Property 4: Since g(p_i) is a continuous function (Property 1), multiplying it by a finite value p_i retains its continuous nature, and the sum of continuous functions is also continuous.

Proof of Property 5: Consider the case where p_1 = p_2 = … = p_n = 1/n and n ≥ 1. Then

    E(p) = Σ_i p_i e^{−(a p_i³ + b p_i² + c p_i + d)^α} = Σ_i (1/n) e^{−(a/n³ + b/n² + c/n + d)^α} = h(n)   (B.5)

where h(n) = e^{−(a/n³ + b/n² + c/n + d)^α}.

To prove that H(P) is an increasing function, it is sufficient to prove that h(n) is an increasing function. Towards this end, we take the partial derivative of h(n):

    ∂h(n)/∂n = e^{−(a/n³ + b/n² + c/n + d)^α} α (a/n³ + b/n² + c/n + d)^{α−1} (3a/n⁴ + 2b/n³ + c/n²)   (B.6)

For n ≥ 1 and a ≥ 0, b ≥ 0, c ≥ 0, ∂h(n)/∂n ≥ 0. Therefore, H(p) is an increasing function for a, b, c ≥ 0.

Appendix C
Derivation of Important Entropy Functions from the Generalized Hanman-Anirban Entropy Function

It is possible to reduce the generalized Hanman-Anirban entropy function to most of the prominent entropy functions, such as the Pal and Pal, Shannon, and Luca and Termini entropies. We derive these entropies using the generalized Hanman-Anirban entropy. We append PP (Pal and Pal), Sh (Shannon) or HA (Hanman-Anirban) to the subscript of E_X to distinguish among the entropies.

(1) Pal and Pal Entropy: Taking a_X = b_X = 0, c_X = 1, d_X = −1 and α_X = 1 in (4), we get the Pal and Pal entropy function, which is given as

    E_X,PP = Σ_i p(x_i) e^{−(p(x_i) − 1)}                              (C.1)

(2) Shannon Entropy: The exponential gain e^{−(p(x_i)−1)} in (C.1) can be approximated to −log(p(x_i)) by taking the first two terms in their respective series expansions. This leads to the Shannon entropy function

    E_X,Sh = −Σ_i p(x_i) log(p(x_i))                                   (C.2)

Alternatively, by taking a_X = b_X = d_X = 0, c_X = α_X = 1 and p(x_i) = −log(p(x_i)) in (4), we obtain

    E_X,HA = Σ_i (−log(p(x_i))) e^{−(−log(p(x_i)))} = −Σ_i p(x_i) log(p(x_i)) = E_X,Sh   (C.3)

Similarly, the non-linear Shannon entropy is given as

    E_X,Sh,nl = −Σ_i (p(x_i))^q log(p(x_i))                            (C.4)

Taking a_X = b_X = d_X = 0, α_X = 1 and p(x_i) = −log(p(x_i)) in (4), we obtain

    E_X,HA,nl = Σ_i (−log(p(x_i))) e^{c_X log(p(x_i))} = −Σ_i (p(x_i))^{c_X} log(p(x_i))   (C.5)

(3) Luca and Termini Fuzzy Entropy: The Luca and Termini fuzzy entropy relation for fuzzy sets is given as

    E_X,LT = −Σ_i (µ_X(x_i) log(µ_X(x_i)) + (1 − µ_X(x_i)) log(1 − µ_X(x_i)))   (C.6)

We obtain the Luca and Termini fuzzy entropy from the generalized Hanman-Anirban entropy function in the following steps. In light of (A.3), the generalized Hanman-Anirban entropy function for the complement probability (1 − p(x_i)) can be written as

    E′_HA,G = Σ_i (1 − p(x_i)) e^{−(a (1−p(x_i))³ + b (1−p(x_i))² + c (1−p(x_i)) + d)^α}   (C.7)

We add the generalized Hanman-Anirban entropy functions in (A.3) and (C.7) to obtain

    E_HA,G + E′_HA,G = Σ_i [ p(x_i) e^{−(a p(x_i)³ + b p(x_i)² + c p(x_i) + d)^α} + (1 − p(x_i)) e^{−(a (1−p(x_i))³ + b (1−p(x_i))² + c (1−p(x_i)) + d)^α} ]   (C.8)
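The Pal and Pal and Shannon reductions derived above can also be verified numerically. The distribution below is an arbitrary illustration; the exact identity checked is the one behind the alternative Shannon derivation, (−log p) · e^{−(−log p)} = −p log p.

```python
import math

ps = [0.5, 0.3, 0.2]  # illustrative probability distribution

# Pal-Pal reduction: a = b = 0, c = 1, d = -1, alpha = 1 gives the
# gain e^{-(p - 1)}.
pal_pal = sum(p * math.exp(-(p - 1.0)) for p in ps)

# Shannon via substitution: feeding -log p through the gain e^{-q}
# (a = b = d = 0, c = alpha = 1) recovers -p log p exactly.
via_substitution = sum((-math.log(p)) * math.exp(-(-math.log(p))) for p in ps)
shannon = -sum(p * math.log(p) for p in ps)

assert abs(via_substitution - shannon) < 1e-9

# e^{-(p-1)} matches -log p only to first order, so the Pal-Pal value
# is close in shape but not numerically equal to the Shannon value.
print(round(pal_pal, 4), round(shannon, 4))
```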
Clearly, (C.8) is approximated to the Luca and Termini entropy in (C.6) if p(x_i) is replaced with −log(µ_X(x_i)), and the free parameters are taken as α → 1 and a = b = d = 0, c = 1.

References

Aggarwal, M., Hanmandlu, M., 2016. Representing uncertainty with information sets. IEEE Trans. Fuzzy Syst. 24, 1–15.
Andersson, J., Jörnsten, K., Nonás, S.L., Sandal, L., Uboe, J., 2013. A maximum entropy approach to the newsvendor problem with partial information. European Journal of Operational Research 228, 190–200.
Aumann, R.J., 1962. Utility theory without the completeness axiom. Econometrica 30, 445–462.
Becker, G.M., DeGroot, M.H., Marschak, J., 1964. Measuring utility by a single-response sequential method. Behavioral Science 9, 226–232.
Bertsimas, D., O'Hair, A., 2013. Learning preferences under noise and loss aversion: An optimization approach. Operations Research 61, 1190–1199.
Busemeyer, J.R., Townsend, J.T., 1992. Fundamental derivations from decision field theory. Mathematical Social Sciences 23, 255–282.
Busemeyer, J.R., Townsend, J.T., 1993. Decision field theory: A dynamic cognition approach to decision making. Psychological Review 100, 432–459.
Candeal, J.C., Miguel, J.R.D., Induráin, E., Mehta, G.B., 2001. Utility and entropy. Economic Theory 17, 233–238.
Carlsson, C., Fuller, R., 2001. On possibilistic mean value and variance of fuzzy numbers. Fuzzy Sets and Systems 122, 315–326.
Carlsson, C., Fuller, R., 2002. Fuzzy reasoning in decision making and optimization, in: Studies in Fuzziness and Soft Computing Series.
Carlsson, C., Fuller, R., 2011. Possibility for Decision. Springer.
Chamodrakas, I., Martakos, D., 2012. A utility-based fuzzy TOPSIS method for energy efficient network selection in heterogeneous wireless networks. Applied Soft Computing 12, 1929–1938.
Clemen, R.T., Reilly, T., 2001. Making Hard Decisions with Decision Tools Suite, 2nd ed. Duxbury, Pacific Grove, CA.
Diederich, A., 1997. Dynamic stochastic models for decision making under time constraints. Journal of Mathematical Psychology 41, 260–274.
Dubois, D., Godo, L., Prade, H., Zapico, A., 1998. Aggregation of interacting criteria by means of the discrete Choquet integral, in: Proceedings of Principles of Knowledge Representation and Reasoning (KR '98), Trento, Italy, pp. 594–607.
Dubois, D., Prade, H., 1980a. Fuzzy Sets and Systems: Theory and Applications. Academic Press, New York.
Dubois, D., Prade, H., 1980b. Possibility Theory: An Approach to the Computerized Processing of Uncertainty. Plenum Press, New York.
Dubois, D., Prade, H., 1987. The mean value of a fuzzy number. Fuzzy Sets and Systems 24, 279–300.
Dubois, D., Prade, H., Sabbadin, R., 2001. Decision theoretic foundation of qualitative possibility theory. European Journal of Operational Research 128, 478–495.
Echenique, F., Saito, K., 2015. Savage in the market. Econometrica 83, 1467–1495.
Friedman, C., Huang, J., Sandow, S., 2007. A utility-based approach to some information measures. Entropy 9, 1–26.
Fuller, R., Majlender, P., 2003. On weighted possibilistic mean and variance of fuzzy numbers. Fuzzy Sets and Systems 136, 363–374.
Georgescu, I., 2009. Possibilistic risk aversion. Fuzzy Sets and Systems 160, 2608–2619.
Georgescu, I., Kinnunen, J., 2011a. Multidimensional possibilistic risk aversion. Mathematical and Computer Modelling 54, 689–696.
Georgescu, I., Kinnunen, J., 2011b. A possibilistic approach to risk aversion. Soft Computing 15, 795–801.
Georgescu, I., Kinnunen, J., 2011c. Possibilistic risk aversion with many parameters. Procedia Computer Science 4, 1735–1744.
Glasserman, P., Xu, X., 2013. Robust portfolio control with stochastic factor dynamics. Operations Research 61, 874–893.
Grable, J., Lytton, R.H., 1999. Financial risk tolerance revisited: The development of a risk assessment instrument. Financial Services Review 8, 163–181.
Green, R.C., Srivastava, S., 1986. Expected utility maximization and demand behavior. Journal of Economic Theory 38, 313–323.
Hanmandlu, M., Das, A., 2011. Content-based image retrieval by information theoretic measure. Defence Science Journal 61, 415–430.
Izhakian, Y., 2016. Expected utility with uncertain probabilities theory. Journal of Mathematical Economics 69, 91–103.
Keeney, R.L., Raiffa, H., 1976. Decisions with Multiple Objectives: Preferences and Value Trade-offs. Wiley, New York.
Kosko, B., 1986. Fuzzy entropy and conditioning. Information Sciences 40, 165–174.
Kubler, F., Selden, L., Wei, X., 2014. Asset demand based tests of expected utility maximization. American Economic Review 104, 3459–3480.
Liu, B., 2007. Uncertainty Theory. Springer.
Liu, X.C., 1992. Entropy, distance measure and similarity measure of fuzzy sets and their relations. Fuzzy Sets and Systems 53, 305–318.
Luca, A.D., Termini, S., 1972. A definition of a non-probabilistic entropy in the setting of fuzzy sets theory. Information and Control 20, 301–312.
Mármol, A.M., Puerto, J., Fernández, F.R., 1998. The use of partial information on weights in multicriteria decision problems. Journal of Multicriteria Decision Analysis 7, 322–329.
Martinez-Cruz, C., Porcel, C., Bernabe-Moreno, J., Herrera-Viedma, E., 2015. A model to represent users trust in recommender systems using ontologies and fuzzy linguistic modeling. Information Sciences 311, 102–118.
Miyamoto, J.M., 1988. Generic utility theory: Measurement foundations and applications in multiattribute utility theory. Journal of Mathematical Psychology 32, 357–404.
Nakagawa, Y., James, R., Rego, C., Edirisinghe, C., 2013. Entropy-based optimization of nonlinear separable discrete decision models. Management Science 60, 695–707.
von Neumann, J., Morgenstern, O., 1944. The Theory of Games and Economic Behavior. Princeton University Press, Princeton, NJ.
Pal, N.R., Bezdek, J.C., 1994. Measuring fuzzy uncertainty. IEEE Transactions on Fuzzy Systems 2, 107–118.
Pal, N.R., Pal, S.K., 1992. Some properties of the exponential entropy. Information Sciences 66, 113–117.
Porcel, C., Martinez-Cruz, C., Bernabe-Moreno, J., Tejeda-Lorente, A., Herrera-Viedma, E., 2015. Integrating ontologies and fuzzy logic to represent user-trustworthiness in recommender systems. Procedia Computer Science 55, 603–612.
Roe, R., Busemeyer, J.R., Townsend, J.T., 2001. Multialternative decision field theory: A dynamic connectionist model of decision-making. Psychological Review 108, 370–392.
Russell, S., Norvig, P., 2003. Artificial Intelligence: A Modern Approach, 2nd ed. Prentice-Hall.
Sander, W., 1989. On measures of fuzziness. Fuzzy Sets and Systems 29, 49–55.
Savage, L.J., 1954. The Foundations of Statistics. Wiley, New York.
Shannon, C., 1948. A mathematical theory of communication. Bell System Technical Journal 27, 379–423.
Townsend, J.T., Busemeyer, J.R., 2010. Dynamic representation of decision-making, in: Mind as Motion. MIT Press, Cambridge, MA, pp. 101–120.
Weber, M., 1987. Decision making with incomplete information. European Journal of Operational Research 28, 44–57.
Xie, W.X., Bedrosian, S.D., 1983. The information in fuzzy set and the relation between Shannon and fuzzy information, in: Proceedings of the 17th Annual Conference on Information Sciences and Systems.
Xie, W.X., Bedrosian, S.D., 1984. An information measure for fuzzy sets. IEEE Transactions on Systems, Man, and Cybernetics SMC-14, 151–156.
Yang, J., Qiu, W., 2005a. A measure of risk and a decision-making model based on expected utility and entropy. European Journal of Operational Research 164, 792–799.
Yang, J., Qiu, W., 2005b. A measure of risk and a decision-making model based on expected utility and entropy. European Journal of Operational Research 164, 792–799.
Zadeh, L., 1965. Fuzzy sets. Information and Control 8, 338–353.
Zadeh, L., 1999. Fuzzy sets as a basis for a theory of possibility. Fuzzy Sets and Systems 100, 9–34.
Zhang, J., Li, S., 2005. Portfolio selection with quadratic utility function under fuzzy environment, in: Proceedings of the Fourth International Conference on Machine Learning and Cybernetics, Guangzhou. Springer, Berlin, Heidelberg, pp. 18–21.
Zhang, W.G., Wang, Y.L., Chen, Z.P., Nie, Z.K., 2007. Possibilistic mean-variance models and efficient frontiers for portfolio selection problem. Information Sciences 177, 2787–2801.
Zhang, W.G., Zhang, X.L., Xiao, W.L., 2009. Portfolio selection under possibilistic mean-variance utility and a SMO algorithm. European Journal of Operational Research 197, 693–700.
Highlights

• A new possibilistic utility framework
• Concept of evaluating agent
• Computation of agent's perceived utilities
• Information measures for evaluation
• Illustrative examples