Keywords: entropy measure, distance, similarity measure, interval-valued intuitionistic fuzzy sets

Received: July 21, 2016

This paper presents new axiomatic definitions of entropy measures, based on the concepts of probability and distance, for interval-valued intuitionistic fuzzy sets (IvIFSs). The definitions take the degree of hesitancy into account and are consistent with the definition of entropy given by De Luca and Termini. We then propose several entropy measures and derive relations between distance, entropy and similarity measures for IvIFSs. Further, we check the performance of the proposed entropy and similarity measures against intuition and compare them with existing entropy and similarity measures using numerical examples. Lastly, the proposed similarity measures are applied to problems in pattern recognition and medical diagnosis.
Povzetek (Slovenian abstract): This paper presents new axiomatic definitions of entropy measures for interval-valued intuitionistic fuzzy sets.
1 Introduction
Fuzzy set theory (Zadeh, 1965) is a tool that can handle uncertainty and imprecision effortlessly. Interval-valued fuzzy sets (Zadeh, 1975), intuitionistic fuzzy sets (Atanassov, 1986), interval-valued intuitionistic fuzzy sets (Atanassov & Gargov, 1989), vague sets (Gau & Buehrer, 1993) and R-fuzzy sets (Yang & Hinde, 2010) are various generalizations of fuzzy sets (FSs). Among these generalizations, IvIFSs and intuitionistic fuzzy sets (IFSs) are two conventional extensions of FSs. IvIFSs are more practical and flexible than IFSs, as they are characterized by membership and non-membership degrees that are ranges instead of single real numbers. This makes IvIFSs more useful in dealing with real-world complexities that arise from insufficient information, lack of data, imprecise knowledge and human judgments in which a range is provided instead of a single number.

Distance, entropy and similarity measures are central topics investigated by various researchers under the intuitionistic and interval-valued fuzzy environments (IFE and IvFE). These measures quantify the similarity or dissimilarity between two FSs. To date, many entropy, distance and similarity measures have been presented. Some of these research findings are as follows. Xu (2007 a, b) introduced the concept of similarity between IvIFSs along with some distance measures. Zang et al. (2009) defined entropy axiomatically for interval-valued fuzzy sets (IvFSs) and discussed the relation between entropy and similarity measures. Xu and Yager (2009) studied preference relations and defined similarity measures under the IvFE and the interval-valued intuitionistic fuzzy environment (IvIFE). Wei et al. (2011) derived a generalized measure of entropy for IvIFSs. Cosine similarity measures for IvIFSs were defined by both Ye (2012) and Singh (2012). Sun & Liu (2012), Hu & Li (2013) and Zhang et al. (2014) proposed entropy and similarity measures, along with their relationship, for IvIFSs. Applications of the aforesaid entropy, distance and similarity measures include pattern recognition, medical diagnosis, multi-criteria decision making and expert systems.

However, most of these distance, similarity and entropy measures do not consider the hesitancy index of IvIFSs. The hesitancy index plays a very important role when the membership and non-membership degrees of two IvIFSs do not differ much but their hesitancy indices do. Some authors, e.g. Xu (2007), Xu & Xia (2011) and Wu et al. (2014), incorporated hesitancy into the distance, similarity and entropy measures they developed; measures that include the hesitancy index outperform existing methods and handle the decision process better. Dammak et al. (2016) studied some possibility measures for multi-criteria decision making under the IvIFE. Tiwari & Gupta (in press) proposed generalized entropy and similarity measures for IvIFSs with applications in decision making. Zang et al. (2016) defined some operations on IvIFSs and proposed aggregation operators for IvIFSs with respect to the restricted interval Shapley function, with applications in multi-criteria decision making. In this paper we develop distance, entropy and similarity measures that take all three degrees into account, and we apply them to pattern recognition and medical diagnosis under the IvIFE.

This work is organized as follows. Section 2 gives basic definitions and operations on IvIFSs. Section 3 presents the relationship between distance and entropy measures, along with examples that check the performance of the entropy measures against intuition. A relation between entropy and similarity measures is proposed in Section 4, where the new similarity measures are also compared with a few existing ones.
618 Informatica 42 (2018) 617–627 P. Tiwari et al.
From the axiomatic definitions of distance and similarity measures it is clear that S(A, B) = 1 − D(A, B), where A and B are IvIFSs and D and S are a distance and a similarity measure for IvIFSs, respectively.

2.1 Entropy measure for IvIFSs

In 1972, De Luca and Termini defined a measure of entropy for FSs. Hung & Yang (2006) extended the definition to IFSs by considering the hesitancy degree. The following definition of entropy extends the definition proposed by Hung & Yang (2006) to IvIFSs.

Definition 4: A real-valued function E: IvIFSs(Ω) → [0, 1] is termed a measure of entropy under the IvIFE if the following axioms are satisfied:

1. E(A) = 0, ∀A ∈ ℂ(Ω);
2. E(A) = 1 iff M_A(x_i) = N_A(x_i) = H_A(x_i) = [1/3, 1/3], ∀x_i ∈ Ω;
3. E(A) ≤ E(B), if A is less fuzzy than B;
4. E(A) = E(Ā), where Ā is the complement of A, for A, B ∈ IvIFSs(Ω).

The above definition is consistent with the description of entropy given by De Luca & Termini (1972). A complete description of an IvIFS A on Ω involves three degrees, membership, non-membership and hesitancy, with

MV_AU(x_i) + NV_AU(x_i) + HV_AL(x_i) = 1 and MV_AL(x_i) + NV_AL(x_i) + HV_AU(x_i) = 1,

where 0 ≤ MV_AU(x_i), NV_AU(x_i), HV_AL(x_i), MV_AL(x_i), NV_AL(x_i), HV_AU(x_i) ≤ 1. Taking all three degrees into consideration, we may treat them as a probability measure. Therefore the entropy is maximum when all the variables are equal (i.e. MV_AU(x_i) = NV_AU(x_i) = HV_AL(x_i) = 1/3 and MV_AL(x_i) = NV_AL(x_i) = HV_AU(x_i) = 1/3) and zero (minimum) when only one variable is present, i.e.

MV_AL(x_i) = MV_AU(x_i) = 1, NV_AL(x_i) = NV_AU(x_i) = 0, HV_AL(x_i) = HV_AU(x_i) = 0; or
MV_AL(x_i) = MV_AU(x_i) = 0, NV_AL(x_i) = NV_AU(x_i) = 1, HV_AL(x_i) = HV_AU(x_i) = 0; or
MV_AL(x_i) = MV_AU(x_i) = 0, NV_AL(x_i) = NV_AU(x_i) = 0, HV_AL(x_i) = HV_AU(x_i) = 1.

We next extend the distance-based definition of entropy given by Zang et al. (2014) to IvIFSs. In the following definition we consider the degree of hesitancy, which is not considered by other definitions of entropy.

Definition 5: A real-valued function E: IvIFSs(Ω) → [0, 1] is a distance-based measure of entropy if it satisfies axioms 1 and 2 of Definition 4 together with:

3. E(A) ≤ E(B) whenever D(A, ⟨[1/3, 1/3], [1/3, 1/3], [1/3, 1/3]⟩) ≥ D(B, ⟨[1/3, 1/3], [1/3, 1/3], [1/3, 1/3]⟩), ∀A, B ∈ IvIFSs(Ω), where D is a measure of distance;
4. E(A) = E(Ā), where Ā is the complement of A, for A, B ∈ IvIFSs(Ω).

In the next section, we derive a relation between the measures of distance and entropy for IvIFSs which satisfies all the axioms of the definition of entropy.

3 Relation between measure of distance and entropy

Here we develop a technique which obtains entropy measures for IvIFSs satisfying the aforementioned properties.

Theorem 2: Let D_j, j = 1, …, 6, be the above-mentioned six distance measures, equations (1)-(6), between IvIFSs. Then E_j(A) = 1 − 3 D_j(A, ⟨[1/3, 1/3], [1/3, 1/3], [1/3, 1/3]⟩), j = 1, …, 6, for any A ∈ IvIFSs(Ω), is a measure of entropy for IvIFSs.

Proof: We prove that E_j(A), for j = 1, …, 6, satisfies the conditions of Definition 5.

Property 1): If A ∈ ℂ(Ω), then A(x_i) = ⟨[1, 1], [0, 0], [0, 0]⟩ or A(x_i) = ⟨[0, 0], [1, 1], [0, 0]⟩, ∀x_i ∈ Ω, and for j = 1, …, 6, D_j(A, ⟨[1/3, 1/3], [1/3, 1/3], [1/3, 1/3]⟩) = 1/3. Thus E_j(A) = 0.

Property 2): For all j = 1, …, 6, E_j(A) = 1 ⟺ 1 − 3 D_j(A, ⟨[1/3, 1/3], [1/3, 1/3], [1/3, 1/3]⟩) = 1 ⟺ 3 D_j(A, ⟨[1/3, 1/3], [1/3, 1/3], [1/3, 1/3]⟩) = 0 ⟺ A = ⟨[1/3, 1/3], [1/3, 1/3], [1/3, 1/3]⟩.

Property 3): Let A and B be any two IvIFSs with D_j(A, ⟨[1/3, 1/3], [1/3, 1/3], [1/3, 1/3]⟩) ≥ D_j(B, ⟨[1/3, 1/3], [1/3, 1/3], [1/3, 1/3]⟩). Then 1 − 3 D_j(A, ⟨[1/3, 1/3], [1/3, 1/3], [1/3, 1/3]⟩) ≤ 1 − 3 D_j(B, ⟨[1/3, 1/3], [1/3, 1/3], [1/3, 1/3]⟩), i.e. E_j(A) ≤ E_j(B), for all j = 1, …, 6.
Two of the resulting entropy measures are, explicitly:

E_3(A) = 1 − (3/(4n)) ∑_{i=1}^{n} [ |MV_AL(x_i) − 1/3| ∨ |MV_AU(x_i) − 1/3| + |NV_AL(x_i) − 1/3| ∨ |NV_AU(x_i) − 1/3| + |HV_AL(x_i) − 1/3| ∨ |HV_AU(x_i) − 1/3| ],

E_4(A) = 1 − (3/(2n)) ∑_{i=1}^{n} max{ (|MV_AL(x_i) − 1/3| + |MV_AU(x_i) − 1/3|)/2, (|NV_AL(x_i) − 1/3| + |NV_AU(x_i) − 1/3|)/2, (|HV_AL(x_i) − 1/3| + |HV_AU(x_i) − 1/3|)/2 }.

3.1 Comparison of existing entropy measures with the proposed entropy measures

We compared the performance of existing entropy measures with the proposed measures with the help of an example. For an IvIFS A, the existing entropy measure is

E_ZJ(A) = (1/n) ∑_{i=1}^{n} [min(MV_AL(x_i), NV_AL(x_i)) + min(MV_AU(x_i), NV_AU(x_i))] / [max(MV_AL(x_i), NV_AL(x_i)) + max(MV_AU(x_i), NV_AU(x_i))].

Since E_3(A) > E_3(B) and E_6(A) > E_6(B), the proposed measures E_3 and E_6 are consistent with intuition.
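The role of the hesitancy degree can be made concrete with a small sketch (the set values below are illustrative, not from the paper): for two IvIFSs whose membership equals their non-membership element-wise, E_ZJ assigns both the maximal value 1 regardless of hesitancy, while the proposed E_4 separates them.

```python
# Illustrative comparison: E4 (proposed, hesitancy-aware) vs E_ZJ (existing).
# An IvIFS element is ((mL, mU), (nL, nU)); hesitancy is derived.

def hesitancy(m, n):
    (ml, mu), (nl, nu) = m, n
    return (1.0 - mu - nu, 1.0 - ml - nl)

def e4(A):
    """E4(A) = 1 - (3/2n) * sum_i max over the three degrees of the
    averaged absolute deviation of the interval bounds from 1/3."""
    total = 0.0
    for m, n in A:
        h = hesitancy(m, n)
        total += max((abs(p[0] - 1 / 3) + abs(p[1] - 1 / 3)) / 2
                     for p in (m, n, h))
    return 1.0 - 3.0 * total / (2 * len(A))

def e_zj(A):
    """Existing entropy E_ZJ: ratio of min to max of membership and
    non-membership bounds; the hesitancy degree never enters."""
    total = 0.0
    for (ml, mu), (nl, nu) in A:
        total += (min(ml, nl) + min(mu, nu)) / (max(ml, nl) + max(mu, nu))
    return total / len(A)

# Two IvIFSs with identical structure M = N but different hesitancy.
A = [((0.3, 0.3), (0.3, 0.3))]   # hesitancy [0.4, 0.4]
B = [((0.2, 0.2), (0.2, 0.2))]   # hesitancy [0.6, 0.6]

print(e_zj(A), e_zj(B))   # both 1.0: E_ZJ cannot separate A and B
print(e4(A), e4(B))       # E4 can: e4(A) ≈ 0.9 > e4(B) ≈ 0.6
```

This mirrors the section's point: a measure that ignores the hesitancy index cannot discriminate between sets that differ only in how hesitant they are.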
For the similarity measures S_j = 1 − D_j, the corresponding entropies can equivalently be written as E_j(A) = 3 S_j(A, ⟨[1/3, 1/3], [1/3, 1/3], [1/3, 1/3]⟩) − 2, j = 1, …, 6.

Proof: We prove that E_j(A), for j = 1, …, 6, satisfies the conditions of Definition 5.

Property 1): If A ∈ ℂ(Ω), then A(x_i) = ⟨[1, 1], [0, 0], [0, 0]⟩ or A(x_i) = ⟨[0, 0], [1, 1], [0, 0]⟩, ∀x_i ∈ Ω, and for j = 1, …, 6,
S_j(A, ⟨[1/3, 1/3], [1/3, 1/3], [1/3, 1/3]⟩) = 1 − D_j(A, ⟨[1/3, 1/3], [1/3, 1/3], [1/3, 1/3]⟩) = 2/3.
Thus E_j(A) = 0.

Property 2): For all j = 1, …, 6, E_j(A) = 1 ⟺ 3 S_j(A, ⟨[1/3, 1/3], [1/3, 1/3], [1/3, 1/3]⟩) − 2 = 1 ⟺ S_j(A, ⟨[1/3, 1/3], [1/3, 1/3], [1/3, 1/3]⟩) = 1 ⟺ A = ⟨[1/3, 1/3], [1/3, 1/3], [1/3, 1/3]⟩.

Definition 6: For any two IvIFSs A and B in Ω, defined by the triplets ⟨x_i, [MV_AL(x_i), MV_AU(x_i)], [NV_AL(x_i), NV_AU(x_i)]⟩ and ⟨x_i, [MV_BL(x_i), MV_BU(x_i)], [NV_BL(x_i), NV_BU(x_i)]⟩ respectively, we define an IvIFS ∅(A, B) from A and B as given below. For brevity, write

Δ_M(x_i) = |MV_AL(x_i) − MV_BL(x_i)| ∨ |MV_AU(x_i) − MV_BU(x_i)|,
Δ_N(x_i) = |NV_AL(x_i) − NV_BL(x_i)| ∨ |NV_AU(x_i) − NV_BU(x_i)|,
Δ_H(x_i) = |HV_AL(x_i) − HV_BL(x_i)| ∨ |HV_AU(x_i) − HV_BU(x_i)|.

Then

MV_∅(A,B)L(x_i) = (1/3){1 − [max(Δ_M(x_i), Δ_N(x_i), Δ_H(x_i))]^(1/2)};
MV_∅(A,B)U(x_i) = (1/3){1 − max(Δ_M(x_i), Δ_N(x_i), Δ_H(x_i))};
NV_∅(A,B)L(x_i) = (1/3){1 + [min(Δ_M(x_i), Δ_N(x_i), Δ_H(x_i))]^2};
NV_∅(A,B)U(x_i) = (1/3){1 + min(Δ_M(x_i), Δ_N(x_i), Δ_H(x_i))}.

Theorem 4: E(∅(A, B)) is a similarity measure for IvIFSs A and B, where E is an entropy.

Proof: To prove that E(∅(A, B)) is a measure of similarity, we need to prove that the properties given by Definition 3 hold.

Property 1): If A ∈ ℂ(Ω), then A(x_i) = ⟨[1, 1], [0, 0], [0, 0]⟩ or A(x_i) = ⟨[0, 0], [1, 1], [0, 0]⟩ for any x_i ∈ Ω, so MV_∅(A,Ā)L(x_i) = 0 = MV_∅(A,Ā)U(x_i) and NV_∅(A,Ā)L(x_i) = 1 = NV_∅(A,Ā)U(x_i). Thus ∅(A, Ā) = {⟨x_i, [0, 0], [1, 1], [0, 0]⟩ ⁄ x_i ∈ Ω} ⟹ S(A, Ā) = E(∅(A, Ā)) = 0.

Property 2): Assume that S(A, B) = 1, i.e. E(∅(A, B)) = 1
⟺ MV_∅(A,B)(x_i) = NV_∅(A,B)(x_i) = HV_∅(A,B)(x_i) = [1/3, 1/3]
⟺ max(Δ_M(x_i), Δ_N(x_i), Δ_H(x_i)) = 0 and min(Δ_M(x_i), Δ_N(x_i), Δ_H(x_i)) = 0
⟺ Δ_M(x_i) = 0, Δ_N(x_i) = 0 and Δ_H(x_i) = 0
⟺ MV_AL(x_i) = MV_BL(x_i), MV_AU(x_i) = MV_BU(x_i), NV_AL(x_i) = NV_BL(x_i), NV_AU(x_i) = NV_BU(x_i) and HV_AL(x_i) = HV_BL(x_i), HV_AU(x_i) = HV_BU(x_i)
⟺ A = B.

Property 3): ∅(A, B) = ∅(B, A) by the definition of MV_∅(A,B)L(x_i), MV_∅(A,B)U(x_i), NV_∅(A,B)L(x_i) and NV_∅(A,B)U(x_i) for any x_i ∈ Ω ⟹ E(∅(A, B)) = E(∅(B, A)) ⟺ S(A, B) = S(B, A).

Property 4): Let A, B and C be any three IvIFSs such that A ⊆ B ⊆ C. For any x_i ∈ Ω we have MV_A(x_i) ≤ MV_B(x_i) ≤ MV_C(x_i) and NV_A(x_i) ≥ NV_B(x_i) ≥ NV_C(x_i), i.e. MV_AL(x_i) ≤ MV_BL(x_i) ≤ MV_CL(x_i), NV_AL(x_i) ≥ NV_BL(x_i) ≥ NV_CL(x_i) and MV_AU(x_i) ≤ MV_BU(x_i) ≤ MV_CU(x_i), NV_AU(x_i) ≥ NV_BU(x_i) ≥ NV_CU(x_i).
⟹ |MV_AL(x_i) − MV_CL(x_i)| ≥ |MV_AL(x_i) − MV_BL(x_i)|, |MV_AU(x_i) − MV_CU(x_i)| ≥ |MV_AU(x_i) − MV_BU(x_i)|; |NV_AL(x_i) − NV_CL(x_i)| ≥ |NV_AL(x_i) − NV_BL(x_i)|, |NV_AU(x_i) − NV_CU(x_i)| ≥ |NV_AU(x_i) − NV_BU(x_i)|; and |HV_AL(x_i) − HV_CL(x_i)| = |2(MV_CL(x_i) − MV_AL(x_i)) + 2(NV_CL(x_i) − NV_AL(x_i))| ≥ |2(MV_BL(x_i) − MV_AL(x_i)) + 2(NV_BL(x_i) − NV_AL(x_i))| = |HV_AL(x_i) − HV_BL(x_i)|. Similarly, |HV_AU(x_i) − HV_CU(x_i)| ≥ |HV_AU(x_i) − HV_BU(x_i)|.
So we have MV_∅(A,C)(x_i) ≤ MV_∅(A,B)(x_i) ≤ [1/3, 1/3] and NV_∅(A,C)(x_i) ≥ NV_∅(A,B)(x_i) ≥ [1/3, 1/3] for any x_i ∈ Ω ⟹ ∅(A, C) ⊆ ∅(A, B) ⊆ ⟨[1/3, 1/3], [1/3, 1/3], [1/3, 1/3]⟩. Similarly, ∅(A, C) ⊆ ∅(B, C) ⊆ ⟨[1/3, 1/3], [1/3, 1/3], [1/3, 1/3]⟩. Thus
D(∅(A, B), ⟨[1/3, 1/3], [1/3, 1/3], [1/3, 1/3]⟩) ≤ D(∅(A, C), ⟨[1/3, 1/3], [1/3, 1/3], [1/3, 1/3]⟩) and
D(∅(B, C), ⟨[1/3, 1/3], [1/3, 1/3], [1/3, 1/3]⟩) ≤ D(∅(A, C), ⟨[1/3, 1/3], [1/3, 1/3], [1/3, 1/3]⟩).
So, from the definition of entropy corresponding to a distance function, we get E(∅(A, C)) ≤ E(∅(A, B)) and E(∅(A, C)) ≤ E(∅(B, C)), or S(∅(A, C)) ≤ S(∅(A, B)) and S(∅(A, C)) ≤ S(∅(B, C)). ∎

Corollary 1: Let E be an entropy measure for IvIFSs and let ∅(A, B) be the IvIFS defined from two IvIFSs A and B according to Definition 6. Then the entropy of the complement of ∅(A, B) is also a measure of similarity for A, B ∈ IvIFSs(Ω).

Proof: The proof follows from the definition of the complement of interval-valued intuitionistic fuzzy sets and Theorem 4. ∎

Definition 7: Let A, B ∈ IvIFSs(Ω). With Δ_M(x_i), Δ_N(x_i) and Δ_H(x_i) as in Definition 6, we define the IvIFS η(A, B) from A and B as follows:

MV_η(A,B)L(x_i) = (1/3){1 + [min(Δ_M(x_i)^α, Δ_N(x_i)^α, Δ_H(x_i)^α)]^2};
MV_η(A,B)U(x_i) = (1/3){1 + min(Δ_M(x_i)^α, Δ_N(x_i)^α, Δ_H(x_i)^α)};
NV_η(A,B)L(x_i) = (1/3){1 − [max(Δ_M(x_i)^α, Δ_N(x_i)^α, Δ_H(x_i)^α)]^(1/2)};
NV_η(A,B)U(x_i) = (1/3){1 − max(Δ_M(x_i)^α, Δ_N(x_i)^α, Δ_H(x_i)^α)},

where α ∈ [1, ∞) and x_i ∈ Ω.
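As a sketch of the ∅(A, B) construction of Definition 6 (with the exponents 1/2, 1, 2 and 1 as they appear in the four component formulas, hesitancy derived as H = [1 − MU − NU, 1 − ML − NL], and illustrative set values), the snippet below builds ∅(A, B) and checks two facts used in the proof of Theorem 4: ∅ is symmetric (Property 3), and ∅(A, A) is the maximally fuzzy element, so that E(∅(A, A)) = 1 gives S(A, A) = 1.

```python
# Sketch of Definition 6: phi(A, B) built from the max/min of bound-wise
# deviations of M, N and the derived hesitancy H = [1-mU-nU, 1-mL-nL].
# Exponents (1/2, 1, 2, 1) follow the four component formulas above.

def hesitancy(m, n):
    (ml, mu), (nl, nu) = m, n
    return (1.0 - mu - nu, 1.0 - ml - nl)

def phi(A, B):
    """Return phi(A, B) as a list of ((MVL, MVU), (NVL, NVU)) elements."""
    out = []
    for (ma, na), (mb, nb) in zip(A, B):
        ha, hb = hesitancy(ma, na), hesitancy(mb, nb)
        devs = [max(abs(pa[0] - pb[0]), abs(pa[1] - pb[1]))
                for pa, pb in ((ma, mb), (na, nb), (ha, hb))]
        big, small = max(devs), min(devs)
        mv = ((1 - big ** 0.5) / 3, (1 - big) / 3)    # MV_L <= MV_U
        nv = ((1 + small ** 2) / 3, (1 + small) / 3)  # NV_L <= NV_U
        out.append((mv, nv))
    return out

A = [((0.5, 0.6), (0.3, 0.4))]  # illustrative one-element IvIFSs
B = [((0.4, 0.5), (0.2, 0.3))]

print(phi(A, B) == phi(B, A))   # True: phi is symmetric (Property 3)
m, n = phi(A, A)[0]             # all deviations vanish for phi(A, A)
print(m, n)                     # both intervals equal [1/3, 1/3]
```

Since ∅(A, A) is the maximally fuzzy element, any entropy satisfying Definition 4 assigns it the value 1, which is exactly the S(A, A) = 1 requirement of a similarity measure.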
Theorem 5: For any two IvIFSs A and B, E(η(A, B)) is a measure of similarity, where E is an entropy measure.

Proof: To prove that E(η(A, B)) is a measure of similarity, we need to prove that the properties given by Definition 3 hold.

Property 1): If A ∈ ℂ(Ω), then A(x_i) = ⟨[1, 1], [0, 0], [0, 0]⟩ or A(x_i) = ⟨[0, 0], [1, 1], [0, 0]⟩ for any x_i ∈ Ω, so MV_η(A,Ā)L(x_i) = 1 = MV_η(A,Ā)U(x_i) and NV_η(A,Ā)L(x_i) = 0 = NV_η(A,Ā)U(x_i). Thus η(A, Ā) = {⟨x_i, [1, 1], [0, 0], [0, 0]⟩ ⁄ x_i ∈ Ω} ⟹ S(A, Ā) = E(η(A, Ā)) = 0.

Property 2): Assume that S(A, B) = 1. Then E(η(A, B)) = 1
⟺ MV_η(A,B)(x_i) = [1/3, 1/3] = NV_η(A,B)(x_i) = HV_η(A,B)(x_i)
⟺ min((|MV_AL(x_i) − MV_BL(x_i)| ∨ |MV_AU(x_i) − MV_BU(x_i)|)^α, (|NV_AL(x_i) − NV_BL(x_i)| ∨ |NV_AU(x_i) − NV_BU(x_i)|)^α, (|HV_AL(x_i) − HV_BL(x_i)| ∨ |HV_AU(x_i) − HV_BU(x_i)|)^α) = 0 and max((|MV_AL(x_i) − MV_BL(x_i)| ∨ |MV_AU(x_i) − MV_BU(x_i)|)^α, (|NV_AL(x_i) − NV_BL(x_i)| ∨ |NV_AU(x_i) − NV_BU(x_i)|)^α, (|HV_AL(x_i) − HV_BL(x_i)| ∨ |HV_AU(x_i) − HV_BU(x_i)|)^α) = 0
⟺ (|MV_AL(x_i) − MV_BL(x_i)| ∨ |MV_AU(x_i) − MV_BU(x_i)|)^α = 0, (|NV_AL(x_i) − NV_BL(x_i)| ∨ |NV_AU(x_i) − NV_BU(x_i)|)^α = 0 and (|HV_AL(x_i) − HV_BL(x_i)| ∨ |HV_AU(x_i) − HV_BU(x_i)|)^α = 0
⟺ MV_AL(x_i) = MV_BL(x_i), MV_AU(x_i) = MV_BU(x_i), NV_AL(x_i) = NV_BL(x_i), NV_AU(x_i) = NV_BU(x_i) and HV_AL(x_i) = HV_BL(x_i), HV_AU(x_i) = HV_BU(x_i)
⟺ A = B.

Property 4): Let A, B and C be IvIFSs with A ⊆ B ⊆ C. As in the proof of Theorem 4, |MV_AL(x_i) − MV_CL(x_i)| ≥ |MV_AL(x_i) − MV_BL(x_i)| and |MV_AU(x_i) − MV_CU(x_i)| ≥ |MV_AU(x_i) − MV_BU(x_i)|, and likewise for the NV bounds; moreover |HV_AL(x_i) − HV_CL(x_i)| = |2(MV_CL(x_i) − MV_AL(x_i)) + 2(NV_CL(x_i) − NV_AL(x_i))| ≥ |2(MV_BL(x_i) − MV_AL(x_i)) + 2(NV_BL(x_i) − NV_AL(x_i))| = |HV_AL(x_i) − HV_BL(x_i)|, and similarly |HV_AU(x_i) − HV_CU(x_i)| ≥ |HV_AU(x_i) − HV_BU(x_i)|.
So, from Definition 7, MV_η(A,C)(x_i) ≥ MV_η(A,B)(x_i) ≥ [1/3, 1/3] and NV_η(A,C)(x_i) ≤ NV_η(A,B)(x_i) ≤ [1/3, 1/3] for any x_i ∈ Ω ⟹ η(A, C) ⊇ η(A, B) ⊇ ⟨[1/3, 1/3], [1/3, 1/3], [1/3, 1/3]⟩. Similarly, η(A, C) ⊇ η(B, C) ⊇ ⟨[1/3, 1/3], [1/3, 1/3], [1/3, 1/3]⟩. Thus
D(η(A, B), ⟨[1/3, 1/3], [1/3, 1/3], [1/3, 1/3]⟩) ≤ D(η(A, C), ⟨[1/3, 1/3], [1/3, 1/3], [1/3, 1/3]⟩) and
D(η(B, C), ⟨[1/3, 1/3], [1/3, 1/3], [1/3, 1/3]⟩) ≤ D(η(A, C), ⟨[1/3, 1/3], [1/3, 1/3], [1/3, 1/3]⟩).
So, from the definition of entropy corresponding to a distance function, we get E(η(A, C)) ≤ E(η(A, B)) and E(η(A, C)) ≤ E(η(B, C)), or S(η(A, C)) ≤ S(η(A, B)) and S(η(A, C)) ≤ S(η(B, C)). ∎

Corollary 2: Let E be an entropy for IvIFSs and let η(A, B) be the IvIFS defined from A, B ∈ IvIFSs(Ω) as in Definition 7. Then the entropy of the complement of η(A, B) is also a measure of similarity for A, B ∈ IvIFSs(Ω).

Proof: The proof follows from the definition of the complement of IvIFSs and Theorem 5. ∎

4.1 Weighted similarity measure

Let w = (w_1, w_2, …, w_n)^T be the weights assigned to the elements x_i ∈ Ω, i = 1, 2, …, n. The weighted similarity measure based on any of the aforesaid similarity measures is then defined as S(A, B) = ∑_{i=1}^{n} w_i S(A(x_i), B(x_i)), where w_i ≥ 0 and ∑_{i=1}^{n} w_i = 1.
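A minimal sketch of the weighted aggregation, using the Hamming-type per-element similarity of Hu & Li (2013) as the illustrative S(A(x_i), B(x_i)); the sets and weights below are made up for the example.

```python
# Weighted similarity S(A,B) = sum_i w_i * s(A(x_i), B(x_i)); here the
# per-element similarity s is the Hamming-type measure of Hu & Li (2013).
# An IvIFS element is ((mL, mU), (nL, nU)).

def s_element(a, b):
    (mal, mau), (nal, nau) = a
    (mbl, mbu), (nbl, nbu) = b
    return 1.0 - (abs(mal - mbl) + abs(mau - mbu)
                  + abs(nal - nbl) + abs(nau - nbu)) / 4.0

def weighted_similarity(A, B, w):
    """Aggregate per-element similarities with nonnegative weights
    summing to 1, as required by the definition above."""
    assert abs(sum(w) - 1.0) < 1e-9 and all(wi >= 0 for wi in w)
    return sum(wi * s_element(a, b) for wi, a, b in zip(w, A, B))

A = [((0.5, 0.5), (0.5, 0.5)), ((0.3, 0.4), (0.4, 0.5))]
B = [((0.6, 0.6), (0.4, 0.4)), ((0.3, 0.4), (0.4, 0.5))]
w = [0.7, 0.3]

print(weighted_similarity(A, A, w))   # identical sets: similarity ≈ 1
print(weighted_similarity(A, B, w))   # only the first element differs
```

The weights let elements of Ω that matter more to the application (here, hypothetically, the first element) dominate the aggregated similarity.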
For comparison, some existing similarity measures are:

• S_HL(A, B) = 1 − (1/(4n)) ∑_{i=1}^{n} [ |MV_AL(x_i) − MV_BL(x_i)| + |MV_AU(x_i) − MV_BU(x_i)| + |NV_AL(x_i) − NV_BL(x_i)| + |NV_AU(x_i) − NV_BU(x_i)| ], given by Hu and Li (2013).

• S_S(A, B) = (1/n) ∑_{i=1}^{n} [ (MV_AL(x_i) + MV_AU(x_i))(MV_BL(x_i) + MV_BU(x_i)) + (NV_AL(x_i) + NV_AU(x_i))(NV_BL(x_i) + NV_BU(x_i)) ] / [ √((MV_AL(x_i) + MV_AU(x_i))² + (NV_AL(x_i) + NV_AU(x_i))²) · √((MV_BL(x_i) + MV_BU(x_i))² + (NV_BL(x_i) + NV_BU(x_i))²) ], introduced by Singh (2012).

• S_Su(A, B) = 1 − (1/n) ∑_{i=1}^{n} [ |MV_AL(x_i) − MV_BL(x_i)| ∨ |NV_AL(x_i) − NV_BL(x_i)| + |MV_AU(x_i) − MV_BU(x_i)| ∨ |NV_AU(x_i) − NV_BU(x_i)| ] / [ 3 − min{ |MV_AL(x_i) − MV_BL(x_i)| ∨ |NV_AL(x_i) − NV_BL(x_i)|, |MV_AU(x_i) − MV_BU(x_i)| ∨ |NV_AU(x_i) − NV_BU(x_i)| } ], proposed by Sun & Liu (2012).

To review the performance of the similarity measures, consider the following IvIFSs:

A = {⟨x_i, [0.5, 0.5], [0.5, 0.5], [0, 0]⟩, x_i ∈ Ω},
B = {⟨x_i, [0.3, 0.4], [0.4, 0.5], [0.1, 0.3]⟩, x_i ∈ Ω},
C = {⟨x_i, [0.3, 0.3], [0.3, 0.3], [0.4, 0.4]⟩, x_i ∈ Ω},
D = {⟨x_i, [0.6, 0.6], [0.4, 0.4], [0, 0]⟩, x_i ∈ Ω}.

Intuitively, it is clear that A is more similar to D than to B and C. The results for the similarity measures are given in Table 3:

Measure        (A,B)     (A,C)     (A,D)
S_W            0.83333   0.69308   0.8181
S_HL           0.9       0.8       0.9
S_S            0.99227   0.9       0.98058
S_Su           0.89655   0.8571    0.9310
S_1(ϕ)         0.7450    0.73722   0.8646
S_2(ϕ)         0.78806   0.761886  0.89594
S_3(ϕ)         0.72614   0.68377   0.84188
S_4(ϕ)         0.78806   0.74188   0.89594
S_5(ϕ)         0.68210   0.62283   0.84391
S_6(ϕ), p = 2  0.77639   0.77141   0.87090
S_1(η)         0.93205   0.89426   0.98708
S_2(η)         0.95217   0.92075   0.99184
S_3(η)         0.91759   0.85853   0.98418
S_4(η)         0.95217   0.92075   0.99184
S_5(η)         0.92825   0.88113   0.98776
S_6(η), p = 2  0.93292   0.89590   0.98709

Table 3: Comparison of similarity measures.

From the similarity measures listed in Table 3 we can see that S_W, S_HL and S_S are inconsistent with intuition, whereas S_Su, S_j(ϕ) and S_j(η), j = 1, …, 6, are consistent with it.

In this section we have derived a relation between entropy and similarity measures, defined some similarity measures, and compared their performance with existing similarity measures. In Section 5 we apply the proposed similarity measures to draw conclusions in pattern recognition and medical diagnosis.

5 Applications of proposed similarity measures

Here the proposed similarity measures are applied to some situations that deal with imperfect information.

5.1 Pattern recognition

We use an example of pattern recognition, considered by Xu (2007) and adapted by Wei et al. (2011) and Wu et al. (2014), for the classification of building materials.

Example: There are four types of building materials A_i, i = 1, 2, 3, 4, and an unknown building material B, each characterized by IvIFSs defined on X = {x_1, x_2, …, x_12} with weight vector
w = (0.1, 0.05, 0.08, 0.06, 0.03, 0.07, 0.09, 0.12, 0.15, 0.07, 0.13, 0.05)^T,
and the data given by Xu (2007) as follows:

A_1 = { ⟨x_1, [0.1, 0.2], [0.5, 0.6]⟩, ⟨x_2, [0.1, 0.2], [0.7, 0.8]⟩, ⟨x_3, [0.5, 0.6], [0.3, 0.4]⟩, ⟨x_4, [0.8, 0.9], [0.0, 0.1]⟩, ⟨x_5, [0.4, 0.5], [0.3, 0.4]⟩, ⟨x_6, [0.0, 0.1], [0.8, 0.9]⟩, ⟨x_7, [0.3, 0.4], [0.5, 0.6]⟩, ⟨x_8, [1.0, 1.0], [0.0, 0.0]⟩, ⟨x_9, [0.2, 0.3], [0.6, 0.7]⟩, ⟨x_10, [0.4, 0.5], [0.4, 0.5]⟩, ⟨x_11, [0.7, 0.8], [0.1, 0.2]⟩, ⟨x_12, [0.4, 0.5], [0.4, 0.5]⟩ },

A_2 = { ⟨x_1, [0.5, 0.6], [0.3, 0.4]⟩, ⟨x_2, [0.6, 0.7], [0.1, 0.2]⟩, ⟨x_3, [1.0, 1.0], [0.0, 0.0]⟩, ⟨x_4, [0.1, 0.2], [0.6, 0.7]⟩, ⟨x_5, [0.0, 0.1], [0.8, 0.9]⟩, ⟨x_6, [0.7, 0.8], [0.1, 0.2]⟩, ⟨x_7, [0.5, 0.6], [0.3, 0.4]⟩, ⟨x_8, [0.6, 0.7], [0.2, 0.3]⟩, ⟨x_9, [1.0, 1.0], [0.0, 0.0]⟩, ⟨x_10, [0.1, 0.2], [0.7, 0.8]⟩, ⟨x_11, [0.0, 0.1], [0.8, 0.9]⟩, ⟨x_12, [0.7, 0.8], [0.1, 0.2]⟩ },

A_3 = { ⟨x_1, [0.4, 0.5], [0.3, 0.4]⟩, ⟨x_2, [0.6, 0.7], [0.2, 0.3]⟩, ⟨x_3, [0.9, 1.0], [0.0, 0.0]⟩, ⟨x_4, [0.0, 0.1], [0.8, 0.9]⟩, ⟨x_5, [0.0, 0.1], [0.8, 0.9]⟩, ⟨x_6, [0.6, 0.7], [0.2, 0.3]⟩, ⟨x_7, [0.1, 0.2], [0.7, 0.8]⟩, ⟨x_8, [0.2, 0.3], [0.6, 0.7]⟩, ⟨x_9, [0.5, 0.6], [0.2, 0.4]⟩, ⟨x_10, [1.0, 1.0], [0.0, 0.0]⟩, ⟨x_11, [0.3, 0.4], [0.4, 0.5]⟩, ⟨x_12, [0.0, 0.1], [0.8, 0.9]⟩ },

A_4 = { ⟨x_1, [1.0, 1.0], [0.0, 0.0]⟩, ⟨x_2, [1.0, 1.0], [0.0, 0.0]⟩, ⟨x_3, [0.8, 0.9], [0.0, 0.1]⟩, ⟨x_4, [0.7, 0.8], [0.1, 0.2]⟩, ⟨x_5, [0.0, 0.1], [0.7, 0.9]⟩, ⟨x_6, [0.0, 0.1], [0.8, 0.9]⟩, ⟨x_7, [0.1, 0.2], [0.7, 0.8]⟩, ⟨x_8, [0.1, 0.2], [0.7, 0.8]⟩, ⟨x_9, [0.4, 0.5], [0.3, 0.4]⟩, ⟨x_10, [1.0, 1.0], [0.0, 0.0]⟩, ⟨x_11, [0.3, 0.4], [0.4, 0.5]⟩, ⟨x_12, [0.0, 0.1], [0.8, 0.9]⟩ },

B = { ⟨x_1, [0.9, 1.0], [0.0, 0.0]⟩, ⟨x_2, [0.9, 1.0], [0.0, 0.0]⟩, ⟨x_3, [0.7, 0.8], [0.1, 0.2]⟩, ⟨x_4, [0.6, 0.7], [0.1, 0.2]⟩, ⟨x_5, [0.0, 0.1], [0.8, 0.9]⟩, ⟨x_6, [0.1, 0.2], [0.7, 0.8]⟩, ⟨x_7, [0.1, 0.2], [0.7, 0.8]⟩, ⟨x_8, [0.1, 0.2], [0.7, 0.8]⟩, ⟨x_9, [0.4, 0.5], [0.3, 0.4]⟩, ⟨x_10, [1.0, 1.0], [0.0, 0.0]⟩, ⟨x_11, [0.3, 0.4], [0.4, 0.5]⟩, ⟨x_12, [0.0, 0.1], [0.7, 0.9]⟩ }.
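The classification step can be sketched as follows: compute the weighted similarity of the unknown material B to each pattern A_i and assign B to the most similar one. For brevity this sketch uses the Hamming-type per-element similarity of Hu & Li (2013) standing in for the proposed S_j(ϕ)/S_j(η), which depend on measures defined earlier in the paper; under this illustrative measure the data above place B closest to A_4.

```python
# Sketch: classify the unknown material B to the most similar pattern A_i
# under the weighted similarity sum_i w_i * s(., .); the per-element
# similarity here is the Hamming-type measure of Hu & Li (2013).
# An IvIFS element is ((mL, mU), (nL, nU)); data and weights from Xu (2007).

def s_element(a, b):
    (mal, mau), (nal, nau) = a
    (mbl, mbu), (nbl, nbu) = b
    return 1.0 - (abs(mal - mbl) + abs(mau - mbu)
                  + abs(nal - nbl) + abs(nau - nbu)) / 4.0

def weighted_similarity(A, B, w):
    return sum(wi * s_element(a, b) for wi, a, b in zip(w, A, B))

w = [0.1, 0.05, 0.08, 0.06, 0.03, 0.07, 0.09, 0.12, 0.15, 0.07, 0.13, 0.05]

patterns = {
    "A1": [((.1, .2), (.5, .6)), ((.1, .2), (.7, .8)), ((.5, .6), (.3, .4)),
           ((.8, .9), (.0, .1)), ((.4, .5), (.3, .4)), ((.0, .1), (.8, .9)),
           ((.3, .4), (.5, .6)), ((1., 1.), (.0, .0)), ((.2, .3), (.6, .7)),
           ((.4, .5), (.4, .5)), ((.7, .8), (.1, .2)), ((.4, .5), (.4, .5))],
    "A2": [((.5, .6), (.3, .4)), ((.6, .7), (.1, .2)), ((1., 1.), (.0, .0)),
           ((.1, .2), (.6, .7)), ((.0, .1), (.8, .9)), ((.7, .8), (.1, .2)),
           ((.5, .6), (.3, .4)), ((.6, .7), (.2, .3)), ((1., 1.), (.0, .0)),
           ((.1, .2), (.7, .8)), ((.0, .1), (.8, .9)), ((.7, .8), (.1, .2))],
    "A3": [((.4, .5), (.3, .4)), ((.6, .7), (.2, .3)), ((.9, 1.), (.0, .0)),
           ((.0, .1), (.8, .9)), ((.0, .1), (.8, .9)), ((.6, .7), (.2, .3)),
           ((.1, .2), (.7, .8)), ((.2, .3), (.6, .7)), ((.5, .6), (.2, .4)),
           ((1., 1.), (.0, .0)), ((.3, .4), (.4, .5)), ((.0, .1), (.8, .9))],
    "A4": [((1., 1.), (.0, .0)), ((1., 1.), (.0, .0)), ((.8, .9), (.0, .1)),
           ((.7, .8), (.1, .2)), ((.0, .1), (.7, .9)), ((.0, .1), (.8, .9)),
           ((.1, .2), (.7, .8)), ((.1, .2), (.7, .8)), ((.4, .5), (.3, .4)),
           ((1., 1.), (.0, .0)), ((.3, .4), (.4, .5)), ((.0, .1), (.8, .9))],
}
B = [((.9, 1.), (.0, .0)), ((.9, 1.), (.0, .0)), ((.7, .8), (.1, .2)),
     ((.6, .7), (.1, .2)), ((.0, .1), (.8, .9)), ((.1, .2), (.7, .8)),
     ((.1, .2), (.7, .8)), ((.1, .2), (.7, .8)), ((.4, .5), (.3, .4)),
     ((1., 1.), (.0, .0)), ((.3, .4), (.4, .5)), ((.0, .1), (.7, .9))]

scores = {k: weighted_similarity(Ai, B, w) for k, Ai in patterns.items()}
best = max(scores, key=scores.get)
print(best)   # the pattern most similar to B
```

The argmax rule is the standard decision step for similarity-based pattern recognition; swapping in any of the proposed S_j(ϕ) or S_j(η) only changes how `s_element` is computed.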