
Knowledge-Based Systems 180 (2019) 62–74


Ordinal consensus measure with objective threshold for heterogeneous large-scale group decision making

Ming Tang a, Xiaoyang Zhou b, Huchang Liao a,c,∗, Jiuping Xu a, Hamido Fujita d,e, Francisco Herrera c,f

a Business School, Sichuan University, Chengdu 610064, China
b School of Economics and Management at Xidian University, Xi'an, China
c Andalusian Research Institute in Data Science and Computational Intelligence (DaSCI), University of Granada, Granada 18071, Spain
d Faculty of Information Technology, Ho Chi Minh City University of Technology (HUTECH), Ho Chi Minh City, Viet Nam
e Faculty of Software and Information Science, Iwate Prefectural University, Iwate 020-0193, Japan
f Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia

Article info

Article history:
Received 8 March 2019
Received in revised form 11 May 2019
Accepted 11 May 2019
Available online 15 May 2019

Keywords:
Large-scale group decision making
Heterogeneous
Ordinal consensus
k-means clustering
Preference relation

Abstract

Because of the increasing complexity of the real-world decision-making environment, a growing number of decision-makers are becoming involved in group decision making problems. In large-scale group decision making problems, owing to various backgrounds and psychological cognition, it is natural for different decision-makers to use heterogeneous representation forms (quantitative or qualitative) to express their distinct preference information. In this paper, we investigate the consensus reaching process in the environment of heterogeneous large-scale group decision making. A novel ordinal consensus measure with an objective threshold based on preference orderings is proposed. The process contains five parts: (1) obtaining ordinal preferences; (2) classifying all decision-makers into several subgroups using the ordinal k-means clustering algorithm; (3) measuring the consensus levels of the subgroups and the global group using novel ordinal consensus indexes; (4) providing suggestions for decision-makers to revise their preferences using feedback strategies; (5) obtaining the final decision result. An illustrative example is provided to verify the implementation of the proposed consensus model. Lastly, we discuss approaches to determine the appropriate number of clusters and the initialization center points for the proposed ordinal k-means clustering algorithm.

© 2019 Elsevier B.V. All rights reserved.

✩ No author associated with this paper has disclosed any potential or pertinent conflicts which may be perceived to have impending conflict with this work. For full disclosure statements refer to https://doi.org/10.1016/j.knosys.2019.05.019.
∗ Corresponding author at: Business School, Sichuan University, Chengdu 610064, China.
E-mail address: liaohuchang@163.com (H. Liao).
https://doi.org/10.1016/j.knosys.2019.05.019

1. Introduction

Consensus plays a critical role in group decision making (GDM) since it can ensure that the decision result is supported by all decision-makers (DMs). Technically, consensus means the full agreement or unquestioning attitude among all DMs [1]. This concept is, however, too strict and often cannot be achieved completely in real-world situations. Saint and Lawson [1] held that consensus can be described as a state of mutual agreement among a group of DMs in which all opinions have been conveyed to their satisfaction. Later, the concept of soft consensus was proposed to soften the hard consensus, and a comprehensive overview on soft consensus can be found in [2].

In the past several decades, most developed consensus measures [3-5] only considered a small number of DMs. However, with the increasing complexity of the social environment, GDM problems that involve a large number of DMs have become popular in many fields such as emergency decision-making [6,7], fossil fuel control [8], and human resource management [9]. Generally speaking, when the number of DMs in a GDM problem is more than 20, the problem can be regarded as a large-scale GDM (LSGDM) [6] problem. Because of the various backgrounds, attitudes and perceptions of the DMs, it is more difficult to achieve an agreement among all DMs for LSGDM problems than for small-scale GDM problems. Therefore, the consensus process is important and challenging for LSGDM problems.

In LSGDM, an important issue is that DMs tend to provide various forms of evaluation information owing to their distinct research fields or the different amounts of information they have mastered. The formats of evaluation information may be of a quantitative or qualitative nature. We call LSGDM problems with different information formats heterogeneous LSGDM problems. Heterogeneous LSGDM problems are common in real-life situations, and the heterogeneous LSGDM model is more consistent with actual situations than traditional GDM models.

For a heterogeneous problem, various formats of evaluation information could be given:

(1) Quantitative nature. In this situation, DMs use various quantitative representation forms to express their decision information, such as the fuzzy number [10], intuitionistic fuzzy set (IFS) [11], interval-valued IFS (IVIFS) [12] and hesitant fuzzy set (HFS) [13].
(2) Qualitative nature. In this situation, DMs use multiple qualitative representation forms to express their decision information, such as the linguistic term set (LTS) [14], 2-tuple linguistic model [15], hesitant fuzzy linguistic term set (HFLTS) [16,17], continuous interval-valued LTS (CIVLTS) [18], and probabilistic linguistic term set (PLTS) [19].

Inspired by the work on the historical account of fuzzy sets [20], we choose the aforementioned formats to analyze in this study.

There are two main ways to deal with heterogeneous information in the existing literature. One train of thinking is to unify non-homogeneous information into the 2-tuple linguistic model, since this representation model carries out the processes of computing with words (CWW) conveniently without loss of original information. Herrera et al. [21] initially used this method to manage non-homogeneous information with three heterogeneous forms: fuzzy preference relations, interval-valued preference relations and linguistic preference relations. Similarly, Martínez et al. [22] proposed several functions to transform non-homogeneous information into a unified form. The other idea for dealing with heterogeneous information is based on the distances to the ideal solution and the negative ideal solution. Zhang and Lu [23] proposed an approach to compute the distances between alternatives and ideal solutions, and then ranked the alternatives according to their closeness coefficient values. This method also needs a unification process to transform real numbers, interval numbers and linguistic values into triangular fuzzy numbers; it then obtains the ranking of alternatives by considering the distance between each alternative in the collective preference relation and both the positive and negative ideal solutions. Li et al. [24] used this idea to solve multiple attribute problems. The difference between Refs. [23] and [24] is that Li et al. [24]'s method does not need a unification process; it calculates the positive and negative ideal solutions based on the preference relations for each criterion. Pinilla et al. [25] provided a comparison among the three methods proposed in Refs. [21,23] and [24], and used them in evaluating sustainable energy policies. Recently, Li et al. [26] developed a model to integrate four kinds of heterogeneous information (real numbers, interval numbers, triangular fuzzy numbers and trapezoidal fuzzy numbers) by calculating the deviation degree between each individual matrix and the collective matrix obtained by the power average operator.

It is observed that many extensions of representation models, such as the CIVLTS, HFLTS and PLTS, have been developed. These extensions usually have complex mathematical formulas and can express people's cognitive information deeply. In this regard, the methods that transform different representation forms (three or four kinds) into a unified form may cause many problems: (1) it is difficult to develop different functions to unify these multiple models into a common format; (2) original information may be lost in the transformation process, which may further lead to unreasonable decision results; (3) the transformed formats may be inconsistent with DMs' initial opinions; (4) in the environment of LSGDM, the burden of transforming all representation forms into a uniform form and then conducting the consensus reaching process is very heavy; (5) the ideal solution method based on calculating deviation degrees also needs a unification process, and the computed ideal solution and negative ideal solution are relative. Therefore, managing the consensus reaching process (CRP) in the environment of heterogeneous LSGDM is a big research challenge.

Bearing these facts in mind, this paper develops a novel ordinal consensus measure with feedback strategies to manage heterogeneous LSGDM problems. If we study soft consensus, an acceptable consensus threshold should be given, which raises a further problem: (6) most existing papers set the consensus threshold according to subjective opinion. This study concentrates on conquering the aforementioned issues. The contributions of this paper can be highlighted as follows:

(i) We propose a novel ordinal consensus measure based on preference orderings instead of preference relations. This method is different from the two kinds of methods mentioned above: it has no transformation process and does not need to find the ideal solutions. Thus, it can deal with problems (1), (2), (3), (4) and (5).

(ii) We develop an approach to determine the consensus threshold according to the number of alternatives in a decision problem, which can solve problem (6).

The rest of this paper is organized as follows. Section 2 reviews different representation forms and existing ordinal consensus measures. A consensus framework for heterogeneous LSGDM problems is developed in Section 3. The ordinal consensus reaching process is presented in Section 4. Section 5 provides an illustrative example to verify the application of the ordinal consensus model. Discussions on how to determine the number of clusters and the initialization cluster center points are provided in Section 6. The paper ends with concluding remarks in Section 7.

2. Preliminaries

In this section, we provide a brief review of the knowledge that will be used in the rest of this study.

2.1. Heterogeneous representation models

The core idea of the FS [10] is to extend the characteristic function valued 1 or 0 into a membership function with values in the real unit interval [0, 1]. It can be used to depict the fuzzy degree to which an element belongs to a set. In view that the FS only contains membership information, it cannot characterize the uncertain degree of human perceptions. Thus, many extensions of the FS have been developed from different perspectives. Atanassov [11] introduced the concept of the IFS, which extends the classical FS by considering membership degree, non-membership degree and hesitancy degree simultaneously. After that, he proposed the IVIFS [12], in which the membership degree, non-membership degree and hesitancy degree are given as intervals within [0, 1]. In 2010, Torra [13] introduced the HFS, which allows people to hesitate among several possible membership degrees.

All the above representation models take effect when the evaluation information on alternatives is measured in a quantitative nature. The linguistic variables introduced by Zadeh [14] are often used in a qualitative environment. Let S = {s0, s1, . . . , sg} be a finite and ordered discrete LTS with odd cardinality, where si denotes a possible value for a linguistic variable and g is the granularity of the LTS. Many scholars have made contributions to qualitative representation models. Since in the CWW process the results cannot match the linguistic terms in the initial LTS S exactly, Herrera and Martínez [15] introduced the 2-tuple linguistic model to prevent information loss. The 2-tuple linguistic model is expressed by a pair of values (si, α), where si represents a linguistic term and α denotes the symbolic translation.

Fig. 1. Framework of LSGDM integrating consensus reaching process.

Xu [27] proposed a subscript-symmetric additive LTS as S = {sα | α = −τ , . . . , −1, 0, 1, . . . , τ }, where s−τ and sτ are the lower and upper limits of the linguistic terms (for the HFLTS, CIVLTS and PLTS, we use this kind of LTS).

Sometimes, DMs may hesitate among several possible linguistic terms when giving linguistic preference evaluation information. To depict such cases, the HFLTS [16] was proposed as an ordered finite subset of consecutive linguistic terms of S. Liao et al. [17] defined the mathematical form of the HFLTS, and then extended the HFLTS into the CIVLTS [18] to overcome the drawback that the HFLTS may lose information in some cases. Another limitation of the HFLTS is that all linguistic terms in an HFLE are treated equally; in many decision situations, DMs may prefer one or more of the linguistic terms. To address this issue, Pang et al. [19] introduced the PLTS. All the above representation models used in this paper are summarized in Table 1.

Let X = {x1, x2, . . . , xn} be a finite set of alternatives and E = {e1, e2, . . . , em} be m DMs. The task of these DMs is to evaluate the alternatives and provide their preference information. Preference relations (pairwise comparison matrices) are powerful and efficient tools to express a DM's preferences over alternatives. Up to now, different kinds of preference relations have been developed under heterogeneous situations. A brief overview of the nine kinds of preference relations used in this study is provided in Table 2. Note that the continuous interval-valued linguistic preference relation (CIVLPR) is first proposed in this study.

2.2. Ordinal consensus: a brief review

How to aggregate a set of ordinal preferences (rankings) of alternatives into a consensus has been studied by many scholars. Problems of this kind arise naturally in many areas, including the evaluation of objects and preferential voting. Ranking problems can be divided into two basic categories: cardinal problems and ordinal problems [33]. Cardinal ranking can express not only the dominance of an alternative over another, but also the preference intensities. Ordinal ranking does not consider the preference degrees. This paper studies the latter.

For ordinal rankings, several methods have been developed to obtain a consensus, such as the simple majority rule [34] and Kendall's method [35]. A more popular method is based on the distance measure, which first defines a distance function and then determines a consensus ranking that best agrees with all DMs' rankings. A comprehensive review on distance-based ordinal consensus can be found in Ref. [36].

Let r = (r1, r2, . . . , rn)^T be a preference ranking and ri be the order of xi. For instance, r = (2, 4, 1, 3)^T means that alternative x1 is assigned a rank of 2nd, alternative x2 a rank of 4th, and so on. Cook and Seiford [33] developed an approach to obtain a compromise ranking for a group. For two rankings r^(1) and r^(2), their distance is:

d(r^(1), r^(2)) = Σ_{i=1}^{n} |r_i^(1) − r_i^(2)|    (1)

Then, the consensus ranking γ = (γ1, . . . , γn)^T is the one which minimizes the total absolute distance:

min Σ_{k=1}^{m} Σ_{i=1}^{n} |r_i^(k) − γ_i|    (2)

where r^(k) is the ranking provided by the kth DM.

Using preference orderings in a CRP with an iterative process for GDM problems is a new topic. The origin of the ordinal CRP can be found in Ref. [37], in which a comparison method of alternatives' positions between two preference vectors was introduced to measure the consensus level. Liao et al. [38] proposed an ordinal consensus measure when dealing with GDM problems with IFPRs.

In view that the process of deriving priority vectors from preference relations is not the emphasis of our study, the aggregation operators used in this paper are provided in the Appendix.
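The following is a minimal Python sketch (not from the paper) of Eqs. (1) and (2): it computes the Cook-Seiford distance between two rankings and brute-forces a consensus ranking over all permutations, which is only practical for a small number of alternatives. Function names are illustrative.

```python
from itertools import permutations

def cs_distance(r1, r2):
    """Cook-Seiford distance of Eq. (1): sum of absolute rank differences."""
    return sum(abs(a - b) for a, b in zip(r1, r2))

def consensus_ranking(rankings):
    """Brute-force version of Eq. (2): the permutation of 1..n that minimizes
    the total absolute distance to all individual rankings."""
    n = len(rankings[0])
    return min(permutations(range(1, n + 1)),
               key=lambda g: sum(cs_distance(r, g) for r in rankings))

# In the notation above, r = (2, 4, 1, 3) means x1 is ranked 2nd, x2 4th, etc.
group = [(2, 4, 1, 3), (1, 4, 2, 3), (2, 3, 1, 4)]
print(cs_distance(group[0], group[1]))   # -> 2
print(consensus_ranking(group))
```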

Table 1
Heterogeneous representation models.

Representation model | Uncertainty measure | Symbol | Definition
FS [10] | Crisp membership degrees | F | A mapping F : X → [0, 1].
IFS [11] | Membership degrees, non-membership degrees and hesitancy degrees | A | Given by A = {⟨x, µA(x), νA(x)⟩ | x ∈ X}, where µA : X → [0, 1] and νA : X → [0, 1] such that 0 ≤ µA + νA ≤ 1.
IVIFS [12] | Interval-valued membership degrees, non-membership degrees and hesitancy degrees | A | Given by A = {⟨x, MA(x), NA(x)⟩ | x ∈ X}, where MA : X → D[0, 1], NA : X → D[0, 1] with 0 ≤ M_A^U(x) + N_A^U(x) ≤ 1.
HFS [13] | Membership degrees defined as a set of possible values | E | A function that, when applied to X, returns a subset of [0, 1].
LTS [14] | Linguistic terms | S | S = {s0, s1, . . . , sg}, where si denotes a possible value for a linguistic variable and g is the granularity.
2-tuple [15] | Pairs of a linguistic term and a numerical value | (si, α) | A function ∆ : [0, g] → S × [−0.5, 0.5), ∆(β) = (si, α) with i = round(β) and α = β − i, α ∈ [−0.5, 0.5), where round(·) is the usual round operation, si has the closest index label to β, and α is called the symbolic translation.
HFLTS [16,17] | Several possible consecutive linguistic terms | HS | HS = {⟨x, hS(x)⟩ | x ∈ X}, where hS(x) = {s_{φl}(x) | s_{φl}(x) ∈ S, φl ∈ {−τ , . . . , 0, . . . , τ }, l = 1, 2, . . . , L(x)} with s_{φl}(x) (l = 1, 2, . . . , L(x)) being consecutive terms in S.
CIVLTS [18] | Intervals of virtual linguistic terms | H̄S | H̄S = {⟨xi, h̄S(xi)⟩ | xi ∈ X}, where h̄S(xi) is a subset of S in continuous interval-valued form.
PLTS [19] | Probabilistic distribution over several linguistic terms | L(p) | L(p) = {L^(k)(p^(k)) | L^(k) ∈ S, p^(k) ≥ 0, k = 1, 2, . . . , l_{L(p)}, Σ_{k=1}^{l_{L(p)}} p^(k) ≤ 1}, where L^(k)(p^(k)) is the linguistic term L^(k) associated with its probability p^(k) and l_{L(p)} is the number of linguistic terms in L(p).

Table 2
Heterogeneous preference relations.

Preference relation | Abbreviation | Symbol | Matrix
Fuzzy preference relations [28] | FPR | P | P = (pij)n×n, with membership function µP : X × X → [0, 1]
Intuitionistic fuzzy preference relations [29] | IFPR | R | R = (rij)n×n with rij = (µij, νij)
Interval-valued intuitionistic fuzzy preference relations [29] | IVIFPR | R̄ | R̄ = (r̄ij)n×n with r̄ij = (Mij, Nij)
Hesitant fuzzy preference relations [30] | HFPR | B | B = (bij)n×n, with bij = {b_ij^s | s = 1, 2, . . . , l_{bij}}
Linguistic preference relations [3] | LPR | G | µG : X × X → S, µG(xi, xj) = gij, ∀xi, xj ∈ X
2-tuple linguistic preference relations [15] | 2TLPR | T | A set of 2-tuples, characterized by µT : X × X → S × [−0.5, 0.5)
Hesitant fuzzy linguistic preference relations [31] | HFLPR | H | H = (hij)n×n, with hij = {h_ij^{σ(s)} | s = 1, 2, . . . , l_{hij}}
Continuous interval-valued linguistic preference relations | CIVLPR | H̄ | H̄ = (h̄ij)n×n with h̄ij = {h̄_ij^{σ(s)} | s = 1, 2, . . . , l_{h̄ij}}
Probabilistic linguistic preference relations [32] | PLPR | Q | Q = (Lij(p))n×n with Lij = {L_ij^(k)(p_ij^(k)) | k = 1, 2, . . . , l_{L(p)}}
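As a small illustration of the 2-tuple model in Table 1, the translation function ∆ can be coded directly from its definition. This is a hedged sketch with illustrative names; floor(β + 0.5) is used for the rounding so that α always stays in [−0.5, 0.5).

```python
import math

def delta(beta, g):
    """2-tuple translation of Table 1: maps beta in [0, g] to (i, alpha), where
    s_i is the closest linguistic term and alpha = beta - i is the symbolic translation."""
    if not 0 <= beta <= g:
        raise ValueError("beta must lie in [0, g]")
    i = min(math.floor(beta + 0.5), g)   # index of the closest term in the LTS
    return i, beta - i

print(delta(3.4, g=6))   # -> (3, ~0.4), i.e. the 2-tuple (s3, 0.4)
```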

3. Framework of heterogeneous LSGDM with consensus

Generally, there are two processes for GDM problems: one is the CRP and the other is the selection process. The aim of the CRP is to achieve mutual agreement among all DMs. For an LSGDM problem, the clustering process is essential; it reduces the size of the decision-making problem, based on which the cost and complexity can be reduced [39]. Furthermore, we can find common opinion patterns, such as subgroups with highly similar judgment information, through the clustering process, in which one member of the group can be identified as a spokesperson representing the subgroup [40]. The aim of the selection process is to obtain a final decision result. It can be achieved by aggregating individuals' or subgroups' preference relations into a collective one and then deducing the final ranking, or by aggregating the rankings of the individual DMs and then fusing the sub-rankings into a final one. The latter is the research content of this paper.

The framework of the heterogeneous LSGDM model proposed in this paper is presented in Fig. 1. As illustrated in Fig. 1, the framework contains five essential phases.

(1) The first phase is to obtain preference orderings. A group of DMs evaluate a set of alternatives and provide heterogeneous preference relations. Then, different aggregation operators are used to aggregate these preference relations and obtain preference orderings (a small illustrative sketch of this step follows the list).
(2) The second phase is the clustering process. As a classical clustering method, the k-means clustering algorithm [41] is simple and the most widespread in the existing literature. The advantage of the k-means clustering method is its easy implementation and high efficiency, because its computational complexity is o(nKI) [42], where n is the number of objects, K is the number of clusters and I is the number of iterations. In this paper, we extend the k-means clustering algorithm to the environment of preference orderings and propose the ordinal k-means clustering algorithm. Details can be found in Section 4.1.
(3) The third phase is the ordinal consensus checking process based on the subgroups generated by the proposed ordinal k-means clustering method. In this phase, we introduce two consensus indexes: the subgroup ordinal consensus index (SOCI) and the global ordinal consensus index (GOCI), which measure the consensus levels of each subgroup and the global group. Details can be found in Section 4.2.
(4) The fourth phase is the CRP based on the feedback mechanism. Usually, a consensus threshold λ should be determined in advance. If the consensus level does not reach the threshold, then the feedback mechanism is used to improve it. The feedback mechanism contains two sets of rules: identification rules and direction rules. The identification rules are used to identify subgroups, alternatives and pairs of alternatives that contribute less to reaching a high-level consensus. The direction rules are used to provide suggestions for DMs in different subgroups to adjust their evaluations. Based on this phase, an expected consensus level can be achieved. Details can be found in Section 4.3.
(5) The last phase is to obtain the final decision result. Once the consensus degree reaches the expected level, we obtain a compromise ranking and the optimal alternative.
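Phase (1) depends on the aggregation operators given in the Appendix, which differ for each representation model. Purely as an illustration of the idea (and not the operators actually used in the paper), a fuzzy preference relation could be reduced to an ordinal preference by averaging each row and ranking the resulting scores:

```python
import numpy as np

def fpr_to_ranking(P):
    """Illustrative phase-(1) step: score x_i by the mean of row i of an FPR and
    convert the scores to ranks (1 = best). Row averaging is an assumption made
    for illustration only, not the aggregation operator of the Appendix."""
    P = np.asarray(P, dtype=float)
    scores = P.mean(axis=1)
    order = np.argsort(-scores)                 # indices from best to worst
    ranks = np.empty(len(scores), dtype=int)
    ranks[order] = np.arange(1, len(scores) + 1)
    return ranks

# Made-up FPR over five alternatives
P = [[0.5, 0.6, 0.7, 0.8, 0.4],
     [0.4, 0.5, 0.6, 0.7, 0.3],
     [0.3, 0.4, 0.5, 0.6, 0.2],
     [0.2, 0.3, 0.4, 0.5, 0.1],
     [0.6, 0.7, 0.8, 0.9, 0.5]]
print(fpr_to_ranking(P))   # -> [2 3 4 5 1]
```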

4. Ordinal consensus process for heterogeneous LSGDM

This section presents the framework of our proposed model regarding five phases: derivation of preference orderings, clustering process, consensus checking process, CRP and selection process. Since the first phase and the last phase are not the innovative points of this study, we discuss the middle three phases in detail. Section 4.1 introduces the k-means clustering algorithm whose input data are ordinal preferences. Section 4.2 gives the consensus measures. The iterative feedback adjustment strategy is discussed in Section 4.3.

4.1. Ordinal k-means clustering process

Clustering is a process that divides a set of physical or abstract objects into multiple classes composed of similar objects. Dividing DMs into several clusters according to their evaluation information is a fundamental process in handling LSGDM problems [9]. It can reduce the computational complexity and guarantee the accuracy of results in the aggregation process. Actually, one significant difference between traditional GDM and LSGDM is the clustering process. Many clustering algorithms, such as the fuzzy c-means (FCM) clustering algorithm [8], the fuzzy equivalence relation [39], the breadth-first-search-neighbor method [43] and self-organizing maps [44], have been used to solve LSGDM problems. In this study, we adopt the k-means clustering algorithm to classify ordinal preferences. The core idea of the k-means clustering algorithm is to minimize the distance from all samples to their centers and to achieve convergence by iterations. The Euclidean distance and the cosine distance are the most popular distance measures used in the k-means clustering algorithm. This study uses the Euclidean distance. Next, we give the detailed explanation of this algorithm.

Step 1. Initializing process. As the start of this algorithm, some initializations are given. The iteration number I is set to 1.

Step 2.1. Select initialization points. The first thing we should do in the iteration process is to randomly select the initial clustering points (in Section 6, a detailed discussion is given on this issue).

Step 2.2. Compute the distance from each center to each point according to the chosen distance metric. The Euclidean distance between two orderings of alternatives is given as:

d_Euc = √( Σ_{i=1}^{n} (r_i^(k) − r_i^(c))^2 / (2[(n − 1)^2 + (n − 3)^2 + · · ·]) )    (3)

where r_i^(k) is the rank of the ith alternative provided by DM e_k and r_i^(c) is the rank of the ith alternative in the clustering centroid. The denominator of Eq. (3) is a standardization operation. The maximum difference between two rankings can be represented as 2[(n − 1) + (n − 3) + · · ·] (i.e., the positions of the alternatives in the two rankings are totally opposite). If n is even, then the last term of (n − 1) + (n − 3) + · · · is 1; if n is odd (except 1), then the last term of (n − 1) + (n − 3) + · · · is 2. For instance, for three alternatives {x1, x2, x3}, the maximum difference between two rankings is 4 (r^(1) = (1, 2, 3)^T, r^(2) = (3, 2, 1)^T). Similarly, the maximum difference between two rankings when there are four alternatives is 8 (r^(3) = (1, 2, 3, 4)^T, r^(4) = (4, 3, 2, 1)^T). Analogously, the largest quadratic difference between two rankings is 2[(n − 1)^2 + (n − 3)^2 + · · ·]; for example, the quadratic difference between r^(3) and r^(4) is 20. Suppose that there are four alternatives {x1, x2, x3, x4}, the ordering provided by a DM is r^(5) = (3, 2, 1, 4) and the ranking of the clustering center is r^(c) = (3, 1, 2, 4). Then the distance between these two rankings is 0.3162. Obviously, 0 ≤ d_Euc ≤ 1. The distance measure takes into account the extreme case in which a DM has no consensus with the group.

Step 2.3. Assign each point to the nearest clustering center. After calculating the distances, each point should be assigned to the nearest clustering center. That is, DM e_k is assigned to cluster C_l if d(r^(k), Z_l(I)) = min{d(r^(k), Z_h(I)), h = 1, 2, . . . , K}, where Z_h(I) denotes the center of the hth cluster in iteration I.

Step 2.4. Update the clustering center of each cluster. We use the preference vector to calculate the clustering center. Let ω^(k) = (ω_1^(k), ω_2^(k), . . . , ω_n^(k))^T be the preference vector of DM e_k. The preference vector is obtained by:

ω_i^(k) = (n − r_i^(k)) / Σ_{i=1}^{n−1} i    (4)

For instance, suppose that a DM ranks five alternatives X = {x1, x2, . . . , x5} and gives his ranking as r = (3, 1, 2, 5, 4)^T. Then his preference vector is ω = (0.2, 0.4, 0.3, 0, 0.1)^T.

Based on Eq. (4), the preference ordering can be recovered by taking the descending order of the elements in the preference vector. For instance, the preference ordering for ω = (0.2, 0.4, 0.3, 0, 0.1)^T is r = (3, 1, 2, 5, 4)^T.

Next, the clustering center can be obtained. The preference vector of subgroup C_l is

ω_i^(C_l) = (1 / #C_l) Σ_{t=1}^{#C_l} ω_i^(lt)    (5)

where #C_l is the number of DMs in the lth cluster and ω^(lt) = (ω_1^(lt), ω_2^(lt), . . . , ω_n^(lt))^T is the preference vector of the tth expert in subgroup C_l.

After dividing the DMs into several subgroups, the next task is to assign a weight to each cluster. Generally, the number of DMs in a group reflects its importance. Let ζ_l be the weight of cluster C_l. If all DMs are given equal importance weights, the weight of a subgroup can be given as:

ζ_l = #C_l / m    (6)

where m is the total number of DMs.

Based on the above analysis, the ordinal k-means clustering algorithm iterates Steps 2.2-2.4 until the cluster centers no longer change.
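A minimal Python sketch of Steps 1-2.4 is given below, assuming the Euclidean distance of Eq. (3), the preference vectors of Eqs. (4)-(5), and randomly chosen initial centers; function and variable names are illustrative and this is not the authors' pseudo-code.

```python
import numpy as np

def ordinal_distance(r1, r2):
    """Normalized Euclidean distance between two rankings, Eq. (3)."""
    n = len(r1)
    denom = 2 * sum((n - 1 - 2 * j) ** 2 for j in range((n + 1) // 2))  # 2[(n-1)^2+(n-3)^2+...]
    return np.sqrt(np.sum((np.asarray(r1) - np.asarray(r2)) ** 2) / denom)

def preference_vector(r):
    """Eq. (4): omega_i = (n - r_i) / (1 + 2 + ... + (n - 1))."""
    r = np.asarray(r, dtype=float)
    n = len(r)
    return (n - r) / (n * (n - 1) / 2)

def vector_to_ranking(omega):
    """Rank alternatives by descending preference-vector value (1 = best)."""
    order = np.argsort(-np.asarray(omega))
    ranks = np.empty(len(omega), dtype=int)
    ranks[order] = np.arange(1, len(omega) + 1)
    return ranks

def ordinal_kmeans(rankings, K, max_iter=100, seed=0):
    """Assign each DM's ranking to the nearest center (Eq. (3)), then recompute
    each center from the mean preference vector of its members (Eq. (5))."""
    rng = np.random.default_rng(seed)
    rankings = [np.asarray(r) for r in rankings]
    centers = [rankings[i] for i in rng.choice(len(rankings), K, replace=False)]
    for _ in range(max_iter):
        labels = [int(np.argmin([ordinal_distance(r, c) for c in centers])) for r in rankings]
        new_centers = []
        for l in range(K):
            members = [preference_vector(r) for r, lab in zip(rankings, labels) if lab == l]
            new_centers.append(vector_to_ranking(np.mean(members, axis=0)) if members else centers[l])
        if all(np.array_equal(a, b) for a, b in zip(centers, new_centers)):
            break
        centers = new_centers
    return labels, centers

# Usage: labels, centers = ordinal_kmeans(list_of_rankings, K=3)
```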

4.2. Consensus checking process with ordinal consensus measures

In this section, the ordinal consensus indexes are introduced to measure the consensus level of the subgroups and the global group.

4.2.1. Consensus measure

Generally, most existing consensus measures are calculated based on the distance measures between each individual preference matrix and the collective one, or based on the similarity degree among the preference matrices. In heterogeneous LSGDM problems, if we use these methods, a transformation process is inevitable because their precondition is that the individual preference matrices have a unified form. Defining various transformation rules is a heavy task in the LSGDM environment. Furthermore, valuable information may be lost in the transformation process, and the transformed preference relations may be contrary to the DMs' original perceptions.

In this regard, based on preference orderings, we propose two consensus indexes which can overcome the above defects.

Definition 1. The SOCI of a subgroup C_l with respect to the global group C_G is:

SOCI^(C_l) = 1 − √( Σ_{i=1}^{n} (r_i^(C_l) − r_i^G)^2 / (2[(n − 1)^2 + (n − 3)^2 + · · ·]) )    (7)

where r_i^(C_l) is the position of alternative x_i provided by the subgroup C_l and r_i^G is the position of alternative x_i in the global group C_G.

Clearly, SOCI^(C_l) ∈ [0, 1]. A larger value of SOCI^(C_l) indicates a higher ordinal consensus level of C_l with regard to the global group. If SOCI^(C_l) = 1, then the preference ordering of the subgroup C_l is fully consistent with that of the global group.

Based on Eq. (7), the GOCI can be defined as:

GOCI = Σ_{l=1}^{K} ζ_l [ 1 − √( Σ_{i=1}^{n} (r_i^(C_l) − r_i^G)^2 / (2[(n − 1)^2 + (n − 3)^2 + · · ·]) ) ]    (8)

where ζ_l is the weight of the subgroup C_l.

Example 1. Let r^(C1) = (3, 1, 2, 4), r^(C2) = (2, 3, 4, 1), r^(C3) = (1, 3, 2, 4) and r^G = (3, 2, 1, 4). Suppose that the three subgroups have equal weights. Then, SOCI^(C1) = 0.6838, SOCI^(C2) = 0, SOCI^(C3) = 0.6127, and GOCI = 0.4322.

Remark 1. The differences between our proposal and the existing ordinal consensus indexes [38] are: (1) our proposal is based on the Euclidean distance; (2) the extreme case (SOCI^(C_l) = 0) is considered, i.e., the orders of the alternatives are completely opposite; (3) the consensus threshold varies with the number of alternatives (see the next subsection).
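A short sketch of Eqs. (7) and (8), reusing the normalization of Eq. (3), is shown below. The rankings used in the example call are the first-round subgroup and global rankings reported later in Section 5; the printed values coincide with the consensus indexes given there (names are illustrative).

```python
import numpy as np

def soci(r_cluster, r_global):
    """Subgroup ordinal consensus index, Eq. (7)."""
    n = len(r_global)
    denom = 2 * sum((n - 1 - 2 * j) ** 2 for j in range((n + 1) // 2))
    diff = np.asarray(r_cluster) - np.asarray(r_global)
    return 1 - np.sqrt(np.sum(diff ** 2) / denom)

def goci(cluster_rankings, weights, r_global):
    """Global ordinal consensus index, Eq. (8): weighted sum of the SOCIs."""
    return sum(w * soci(r, r_global) for r, w in zip(cluster_rankings, weights))

# First-round values from the illustrative example in Section 5
r_global = (3, 2, 4, 5, 1)
clusters = [(2, 3, 5, 4, 1), (2, 5, 3, 4, 1), (5, 1, 3, 4, 2)]
weights = (0.5, 0.2, 0.3)
print([round(soci(r, r_global), 4) for r in clusters])   # [0.6838, 0.4523, 0.5528]
print(round(goci(clusters, weights, r_global), 4))       # 0.5982
```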

4.2.2. Predefined consensus threshold

In real-world situations, it is not realistic to require subgroups to have full agreement with the global group. Generally, we can set an acceptable threshold λ for the GOCI. If GOCI ≥ λ, then the global group achieves an acceptable consensus level. We should note that a low value of the consensus threshold may cause the decision result to be controversial before a satisfactory compromise is obtained, whereas a high value of the consensus threshold may cause a waste of time and consumption of resources. The value of λ deserves research and depends on the specific problem. For instance, when a decision problem is extremely crucial, a larger value of λ such as 0.9 should be set, while in emergency situations, because of the time limitation, a softer threshold such as 0.8 should be used [7]. In LSGDM problems, a smaller value of λ may be acceptable: if there are many DMs, various ideas and opinions will appear, and the consensus process will be complex and time-consuming. Some studies have used simulation analysis to determine the consensus threshold [6]. In this paper, we use simulation analysis with the help of the MATLAB R2016a software package to determine the value of λ.

In view that our consensus measure is based on the rankings of alternatives, the consensus threshold should be related to the number of alternatives. Here, the concept of the max non-full consensus level is proposed. For ordinal consensus, a full consensus means that the ranking of a DM is exactly the same as the ranking of the group. Apart from full consensus, the largest consensus level of a subgroup with respect to the global group is reached when the order of two adjacent alternatives is reversed. For instance, for alternatives {x1, x2, x3, x4}, suppose the ranking of the global group is r^(G) = (1, 2, 3, 4)^T. Undoubtedly, the degree of consensus of subgroup C1 is 1 if its ranking is r^(1) = (1, 2, 3, 4)^T. Apart from this case, the largest consensus level of C1 appears when its ranking is r^(2) = (2, 1, 3, 4)^T, r^(2) = (1, 3, 2, 4)^T or r^(2) = (1, 2, 4, 3)^T. In these cases, the value of SOCI^(C1) is 0.6838. Therefore, it is not realistic to set the value of λ to 0.9 or 0.8 when there are four alternatives in a decision-making problem.

Fig. 2 presents the change of the max non-full consensus level with the number of alternatives. Actually, most existing literature [45-48] in the field of LSGDM uses fewer than 6 alternatives. As we can see from Fig. 2, when the number of alternatives is less than 6, the value of the max non-full consensus level is smaller than 0.8. We can make λ a little smaller than the value of the max non-full consensus level so that some heterogeneous space can be reserved for the subgroups. Furthermore, it is too strict to set the value of λ to 0.9 or 0.8 when there is a small number of alternatives: if λ is too high, then more iterations of the CRP will be needed and much time will be consumed. Therefore, 0.75 or 0.8 is appropriate if we use the proposed consensus model to solve LSGDM problems. When the number of alternatives is more than 6, then 0.8 could be an appropriate choice for λ. In LSGDM or emergency environments, the consensus threshold should be lower.

Fig. 2. Max non-full consensus levels based on different numbers of alternatives.

4.3. CRP based on the feedback adjustment strategy

If the GOCI does not reach the preset threshold, then adjustment strategies should be adopted to improve it. There are two types of adjustment strategies: one is the automatic optimization method; the other is the feedback optimization method. The automatic optimization method does not interact with DMs, while the feedback optimization method provides suggestions for DMs and lets them make some revisions to their preferences. The advantage of the automatic optimization method is that it is time-saving, while the feedback optimization method can communicate with DMs whenever necessary and thus overcomes the limitation that the consensus result is only a calculated one. If time allows, the feedback strategy is better than the automatic optimization method. In this section, we introduce a feedback mechanism to reach an acceptable consensus degree.

The proposed feedback mechanism contains two sets of rules: identification rules and direction rules. The identification rules are used to identify the clusters, alternatives and pairs of alternatives that contribute less to reaching a high-level consensus.

(1.1) Identification rule for clusters. It is used to identify the clusters C_l that do not reach the predefined threshold γ, which can be expressed as

C = {C_l | SOCI^(C_l) < γ, l = 1, 2, . . . , K}    (9)

(1.2) Identification rule for alternatives. It is used to identify the alternatives that should be modified by C_l, which can be expressed as

AL = {x_i | max_i |r_i^G − r_i^(C_l)|, i = 1, 2, . . . , n}    (10)

Note that this phase only identifies one pair of alternatives in each iterative round.

(1.3) Identification rule for pairs of alternatives. For any alternative x_i ∈ AL, this rule identifies the pairwise alternatives (x_i, x_j) whose mutual preference relationship is farthest from the global group's. The positions which should be modified are identified as:

PO_i = {(i, j) | x_i ∈ AL ∧ max_j |(r_i^(C_l) − r_j^(C_l)) − (r_i^G − r_j^G)|}    (11)

The direction rules are used to provide suggestions for the clusters to adjust their preferences. Based on the relationship between (r_i^(C_l) − r_j^(C_l)) and (r_i^G − r_j^G), the direction rules are designed as follows:

(2.1) Direction rule 1. If (r_i^(C_l) − r_j^(C_l)) < (r_i^G − r_j^G), then the DMs in subgroup C_l should decrease the assessment associated with the pair of alternatives (x_i, x_j).
(2.2) Direction rule 2. If (r_i^(C_l) − r_j^(C_l)) > (r_i^G − r_j^G), then the DMs in subgroup C_l should increase the assessment associated with the pair of alternatives (x_i, x_j).

Based on the above discussions, the CRP iterates the consensus measurement, identification rules and direction rules until the GOCI reaches the threshold λ.
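A compact sketch of one feedback round built from Eqs. (9)-(11) and the direction rules is given below; gamma is the subgroup-level threshold, ties in Eq. (11) are broken by the first index, and the code is an illustrative rendering rather than the paper's pseudo-code box.

```python
import numpy as np

def soci(r_cluster, r_global):
    """Subgroup ordinal consensus index, Eq. (7)."""
    n = len(r_global)
    denom = 2 * sum((n - 1 - 2 * j) ** 2 for j in range((n + 1) // 2))
    return 1 - np.sqrt(np.sum((np.asarray(r_cluster) - np.asarray(r_global)) ** 2) / denom)

def feedback_round(cluster_rankings, r_global, gamma):
    """One feedback round: Eq. (9) flags low-consensus clusters, Eq. (10) picks the
    alternative whose position deviates most, Eq. (11) picks the pair whose relative
    order deviates most, and the direction rules decide increase/decrease."""
    suggestions = []
    rg = np.asarray(r_global)
    for l, r_c in enumerate(cluster_rankings):
        rc = np.asarray(r_c)
        if soci(rc, rg) >= gamma:                  # Eq. (9): cluster already acceptable
            continue
        i = int(np.argmax(np.abs(rg - rc)))        # Eq. (10)
        dev = np.abs((rc[i] - rc) - (rg[i] - rg))  # Eq. (11): deviations of pairs (x_i, x_j)
        dev[i] = -1                                # exclude the pair (x_i, x_i)
        j = int(np.argmax(dev))                    # ties broken by the first index
        direction = "decrease" if (rc[i] - rc[j]) < (rg[i] - rg[j]) else "increase"
        suggestions.append((l + 1, (i + 1, j + 1), direction))
    return suggestions

# First-round data of Section 5: all three clusters are flagged; cluster C1 is advised
# to decrease its assessment on the pair (x1, x2), as in the text.
print(feedback_round([(2, 3, 5, 4, 1), (2, 5, 3, 4, 1), (5, 1, 3, 4, 2)],
                     (3, 2, 4, 5, 1), gamma=0.75))
```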

Note. In some cases, the selected DMs may be unwilling to change their opinions. One method to address these situations is to exclude these DMs from the group [4]. However, this kind of method may lead to the loss of information. Another effective approach is to remove some opinions of the DMs instead of removing the DMs from the whole group. This is not the focus of our study; readers can refer to Ref. [49] for details.

5. Illustrative example

In this section, an illustrative example is used to demonstrate the implementation of the proposed LSGDM consensus model. Then, a comparison is provided to explain the advantages of our proposed model.

5.1. Case description

Suppose that the municipal government decides to add a new light rail line to the public transportation system in the city of Chongqing, China. A set of 20 DMs E = {e1, e2, . . . , e20} is grouped to evaluate five construction routes. In the evaluating process, different DMs use heterogeneous preference representation models to express their preference information. DMs e1, e2, e3 and e4 provide their opinions by FPRs, e5 and e6 by IFPRs, e7 and e8 by IVIFPRs, e9 and e10 by HFPRs, e11 and e12 by LPRs, e13 and e14 by HFLPRs, e15 and e16 by CIVLPRs, e17 and e18 by PLPRs, and e19 and e20 by 2TLPRs. The experts that use linguistic representation models adopt the following LTS: S = {s−4 = extremely bad, s−3 = very bad, s−2 = bad, s−1 = slightly bad, s0 = medium, s1 = slightly good, s2 = good, s3 = very good, s4 = extremely good}. As discussed in Section 4.2.2, we set λ as 0.75.

All experts' preference relations are listed in Box I. Using the corresponding aggregation operators given in the Appendix, all ordinal preferences can be obtained as follows:

r^(1) = (2, 3, 4, 5, 1)^T, r^(2) = (3, 5, 4, 1, 2)^T, r^(3) = (2, 3, 4, 5, 1)^T, r^(4) = (2, 3, 5, 4, 1)^T, r^(5) = (1, 3, 5, 2, 4)^T, r^(6) = (2, 3, 4, 5, 1)^T, r^(7) = (3, 4, 2, 5, 1)^T, r^(8) = (4, 3, 5, 1, 2)^T, r^(9) = (1, 3, 4, 5, 2)^T, r^(10) = (2, 4, 5, 3, 1)^T, r^(11) = (4, 2, 3, 5, 1)^T, r^(12) = (2, 3, 4, 5, 1)^T, r^(13) = (3, 2, 4, 5, 1)^T, r^(14) = (1, 5, 3, 4, 2)^T, r^(15) = (5, 1, 3, 4, 2)^T, r^(16) = (2, 5, 4, 3, 1)^T, r^(17) = (4, 3, 2, 5, 1)^T, r^(18) = (3, 1, 2, 5, 4)^T, r^(19) = (1, 5, 3, 2, 4)^T, r^(20) = (5, 1, 2, 3, 4)^T.

5.2. Solving the case by the ordinal consensus model

Next, the proposed consensus model is adopted to handle this heterogeneous LSGDM problem.

First round. Firstly, the ordinal k-means clustering algorithm is used to divide these 20 DMs into several subgroups. Here, we set the value of K as 3. The initial clustering centers are e1, e7 and e15; these three clustering centers are selected randomly. Discussions about how to improve the determination of the value of K and the selection of the initial clustering centers are provided in Section 6. To save space, we omit the calculation process. The three clusters are: C1 = {e1, e10, e11, e16}, C2 = {e2, e3, e4, e5, e7, e8, e12, e14, e17}, C3 = {e6, e9, e13, e15, e18}.

The preference vectors of these three clusters can be obtained using Eqs. (4) and (5): ω^(C1,1) = (0.31, 0.18, 0.07, 0.08, 0.36)^T, ω^(C2,1) = (0.225, 0.075, 0.225, 0.175, 0.3)^T, ω^(C3,1) = (0.08, 0.34, 0.20, 0.14, 0.24)^T.

The ordinal preferences of these three clusters are obtained from the three preference vectors: r^(C1,1) = (2, 3, 5, 4, 1)^T, r^(C2,1) = (2, 5, 3, 4, 1)^T, r^(C3,1) = (5, 1, 3, 4, 2)^T.

The weight vector of these three clusters is obtained using Eq. (6): ζ = (0.5, 0.2, 0.3)^T.

Thus, the preference vector of the global group is ω^(G) = (0.21, 0.22, 0.1, 0.1, 0.37)^T, and the ordinal preference of the global group is r^(G) = (3, 2, 4, 5, 1)^T.

After obtaining the group ranking, the next step is to calculate the consensus indexes. Using Eq. (7), three SOCIs are obtained: SOCI^(C1,1) = 0.6838, SOCI^(C2,1) = 0.4523, SOCI^(C3,1) = 0.5528. Since GOCI = 0.5982 < λ, the feedback mechanism is used to improve the degree of consensus.

(1) Identification rule for clusters. The clusters that should make modifications are C1,1, C2,1 and C3,1.
(2) Identification rule for alternatives. According to Eq. (10), the alternatives that should be modified by the clusters are: C1,1: x1, C2,1: x2, C3,1: x1.
(3) Identification rule for pairs of alternatives. According to Eq. (11), the pairs of alternatives that need to be modified are: C1,1: (x1, x2), C2,1: (x2, x3), C3,1: (x1, x4).

Next, we use the direction rules to provide suggestions for the clusters. Based on the pairs of alternatives presented above and the direction rules, the subgroups that should increase the preferences over their pairs of alternatives are C2 and C3, and the subgroup that should decrease the preferences over its pair of alternatives is C1.

Second round. Suppose that all DMs make the adjustments according to the suggestions and provide their modified preference relations. To save space, the preference relations are not presented. The updated ordinal preferences are:

r^(1) = (3, 2, 4, 5, 1)^T, r^(2) = (3, 4, 5, 1, 2)^T, r^(3) = (3, 2, 4, 5, 1)^T, r^(4) = (3, 2, 5, 4, 1)^T, r^(5) = (1, 3, 5, 2, 4)^T, r^(6) = (3, 2, 4, 5, 1)^T, r^(7) = (2, 3, 4, 5, 1)^T, r^(8) = (3, 4, 5, 2, 1)^T, r^(9) = (2, 3, 4, 5, 1)^T, r^(10) = (2, 3, 5, 4, 1)^T, r^(11) = (3, 2, 4, 5, 1)^T, r^(12) = (3, 2, 4, 5, 1)^T, r^(13) = (3, 2, 4, 5, 1)^T, r^(14) = (2, 3, 4, 5, 1)^T, r^(15) = (4, 1, 3, 5, 2)^T, r^(16) = (3, 2, 4, 5, 1)^T, r^(17) = (4, 2, 3, 5, 1)^T, r^(18) = (3, 2, 1, 5, 4)^T, r^(19) = (2, 4, 5, 3, 1)^T, r^(20) = (4, 1, 3, 5, 2)^T.

Next, we still set the clustering centers as e1, e7 and e15. The new clusters are: C1,2 = {e1, e3, e4, e6, e11, e12, e13, e16}, C2,2 = {e2, e5, e7, e8, e9, e10, e14, e19}, C3,2 = {e15, e17, e18, e20}.

The preference vectors of these three clusters are: ω^(C1,2) = (0.2, 0.3, 0.0875, 0.0125, 0.4)^T, ω^(C2,2) = (0.2875, 0.1625, 0.0375, 0.1625, 0.35)^T, ω^(C3,2) = (0.125, 0.35, 0.25, 0, 0.275)^T.

The ordinal preferences of these three clusters are: r^(C1,2) = (3, 2, 4, 5, 1)^T, r^(C2,2) = (2, 3, 5, 4, 1)^T, r^(C3,2) = (4, 1, 3, 5, 2)^T.

The weight vector of these clusters is ζ = (0.4, 0.4, 0.2)^T. Thus, the preference vector of the global group is ω^(G) = (0.22, 0.255, 0.19, 0.085, 0.34)^T, and the ordering of the alternatives for the global group is r^(G) = (3, 2, 4, 5, 1)^T.

After obtaining the group ranking, the next step is to calculate the degree of consensus. Using Eq. (7), three SOCIs are obtained: SOCI^(C1,2) = 1, SOCI^(C2,2) = 0.7764, SOCI^(C3,2) = 0.6838. Since GOCI = 0.8473 > λ, the CRP is terminated. The final ranking of the five alternatives is x5 > x2 > x1 > x3 > x4.

Box I. The original heterogeneous preference relations of the 20 DMs: the FPRs P^(1)-P^(4) of e1-e4, the IFPRs R^(5) and R^(6) of e5 and e6, the IVIFPRs R^(7) and R^(8) of e7 and e8, the HFPRs B^(9) and B^(10) of e9 and e10, the LPRs G^(11) and G^(12) of e11 and e12, the HFLPRs H^(13) and H^(14) of e13 and e14, the CIVLPRs H^(15) and H^(16) of e15 and e16, the PLPRs Q^(17) and Q^(18) of e17 and e18, and the 2TLPRs T^(19) and T^(20) of e19 and e20.

5.3. Comparative analysis

According to the classification standard adopted in Ref. [50], there are three kinds of heterogeneous problems. The first one relates to different preference structures, such as additive preference relations, multiplicative preference relations, preference orderings and utility functions [5]; the second one appears when DMs have different backgrounds and different levels of knowledge [6-8]; the third one relates to different expressions, such as fuzzy numbers, interval numbers and linguistic data [21-23,26].

This study belongs to the third framework. Table 3 provides comparisons between this study and other related studies under the third framework.

As shown in Table 3, most existing studies only solve heterogeneous problems with three or four expression forms; fuzzy numbers, real numbers, interval numbers and linguistic values are the focus of heterogeneous problems. The proposed method deals with 9 kinds of expression forms, including some cognitively complex information. Note that other expression forms that do not appear in this study can also be handled using our method. The proposed method can also deal with the first kind of problem efficiently. Furthermore, most previous related works can only solve small-scale decision-making problems, while the method proposed in this study can be used in a large-scale or small-scale context.

Table 3
Comparison between the proposed method and the other related works.

Ref. | Expression forms | Processing method | Scale
[21] | Fuzzy numbers, IVFS, LTS | Transformation | Small
[22] | Fuzzy numbers, IVFS, LTS | Transformation | Small
[23] | Fuzzy numbers, real numbers, IVFS, multi-granular linguistic labels | Ideal solution | Small
[24] | Fuzzy numbers, real numbers, LTS | Ideal solution | Small
[26] | Real numbers, IVFS, trapezoidal fuzzy numbers | Ideal solution | Small
[51] | IFS, trapezoidal fuzzy numbers, IVFS, real numbers | Ideal solution | Small
[52] | Multiplicative fuzzy numbers, fuzzy numbers, LTS, IFS | Ideal solution | Small
[43] | Preference orderings, utility functions, multiplicative fuzzy numbers, fuzzy numbers | Ordinal consensus | Large
This study | Fuzzy numbers, IFS, IVIFS, HFS, LTS, HFLTS, CIVLTS, PLTS, 2-tuple linguistic values | Ordinal consensus | Large
Fig. 3. Relationship between SSE and the number of clusters.

proposed in this study can be used in large-scale or small-scale more expensive clustering algorithms [53]. Second, it is easy to
context. Ref. [43] also used the ordinal consensus model to deal implement. This clustering algorithm can be implemented by
with heterogeneous LSGDM problems with four kinds of prefer- almost every data mining software. Third, it is flexible because
ence representation structures: preference orderings, utility func- almost all aspects of it can be modified, such as the distance mea-
tions, additive preference relations and multiplicative preference sure, the termination criterion and the initialization [53]. These
relations. The main difference between our proposed model with advantages are not available to some aforementioned clustering
Ref. [43]’s model is the way to obtain the ordinal consensus algorithms such as the fuzzy equivalence relation [39] and broad-
measure. We calculate the degree of consensus based on the first-search-neighbor method [43]. However, these are also some
Euclidean distance (See Eqs. (7) and (8)). Zhang et al. [43] first drawbacks for the k-means clustering algorithm. First, the value
expressed the preference ordering with an n × n matrix OP = of K needs to be given in advance. In many real-world situations,
(opij )n×n , where opij = 1 if ri < rj , opij = 0 if ri > rj and data need to be clustered into several categories. When the value
opij = 0.5 if ri = rj . For instance, let r (k) = (2, 1, 5, 6, 4, 3)T and of K is not known, we usually need to combine other algorithms
r (c ) = (1, 2, 5, 6, 3, 4)T . Then, we can obtain OP (k) and OP (c ) based to obtain it. Second, this algorithm is sensitive to initial center
on r (k) and r (c ) : points. The clustering results can be influenced by the selection
0.5 of different initial points. Third, the noise and isolated points
⎛ ⎞
0 1 1 1 1
⎜1 0.5 1 1 1 1 ⎟ have great influence on the clustering results. In this sense, the
⎜0 0 0.5 1 0 0 ⎟ outlier-detection is essential.
⎜ ⎟
OP (k) =⎜ ⎟ and
⎜0 0 0 0.5 0 0 ⎟
⎝0 0 1 1 0.5 0 ⎠ 6.1. Determination of K
0 0 1 1 1 0 .5
There are some studies that focused on the methods to deter-
0.5
⎛ ⎞
1 1 1 1 1 mine the optimal clustering number. In this section, we use two
⎜0 0.5 1 1 1 1 ⎟ indicators, i.e., the sum of squared errors (SSE) and the silhouette
⎜0 0 0.5 1 0 0 ⎟
⎜ ⎟
OP (c ) =⎜ ⎟. coefficient, to select the value of K .
⎜0 0 0 0.5 0 0 ⎟ (1) Sum of squared errors
⎝0 0 1 1 0.5 1 ⎠ With the increasing of the clustering numbers, the division
0 0 1 1 0 0.5 of samples will be more detailed and the aggregation degree of
Ref. [43] used the concept of individual satisfaction. Suppose each cluster will increase, which further lead to the value of SSE
that ek concerns all alternatives. Then, the degree of consensus of becoming smaller. When the number of clusters, K , is smaller
4
ek is 1 − 5× 6
= 0.8667 based on their consensus measure. Our than the true clustering number (optimal clustering number), the
consensus measure can increase of K will greatly increase the aggregation degree of each
√ be obtained by calculating an Euclidean
cluster. Thus, the value of SSE will decline sharply. When K is
distance directly: 1 − 1+1+1+1
= 0.7610. Therefore, our pro-
2×(25+9+1) close to the true clustering number, the change of SSE will be
posed model is efficient with regard to the consensus measure. very slight with the increase of K . We can use this idea to select
Different methods have their own emphases. In other words, one the optimal clustering number. The SSE is given as:
method cannot performance better than other methods in all
K
aspects. 2
|ω(k) − ω(Cl ) |
∑ ∑
SSE = (12)
l=1 k∈Cl
6. Discussion on the clustering algorithm
6.1. Determination of K

Several studies have focused on methods to determine the optimal number of clusters. In this section, we use two indicators, i.e., the sum of squared errors (SSE) and the silhouette coefficient, to select the value of K.

(1) Sum of squared errors
With an increasing number of clusters, the division of the samples becomes more detailed and the aggregation degree of each cluster increases, which further leads to a smaller value of the SSE. When the number of clusters K is smaller than the true (optimal) number of clusters, increasing K greatly increases the aggregation degree of each cluster, so the value of the SSE declines sharply. When K is close to the true number of clusters, the SSE changes only slightly as K increases. We can use this idea to select the optimal number of clusters. The SSE is given as:

SSE = \sum_{l=1}^{K} \sum_{k \in C_l} \left| \omega^{(k)} - \omega^{(C_l)} \right|^{2}    (12)

Fig. 3 presents the relationship between the number of clusters and the SSE. When the number of clusters is smaller than 5, the value of the SSE decreases rapidly. When the number of clusters is more than 6, the value of the SSE tends to be stable. As Fig. 3 shows, 5 or 6 may be an appropriate number of clusters.
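This elbow reading of Fig. 3 can be reproduced in a few lines. The sketch below evaluates Eq. (12) for K = 1, ..., 10 on randomly generated ranking vectors (illustrative data, not the case-study preferences), using scikit-learn's standard KMeans rather than the ordinal variant of this paper.

```python
import numpy as np
from sklearn.cluster import KMeans

def sse(R, labels, centers):
    """Eq. (12): within-cluster sum of squared Euclidean distances to the center."""
    return sum(np.sum((R[labels == k] - c) ** 2) for k, c in enumerate(centers))

# Illustrative data: 20 DMs, each providing a ranking of 6 alternatives.
rng = np.random.default_rng(0)
R = np.array([rng.permutation(6) + 1 for _ in range(20)], dtype=float)

for K in range(1, 11):
    km = KMeans(n_clusters=K, n_init=10, random_state=0).fit(R)
    print(K, round(sse(R, km.labels_, km.cluster_centers_), 2))
# The printed SSE values drop quickly at first and then flatten; the "elbow"
# where the drop flattens suggests the number of clusters, as in Fig. 3.
```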
(2) Silhouette coefficient
The silhouette coefficient is a method to evaluate the quality of clustering results. It was proposed by Rousseeuw in 1987 [54]. The silhouette coefficient combines cohesion and separation to measure how consistent a sample is with its cluster. Cohesion reflects the degree to which a sample is similar to its own cluster, while separation indicates how similar a sample is to other clusters. The value of the silhouette coefficient falls in [−1, 1]. A higher value of the silhouette coefficient denotes that the sample is well affiliated with its cluster. We use the Euclidean distance to calculate the silhouette coefficient.

Let a(k) be the average distance between DM e_k and all other DMs in the same cluster C_l, e_k ∈ C_l. A smaller a(k) reflects a better attachment of e_k to its cluster C_l. Let b(k) be the shortest average distance between e_k and all points in any other cluster C_h, h ≠ l. Then, the silhouette coefficient s(k) can be calculated as:

s(k) = \frac{b(k) - a(k)}{\max\{a(k), b(k)\}}    (13)

or

s(k) = \begin{cases} 1 - a(k)/b(k), & \text{if } a(k) < b(k); \\ 0, & \text{if } a(k) = b(k); \\ b(k)/a(k) - 1, & \text{if } a(k) > b(k). \end{cases}    (14)

It is clear that −1 ≤ s(k) ≤ 1. The average value of s(k) over all DMs can be used to measure the quality of the clustering results. Different numbers of clusters generate different average values of s(k). Therefore, this criterion can be used to identify the optimal number of clusters.

Fig. 4. Relationship between the number of clusters and the silhouette coefficient.

As Fig. 4 shows, the value of the silhouette coefficient reaches its maximum when the number of clusters is 7. From Fig. 3, we can find that the slope of the SSE curve is relatively flat from 6 to 7. Therefore, combining Figs. 3 and 4, 6 clusters may be an appropriate choice.
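A direct implementation of Eqs. (13) and (14) under the same Euclidean distance is sketched below. The data are again illustrative, and the result is cross-checked against scikit-learn's silhouette_score, which computes the same average quantity.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def mean_silhouette(R, labels):
    """Eqs. (13)-(14): average of s(k) over all DMs, Euclidean distance."""
    dist = np.linalg.norm(R[:, None, :] - R[None, :, :], axis=2)
    scores = []
    for k, l in enumerate(labels):
        same = labels == l
        if same.sum() <= 1:          # singleton cluster: score set to 0 by convention
            scores.append(0.0)
            continue
        a = dist[k, same].sum() / (same.sum() - 1)                            # cohesion a(k)
        b = min(dist[k, labels == h].mean() for h in set(labels) if h != l)   # separation b(k)
        scores.append((b - a) / max(a, b))
    return float(np.mean(scores))

rng = np.random.default_rng(0)
R = np.array([rng.permutation(6) + 1 for _ in range(20)], dtype=float)
for K in range(2, 11):
    labels = KMeans(n_clusters=K, n_init=10, random_state=0).fit_predict(R)
    print(K, round(mean_silhouette(R, labels), 3), round(silhouette_score(R, labels), 3))
```

The number of clusters whose average silhouette value is largest is the candidate suggested by this indicator.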
6.2. Selecting the initial center points

Selecting the initial center points is another key issue for the k-means clustering algorithm. A common method is to choose the initial center points randomly. However, this method may lead to low-quality clustering results. Another approach is to run the algorithm repeatedly with different random initial center points and to combine the SSE or the silhouette coefficient to find appropriate center points. However, this approach is very inefficient, especially when the number of objects is large. We can also use other clustering algorithms, such as the hierarchical clustering method, to help select the initial center points.

In this study, we use the max–min method [55]. In the max–min method, one point is randomly selected as the first center point. Then, the point that is farthest from the first center point is selected as the second center point. Next, the distances of the remaining points to the first two points are calculated, and the point that has the maximum nearest distance is selected as the third point, and so on, until K points are selected.

In this section, we propose an improved max–min method to improve the selection of the first point (a code sketch of the full procedure is given after Table 4).
Step 1. Calculate the average distance of each point to all other points, and select the point that has the smallest average distance as the first center point.
Step 2. Calculate the distances of all other points to the first point, and select the point that is farthest from the first center point as the second center point.
Step 3. Calculate the distances of the remaining points to the first two points and select the point that has the maximum nearest distance as the third center point, and so on, until K points are selected.

This method avoids the randomness of the selection of the first point: the point with the maximum density is selected. Next, we use this method to select the initialization center points of the LSGDM problem in this study. The results are presented in Table 4.

Table 4
Initial center points associated with different numbers of clusters.

Number of clusters   Initial center points
1                    e_4
2                    e_4, e_18
3                    e_4, e_17, e_18
4                    e_4, e_6, e_17, e_18
5                    e_4, e_6, e_11, e_17, e_18
6                    e_4, e_6, e_10, e_11, e_17, e_18
7                    e_3, e_4, e_6, e_10, e_11, e_17, e_18
8                    e_3, e_4, e_6, e_10, e_11, e_12, e_17, e_18
9                    e_3, e_4, e_6, e_10, e_11, e_12, e_15, e_17, e_18
10                   e_3, e_4, e_6, e_10, e_11, e_12, e_15, e_16, e_17, e_18
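The improved max–min initialization of Steps 1–3 can be sketched as follows. The ranking vectors are illustrative stand-ins for the case-study DMs, and the function name is ours.

```python
import numpy as np

def improved_max_min_init(R, K):
    """Improved max-min initialization: the first center is the point with the
    smallest average distance to all others (maximum density); each further
    center is the remaining point whose nearest distance to the already chosen
    centers is largest."""
    R = np.asarray(R, dtype=float)
    dist = np.linalg.norm(R[:, None, :] - R[None, :, :], axis=2)
    centers = [int(np.argmin(dist.sum(axis=1)))]                 # Step 1
    while len(centers) < K:                                      # Steps 2-3
        remaining = [i for i in range(len(R)) if i not in centers]
        nearest = dist[np.ix_(remaining, centers)].min(axis=1)
        centers.append(remaining[int(np.argmax(nearest))])
    return centers

# Illustrative ranking vectors for 20 DMs; index i corresponds to DM e_{i+1}.
rng = np.random.default_rng(1)
R = np.array([rng.permutation(6) + 1 for _ in range(20)])
print(improved_max_min_init(R, K=6))  # indices of the six initial center DMs
```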
7. Conclusion

Due to the increasing complexity of the real-world decision environment, heterogeneous LSGDM problems have become common. This paper focused on the ordinal consensus process in the environment of heterogeneous LSGDM. We extended the k-means clustering algorithm to preference orderings. Based on the characteristics of heterogeneous LSGDM problems, we proposed ordinal consensus measures and discussed feedback adjustment methods to improve the degree of consensus. The advantages of our proposed method can be summarized as follows:

(1) The ordinal consensus measure provides a new way to measure the consensus degree for heterogeneous LSGDM problems. It does not need to calculate the distance of individual preference relations to the collective preference relation. Thus, this consensus measure has an advantage in computational complexity.
(2) It has a wide application range. Obtaining the ranking of alternatives is an essential step in GDM. For heterogeneous LSGDM problems, this method can not only accommodate the expression domains used in this paper, but can also deal with other types of information, such as the unbalanced expression model and multi-granularity linguistic term sets [56].
(3) The results are objective. Most existing papers choose the consensus threshold subjectively. This study determined the consensus threshold using a simulation analysis based on the number of alternatives.

In the near future, we will use other kinds of clustering algorithms to solve heterogeneous LSGDM problems and investigate their time complexity and efficiency. The proposed consensus model can also be combined with other consensus methods, such as the CRP in social networks [57] or the CRP considering costs [58]. In addition, other decision theories such as bounded rationality theory [59,60] can be investigated for heterogeneous LSGDM problems. Using game theory to analyze DMs' behavior in heterogeneous LSGDM problems would also be a good research topic.

Acknowledgments

The work was supported by the National Natural Science Foundation of China (No. 71771156), the 2019 Soft Science Project of Sichuan Science and Technology Department (No. 2019JDR0141), the 2019 Sichuan Planning Project of Social Science (No. SC18A007), and the Graduate Student's Research and Innovation Fund of Sichuan University, China (No. 2018YJSY038).

Appendix

(a) Intuitionistic fuzzy weighted average (IFWA) operator [28]:

IFWA(A_1, \ldots, A_n) = \eta_1 A_1 + \cdots + \eta_n A_n = \left\langle 1 - \prod_{i=1}^{n}(1-\mu_{A_i})^{\eta_i},\ \prod_{i=1}^{n}\nu_{A_i}^{\eta_i} \right\rangle

(b) Interval-valued IFWA (IVIFWA) operator [28]:

IVIFWA(A_1, \ldots, A_n) = \eta_1 A_1 + \cdots + \eta_n A_n = \left\langle \left[1 - \prod_{i=1}^{n}(1-\mu_{A_i}^{L})^{\eta_i},\ 1 - \prod_{i=1}^{n}(1-\mu_{A_i}^{U})^{\eta_i}\right],\ \left[\prod_{i=1}^{n}(\nu_{A_i}^{L})^{\eta_i},\ \prod_{i=1}^{n}(\nu_{A_i}^{U})^{\eta_i}\right] \right\rangle

(c) 2-tuple weighted average [15]:

\bar{x}^{e} = \Delta\left(\frac{\sum_{i=1}^{n}\Delta^{-1}(s_i,\alpha_i)\cdot\eta_i}{\sum_{i=1}^{n}\eta_i}\right) = \Delta\left(\frac{\sum_{i=1}^{n}\beta_i\cdot\eta_i}{\sum_{i=1}^{n}\eta_i}\right)

(d) Hesitant fuzzy linguistic weighted average (HFLWA) operator [61]:

HFLWA(H_1, \ldots, H_n) = \eta_1 H_1 + \cdots + \eta_n H_n = \bigoplus_{i=1}^{n}(\eta_i H_i) = \bigcup_{s_{\alpha_1}\in H_1, \ldots, s_{\alpha_n}\in H_n}\left\{ s_{\sum_{i=1}^{n}\eta_i\alpha_i} \right\}

(e) Probabilistic linguistic weighted average (PLWA) operator [19]:

PLWA(L_1(p), L_2(p), \ldots, L_n(p)) = \eta_1 L_1(p) \oplus \eta_2 L_2(p) \oplus \cdots \oplus \eta_n L_n(p) = \bigcup_{L_1^{(k)}\in L_1(p)}\left\{\eta_1 p_1^{(k)} L_1^{(k)}\right\} \oplus \bigcup_{L_2^{(k)}\in L_2(p)}\left\{\eta_2 p_2^{(k)} L_2^{(k)}\right\} \oplus \cdots \oplus \bigcup_{L_n^{(k)}\in L_n(p)}\left\{\eta_n p_n^{(k)} L_n^{(k)}\right\}

References

[1] S. Saint, J.R. Lawson, Rules for Reaching Consensus: A Modern Approach to Decision Making, Jossey-Bass, San Francisco, 1994.
[2] E. Herrera-Viedma, F.J. Cabrerizo, J. Kacprzyk, W. Pedrycz, A review of soft consensus models in a fuzzy environment, Inform. Fusion 17 (2014) 4–13.
[3] F. Herrera, E. Herrera-Viedma, J.L. Verdegay, A model of consensus in group decision making under linguistic assessments, Fuzzy Sets Syst. 78 (1) (1996) 73–87.
[4] H.C. Liao, Z.S. Xu, X.J. Zeng, J.M. Merigó, Framework of group decision making with intuitionistic fuzzy preference information, IEEE Trans. Fuzzy Syst. 23 (4) (2015) 1211–1227.
[5] Y. He, Z.S. Xu, A consensus reaching model for hesitant information with different preference structures, Knowl.-Based Syst. 135 (2017) 99–112.
[6] X.H. Xu, Z.J. Du, X.H. Chen, Consensus model for multi-criteria large-group emergency decision making considering non-cooperative behaviors and minority opinions, Decis. Support Syst. 79 (2015) 150–160.
[7] X.H. Xu, X.Y. Zhong, X.H. Chen, Y.J. Zhou, A dynamical consensus method based on exit–delegation mechanism for large group emergency decision making, Knowl.-Based Syst. 86 (2015) 237–249.
[8] I. Palomares, L. Martínez, F. Herrera, A consensus model to detect and manage non-cooperative behaviors in large scale group decision making, IEEE Trans. Fuzzy Syst. 22 (3) (2014) 516–530.
[9] B.S. Liu, Y.H. Shen, Y. Chen, X.H. Chen, Y.M. Yang, A two-layer weight determination method for complex multi-attribute large-group decision-making experts in a linguistic environment, Inform. Fusion 23 (2015) 156–165.
[10] L.A. Zadeh, Fuzzy sets, Inform. Control 8 (3) (1965) 338–353.
[11] K.T. Atanassov, Intuitionistic fuzzy sets, Fuzzy Sets and Systems 20 (1) (1986) 87–96.
[12] K.T. Atanassov, G. Gargov, Interval valued intuitionistic fuzzy sets, Fuzzy Sets and Systems 31 (3) (1989) 343–349.
[13] V. Torra, Hesitant fuzzy sets, Int. J. Intell. Syst. 25 (6) (2010) 529–539.
[14] L.A. Zadeh, The concept of a linguistic variable and its application to approximate reasoning-III, Inform. Sci. 9 (1975) 43–80.
[15] F. Herrera, L. Martínez, A 2-tuple fuzzy linguistic representation model for computing with words, IEEE Trans. Fuzzy Syst. 8 (6) (2000) 746–752.
[16] R.M. Rodríguez, L. Martínez, F. Herrera, Hesitant fuzzy linguistic term sets for decision making, IEEE Trans. Fuzzy Syst. 20 (1) (2012) 109–119.
[17] H.C. Liao, Z.S. Xu, X.J. Zeng, J.M. Merigó, Qualitative decision making with correlation coefficients of hesitant fuzzy linguistic term sets, Knowl.-Based Syst. 76 (2015) 127–138.
[18] H.C. Liao, X.L. Wu, X.D. Liang, J.B. Yang, D.L. Xu, F. Herrera, A continuous interval-valued linguistic ORESTE method for multi-criteria group decision making, Knowl.-Based Syst. 153 (2018) 65–77.
[19] Q. Pang, H. Wang, Z.S. Xu, Probabilistic linguistic term sets in multi-attribute group decision making, Inform. Sci. 369 (2016) 128–143.
[20] H. Bustince, E. Barrenechea, M. Pagola, A historical account of types of fuzzy sets and their relationships, IEEE Trans. Fuzzy Syst. 24 (1) (2016) 179–194.
[21] F. Herrera, L. Martínez, P.J. Sánchez, Managing non-homogeneous information in group decision making, European J. Oper. Res. 166 (1) (2005) 115–132.
[22] L. Martínez, J. Liu, D. Ruan, J.B. Yang, Dealing with heterogeneous information in engineering evaluation processes, Inform. Sci. 177 (2007) 1533–1542.
[23] G.Q. Zhang, J. Lu, An integrated group decision-making method dealing with fuzzy preferences for alternatives and individual judgments for selection criteria, Group Decis. Negot. 12 (2003) 501–515.
[24] D.F. Li, Z.G. Huang, G.H. Chen, A systematic approach to heterogeneous multi-attribute group decision making, Comput. Ind. Eng. 59 (2010) 561–572.
[25] M. Espinilla, I. Palomares, L. Martínez, D. Ruan, A comparative study of heterogeneous decision analysis approaches applied to sustainable energy evaluation, Int. J. Uncertain. Fuzziness Knowl.-Based Syst. 20 (2012) 159–174.
[26] G.X. Li, G. Kou, Y. Peng, A group decision making model for integrating heterogeneous information, IEEE Trans. Syst. Man Cybern. Syst. 48 (6) (2018) 982–992.
[27] Z.S. Xu, Uncertain linguistic aggregation operators based approach to multiple attribute group decision making under uncertain linguistic environment, Inform. Sci. 168 (1) (2004) 171–184.
[28] S.A. Orlovsky, Decision-making with a fuzzy preference relation, Fuzzy Sets and Systems 1 (3) (1978) 155–167.
[29] Z.S. Xu, R.R. Yager, Intuitionistic and interval-valued intuitionistic fuzzy preference relations and their measures of similarity for the evaluation of agreement within a group, Fuzzy Optim. Decis. Mak. 8 (2) (2009) 123–139.
[30] H.C. Liao, Z.S. Xu, M.M. Xia, Multiplicative consistency of hesitant fuzzy preference relation and its application in group decision making, Int. J. Inf. Technol. Decis. 13 (1) (2014) 47–76.
[31] B. Zhu, Z.S. Xu, Consistency measures for hesitant fuzzy linguistic preference relations, IEEE Trans. Fuzzy Syst. 22 (1) (2014) 35–45.
[32] Y.X. Zhang, Z.S. Xu, H. Wang, H.C. Liao, Consistency-based risk assessment with probabilistic linguistic preference relation, Appl. Soft Comput. 49 (2016) 817–833.
[33] W.D. Cook, L.M. Seiford, Priority ranking and consensus formation, Manage. Sci. 24 (16) (1978) 1721–1732.
[34] K. Inada, The simple majority rule, Econometrica 37 (3) (1969) 490–506.
[35] M. Kendall, Rank Correlation Methods, third ed., Hafner, New York, 1962.
[36] W.D. Cook, Distance-based and ad hoc consensus models in ordinal preference ranking, European J. Oper. Res. 172 (2) (2006) 369–385.
[37] E. Herrera-Viedma, F. Herrera, F. Chiclana, A consensus model for multiperson decision making with different preference structures, IEEE Trans. Syst. Man Cybern. A. 32 (3) (2002) 394–402.
[38] H.C. Liao, Z.M. Li, X.J. Zeng, W.S. Liu, A comparison of distinct consensus measures for group decision making with intuitionistic fuzzy preference relations, Int. J. Comput. Int. Sys. 10 (2017) 456–469.
[39] Z.Z. Ma, J.J. Zhu, K. Ponnambalam, S.T. Zhang, A clustering method for large-scale group decision-making with multi-stage hesitant fuzzy linguistic terms, Inf. Fusion 50 (2019) 231–250.
[40] I. Palomares, LGDM approaches and models: A literature review, in: Large Group Decision Making, Springer, Cham, 2018.
[41] M.E. Celebi, H.A. Kingravi, P.A. Vela, A comparative study of efficient initialization methods for the k-means clustering algorithm, Expert Syst. Appl. 40 (1) (2013) 200–210.
[42] S.H. Al-Harbi, V.J. Rayward-Smith, Adapting k-means for supervised clustering, Appl. Intell. 24 (3) (2006) 219–226.
[43] H.J. Zhang, Y.C. Dong, E. Herrera-Viedma, Consensus building for the heterogeneous large-scale GDM with the individual concerns and satisfactions, IEEE Trans. Fuzzy Syst. 26 (2) (2018) 884–898.
[44] Y.J. Xu, X.W. Wen, W.C. Zhang, A two-stage consensus method for large-scale multi-attribute group decision making with an application to earthquake shelter selection, Comput. Ind. Eng. 116 (2018) 113–129.
[45] B.S. Liu, Q. Zhou, R.X. Ding, I. Palomares, F. Herrera, Large-scale group decision making model based on social network analysis: trust relationship-based conflict detection and elimination, European J. Oper. Res. 275 (2019) 737–754.
[46] X.H. Xu, Z.J. Du, X.H. Chen, C.G. Cai, Confidence consensus-based model for large-scale group decision making: a novel approach to managing non-cooperative behaviors, Inform. Sci. 477 (2019) 410–427.
[47] R.M. Rodríguez, Á. Labella, G.D. Tré, L. Martínez, A large scale consensus reaching process managing group hesitation, Knowl.-Based Syst. 159 (2018) 86–97.
[48] X. Liu, Y.J. Xu, R. Montes, R.X. Ding, F. Herrera, Alternative ranking-based clustering and reliability index-based consensus reaching process for hesitant fuzzy large scale group decision making, IEEE Trans. Fuzzy Syst. 27 (1) (2019) 159–171.
[49] H.C. Liao, Z.S. Xu, X.J. Zeng, D.L. Xu, An enhanced consensus reaching process in group decision making with intuitionistic fuzzy preference relations, Inform. Sci. 329 (2016) 274–286.
[50] I.J. Pérez, F.J. Cabrerizo, S. Alonso, E. Herrera-Viedma, A new consensus model for group decision making problems with non-homogeneous experts, IEEE Trans. Syst. Man Cybern. Syst. 44 (4) (2014) 494–498.
[51] S.P. Wan, D.F. Li, Fuzzy LINMAP approach to heterogeneous MADM considering comparisons of alternatives with hesitation degrees, Omega 41 (6) (2013) 925–940.
[52] Z. Zhang, C.H. Cao, An approach to group decision making with heterogeneous incomplete uncertain preference relations, Comput. Ind. Eng. 71 (2014) 27–36.
[53] M.E. Celebi, H.A. Kingravi, P.A. Vela, A comparative study of efficient initialization methods for the k-means clustering algorithm, Expert Syst. Appl. 40 (2013) 200–210.
[54] P.J. Rousseeuw, Silhouettes: a graphical aid to the interpretation and validation of cluster analysis, J. Comput. Appl. Math. 20 (1987) 53–65.
[55] T.F. Gonzalez, Clustering to minimize the maximum intercluster distance, Theoret. Comput. Sci. 38 (2–3) (1985) 293–306.
[56] H. Fujita, A. Gaeta, V. Loia, F. Orciuoli, Resilience analysis of critical infrastructures: a cognitive approach based on granular computing, IEEE Trans. Cybern. 49 (5) (2019) 1835–1848.
[57] Y.C. Dong, Q.B. Zha, H.J. Zhang, et al., Consensus reaching in social network group decision making: Research paradigms and challenges, Knowl.-Based Syst. 162 (15) (2018) 3–13.
[58] J. Wu, Q. Sun, H. Fujita, F. Chiclana, An attitudinal consensus degree to control feedback mechanism in group decision making with different adjustment cost, Knowl.-Based Syst. 164 (15) (2019) 265–273.
[59] H.A. Simon, Bounded rationality and organizational learning, Organ. Sci. 2 (1) (1991) 125–134.
[60] X.L. Tian, Z.S. Xu, H. Fujita, Sequential funding the venture project or not? a prospect consensus process with probabilistic hesitant fuzzy preference information, Knowl.-Based Syst. 161 (1) (2018) 172–184.
[61] Z.M. Zhang, C. Wu, Hesitant fuzzy linguistic aggregation operators and their applications to multiple attribute group decision making, J. Intell. Fuzzy Syst. 26 (5) (2014) 2185–2202.