
AIDING DECISIONS
WITH MULTIPLE CRITERIA

Essays in Honor of Bernard Roy

Edited by

DENIS BOUYSSOU
ERIC JACQUET-LAGREZE
PATRICE PERNY
ROMAN SLOWINSKI
DANIEL VANDERPOOTEN
PHILIPPE VINCKE

" SPRINGER SCIENCE+BUSINESS MEDIA, LLC


ISBN 978-1-4613-5266-2 ISBN 978-1-4615-0843-4 (eBook)
DOI 10.1007/978-1-4615-0843-4

Electronic Services <http://www.wkap.nl>

Library of Congress Cataloging-in-Publication Data

A C.I.P. Catalogue record for this book is available from


the Library of Congress.

© Springer Science+Business Media New York 2002


Originally published by Kluwer Academic Publishers in 2002
Softcover reprint of the hardcover 1st edition 2002

All rights reserved. No part of this publication may be reproduced, stored in a
retrieval system or transmitted in any form or by any means, mechanical, photo-
copying, recording, or otherwise, without the prior written permission of the
publisher, Springer Science+Business Media, LLC.

Printed on acid-free paper.


Contents

Preface ix

Selected Publications of Bernard Roy 1

Part I Memories of Early Career and Impact of Early Works


of Bernard Roy

Bernard Roy, Forty Years of Esteem and Friendship 17


J. Lesourne

Connectivity, Transitivity and Chromaticity:


The Pioneering Work of Bernard Roy in Graph Theory 23
P. Hansen, D. de Werra

Part II Philosophy and Epistemology of Decision-Aiding

Decision-Aid Between Tools and Organisations 45


A. David

Talking About the Practice of MCDA 71


V. Belton, J. Pictet

Multi-Criteria Decision-Aid in a Philosophical Perspective 89


J.L. Genard, M. Pirlot

Part III Theory and Methodology of Multi-Criteria


Decision-Aiding

A Characterization of Strict Concordance Relations 121


D. Bouyssou, M. Pirlot

From Concordance / Discordance to the Modelling


of Positive and Negative Reasons in Decision Aiding 147
A. Tsoukiàs, P. Perny, P. Vincke

Exploring the Consequences of Imprecise Information


in Choice Problems Using ELECTRE 175
L. C. Dias, J. Climaco

Modelling in Decision Aiding 195


D. Vanderpooten

On the Use of Multicriteria Classification Methods: a Simulation Study 211


M. Doumpos, C. Zopounidis

Ordinal Multiattribute Sorting and Ordering in the Presence


of Interacting Points of View 229
M. Roubens

Part IV Preference Modeling

Multiattribute Interval Orders 249


P. C. Fishburn

Preference Representation by Means of Conjoint Measurement


and Decision Rule Model 263
S. Greco, B. Matarazzo, R. Slowinski

Towards a Possibilistic Logic Handling of Preferences 315


S. Benferhat, D. Dubois, H. Prade

Empirical Comparison of Lottery- and Rating-Based


Preference Assessment 339
O. Franzese, M.R. McCord

Risk Attitudes Appraisal and Cognitive Coordination


in Decentralized Decision Systems 357
B. Munier

Logical Foundation of Multicriteria Preference Aggregation 379


R. Bisdorff

Part V Applications of Multi-Criteria Decision-Aiding

A Study of the Interactions Between the Energy System


and the Economy Using TRIMAP 407
C.H. Antunes, C. Oliveira, J. Climaco

Multicriteria Approach for Strategic Town Planning 429


C.A. Bana e Costa, M.L. da Costa-Lobo, I.A. Ramos, J.C. Vansnick

Measuring Customer Satisfaction for Various Services


Using Multicriteria Analysis 457
Y. Siskos, E. Grigoroudis

Management of the Future 483


J.P. Brans, P.L. Kunsch, B. Mareschal

Part VI Multi-Objective Mathematical Programming

Methodologies for Solving Multi-Objective


Combinatorial Optimization Problems 505
J. Teghem

Outcome-Based Neighborhood Search (ONS) 527


W. Habenicht

Searching the Efficient Frontier in Data Envelopment Analysis 543


P. Korhonen
Preface

This volume is a Festschrift in honor of Bernard Roy on the occasion
of his retirement.

Bernard Roy is Professor at the Universite Paris-Dauphine. He is the


founder and former director of LAMSADE, a research group centered on
the theme of decision aiding. Bernard Roy holds a Doctorate in Math-
ematics from the Universite de Paris (1961). After extensive consulting
experience at SEMA, he joined the Universite Paris-Dauphine in 1972
and created LAMSADE. In 1975 he founded the EURO Working Group
"Multicriteria Aid for Decisions", which has held two meetings a year
ever since. He is Doctor Honoris Causa of several
prestigious universities. He received the EURO Gold medal (the highest
distinction granted by EURO, the Association of European Operational
Research Societies) in 1992 and the MCDM Gold Medal granted by the
International MCDM Society in 1995. He is the author of several books
and hundreds of research papers. Bernard Roy has been the advisor of
numerous graduate and doctoral students.

The main contributions of Bernard Roy are focused on two broad themes:

• Graph Theory with path-breaking contributions on the theory of


flows in networks and project scheduling,

• Multiple Criteria Decision Aiding with the invention of the family


of ELECTRE methods and methodological contributions to decision-
aiding which led to the creation of the so-called "European School
of MCDA".

This extremely brief biographical sketch does not do much justice to


the real influence of Bernard Roy. He is one of the early promoters of
Operational Research techniques in France. Everyone who approached
him during his career has certainly been impressed by the clarity and
the rigour of his thoughts combined with a passion for real-world appli-
cations.

We think that the influence of Bernard Roy is well reflected by the qual-
ity and the variety of the contributions that are gathered in this volume.
In order to make a volume of reasonable size, the editors chose not to
solicit contributions from the Graph Theory community. Had it not
been the case, two volumes would probably have been necessary. We
were really impressed by the willingness of everyone who was contacted
to participate in the project. This reflects the real impact of Bernard
Roy on the scientific community of his time - in our opinion much better
than a long list of his various distinctions.

Besides this Preface which is immediately followed by a list of Bernard


Roy's main publications, this volume has five main parts.

Part I contains two papers related to the early career of Bernard Roy
when, working at SEMA, he developed many new techniques and con-
cepts in Graph Theory in order to cope with complex real-world prob-
lems. Jacques Lesourne, former director of SEMA, recalls the role of
Bernard Roy in popularizing Operational Research techniques in France
as well as his role in the development of SEMA. Dominique de Werra
and Pierre Hansen reflect on the influence of Bernard Roy's contribu-
tion in Graph Theory. More than 30 years after the publication of his
well-known books, this influence is still present.

The rest of the book consists of contributions related to the second part
of the career of Bernard Roy - to "Multi-Criteria Decision-Aiding".

Part II of the book is devoted to Philosophy and Epistemology of Decision-


Aiding. Albert David explores two questions related to decision aiding
in organizations: what decision aiding tools are, and which concepts can
be used to analyse and understand the dynamics of their introduction
into organizations. Valerie Belton and Jacques Pictet chose an orig-
inal form of dialogue between an MCDA practitioner and a potential
client in order to address many issues of philosophy and process that are
of relevance to the practice of MCDA. Jean-Louis Genard and Marc
Pirlot reflect on the epistemological status of models and recommenda-
tions, and situate decision-aid within a philosophical perspective based
on Habermas' theory of orders of validity.

Part III includes contributions on Theory and Methodology of Multi-


Criteria Decision-Aiding. Based on a general framework for conjoint mea-
surement that allows intransitive preferences, Denis Bouyssou and
Marc Pirlot characterize strict concordance relations used in outrank-
ing methods. Alexis Tsoukiàs, Patrice Perny and Philippe Vincke
present a possible generalization of Roy's concordance/discordance prin-
ciple by introducing concepts of positive and negative reasons of pref-
erence formulated in terms of a four-valued logic. Luis Dias and Joao
Climaco propose a method for obtaining robust recommendations with
ELECTRE IS when the DM specifies a set of acceptable combina-
tions of values of parameters such as weights or veto thresholds. Daniel
Vanderpooten emphasizes the central role of modeling in decision aid-
ing and proposes to adopt a perspective justifying, in a given decision
context, choices at different stages of the modeling process. Michael
Doumpos and Constantin Zopounidis show in a simulation study
that the preference disaggregation approach is also attractive for multi-
criteria classification problems. Marc Roubens uses the Choquet
integral to deal with ordinal multiattribute sorting and ordering prob-
lems in the presence of interacting points of view and compares this
approach with a rule-based methodology.

Part IV is devoted to Preference Modeling. Peter Fishburn opens


this part with a paper characterizing a simple additive-utility threshold
representation for preferences on multiattribute alternatives in which
the marginal preference relation on each attribute is an interval order.
Salvatore Greco, Benedetto Matarazzo and Roman Slowinski
investigate the equivalence of preference representation by general con-
joint measurement and by decision rule model in multicriteria choice
and ranking problems; in order to represent hesitation in preference
modeling, two approaches are considered: dominance-based rough set
approach and four-valued logic for which an axiomatic foundation is
given. Salem Benferhat, Didier Dubois and Henri Prade relate
different ways of expressing preferences which are not usual in the cur-

rent decision-aiding practice; depending on the case, they suggest using


particular types of constraints on utility functions, or a set of priori-
tized goals revealed by logical propositions, or an ordered set of possible
choices reaching the same level of satisfaction; these different expres-
sion modes can be handled by possibilistic logic. Oscar Franzese and
Mark McCord investigate the performance of direct rating, probabil-
ity equivalent, and lottery equivalent assessment techniques for a set of
individuals in terms of the ability of the techniques to reproduce indif-
ference between two-criteria outcomes previously judged to be indiffer-
ent. Bertrand Munier examines risk attitude appraisal and cogni-
tive coordination in decentralized decision systems, using as a supporting
example the maintenance system of nuclear power plants. Raymond
Bisdorff introduces a semiotical foundation of the concordance principle
which makes it possible to extend it, and its associated coherence axioms
on the family of criteria, to redundant criteria and to missing evaluations.

Part V groups Applications of Multi-Criteria Decision-Aiding. Carlos


Henggeler Antunes, Carla Oliveira and Joao Climaco present
a study of interactions between the energy system and the national
economy using the TRIMAP interactive environment. Carlos Bana
e Costa, Manuel da Costa-Lobo, Isabel Ramos and Jean-Claude
Vansnick present a case study of strategic planning for the town of
Barcelos using a multicriteria decision-aiding approach. Yannis Siskos
and Evangelos Grigoroudis describe applications of a preference dis-
aggregation model based on the principle of ordinal regression anal-
ysis to measuring customer satisfaction in different types of business
organizations. Jean-Pierre Brans, Pierre Kunsch and Bertrand
Mareschal propose a decision-aiding procedure based on
PROMETHEE-GAIA and system dynamics to select appropriate man-
agement strategies for socio-economic systems.

Part VI includes contributions on Multi-Objective Mathematical Pro-


gramming. Jacques Teghem presents an overview of approaches de-
veloped by his research team to deal with multi-objective combinato-
rial optimization problems; exact (direct and two-phase) methods are
followed by metaheuristic methods based on Simulated Annealing and
Tabu Search. Walter Habenicht presents an enumerative approach
based on quad trees to discrete vector optimization; different neighbor-
hood concepts in outcome space are considered from the viewpoint of

convergence and complexity. This part, and the whole book, ends with
the paper by Pekka Korhonen on freely searching over the efficient fron-
tier in Data Envelopment Analysis; such a search is useful when preference
information is to be incorporated into the efficiency analysis.

The editors wish to extend their warmest thanks to all the contributing
authors. This book is the fruit of friendly co-operation between editors
and authors, motivated by a shared wish to celebrate Bernard Roy. The
editors have had the privilege of working closely with Bernard Roy for
many years. The authors invited to contribute a paper are also close to
him for various reasons.

We also wish to acknowledge the valuable help of Dominique François


and Dominique Champ-Brunet who prepared the list of publications
of Bernard Roy, and of Barbara Wolynska who prepared the camera-
ready manuscript.

A unique copy of this book, bound artistically by Anna Ostanello, will


be handed to Bernard Roy, prior to publication, at the 54th Meeting of
the EURO Working Group "Multicriteria Aid for Decisions" in Durbuy
(Belgium) on October 4, 2001.

Denis Bouyssou
Eric Jacquet-Lagreze
Patrice Perny
Roman Slowinski
Daniel Vanderpooten
Philippe Vincke

Paris-Poznan-Brussels, July 2001


SELECTED PUBLICATIONS OF BERNARD ROY

1. Books
Bernard Roy, Denis Bouyssou, Aide multicritere a la decision : Methodes
et cas, Paris, Economica, mai 1993,695 pages.
Bernard Roy, Multicriteria Methodology for Decision Analysis, Kluwer
Academic Publishers, 1996
(original version in French: Methodologie multicritere d'aide a la
decision, Paris, Economica, 1985, 423 pages; Polish translation :
Wielokryterialne wspomaganie decyzji, Wydawnictwa Naukowo-
Techniczne, Warszawa, 1990,281 pages).
Bernard Roy, Algebre moderne et theorie des graphes orientees vers les
sciences economiques et sociales :
- Tome 1: Notions et resultats fondamentaux, Paris, Dunod, 1969,
518 pages.
- Tome 2: Applications et problemes specifiques, Paris, Dunod, 1970,
784 pages.
Bernard Roy (in collaboration), Les problemes d'ordonnancement -
Applications et methodes, Paris, Dunod, Monographie de Recherche
Operationnelle, 1964 (German translation : Ablaufplanung -
Anwendungen und Methoden, Oldenburg Verlag, 1968).

2. Edited Volumes
Alberto Colorni, Massimo Paruccini, Bernard Roy (eds.), A-MCD-A - 25th
year, EURO Working group, Multiple Criteria Decision Aiding, EUR
Report, The European Commission, Ispra 2001.
Bernard Roy, Combinatorial Programming: Methods and Applications
Dordrecht, Holland, D. Reidel Publishing Company, 1975.
Bernard Roy, La decision : ses disciplines, ses acteurs, Presses
Universitaires de Lyon, Monographie de l'AFCET, 1983.

3. Papers in Refereed Journals


Jose Figueira, Bernard Roy, Determining the weights of criteria in the
ELECTRE methods with a revised Simos' procedure, European Journal
of Operational Research (to be published)
(see also Universite Paris-Dauphine, Document du LAMSADE nO 109,
juillet 1998, 45 pages, in French).
Bernard Roy, Philippe Vincke, The case of the vanishing optimum revisited
again, Journal of Multi-Criteria Decision Analysis 7, 1998, 351.
Bernard Roy, A missing link in OR-DA: Robustness analysis, Foundations
of Computing and Decision Sciences Vol. 23, No.3, 1998, 141-160.
Bernard Roy, Daniel Vanderpooten, An overview on «The European School
of MCDA: Emergence, basic features and current works», European
Journal of Operational Research 99, 1997,26-27.
Jean-Charles Pomerol, Bernard Roy, Camille Rosenthal-Sabroux,
Developing an «intelligent» DSS for the multicriteria evaluation of
railway timetables: problems and issues, Revue des Systemes de
Decision, Volume 5 , nO 3-4, 1996, 249-267
(see also The International Society for Decision Support Systems, Third
International Conference, Conference Proceedings Volume 1, IDSS '95,
June 22-23, 1995, 161-172).
Bernard Roy, Vincent Mousseau, A theoretical framework for analysing the
notion of relative importance of criteria, Journal of Multi-Criteria
Decision Analysis, Vol. 5, 1996,145-159.
Bernard Roy, Daniel Vanderpooten, The European school of MCDA:
Emergence, basic features and current works, Journal of Multi-Criteria
Decision Analysis, Vol. 5, 1996, 22-38. Response to F.A. Lootsma's
Comments on this paper, Journal of Multi-Criteria Decision Analysis,
Vol. 5, 1996, 165-166.
Jean-Dominique Lenard, Bernard Roy, Multi-item inventory control: A
multicriteria view, European Journal of Operational Research 87, 1995,
685-692.
Jean-Charles Pomerol, Bernard Roy, Camille Rosenthal-Sabroux, Amin
Saad, An «intelligent» DSS for the multicriteria evaluation of railway
timetables, Foundations of Computing and Decision Sciences, Vol. 20,
No.3, 1995,219-238
(see also The International Society for Decision Support Systems, Third

International Conference, Conference Proceedings Volume I, IDSS '95,


June 22-23, 1995, 162-172).
Bernard Roy, On operational research and decision aid, European Journal of
Operational Research 73, 1994,23-26.
Bernard Roy, Roman Slowinski, Criterion of distance between technical
programming and socio-economic priority, RAIRO Recherche
Operationnelle, Vol. 27, nO 1, 1993,45-60.
Bernard Roy, Decision science or decision-aid science?, European Journal
of Operational Research, Volume 66, Number 2, April 1993, 184-203
(see also Revue Internationale de Systemique, Vol. 6, N° 5, 1992,497-
529, in French).
Bernard Roy, Denis Bouyssou, Decision-aid: An elementary introduction
with emphasis on multiple criteria, Investigación Operativa, Volume 3,
Nos 2-3-4, Agosto-Diciembre 1993, 175-190
(see also Journal of Information Science and Technology, Special Issue
«Multicriteria Decision Support Systems», Volume 2, Number 2,
January 1993, 109-123).
Patrice Perny, Bernard Roy, The use of fuzzy outranking relations in
preference modelling, Fuzzy Sets and Systems 49, 1992,33-53.
Bernard Roy, Roman Slowinski, Wiktor Treichel, Multicriteria
programming of water supply systems for rural areas, Water Resources
Bulletin, Vol. 28, nO 1, February 1992, 13-31
(see also Keith W. Hipel (ed.): Multiple objective decision making in
water resources, Awra Monograph Series No. 18, 1992, 13-31).
Bernard Roy, The outranking approach and the foundations of ELECTRE
methods, Theory and Decision 31, 1991, 49-73.
Bernard Roy, Decision-aid and decision-making, European Journal of
Operational Research 45, 1990,324-331
(see also C.A. Bana e Costa (Ed.), Readings in Multiple Criteria
Decision Aid, Springer Verlag, 1990, 155-183).
Bernard Roy, Main sources of inaccurate determination, uncertainty and
imprecision in decision models, Mathematical and Computer Modelling,
Vol. 12, No. 10/11, 1989, 1245-1254
(see also Bertrand R. Munier, Melvin F. Shakun (eds.), Compromise,
Negotiation and Group Decision, D. Reidel Publishing Company, 1988,
42-62).

Bernard Roy, Philippe Vincke, Pseudo-orders: Definition, properties and


numerical representation, Mathematical Social Sciences 14, 1987, 263-
274.
Bernard Roy, Meaning and validity of interactive procedures as tools for
decision making, European Journal of Operational Research 31, 1987,
297-303.
Bernard Roy, Denis Bouyssou, Comparison of two decision-aid models
applied to a nuclear power plant siting example, European Journal of
Operational Research 25, 1986,200-215
(see also Y.Y. Haimes, V. Chankong (eds.), Decision Making with
Multiple Objectives, Lecture Notes in Economics and Management
Systems, Vol. 242, Springer-Verlag, 1984, 482-494, and Marches,
Capital et Incertitude, Essais en l'Honneur de Maurice Allais, sous la
direction de Marcel Boiteux, Thierry de Montbrial, Bertrand Munier,
Economica, 1986, 155-177, in French).
Bernard Roy, Manoelle Present, Dominique Silhol, A programming method
for determining which Paris metro stations should be renovated,
European Journal of Operational Research 24, 1986, 318-334.
Bernard Roy, Philippe Vincke, Relational systems of preference with one or
more pseudo-criteria: some new concepts and results, Management
Science, Vol. 30, No. 11, November 1984, 1323-1335.
Bernard Roy, Jean-Christophe Hugonnard, Ranking of suburban line
extension projects on the Paris metro system by a multicriteria method,
Transportation Research, Vol. 16A, No.4, 1982,301-312.
Bernard Roy, Philippe Vincke, Multicriteria analysis: Survey and new
directions, Invited Review, European Journal of Operational Research,
Volume 8, No.3, November 1981,207-218.
Bernard Roy, The optimisation problem formulation: Criticism and
overstepping, The Journal of the Operational Research Society, Volume
32, Number 6, June 1981,427-436.
Bernard Roy, Problems and methods with multiple objective functions,
Mathematical Programming, Volume 1, No.2, November 1971, 239-
266.
Bernard Roy, La recherche operationnelle entre acteurs et realites, Annales
des Mines - Gerer et Comprendre nO 47, mars 1997, 16-27.

Bernard Roy, Vincent Mousseau, Prise en compte formelle de la notion


d'importance relative des criteres en aide multicritere a la decision,
Cahiers du CERO, volume 34, 1992, 145-166.
Pierre Verdeil, C. Herve, Bernard Roy, P. Huguenard, Regulation medicale-
Analyse des criteres composant la fonction, Convergences Medicales,
decembre 1987,372-376.
Denis Bouyssou, Bernard Roy, La notion de seuils de discrimination en
analyse multicritere, INFOR, vol. 25, no. 4,1987,302-313.
Bernard Roy, Quelques remarques sur le concept d'independance dans l'aide
a la decision multicritere, Foundations of Control Engineering, Vol. 8,
No. 3-4, 1983, 183-191.
Bernard Roy, Jean-Christophe Hugonnard, Classement des prolongements
de lignes de metro en banlieue parisienne (Presentation d'une methode
multicritere originale), Cahiers du CERO, Volume 24, nO 2-3-4, 1982,
153-171.
Jean-Christophe Hugonnard, Bernard Roy, Le plan d'extension du metro en
banlieue parisienne, un cas type d'application de l'analyse multicritere,
Les Cahiers Scientifiques de la Revue Transports nO 6, 1er trimestre
1982, 77-108.
Bernard Roy, Philippe Vincke, Jean-Pierre Brans, Aide a la decision
multicritere, Ricerca Operativa, anno VIII, nO 5, 1978, 11-45
(see also Revue Belge de Statistique, d'Informatique et de Recherche
Operationnelle, Vol. 15, nO 4, 1975,23-53).
Bernard Roy, Commentaires a propos de l'article de Jean-Claude Moisdon :
«La theorie de la decision en quete d'une pratique», Annales des Mines,
avril 1978, 115-118.
Bernard Roy, ELECTRE III : Un algorithme de classements fonde sur une
representation floue des preferences en presence de criteres multiples,
Cahiers du Centre d'Etudes de Recherche Operationnelle, Vol. 20, n° 1,
1978,3-24.
Bernard Roy, Mathematique et decision en sciences du management,
Sciences et Techniques nO 44, septembre-octobre 1977,3-12.
Bernard Roy, Jean Moscarola, Procedure automatique d'examen de dossiers
fondee sur une segmentation trichotomique en presence de criteres

multiples, RAIRO Recherche Operationnelle, Vol. 11, n° 2, mai 1977,


145-173.
Bernard Roy, Optimisation et aide a la decision, Journal de la Societe de
Statistique de Paris n° 3, 3e trimestre 1976,208-215.
Bernard Roy, Vers une methodologie generale d'aide a la decision, Revue
METRA, Vol. XIV, nO 3, 1975,459-497.
Bernard Roy, Analyse et choix multicritere, Informatique et Gestion nO 57,
1974,21-27.
Bernard Roy, La modelisation des preferences: Un aspect crucial de l'aide a
la decision, Revue METRA, Vol. XIII, nO 2, 1974, 135-153.
Bernard Roy, Criteres multiples et modelisation des preferences (L'apport
des relations de surclassement), Revue d'Economie Politique, Volume
84, n° 1, janvier/fevrier 1974, 1-44.
Bernard Roy, Dominique Galland, Enumeration des chemins E-minimum
admissibles entre deux points, RAIRO V-3, septembre 1973, 3-20.
Bernard Roy, Decision avec criteres multiples: Problemes et methodes,
Revue METRA, Vol. XI, nO 1, 1972, 121-151.
Bernard Roy, Mathematiques modernes et sciences de la gestion, Revue de
l'Economie du Centre-Est n° 52-53, avril-septembre 1971, 128-134.
Bernard Roy, Graphe partiel s-connexe extremum, Revue Roumaine de
Mathematiques Pures et Appliquees, Tome XIV, nO 9, 1969, 1355-1368.
Bernard Roy, Procedure d'exploration par separation et evaluation (PSEP et
PSES), RIRO, 3e annee, nO V.I, 1969,61-90.
Jacques Antoine, Bernard Roy, Les techniques preparatoires de la decision,
interet et limites, Revue PROJET n° 33, mars 1969, 269-278.
Bernard Roy, Classement et choix en presence de points de vue multiples (la
methode ELECTRE), RIRO, 2e annee, nO 8, 1968,57-75.
Hubert Le Boulanger, Bernard Roy, L'entreprise face a la selection et a
l'orientation des projets de recherche: La methodologie en usage dans le
groupe SEMA, Revue METRA, Vol. VII, nO 4, 1968,641-669
(see also Rationalisation des Choix Budgetaires, Dunod, 1970, 175-206).

Bernard Roy, Raphael Benayoun, Programmes lineaires en variables


bivalentes et continues sur un graphe (le programme POLIGAMI),
Revue METRA, Vol. VI, nO 4, decembre 1967, 1-36.
Bernard Roy, Mathematique et decision, Gestion et Recherche
Operationnelle, Numero Special (1ere partie), novembre 1967, 686-696.
Bernard Roy, Nombre chromatique et plus longs chemins d'un graphe,
RIRO, 1ere annee, n° 5, 1967, 129-132.
Bernard Roy, Prise en compte des contraintes disjonctives dans les methodes
de chemin critique, Revue Française de Recherche Operationnelle n° 38,
1966, 69-84.
Patrice Bertier, Robert Fortet, Jean Mothes, Bernard Roy, Où va la
Recherche Operationnelle ?, Revue METRA, Vol. V, n° 4, 1966, 515-
526.
Bernard Roy, Mathematiques et decision, Universite Mathematique
Entreprise nO 1, mai 1966, 10-13.
Bernard Roy, Chemins de longueur extremale, Gestion et Recherche
Operationnelle, Numero Special, mai 1966,322-335.
Michel Auberger, Bernard Roy, Arrivee a un bac d'un trafic composite,
Gestion, fevrier 1966,66-75.
Bernard Roy, Michel Dibon, L'ordonnancement par la methode des
potentiels - Le programme CONCORD, Automatisme nO 2, fevrier 1966,
1-11.
Bernard Roy, Nghiem Phong Tuan, Patrice Bertier, Programmes lineaires en
nombres entiers et procedure SEP, Revue METRA, Vol. IV, nO 3, 1965,
441-460.
Patrice Bertier, Bernard Roy, Une procedure de resolution pour une classe de
problemes pouvant avoir un caractere combinatoire, ICC Bulletin, Vol.
4, 1965, 19-28
(see also University of California, Operations Research Center, Berkeley,
California, ORC 67-34, September 1967, 14 pages).
Patrice Bertier, Bernard Roy, Les possibilites d'application de la Recherche
Operationnelle a la publicite, Gestion, novembre 1964, 619-626.

Bernard Roy, De la theorie des graphes et de ses applications en Recherche


Operationnelle, Gestion et Recherche Operationnelle, Numero Special,
mai 1964, 319-341.
Bernard Roy, Jean de Rosinski, Un exemple d'etude d'ordonnancement
realisee par la SEMA : Le calcul du planning journalier de la rotation du
coffrage-tunnel Tracoba n° 4, Extrait des Annales de l'Institut Technique
du Batiment et des Travaux Publics, n° 194, fevrier 1964, 1-6.
Bernard Roy, Programmation mathematique et description segmentee, Revue
METRA, Vol. II, nO 4, 1963,523-535.
Bernard Roy, Physionomie et traitement des problemes d'ordonnancement,
Gestion, Numero Special, avril 1963.
Pierre Badier, Georges Nahon, Bernard Roy, Stockage et exportations des
cereales françaises (Exemple d'application de la programmation
dynamique), Revue METRA, Vol. II, n° 1, 1963,49-78.
Bernard Roy, Cheminement et connexite dans les graphes - Application aux
problemes d'ordonnancement, Revue METRA, serie speciale nO 1, mars
1962, 130 pages.
Bernard Roy, Graphes et ordonnancement, Revue Française de Recherche
Operationnelle nO 25, 4e trimestre 1962,323-333.
Michel Algan, Bernard Roy, M. Simonnard, Principes d'une methode
d'exploration de certains domaines et application a l'ordonnancement de
la construction de grands ensembles, Cahiers du Centre de
Mathematiques et de Statistiques Appliquees aux Sciences Sociales,
Bruxelles, n° 3, 1962, p. 1-27.
Bernard Roy, M. Simonnard, Nouvelle methode permettant d'explorer un
ensemble de possibilites et de determiner un optimum, Revue Française
de Recherche Operationnelle nO 18, 1er trimestre 1961, 15-54.
Bernard Roy, Physionomie des problemes d'alimentation, Economie
Appliquee, 1961, 127-148.
Bernard Roy, Somme d'un nombre aleatoire de termes aleatoires -
Application aux problemes de stockage, Revue de Statistique Appliquee,
Vol. VIII, nO 1, 1960,51-60.
Bernard Roy, Les calculs d'actualisation dans le cas de durees aleatoires,
Revue Française de Recherche Operationnelle n° 13, 4e trimestre 1959,

35-46 et Cahiers du Centre d'Etudes de Recherche Operationnelle n° 3,


1959,35-46.
Bernard Roy, Transitivite et connexite, Gauthier-Villars, Extrait des comptes
rendus des seances de l'Academie des Sciences, t. 249, seance du 15
juillet 1959,216-218.
Bernard Roy, Contribution de la theorie des graphes a l'etude de certains
problemes lineaires, Gauthier-Villars, Extrait des comptes rendus de
l'Academie des Sciences, t. 248, seance du 27 avril 1959, 2437-2439.
Bernard Roy, Sur quelques proprietes des graphes fortement connexes,
Extrait des comptes rendus de l'Academie des Sciences, t.247, seance du
28 juillet 1957,399-401.
Bernard Roy, Recherche d'un programme d'approvisionnement ou de
production, Cahiers du Bureau Universitaire de Recherche
Operationnelle nO 1, 1957, 2-41 et Revue de Recherche Operationnelle,
Volume I, numero 4, 3e trimestre 1957, 172-184.
Bernard Roy, Metodi e problemi con funzioni obiettivo multiple, Ricerca
Operativa nO 2, Aprile 1971, Franco Angeli (ed.), 9-20.
Bernard Roy, Algunos aspectos teoricos de los problemas de
programmacion, Revue METRA, Vol. IV, nO 2, 1965,269-279.
Bernard Roy, Sergio Viggiani, I problemi di programmazione scientifica,
Revue METRA, Vol. III, nO 3, 1964,293-304.

4. Papers in Contributed Volumes


Vincent Mousseau, Bernard Roy, Isabelle Sommerlatt, Development of a
decision aiding tool for the evolution of public transport ticket pricing in
the Paris region, in Alberto Colorni, Massimo Paruccini, Bernard Roy
(eds.), A-MCD-A, 25th Year, EURO Working Group, Multicriteria
Decision Aiding, EUR Report, The European Commission (to be
published)
(see also Universite Paris-Dauphine, Document du LAMSADE n° 112,
fevrier 1999, 78 pages, in French).
Bernard Roy, Decision-aiding today: What should we expect?, in Tomas
Gal, Theodor J. Stewart, Thomas Hanne (eds.), Multicriteria Decision
Making - Advances in MCDM Models, Algorithms, Theory, and
Applications, Kluwer Academic Publishers, 1999, 1-1-1-35
(see also Albert David, Armand Hatchuel, Romain Laufer (eds.), Les

nouvelles fondations des sciences de gestion, Editions Vuibert,


Collection FNEGE, 2001, 145-179, in French).
Bernard Roy, Decision-aid and decision-making, in C.A. Bana e Costa (Ed.),
Readings in Multiple Criteria Decision Aid, Springer-Verlag, 1990, 155-
183
(see also European Journal of Operational Research 45, 1990,324-331).
Bernard Roy, Denis Bouyssou, Comparison of a multiattribute utility and an
outranking model applied to a nuclear power plant siting example, in
Y.Y. Haimes, V. Chankong (eds.), Decision Making with Multiple
Objectives, Springer-Verlag, Lecture Notes in Economics and
Mathematical Systems, vol. 242, 1984, 482-494
(see also European Journal of Operational Research 25, 1986,200-215).
Bernard Roy, A multicriteria analysis for trichotomic segmentation
problems, in Peter Nijkamp, Jaap Spronk (eds.), Multiple Criteria
Analysis: Operational Methods, Gower Press, 1981,245-257.
Bernard Roy, Acceptance, rejection, delay for additional information -
Presentation of a decision aid procedure, in Alperovitch, de Dombal,
Gremy (eds.), Evaluation of Efficacy of Medical Action, North-Holland
Publishing Company, 1979, 73-82
(see also Cahier SEMA nO 3, 1979, III-XV, in French).
Bernard Roy, Partial preference analysis and decision-aid: The fuzzy
outranking relation concept, in David E. Bell, Ralph L. Keeney, Howard
Raiffa (eds.), Conflicting Objectives in Decisions, John Wiley and Sons,
1977,40-75.
Bernard Roy, A conceptual framework for a prescriptive theory of
«decision-aid», in M.K. Starr, M. Zeleny (eds.), Multiple Criteria
Decision Making, North-Holland Publishing Company, TIMS Studies in
the Management Sciences 6, 1977, 179-210.
Bernard Roy, How outranking relation helps multiple criteria decision
making, in Multiple Criteria Decision Making, Actes du Seminaire
«Theorie de la Decision», Beaulieu-Sainte-Assise, France, 6-7 decembre
1973, Ed. CESMAP, 1975, 81-98, J.L. Cochrane, M. Zeleny (eds.),
Multiple Criteria Decision Making, University of South Carolina Press, 1973,
179-201.
Bernard Roy, An algorithm for a general constrained set covering problem,
in Ronald C. Read (ed.), Graph Theory and Computing, Academic Press
Inc., New York and London, 1972,267-283.

Bernard Roy, Raphael Benayoun, Jean Tergny, From SEP procedure to the
mixed OPHELIE program, in Jean Abadie (ed.), Integer and Nonlinear
Programming, North-Holland Publishing Company and John Wiley and
Sons, 1970,419-436
(see also Revue METRA, Vol. IX, nO 1, 1970, 141-156, in French).
Bernard Roy, Optimisation et analyse multicritere, in Claude Jessua,
Christian Labrousse, Daniel Vitry, Damien Gaumont (sous la direction
de), Dictionnaire des Sciences Economiques, Presses Universitaires de
France, 2001,640-643.
Bernard Roy, L'aide a la decision aujourd'hui : que devrait-on en attendre ?,
in Albert David, Armand Hatchuel, Romain Laufer (eds.), Les nouvelles
fondations des sciences de gestion - Elements d'epistemologie de la
recherche en management, Editions Vuibert, Collection FNEGE, 2001,
141-174.
Bernard Roy, Reflexion sur le theme quete de l'optimum et aide a la
decision, in Decision, Prospective, Auto-Organisation - Melanges en
l'honneur de Jacques Lesourne, Textes reunis par J. Thepot, M. Godet,
F. Roubelat, A.E. Saad, Paris, Dunod, 2000, 61-83.
Bernard Roy, Denis Bouyssou, Aide a la decision, in J.P. Helfer, J. Orsoni
(coordinateurs), Encyclopedie du Management, Tome 1, Vuibert, janvier
1992,447-457
(see also AFCET/INTERFACES N° 65, mars 1988, 4-13).
Bernard Roy, Denis Bouyssou, Comparaison, sur un cas precis, de deux
modeles concurrents d'aide a la decision, in Marches, Capital et
Incertitude, Essais en l'Honneur de Maurice Allais, sous la direction de
Marcel Boiteux, Thierry de Montbrial, Bertrand Munier, Economica,
1986, 155-177.
Bernard Roy, Formatage et singularites du projet «Reseau 2000», in Jacques
Le Goff, Louis Guieysse (eds.), Crise de l'Urbain, Futur de la Ville,
Economica, 1985,45-50.
Jean-Christophe Hugonnard, Bernard Roy, Le plan d'extension du metro en
banlieue parisienne, in Methode de decision multicritere, Textes
rassembles par Eric Jacquet-Lagreze et Jean Siskos, Monographies de
I' AFCET, Division Gestion-Informatisation-Decision, Editions Hommes
et Techniques, 1983,39-65
(see also Les Cahiers Scientifiques de la Revue Transports nO 6, 1er
trimestre 1982, 77-108).

Eric Jacquet-Lagreze, Bernard Roy, Aide a la decision multicritere et


systemes relationnels de preference, in Pierre Batteau, Eric Jacquet-
Lagreze, Bernard Monjardet (eds.), Analyse et Agrégation des
Preferences, Economica, 1981,255-278.
Bernard Roy, Chemins et circuits : Enumeration et optimisation, in B. Roy
(ed.), Combinatorial Programming: Methods and Applications, D.
Reidel Publishing Company, 1975, 105-136.
Bernard Roy, Graphe (Theorie des), Encyclopedie des Sciences et des
Techniques, 1972,450-455.
Bernard Roy, Michel Algan, Jean-Charles Holl, Physionomie et traitement
des problemes de stockage, Techniques Modernes et Gestion des
Entreprises, Dunod, 1962.

5. Papers in Proceedings
Bernard Roy, Daniel Vanderpooten, The European School of MCDA: A
historical review, in Roman Slowinski (ed.), Proceedings of the 14th
European Conference on Operational Research «OR: Towards
Intelligent Decision Support», Jerusalem, Israel, July 3-6, 1995, 39-65.
Jean-Charles Pomerol, Bernard Roy, Camille Rosenthal-Sabroux,
Developing an «intelligent» DSS for the multicriteria evaluation of
railway timetables: Problems and issues, in The International Society for
Decision Support Systems, Third International Conference, Conference
Proceedings, Volume 1, IDSS '95, June 22-23, 1995, 161-172
(see also Revue des Systemes de Decision, Volume 5, n° 3-4, 1996,249-
267, and Foundations of Computing and Decision Sciences, Vol. 20, No.
3, 1995,219-238).
Bernard Roy, Eric Jacquet-Lagreze, Concepts and methods used in
multicriteria decision models: Their application to transportation
problems, in H. Strobel, R. Genser, M.M.Etschmaier (eds.), Optimization
Applied to Transport Systems, IIASA, Laxenburg, Austria, 1977,9-26.

Bernard Roy, Why multicriteria decision aid may not fit with the assessment
of a unique criterion, in Milan Zeleny (ed.), Multiple Criteria Decision
Making, Springer-Verlag, 1976,283-286.
Bernard Roy, From optimization to multicriteria decision-aid: Three main
operational attitudes, in Proceedings of a Conference, Jouy-en-Josas,
France, May 21-23, 1975, Herve Thiriez, Stanley Zionts (eds.), Multiple
Criteria Decision Making, Springer-Verlag, 1976, 1-34.
Bernard Roy, Hubert Le Boulanger, Traffic assignment - The ATCODE
model, in Vehicular Traffic Science, Proceedings of the Third
International Symposium on the Theory of Traffic Flow, American
Elsevier Publishing Co., New York, 1967.
Tullio Joseph Tanzi, Bernard Roy, Michel Flages, D. Voncken, Indicateurs
de dangerosite appliques aux transports collectifs, in Actes du 12e
Colloque National de Surete de Fonctionnement, Montpellier, 28-30
mars 2000, 703-708.
Bernard Roy, Recherche operationnelle et aide a la decision, in Claude
Sayettat (ed.), L'Intelligence Artificielle - Une Discipline et un
Carrefour Interdisciplinaire, Compiegne, 10-12 decembre 1992, 139-
145.
Albert David, Bernard Roy, Existe-t-il une approche systemique du
changement organisationnel ? Discussion a partir de I' exemple de la
modernisation de la RATP engagee par Christian Blanc, Actes du 1er
Congres biennal de l'association française des sciences et technologies
de l'information et des systemes «Systemique et Cognition», Versailles,
8-10 juin 1993, 293-308.
Guy Casteignau, Bernard Roy, L'analyse multicritere interactive comme
outil d'aide a la decision pour la gestion des risques environnementaux et
industriels, in Actes du Congres International Innovation, Progres
Industriel et Environnement - Preparer le XXIieme Siecle, Strasbourg, 4-
6 juin 1991,93-102.
Bernard Roy, Frederic Letellier, Une approche multicritere pour piloter la
gestion des approvisionnements dans une structure de stockage a deux
niveaux, Actes du Colloque AFCET sur le Developpement des Sciences
et Pratiques de l'Organisation et 4e Journees Francophones sur la
Logistique et les Transports, Theme 1989 : Logistique, Production,
Distribution, Transports, Paris, 13-15 decembre 1989,63-70.

Bernard Roy, Quelques aspects embarrassants, intervention au Colloque de
Cerisy Temps et Devenir à partir de l'œuvre d'Ilya Prigogine, Jean-
Pierre Brans, Isabelle Stengers, Philippe Vincke (eds.), Editions Patino,
Geneve, 1988, 197-199.
Bernard Roy, Des criteres multiples en Recherche Operationnelle :
Pourquoi ?, in G.K. Rand (editor), Operational Research '87, Elsevier
Science Publisher B.V. (North-Holland), 1988,829-842.
Bernard Roy, Management scientifique et aide a la decision, Actes du
Colloque International IRIA Informatique, Automatique et Sciences des
Organisations, Paris, 1976, 1-21.
Bernard Roy, Patrice Bertier, La methode ELECTRE II - Une application au
media-planning, in M. Ross (ed.), OR '72, North-Holland Publishing
Company, 1973,291-302.
Bernard Roy, Raphael Benayoun, Jean Tergny, Jean de Buchet, Sur la
programmation lineaire en variables mixtes, in Actes de la Cinquieme
Conference Internationale de Recherche Operationnelle, John Lawrence
(ed.), Tavistock Publications, 1970, 437-445.
Bernard Roy, A propos de l'agregation d'ordres complets : Quelques
considerations theoriques et pratiques, in La Decision - Agregation et
dynamique des ordres de preference, Aix-en-Provence, 3-7 juillet 1967,
Editions du CNRS, juillet 1969,225-239.
Necessita di una nuova assiomatica in teoria delle decisioni per pensare in
modo diverso la Ricerca Operativa, in Atti delle Giornate di Lavoro
AIRO, Vol. I, Bologna, Italia, 24-26 Settembre 1979, XI-XLIII.
I

MEMORIES OF EARLY CAREER


AND IMPACT OF EARLY WORKS
OF BERNARD ROY
BERNARD ROY, FORTY YEARS
OF ESTEEM AND FRIENDSHIP

Jacques Lesourne*
Paris, France

It is not without a mixed feeling of amusement and uneasiness that


I write in English a paper devoted to Bernard Roy, since for years we have
discussed in our colloquial language. Of course, I am appeased to know that
sooner or later we shall all use English in everyday life, not the Oxford
English of our British friends but the English spoken by Indians or Chinese
which is not easier to understand than the American slang.
An author of a chapter in a book devoted to a colleague has always at
the beginning a difficult choice to make among three potential solutions: the
solution of memories - old soldiers' stories, some would say; the
solution of an addition to the work of the honoured scientist, either through a
broad analysis or through an original piece of research; and the solution of a
paper without strong connections with the subjects considered, but
expressing regard and friendship for the honoured colleague.
Nevertheless, for me, there is no room for hesitation. Having worked
daily with Bernard Roy for twelve years or so, from the end of the fifties to
the beginning of the 70's, the solution of memories imposes itself. Having
tried to reconstruct them without simplifying too much, I have come to a
three-act play with a prologue and an epilogue.
The prologue ? January 1st, 1958. I become Directeur General of the
recently created SMA (Societe de Mathematiques Appliquees). The first four
members of the staff are located in two big rooms on Kleber Avenue, hosted
by the Cabinet Marcel Loichot, waiting for our offices at the corner of
Trinity Square and Mogador street. In one of the rooms, I discover two
young people, 23 years old (I am myself thirty), lonely, unoccupied, full of
good will, at the same time intimidated and waiting for the future. Both have
a degree from the Institut de Statistique de l'Universite de Paris. The former,

* Professeur honoraire au Conservatoire National des Arts et Metiers, Paris, France



Bernard Roy, is almost blind, reads with the help of a thick lens and writes
with big letters; the latter, Patrice Bertier is handicapped by a severe
poliomyelitis which constrains him to a rolling chair, but does not prevent
him from working. The blind and the paralysed as in some tales of the old
days.
The first act begins a few weeks later. Brand-new offices with white
walls and big ash tables. The first contracts are signed and the team
develops. The first topic on which, as far as I can remember, Bernard Roy
operates concerns the production and inventory policy for tubes used in oil
drilling. Bernard elaborates, out of numerical data, a statistical law of the
life-lengths of the tubes. Quickly, my opinion is made: this open and joyful
young guy has a good brain and the faculty to pass easily from observation to
modelling and vice-versa.
But the real test occurred a little later when Electricite de France asked
us to solve the scheduling problem of the nuclear power plant at Chinon.
Bernard's contribution to the modelling was decisive. He proposed to
introduce a graph whose nodes represented the tasks and whose arrows the
anteriority constraints. Without knowing it, in parallel with an American
team working on the Polaris submarines (but without any contact with
them), we had discovered the modern treatment of scheduling problems. So
appeared the METRA¹ potential method (MPM).
In the initial modelling, the availability constraints were not introduced,
but new developments made it possible to apply it to public works when only
one crane is available on the site or to the construction of the liner France for
which it was necessary to take into account the available staff of different
professions.
Bernard Roy pursued the development of graph theory which enabled
him to obtain his doctorate brilliantly and to publish with Dunod his first book
"Theorie des graphes et applications" (1969).
Along this road, he made incursions into integer linear programming,
which in those days was of interest to us.
Act II opened in 1960 when SMA was transformed into SEMA (Societe
d'Economie et de Mathematiques Appliquees) and transferred -due to its
rapid growth- into larger offices, in La Boetie street. The size of the team
implied now the definition of a structure. It seemed to me that, in addition to
the various operational units, it was necessary to create a group which would
ensure the relations with the scientific world, would conceive new tools and

1 Name given to the group composed of Sema and its European subsidiaries.

would assist the teams in the modelling of some of the problems they had to
deal with.
A name was obvious for the appointment of the head of this Direction
Scientifique: Bernard Roy. All the members of the staff supported this
choice. But the task was far from easy. The presence of this department
increased the central costs and was only acceptable if the operating staff
considered that the department was offering them a real service which it
could not carry out itself. The Directeur Scientifique had therefore to prove
daily that he was managing an efficient team, helping equitably all the
operational units with diplomacy and humility. In these responsibilities,
never was Bernard Roy criticized, which is a remarkable performance.
With the European development of the Sema group, Bernard had to
put the Direction Scientifique at the disposal of foreign companies, generally
smaller, in great need of technical transfers, but wanting to keep jealously
their identity. In this field also, Bernard Roy was recognised, thanks to the
Metra review, a quarterly journal freely distributed but with the quality of a
scientific periodical, conceived to promote, out of real examples, the OR
techniques in the broadest meaning of the concept and not to make publicity
for the group. Courage was indeed necessary for the man in charge of it
since I had imposed the constraint -which seems now crazy to me- for the
authors to present their papers in their own language, French, English,
German, Italian, Spanish so that, except for the summaries, only a few
people could read the whole set. Nevertheless, one may still read nowadays
the issues of Metra without being ashamed.
This period was the golden age of the Direction Scientifique. Among
the topics on which Bernard Roy has personally worked during these years,
I shall point out two fields:
the models for traffic analysis and forecasting, the first of which, called
"the model of preferential equilibrium" enabled to estimate the volume of
trips from home to work, from a zone i to a zone j when computed the
generalized cost of transportation between the zones and known the
geographical distribution of workers and jobs. This model was successfully
used for predicting traffic on the lines of the new underground system, the
Reseau Express Regional (RER).
the models of multicriteria choices in which Bernard Roy was a pioneer,
which contributed over the years to establishing his international reputation.
The initial issue was to distribute an advertising budget optimally between
different media, or, for a given medium, the press for example, between the
various papers. Bernard Roy realized that it would have no meaning to look
for a function to maximize. Too many and heterogeneous were the criteria:

the cost of insertion in relation with date and size, the characteristics of the
audience, the impact of repeated insertions, ... He understood that it was not
satisfactory to give to each program a mark for each of the criteria, then to
add up these marks weighted by coefficients expressing the relative
importance of the various criteria. Pragmatic choices which could lead to different
solutions from one problem to the next had to be based on a rigorous set of
axioms and on solid theoretical foundations. Electre - since this was the name
given to the first member of the family - was born in these years. It had
numerous offspring, conceived in Sema and later in Lamsade.
It is difficult to situate precisely the beginning of Act III, but it started under
the influence of two evolutions: the explosion of informatics (the name was
invented at SEMA by Philippe Dreyfus) and the absorption by SEMA of
OTAD, a sister company operating in the fields of organization, training and
selection. Hence it became more difficult for the Direction Scientifique to
meet the needs of a very diversified international group covering many fields
from scientific informatics to urban planning including automatic data
processing, business economic studies, marketing research and surveys.
Nevertheless, I wanted the presence of a unit asserting to all that the group
wanted to go on inventing methods coming from the fertile ground of
practical cases and from the development of sciences, mathematics,
economics and social sciences. In the dull climate generated by the group's
financial difficulties, Bernard Roy had to diversify his team, facilitate the
insertion of scientific advisers, broaden the scope of his research themes.
However, his primary task was to lead the elaboration of a yearly research
programme, each of the projects selected being financed by the centre and
executed by an operation unit in cooperation or not with the Direction
Scientifique. The proposals could be sent by any unit of the group, the
results of the projects being freely diffused throughout the group. Bernard
then became the head of the application of multicriteria methods to establish,
out of the bundle of offers, the group's research program. A serious task
which required openness, diplomacy and an aptitude to be accepted by the
representatives of all the sectors. Bernard perfectly assured this
responsibility. Some of the projects were achieved magnificently, as for
instance one on the internal staff relations in a hospital, but the practical
results were uneven, the efforts made to reduce the group's expenditures
modifying constantly the organization and diminishing the central costs. Hit
by this storm, the Direction Scientifique disappeared and Bernard Roy was
elected as a professor at University Paris IX Dauphine, where, for thirty
years, he has been developing a second career.

As for the epilogue, it has obviously not come to an end. It took the shape for
me of a many years' presidency of the Lamsade Scientific Committee, which
meant for me a double privilege, to receive the research documents of the
team and to have the opportunity every year to spend a stimulating day with
Bernard and his colleagues. Of course, my own research had developed in
other directions but I had not lost my interest for the immense field related to
decision and more generally action. On his side, Bernard has procured for
me the great pleasure of discovering in the book of Mélanges offered to me
at the beginning of 2000, a deeply-thought paper on the meaning of
optimality in a decision context.
This long historical cohabitation is not however sufficient to explain
the friendship and esteem I feel for Bernard Roy. If I try to analyse it, several
dimensions come up to my mind: his attitude with respect to his handicap, a
handicap mastered, neither hidden nor borne as a medal, which facilitates
greatly relations with others; his authentic good temper which likes humour
and expresses an internal joy, certainly supported by the equilibrium of his
family; a great continuity of mood which helps him in facing and dominating
rationally difficulties; a constant curiosity and an openness which makes him
a precious partner for the specialists of all fields concerned with decision;
an ability to pass easily from reality to abstraction and vice-versa, which is
absolutely necessary in applied sciences; a constant thinking capacity which
enables him to plough and pursue the furrows he has chosen; at last, his
faithfulness in friendship to which a forty year period gives a precious
thickness.
CONNECTIVITY, TRANSITIVITY AND
CHROMATICITY: THE PIONEERING WORK
OF BERNARD ROY IN GRAPH THEORY

Pierre Hansen
GERAD and Ecole des HEC, Montreal, Canada
pierreh@crt.umontreal.ca

Dominique de Werra
Ecole Polytechnique Fédérale de Lausanne, Switzerland
dewerra@dma.epfl.ch

Abstract We review the work of B. Roy in graph theory and its posterity.

Keywords: Graph; Connectivity; Transitivity; Chromaticity; Review

Introduction
Before exploring in depth and breadth Decision Aid Methods, Bernard
Roy devoted a few years to the study of graph theory. This led to a series
of contributions including a large book (Roy 1969/70). Generated in a
context of intensive research in graph theory, pioneering ideas in several
of his papers induced long streams of results by many authors up to the
present time.
In this chapter, we review the work of B. Roy on graph theory about
forty years after its publication. We also outline its posterity by pre-
senting a sample of the extensions of his seminal results and we describe
the general context in which his research was carried out. We do not
aim at exhaustivity, but rather, in a tutorial spirit, try to present to a
large audience the main themes of this research.
We assume the reader is familiar with the basic concepts of graph
theory and refer to the book of C. Berge, "Graphs and Hypergraphs"
(Berge 1973) for definitions not given here.

1. Connectivity and transitivity


In his first paper (Roy 1958), B. Roy considers a graph G = (X, f)
where X denotes a set of vertices X1, X2, ..., Xn and f a one-to-many
application from X to X. He then introduces a network R defined as
follows:
• its vertex set is the union of X and a copy Y of X, plus a source
vertex X0 and a sink vertex Z;
• its arc set contains arcs XiYj if and only if XiXj is an arc of G, plus
arcs X0Xi and YjZ for each vertex Xi;
• nonzero capacities ci are then associated with all arcs X0Xi and
YjZ; the remaining arcs have infinite capacities (see Fig. 1).

Figure 1. The construction of a network R associated to a graph G in which every


vertex is contained in a circuit.

Then B. Roy obtains the following results:

Theorem 1 The following statements are equivalent:


(i) there is no subset A of X strictly containing its image f(A)
(ii) each vertex belongs to at least one circuit of G
(iii) capacities ci can be chosen in such a way that there is a flow in R
which saturates the extremal arcs X0Xi and YjZ.

The graph G in Fig. 1 satisfies conditions (i), (ii) and (iii), as can be
verified.
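To make the equivalence of (ii) and (iii) concrete, the short Python sketch below (ours, not taken from the paper) routes one unit of flow around a circuit through each vertex of a small digraph and reads capacities ci off the resulting circulation; by flow conservation these capacities saturate the extremal arcs of R. Since Fig. 1 is not reproduced here, the example graph is a hypothetical stand-in with the same features: every vertex lies on a circuit, yet the graph is not strongly connected.

from collections import deque, defaultdict

# Hypothetical example graph (adjacency lists, arcs Xi -> Xj): every vertex
# lies on a circuit, yet the graph is not strongly connected.
G = {1: [2], 2: [3], 3: [1, 2], 4: [5], 5: [4]}

def circuit_through(G, v):
    """Return a circuit through v as a list of arcs, or None if there is none."""
    parent, queue = {}, deque()
    for w in G.get(v, []):
        if w not in parent:
            parent[w] = v
            queue.append(w)
    while queue:
        u = queue.popleft()
        if u == v:                        # came back to v: rebuild the circuit
            arcs, node = [], v
            while True:
                arcs.append((parent[node], node))
                node = parent[node]
                if node == v:
                    return list(reversed(arcs))
        for w in G.get(u, []):
            if w not in parent:
                parent[w] = u
                queue.append(w)
    return None

flow = defaultdict(int)                   # flow on the middle arcs Xi -> Yj of R
for v in G:
    circuit = circuit_through(G, v)
    assert circuit is not None, "condition (ii) fails at vertex %s" % v
    for arc in circuit:
        flow[arc] += 1                    # one more unit routed around this circuit

# Capacities ci := total flow through vertex i; by conservation they saturate
# both extremal arcs X0Xi and YiZ of R, which is condition (iii).
c = {v: sum(f for (i, _), f in flow.items() if i == v) for v in G}
print(c)                                  # every ci is positive

For the graph above this prints positive capacities for all five vertices, which is exactly what condition (iii) requires.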

Theorem 2 A graph G is strongly connected if and only if there is no


non-empty proper subset A of X containing its image f(A).

In the graph G in Fig. 1 the subset A = {X2, X3} contains its image
f(A) = A, hence G is not strongly connected.
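Theorem 2 is also easy to test exhaustively on small instances. The sketch below (again a hypothetical example of our own) compares the subset criterion with a plain reachability test of strong connectivity; the two answers coincide, as the theorem asserts.

from itertools import chain, combinations

G = {1: [2], 2: [3], 3: [1, 2], 4: [5], 5: [1, 4]}   # hypothetical example graph

def image(G, A):
    """f(A): the set of heads of arcs leaving A."""
    return set(w for v in A for w in G[v])

def reaches_all(G, s):
    seen, stack = {s}, [s]
    while stack:
        for w in G[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return seen == set(G)

X = list(G)
proper_subsets = chain.from_iterable(combinations(X, k) for k in range(1, len(X)))
criterion = all(not (image(G, A) <= set(A)) for A in proper_subsets)
strongly_connected = all(reaches_all(G, v) for v in X)
print(criterion, strongly_connected)      # both False for this hypothetical graph

Here the subset {1, 2, 3} contains its image, and indeed vertex 4 cannot be reached from vertex 1, so both tests reject strong connectivity.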
D. Gale (1959) observes that this last result appears in a long paper of
R. Rado (1943) on linear combinatorial topology and general measure.
P. Camion (1959) uses the proof technique of (Roy 1958) to derive a
well-known result on Hamiltonian circuits:

Theorem 3 (P. Camion 1959): A complete graph has a Hamiltonian


circuit if and only if it is strongly connected.

Fig. 2 shows a complete graph which is not strongly connected; it has


however a Hamiltonian path (but no Hamiltonian circuit according to
Theorem 3).

Figure 2. A complete graph which is not strongly connected.

The result of Camion has been in turn extended by M. Goldberg and


J.W. Moon (1972). We recall that a graph is k-strong if and only if
between any two vertices there are k arc-disjoint paths.

Theorem 4 (Goldberg and Moon 1972): A k-strong tournament has at


least k distinct Hamiltonian circuits.

Many additional results on this topic have been obtained later by


various authors, see e.g. (Thomassen 1980), (Bermond and Thomassen
1981), (Zhang and Song 1991) and (Bang-Jensen and Gutin 1998) for
surveys.
The second paper of B. Roy, motivated by applications in sequencing
and scheduling problems, states conditions of existence of systems of
potentials: we are given a system of linear inequalities of the following
form
tj - ti ≥ aij   for i, j ∈ K     (1)
where the ti are unknown and the aij are real numbers given for each
pair i, j in a given set K.
The pairs i, j in K can be associated with the arcs of a graph G. The
problem so defined, after introduction of a linear objective function, is

the dual of a minimum cost flow problem, and it can be transformed


into the dual of a transportation problem.

Theorem 5 (Roy 1959a): A necessary and sufficient condition for the


existence of a solution of (1) is that the sum of the aij's over all arcs
i, j of any elementary circuit is non-positive.

As an illustration, examine the graphs in Fig. 3: for the graph of


Fig. 3 (a), there is no solution to (1); for the graph in Fig. 3 (b), there
is a solution.

Figure 3. Graphs with and without systems of potentials.
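Theorem 5 also suggests a simple computational test: iterate the constraints in a Bellman-Ford fashion until they are all satisfied, or report failure when a circuit of positive total weight keeps pushing the values up. The sketch below is ours, not Roy's original procedure, and the data are hypothetical:

def potentials(n, arcs):
    # Look for t[0..n-1] with t[j] - t[i] >= a for every arc (i, j, a) in 'arcs'.
    # Bellman-Ford-style iteration: returns the potentials when the condition of
    # Theorem 5 holds, and None when some circuit has positive total weight.
    t = [0.0] * n
    for _ in range(n):                     # n passes suffice when a solution exists
        changed = False
        for i, j, a in arcs:
            if t[j] < t[i] + a:            # constraint violated: raise t[j]
                t[j] = t[i] + a
                changed = True
        if not changed:
            return t                       # every inequality is now satisfied
    return None                            # still changing: positive circuit

# Hypothetical data in the spirit of Fig. 3:
print(potentials(3, [(0, 1, 2), (1, 2, 3), (2, 0, -4)]))   # None  (2 + 3 - 4 > 0)
print(potentials(3, [(0, 1, 2), (1, 2, 3), (2, 0, -6)]))   # [0.0, 2.0, 5.0]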

After that, A. Ghouila-Houri (1960) presents in a group-theoretical


wording a more general result formulated in terms of algebraic proper-
ties of the aij's; this formulation contains the theorem of A. Hoffman
(see (Berge 1973)) on the existence of circulations in capacitated net-
works and an early result of D. König (see (Berge 1973)) on the two-
colorability of graphs (the vertices of a graph can be colored with two
colors if and only if every cycle has an even number of edges).
The theory of tensions (systems of potentials), of which the problem
considered by B. Roy is a special case, has been extensively studied in
the book of C. Berge and A. Ghouila-Houri (1965).
Based on the study of B. Roy on systems of potentials, the so-called
MPM method (méthode des potentiels METRA) was developed for solv-
ing sequencing problems. In contrast to the classical critical path
method (CPM) used previously, MPM uses graph representations where
the various tasks of a project are associated to the vertices of a graph (in-
stead of the arcs). This formulation is extremely fruitful since it makes it
possible to model much more general constraint types than classical CPM.
The third paper of B. Roy (Roy 1959b), on transitivity and connectiv-
ity, presents a major result which has generated a number of interesting
developments by many authors.

B. Roy considers the adjacency matrix A = (ahk) of a graph G =


(X, f) (assuming akk = 1 for all k) and studies its transitive closure,
i.e., the matrix Â = (âhk) such that âhk = 1 if there is a path from
vertex Xh to vertex Xk in G and âhk = 0 otherwise. The basic tool in
this computation is the transformation Ti defined as follows: starting
from a matrix A = (ahk) we apply Ti and get Ti . A where

for every h with ahi = 1 we set ahk := max(ahk, aik).


Transformation Ti reproduces the ones of row i in any row containing a
one in column i: all the vertices Xk which can be reached from vertex
Xi can also be reached from any vertex Xh if there is an arc (Xh, Xi) or
more generally if Xi can be reached from Xh.
The transformations Ti, Tj are commutative. Applying Ti to A does
not change its transitive closure and A = Â if and only if A is not
modified by any transformation Ti.

Theorem 6 (Roy 1959b): The matrix Tn · Tn-1 ... T1 · A is equal to the
transitive closure Â of A.

Â can be seen as the vertex-vertex incidence matrix of a graph Ĝ
which is called the transitive closure of G: Ĝ has an arc (Xi, Xj) if and
only if there is a path in G from Xi to Xj.
This gives immediately an O(n^3) algorithm for the transitive closure of
a graph. We also see that a graph G is strongly connected if and only
if all entries of Â are equal to one. The same algorithm was discovered
independently, but published three years later, by S. Warshall (1962).
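A direct transcription of Theorem 6 in Python might look as follows (a sketch under the text's conventions: boolean entries and ones on the diagonal; the example matrix is ours):

def transitive_closure(A):
    # Roy's algorithm (Theorem 6): apply T1, ..., Tn in turn to the boolean
    # adjacency matrix A (with a_kk = 1), in place, in O(n^3) operations.
    n = len(A)
    for i in range(n):                    # transformation T_i
        for h in range(n):
            if A[h][i]:                   # row h has a one in column i ...
                for k in range(n):        # ... so it receives the ones of row i
                    A[h][k] = max(A[h][k], A[i][k])
    return A

# Hypothetical example: arcs 0 -> 1 and 1 -> 2, ones on the diagonal.
A = [[1, 1, 0],
     [0, 1, 1],
     [0, 0, 1]]
print(transitive_closure(A))              # [[1, 1, 1], [0, 1, 1], [0, 0, 1]]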

Remark 1 With a view on the process of formalizing proofs, P. Naur


(1994) examines the papers (Warshall 1962) and (Floyd 1962) (which,
as described below, extends it to shortest paths), declares that the pre-
sentation of S. Warshall is "a complicated mixture of formal expression
and informal prose", and speculates on the importance or not of for-
malization in making a proof convincing. Unfortunately, he does not
compare the paper of S. Warshall (1962) with the more formal presen-
tation of B. Roy (1959b).

The papers of Roy (Roy 1959b) and Warshall (Warshall 1962) led to
a large stream of developments, up to the present day. These can be
divided into two main categories.
On the one hand, increasingly large classes of problems which can be
solved by similar matrix algorithms have been identified; on the other
hand, refinements and improvements have been brought to the basic

algorithm, including adaptations to parallel computing. We shall review


them in turn.
Transformation Ti may be expressed in a condensed form by using the
boolean sum + and product ×:
ahk := ahk + (ahi × aik).

A first extension of the transitive closure algorithm was to the com-
putation of the matrix L̂ of distances between all pairs of vertices of a
graph G where each arc has a nonnegative length. First published as
a twenty-line paper by R.W. Floyd (1962) and referring to (Warshall
1962), this extension remained unnoticed for some time.
Let L = (ℓhk) denote the matrix of arc lengths where the absence
of an arc is expressed by an arbitrarily large value in the corresponding
entry. As before, let Ti be a transformation applied to L defined by
Ti · L = (ℓhk) with
ℓhk := min(ℓhk, ℓhi + ℓik).
This means that the distance between Xh and Xk is at most equal to the
minimum of the length ℓhk of a path from Xh to Xk and of the sum of
the lengths ℓhi of a path from Xh to Xi and ℓik of a path from Xi to Xk.
With some adaptation the proof of Theorem 6 shows that
Tn · Tn-1 ... T1 · L = L̂.
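In code, Floyd's extension only changes the update rule of the previous sketch; a minimal version (with hypothetical data, and the conventions that a missing arc has infinite length and the diagonal is zero) is:

INF = float('inf')

def all_shortest_distances(L):
    # Floyd's extension: the same scheme as above, with (min, +) instead of the
    # boolean operations.  INF encodes a missing arc; the diagonal is zero.
    n = len(L)
    for i in range(n):                    # transformation T_i
        for h in range(n):
            for k in range(n):
                L[h][k] = min(L[h][k], L[h][i] + L[i][k])
    return L

# Hypothetical lengths: 0 -> 1 (4), 0 -> 2 (10), 1 -> 2 (3).
L = [[0, 4, 10], [INF, 0, 3], [INF, INF, 0]]
print(all_shortest_distances(L))          # [[0, 4, 7], [inf, 0, 3], [inf, inf, 0]]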
A further extension to the maximum capacity path can be made by
starting from a matrix C = (chk) where chk is the capacity of arc (Xh, Xk)
if it exists and 0 otherwise. The capacity of a path is defined as the
smallest capacity of its arcs. Ti is defined by Ti · C = (chk) with

chk := max(chk, min(chi, cik)).

A similar formula gives the maximum reliability path by starting from
R = (rhk) where rhk is the reliability of arc (Xh, Xk) if it exists and 0
otherwise. The reliability of a path is defined as the product of the
reliabilities of its arcs. Then Ti is defined by Ti · R = (rhk) with

rhk := max(rhk, rhi · rik).


The three cases presented above can be viewed as instances of a more
general algorithm to solve a system of equations

ahk := ahk ⊕ (ahi ⊗ aik)

where ⊕ and ⊗ are the boolean sum and product for the transitive clo-
sure, the Min operator and the usual sum for shortest path, the Max and
the Min operations for maximum capacity path and the Max operation
and the usual product for reliability. This suggests there should be a
general algebraic structure subsuming all these cases. It is indeed so. In
a general setting it is a dioid (Gondran and Minoux 1984) (also called
semi-ring by other authors), defined as a set S with two operations:
(i) the operation ⊕ ("add") gives S a structure of commutative monoid
(closure, commutativity, associativity) with neutral element ε;
(ii) the operation ⊗ ("multiply") gives S a structure of monoid (clo-
sure, associativity) with neutral element e (unit); moreover, ε is
absorbing (a ⊗ ε = ε for any a in S) and ⊗ is right and left dis-
tributive with respect to ⊕;
(iii) the preorder relation ≤ (reflexivity, transitivity) induced by ⊕
(canonical preordering) and defined by a ≤ b if and only if there is
a c in S such that a = b ⊕ c is a partial order, i.e., it satisfies a ≤ b
and b ≤ a implies a = b (antisymmetry).
Many authors have studied this structure (e.g. (Shimbel 1954), (Cun-
ninghame-Green 1960, 1962, 1979), (Yoeli 1961), (Robert and Ferland
1968), (Tomescu 1968), (Gondran 1975) and (Wongseelashote 1976)).
The reader is referred to (Gondran and Minoux 1984) for further refer-
ences.
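Since the three instances above differ only in the choice of the pair (⊕, ⊗), they can be produced by one generic routine. The sketch below is ours; for these three instances the simple in-place triple loop is sufficient, whereas the general dioid case requires the more careful algorithms discussed in (Gondran and Minoux 1984):

def algebraic_closure(A, add, mul):
    # Generic form a_hk := a_hk (+) (a_hi (x) a_ik), applying T1, ..., Tn in turn.
    n = len(A)
    for i in range(n):
        for h in range(n):
            for k in range(n):
                A[h][k] = add(A[h][k], mul(A[h][i], A[i][k]))
    return A

INF = float('inf')
lengths    = [[0, 4, 10], [INF, 0, 3], [INF, INF, 0]]     # (min, +)
capacities = [[0, 4, 1], [0, 0, 3], [0, 0, 0]]            # (max, min)
reliabs    = [[0, 0.5, 0.2], [0, 0, 0.9], [0, 0, 0]]      # (max, *)

print(algebraic_closure(lengths, min, lambda a, b: a + b))
print(algebraic_closure(capacities, max, min))
print(algebraic_closure(reliabs, max, lambda a, b: a * b))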
Properties of classical linear algebra, and in particular algorithms
to solve systems of linear equations (Gauss, Gauss-Seidel, Jacobi, Jor-
dan, etc.) can be adapted to dioids. Algorithms of (Dantzig 1967)
and (Tabourier 1973) can be viewed in this light, as well of course as
that of (Roy 1959b), which corresponds to a generalized Jordan method.
So the work of B. Roy on transitive closure is part of a vast stream
which dates back to the middle fifties. This research program has
been expanded further recently by M. Gondran (1996a, 1996b) and
M. Gondran and M. Minoux (1997) who have shown how the use of
dioids can extend nonlinear analysis.
Among possible applications of the transitive closure algorithm, we
mention finding the transitive reduction of an oriented graph G with-
out circuits, i.e., removing a maximal subset of arcs without changing
the transitive closure (Gries, Martin, van de Snepscheut and Udding
1989). Another application is the efficient evaluation of single-rule Dat-
alog programs with a slight generalization of the Floyd-Roy-Warshall
algorithm (Papadimitriou and Sideri 1999).
P.L. Hammer and S. Nguyen (1977) consider the following logical
problem, which generalizes the question of computing the transitive clo-
sure of a graph: we are given a set of binary relations between boolean

variables Yj, i.e., relations of the form Yh ≤ Yk or Yh ≤ Ȳk (which are
equivalent to Ȳh ≥ Ȳk or Ȳh ≥ Yk, where Ȳk denotes the complement of
Yk). We have to determine the logical closure of these relations, i.e., the
set of all conclusions which can be of the following types:
a) a contradiction
b) some variable Yk takes only value 1 (or value 0)
c) some pairs of variables Yh, Yk may be identified
d) some pairs of variables Yh, Yk are such that Yh = Ȳk.


P. Hansen (Hansen 1976/77) has extended the algorithm of B. Roy
to this problem: observe that none, one or more of the four relations
Yh ≤ Yk, Yh ≥ Yk, Yh ≤ Ȳk, Yh ≥ Ȳk can hold for any pair of variables
Yh, Yk. The subset of relations defines one among 16 states Shk; then a
table can be constructed for the product of states Ski and Sih and for
the sum of states on the same pair of variables. Using these tables, the
extension is straightforward.
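As a rough illustration of the idea, and not of Hansen's 16-state tables themselves, one can encode each relation as implications between the 2n literals and reuse the transitive closure scheme; everything in the sketch below (names, encoding, example) is ours:

def logical_closure(n, relations):
    # Closure of relations u <= v between literals, computed with the same
    # Roy-Warshall scheme on the 2n literals; a literal is (index, value),
    # value False standing for the complemented variable.
    lits = [(i, b) for i in range(n) for b in (True, False)]
    leq = {(u, v): u == v for u in lits for v in lits}      # reflexive start
    for u, v in relations:
        leq[u, v] = True
        leq[(v[0], not v[1]), (u[0], not u[1])] = True      # contraposition
    for w in lits:                                          # transitive closure
        for u in lits:
            if leq[u, w]:
                for v in lits:
                    leq[u, v] = leq[u, v] or leq[w, v]
    return leq

# y0 <= y1 together with y1 <= complement(y0) forces y0 = 0:
leq = logical_closure(2, [((0, True), (1, True)), ((1, True), (0, False))])
print(leq[(0, True), (0, False)])      # True: y0 <= complement(y0), so y0 = 0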
Further extensions to path problems restricted in various ways were
studied in (Klee and Larman 1979).
Turning to the computational improvements of the transitive closure
algorithm of B. Roy, we note that several authors have proposed versions
which reduce the number of computations to a half without however
changing the worst case complexity. This is usually done by using in
turn forward and backward processes; see (Hu 1967), (Bilde and Krarup
1969), (Hoffman and Winograd 1972), (Warren 1975), (Goldman and
Tiwari 1986), (Farbey, Land and Murchland 1967). (Land and Stairs
1967) note that for weakly connected graphs a block structure of the
matrix can be exploited. (Yuval 1975/76) observes that using Strassen's
algorithm for matrix multiplication leads to a transitive closure algo-
rithm with a complexity O(n^2.81). The many extensions of Strassen's
algorithm can be transposed in a similar way.
As expected many adaptations to parallel computing have been pre-
sented for the transitive closure algorithm and for its generalizations to
the all pair shortest path problems. For parallelization of the transi-
tive closure algorithm, see e.g. (Rote 1985), (Zhu 1985) and (Poel and
Zwiers 1993). A recent survey together with new results (also linked to
complexity) is given by (Takaoka 1998).
B. Roy's results in graph theory together with many others are also
presented in a large book, in two volumes (Roy 1969/70); a couple of pa-
pers (Roy 1962/69) survey shortest paths and connected graphs. Their
material is included in the book. Algorithms are covered in detail with

a rare wealth of applications, often based upon case studies done at


METRA's scientific direction, headed by B. Roy for several years.

2. Paths and Colors


In a now classical paper (Roy 1967), B. Roy had the original idea
of linking two seemingly different concepts of graph theory: the length
of a path and the chromatic number. This result was also obtained
independently by T. Gallai (1968).
In fact colors used in graphs are often replaced by integer numbers, so
it may look more natural to examine the connections between colorings
and orientations of graphs; if G is an (oriented) graph containing no
circuits, then by associating to each vertex x a number c(x) which is
the number of vertices on the longest elementary path ending at x, we
obtain a k-coloring of G where k = s(P) is the number of vertices on
the longest (elementary) path P of G.
So, if G is an (oriented) graph without circuits such that every (ele-
mentary) path P has s(P) ≤ k vertices, then G has a k-coloring (i.e., G
is k-chromatic). Observe that in a graph without circuits, all paths are
necessarily elementary. The result of B. Roy and T. Gallai is to extend
this to oriented graphs (possibly with circuits) where one simply requires
that s(P) ≤ k for any elementary path P.

Theorem 7 (Roy 1967): If in a finite oriented graph G there is no


elementary path P with s(P) > k, then G has a k-coloring.
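For the circuit-free case described just before the theorem, the coloring c(x) = number of vertices of a longest elementary path ending at x is easy to compute; a small Python sketch (the example graph and all names are ours) is:

from functools import lru_cache

def longest_path_coloring(succ):
    # For a graph without circuits: c(x) = number of vertices of a longest
    # elementary path ending at x.  Adjacent vertices receive different colors
    # and at most s(P) colors are used, where P is a longest path.
    pred = {x: [] for x in succ}
    for x, ys in succ.items():
        for y in ys:
            pred[y].append(x)

    @lru_cache(maxsize=None)
    def c(x):
        return 1 + max((c(p) for p in pred[x]), default=0)

    return {x: c(x) for x in succ}

# Hypothetical circuit-free example: arcs 1->2, 1->3, 3->2, 2->4.
succ = {1: [2, 3], 2: [4], 3: [2], 4: []}
print(longest_path_coloring(succ))     # {1: 1, 2: 3, 3: 2, 4: 4}

Here the longest path 1, 3, 2, 4 has four vertices, and the construction indeed uses four colors.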

This result combined with the observation that the vertices of a graph
G can always be colored with Δ(G) + 1 colors (where Δ(G) denotes the
maximum degree of G, i.e., the maximum number of arcs adjacent to a
vertex) gives the following:

Corollary 1 The edges of a graph G can always be oriented in such a


way that the resulting graph has no circuit and each elementary path has
at most Δ(G) arcs.

This last result can be strengthened using a theorem of G. Szekeres
and H.S. Wilf (1968): the maximum degree Δ(G) of G can be replaced
by the maximum over all induced subgraphs H of G of the minimum
degree of H.
S. Fajtlowicz (1988, 1999) designed the system Graffiti to obtain auto-
matically conjectures in graph theory. He obtained and proved, in 1993
(see Fajtlowicz 1999), the following variant of B. Roy's theorem for undi-
rected graphs:

Theorem 8 (Graffiti's conjecture 148): Let G be a finite, connected and
undirected graph. Then the chromatic number of G is not more than the
minimum over all vertices v of G of the number of vertices in the longest
elementary path beginning at v.
This result was recently extended to directed graphs by Hao Li (1998).
An equivalent formulation of Theorem 7 due to (Berge 1982) is the
following:
In every finite oriented graph G with chromatic number χ(G) = k,
there exists at least one path P with s(P) = k.
The second result of B. Roy in (Roy 1967) is related to the construc-
tion of a particular orientation of the edges of a graph G with χ(G) = k:
Theorem 9 (Roy 1967): For any graph G with χ(G) = k, one may
orient its edges in such a way that the resulting graph contains no circuit
and has the following properties:
(a) Let S1 be the set of vertices without predecessors in G,
S2 the set of vertices without predecessors in G - S1,
S3 the set of vertices without predecessors in G - S1 - S2, etc.
We thus define a partition of the vertex set into exactly k stable
sets, i.e., a k-coloring of G.
(b) For any vertex x in Sh+1, the chromatic number of the subgraph
Gh(x) of G induced by S1, S2, ..., Sh and x satisfies χ(Gh(x)) =
h + 1.

Fig. 4 (a) shows a 4-coloring of a graph G with χ(G) = 4; in Fig. 4 (b)
an orientation without circuits is given (it is derived in fact from the col-
oring in Fig. 4 (a)) with a 4-coloring having Property (a) of Theorem 9.
Notice that it does not satisfy Property (b) since the subgraph generated
by S1, S2 and x is 2-colorable.
The proof technique of B. Roy consists in starting from a χ(G)-
coloring of G with "colors" 1, 2, ..., χ(G) = k.
Scanning consecutively the vertices with colors k, k - 1, ..., 3, 2, 1 one
tries to assign to each vertex z a color c(z) as small as possible. Then
by orienting each edge [x, y] from x to y if c(x) < c(y) one obtains the
orientation satisfying (a) and (b).
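One possible reading of this procedure is sketched below; the tie-breaking among vertices of equal colour, the data structures and the example graph are ours, and the code is only meant to illustrate the two steps (recolour greedily in decreasing colour order, then orient each edge towards the larger new colour):

def roy_orientation(adj, coloring):
    # From a proper coloring with colors 1..k: rescan the vertices by decreasing
    # color, give each one the smallest color not used by its already-rescanned
    # neighbours, then orient every edge from the smaller to the larger new color.
    order = sorted(adj, key=lambda z: coloring[z], reverse=True)
    c = {}
    for z in order:
        used = {c[y] for y in adj[z] if y in c}
        c[z] = min(h for h in range(1, len(adj) + 2) if h not in used)
    arcs = [(x, y) if c[x] < c[y] else (y, x)
            for x in adj for y in adj[x] if x < y]          # each edge once
    return c, arcs

# Hypothetical example: a 4-cycle a-b-c-d with chord a-c, properly 3-colored.
adj = {'a': ['b', 'c', 'd'], 'b': ['a', 'c'], 'c': ['a', 'b', 'd'], 'd': ['a', 'c']}
col = {'a': 1, 'b': 2, 'c': 3, 'd': 2}
print(roy_orientation(adj, col))

All arcs produced this way go from a smaller to a larger colour, so the orientation is without circuits, and the colour classes are exactly the sets S1, S2, ... of Theorem 9 (a).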
B. Roy gives additional properties of the orientation (Roy 1967):
(c) Any vertex x with c(x) = h has a predecessor in each one of the
sets Sh-1, Sh-2, ..., S1.
(d) Every stable set Sh is (inclusion-wise) maximal in the subgraph of
G induced by Sh, Sh+1, ..., Sk.

Figure 4. Some illustrations of Theorem 9: (a) a graph G with χ(G) = 4 and a
4-coloring; (b) an orientation without circuit in G and the coloring of Theorem 9 (a);
(c) a 4-coloring having Property (b) of Theorem 9.

As an illustration, one may verify that the 4-coloring of Fig. 4 (c) sat-
isfies the above properties. It is interesting to observe that such colorings
are similar in spirit to some solutions of sequencing problems which are
named "squeezed to the left" (Fr: "cale it gauche") which were also stud-
ied by B. Roy (1962). Here we may consider the graph G as associated
to a chromatic scheduling problem as follows:
The nodes correspond to jobs with equal processing times, say 1. An
arc (x, y) means that job x must precede job y. There is a one-to-one
correspondence between k-colorings of G and feasible schedules in k time
units where each job starts at some integer time.
The k-colorings satisfying (a)-(d) are precisely schedules squeezed to
the left: according to (b), if a job x is scheduled at period c(x) = h, it
is because all jobs in S1, S2, ..., Sh-1 need h - 1 periods and x cannot
be scheduled in a period i ≤ h - 1. It is thus a schedule where each job
starts at its earliest date.

Remark 2 The connection between orientations and colorings is also


illustrated in (Hansen, Kuplinsky and de Werra 1997) where mixed col-
orings are defined: a partially oriented graph G is given which con-
tains edges and arcs. A mixed k-coloring is an assignment of colors
c(x) ∈ {1, 2, ..., k} to all nodes x of G in such a way that for each
edge [x, y] the colors c(x) and c(y) are different and for each arc (x, y),
c(x) < c(y). This model is introduced to take into account some schedul-
ing problems where both precedence and disjunctive requirements occur.
In (Hansen, Kuplinsky and de Werra 1997) bounds on a generalized
chromatic number are derived and an algorithm is sketched for partially
oriented trees.

At the end of his paper B. Roy mentions briefly the special case of
perfect graphs G; these are the graphs in which each induced subgraph
H of G satisfies χ(H) = ω(H) where ω(H) is the maximum cardinality
of a clique of H.
In such graphs, the colorings satisfying (a)-(d) have the following
characteristics:
for each vertex x ∈ Sh there exists a clique Kh(x) ∋ x with Kh(x) ∩ Si ≠ ∅
for i = h, h - 1, ..., 1.
Such colorings have been called canonical in (Preissmann and de
Werra 1985) where strongly canonical colorings have also been defined
for strongly perfect graphs (Berge 1984) (a graph G is strongly perfect if
in every subgraph H of G there exists a stable set S such that S ∩ K ≠ ∅
for every inclusion-wise maximal clique K of H).
A coloring is strongly canonical if for any clique K of G there is a
clique C ⊇ K such that C ∩ Si ≠ ∅ for i = 1, 2, ..., min{i | Si ∩ K ≠ ∅}.
As observed in (Preissmann and de Werra 1985), a graph G is strongly
perfect if and only if every induced subgraph of G has a strongly canonical
coloring. The graph G in Fig. 4 (a) is perfect (one may check that the
4-coloring in Fig. 4 (c) is canonical); it is also strongly perfect, as can be
be verified.
Coming back to the general case handled by B. Roy, one observes
that the orientation constructed is such that there is a path P meeting
(consecutively) S1, S2, ..., Sk; in fact, the coloring satisfies the following:
every vertex z with c(z) = h is on a path which meets (consecutively)
S1, S2, ..., Sh.
Starting from this observation, Berge has obtained the following re-
sult:
Theorem 10 (Berge 1982): Let k be the maximum number of vertices
in a path of G. Then for every path P with k vertices, there exists
a k-coloring (S1, ..., Sk) such that |Sh ∩ P| = 1 for h = 1, 2, ..., k.
Furthermore, this coloring is such that: for each x ∈ Sh there is an arc
from Sh-1 to x.
Notice that this does not imply that in any graph G, there exists a
χ(G)-coloring and a path P meeting every color exactly once.
V. Chvatal (1972) has observed that Theorem 7 can be used to derive
a consequence which is a generalization and a simplification of a result
of (Busolini 1971):

Corollary 2 (Chvatal 1972): Let G = (X, U) be a finite oriented graph
(without loops) where the arc set U is partitioned into U1, U2, ..., Uk. As-
sume χ(G) > m1m2 ... mk where m1, m2, ..., mk are positive integers.
Then there exists an integer j with 1 ≤ j ≤ k such that Gj = (X, Uj)
contains an elementary path with mj + 1 arcs.

The proof consists in observing that we obtain a coloring of G by
taking Cartesian products of colorings of the graphs Gj; so
χ(G) ≤ χ(G1) χ(G2) ... χ(Gk)
and there is a Gj with χ(Gj) > mj; hence by Theorem 7, there is in Gj
a path on mj arcs.
The result of B. Roy can also be stated as follows:
In every finite oriented graph G, the maximum number s(P) of vertices
in a path P satisfies s(P) ≥ χ(G).
J.A. Bondy has obtained a result linking the chromatic number to the
length of a longest circuit:

Theorem 11 (Bondy 1976): In a strongly connected graph G (with at
least two vertices), the longest circuit has length at least χ(G).

It is immediate to observe that Theorem 7 can be obtained from


Theorem 11: it suffices to introduce into G a new vertex linked to every
vertex of G in both directions.
Furthermore, one may also observe that Theorem 11 of J.A. Bondy
implies Theorem 3 of P. Camion. Combining the concept of perfectness
with Theorem 7, C. Berge has called χ-diperfect the graphs such that
any subgraph H satisfies the following condition:
given any optimal k-coloring (S1, S2, ..., Sk) of H with k = χ(H), there
exists a path P with |P ∩ Si| = 1 for i = 1, ..., k.
He has shown that every perfect graph and every symmetric graph
is χ-diperfect (Berge 1982). The graph G in Fig. 5 is not χ-diperfect:
there exists a 3-coloring (S1, S2, S3) for which no path P can be found
with |P ∩ Si| = 1 for i = 1, 2, 3.

Figure 5. A graph G with a 3-coloring: S1 = {a, c}, S2 = {b, e}, S3 = {d}.

It is worth mentioning here a companion result of Theorem 7. We


define a path-partition of an oriented graph G as a collection M =
(P1, P2, ..., Pk) of vertex-disjoint paths Pi which partitions the vertex
set of G.
T. Gallai and A.N. Milgram have obtained the following:

Theorem 12 (Gallai and Milgram 1960): If there is no path partition


of G with less than k paths, then G contains a set of at least k non
adjacent vertices.

A set of non-adjacent vertices is a stable set and the maximum cardi-
nality of a stable set in G is denoted by α(G). Theorem 12 amounts to
saying that min |M| ≤ α(G). The graph G in Fig. 5 has a path partition
M = ({a, b, c}, {d, e}) and α(G) = 2.
C. Berge has defined in an analogous way α-diperfect graphs which
are those in which every subgraph H has the following property:
given any maximum stable set S, there exists a path-partition M =
(P1, P2, ..., Pk) with k = α(H) and |S ∩ Pi| = 1 for i = 1, ..., k.
Again perfect graphs and symmetric graphs are shown to be α-diperfect
(Berge 1982). The graph G in Fig. 5 is not α-diperfect:
for S = {a, c}, the only path partition (P1 = {a, b, c}, P2 = {d, e}) in
α(G) = 2 paths is such that P2 ∩ S = ∅.
In general, for an arbitrary graph, the known proofs of Theorem 12
do not imply the existence of a maximum stable set S and of a path-
partition (P1, ..., P|S|) with |S ∩ Pi| = 1 for all i.
We would now like to recall the conjecture of C. Berge which would
unify Theorems 7 and 12.
A path partition M = (P1, ..., Pq) is called k-optimal if it minimizes the
quantity
Bk(M) = min{k, |P1|} + ... + min{k, |Pq|}.

Notice that a 1-optimal path partition M contains a minimum number
|M| of paths.
The Strong Path Partition Conjecture (SPPC) is formulated as fol-
lows:
For every k-optimal partition M = (P1, ..., Pq) of an oriented graph
G there exists a k-coloring of a subgraph H of G such that the number
of different colors on Pi is
min{k, |Pi|}   for i = 1, ..., q.
Fig. 6 shows a 2-optimal path partition M and a 2-coloring of H spanned
by {a, b, c, d}; one verifies that each Pi in M contains vertices of min{2, |Pi|}
different colors. For k = max{s(P) : P path of G}, the SPPC is true:
it is Theorem 10. For k = 1, it was proved by N. Linial (1978).

Figure 6. A 2-optimal path partition M = (P1, P2), with P1 = {a, b, c} and
P2 = {d, e}, and a 2-coloring (S1, S2) with S1 = {b, d} and S2 = {a, c}.

Theorem 13 (Linial 1978): If P = (P1, ..., Pq) is an optimal path
partition in an oriented graph G, there exists a stable set S with
S ∩ Pi ≠ ∅   for i = 1, ..., q.

For the graph G in Fig. 6, if we take P1 = {a, b, c}, P2 = {d, e}, we
can choose S = {a, d}.
The SPPC has been shown to hold for special classes of graphs; it
holds in particular for transitively oriented graphs without circuits, a
result of (Greene and Kleitman 1976), for bipartite graphs as proved
by (Berge 1984) and for oriented graphs where all cycles (and circuits)
are vertex disjoint as shown in (Sridharan 1993).
As can be expected, for graphs without circuits, stronger results can
be derived. We shall mention first the following statement due to K.
Cameron (1986) and M. Saks (1986).
Theorem 14 (Cameron 1986, Saks 1986): Let G be an oriented graph
without circuits and k a positive integer. Then there exists a partial

k-coloring (S1, S2, ..., Sk) such that for every k-optimal path partition
M = (P1, ..., Pq) every path Pi of M meets min{k, |Pi|} color classes.

By interchanging the roles of paths and stable sets, we can obtain a


"dual" result. The analogue of a partial k-coloring for paths is a family
of at most k vertex-disjoint paths (P1, ..., Pq). We may call it a path
k-packing. Its cardinality is |P1 ∪ ... ∪ Pq| and it is optimum if its cardinality
is maximum. R. Aharoni, I. Ben-Arroyo Hartman and A.J. Hoffman
(1985) have obtained the following:

Theorem 15 (Aharoni, Ben-Arroyo Hartman and Hoffman 1985): Let


G be an oriented graph without circuits and k a positive integer. Then
there exists a coloring (S1, ..., Sp) of G such that for every optimum
path k-packing (P1, ..., Pq) every color class Si of G meets min{k, |Si|}
different paths of the packing.

As a final observation we should indicate that the results of (Roy


1967), (Gallai 1968) and (Gallai and Milgram 1960) have been general-
ized and reformulated in terms of hypergraphs by H. Muller (1981).

Conclusion
We have presented in a condensed form the contributions of B. Roy
to graph theory. Our discussion has shown that several of his results as
well as some of the questions raised in his papers have undoubtedly had
an impact on the work of numerous researchers.
Many generalizations and variations have followed; what is now called
the "theorem of Roy-Gallai" has in particular been a source of inspiration
for a number of researchers and we conjecture that it will continue for
many years to come.

Acknowledgments
This research was carried out during a visit of the second author
to GERAD in Montreal. Support of both authors by Grant NSERC
#GP0105574 is gratefully acknowledged.

Note added in proof:


It was brought to our attention by a referee that a striking allusion to
Theorem 7 appears in the "Cymbalum Mathematicorum" of Chevalier
Theo de Biroille (1534). We are pleased to quote an excerpt of this
recently rediscovered manuscript.

Les longs chemins du Roy


Sont fort bien colores;
Il fault pour qu'on le voye
Points et lignes explorer:
Que soyent moult arcs donnes!
Pour toute direction
Qu'on veult bien ordonner
Une coloration
Peut etre ainsy trouvez
Au total de couleurs,
Comme cela est prouvez,
Egal a la longueur
Du plus long des chemins
Qu'on y peut parcourir
A pied ou a la main
A votre gloire, Messire

References
Aharoni, R., Ben-Arroyo Hartman, I., and Hoffman, A.J. (1985). Path partitions and
packs of acyclic digraphs. Pacific J. of Math. 118:249-259.
Bang-Jensen, J., and Gutin, G. (1998). Generalizations of tournaments: a survey.
J. Graph Theory 28:171-202.
Berge, C. (1973). Graphs and Hypergraphs. North Holland, Amsterdam.
Berge, C. (1982). Diperfect graphs. Combinatorica 2:213-222.
Berge, C. (1984). A property of k-optimal path-partitions. In: Progress in Graph
Theory (Waterloo, Onto 1982). Academic Press, Toronto, 105-108.
Berge, C., and Duchet, P. (1984). Strongly perfect graphs. Ann. of Discrete Mathe-
matics 21:57-61.
Berge, C., and Ghouila-Houri, A. (1965). Programming, games and transportation
networks. Methuen, London.
Bermond, J.C., and Thomassen, C. (1981). Cycles in digraphs - A survey. J. of Graph
Theory 5:1-43.
Bilde, O., and Krarup, J. (1969). A modified cascade algorithm for shortest paths.
METRA VIII:231-241.
Bondy, J. A. (1976). Disconnected orientations and a conjecture of Las Vergnas. J. Lon-
don Math. Soc. 2:277-282.

Busolini, D.T. (1971). Monochromatic paths and circuits in edge-colored graphs.


J. Combinatorial Theory 10:299-300.
Cameron, K. (1986). On k-optimum dipath partitions and partial k-colourings of
acyclic digraphs. Europ. J. Combinatorics 7:115-118.
Camion, P. (1959). Chemins et circuits hamiltoniens des graphes complets. C. R.
Acad. Sci. Paris 249:2151-2152.
Chvatal, V. (1972). Monochromatic paths in edge-colored graphs. J. of Combinatorial
Theory B 13:69-70.
Cunninghame-Green, R.A. (1979). Minimax algebra. Lecture Notes in Economics and
Mathematical Systems 166, Springer-Verlag, Berlin.
Cunninghame-Green, R.A. (1960). Process synchronization in a steelworks - a prob-
lem of feasibility. In: Proceed. 2nd International Conf. on Operational Research,
English Univ. Press, 323-328.
Cunninghame-Green, R.A. (1962). Describing industrial processes with interference
and approximating their steady-state behaviour. Operational Research Quart. 13:
95-100.
Dantzig, G.B. (1967). All shortest routes in a graph. Proc. International Symp. on
Theory of Graphs, Rome, Italy 1966, Paris, Dunod.
de Biroille, T. (1534). Private communication.
Deo, N., and Pang, Ch.Y. (1984). Shortest-path algorithms: taxonomy and annota-
tion. Networks 14:275-323.
Fajtlowicz, S. (1988). On conjectures of Graffiti. Disc. Math. 72:113-118.
Fajtlowicz, S. (1999). Written on the wall. Version 9-1999. Regularly updated file
available from clarson@math.uh.edu.
Farbey, B.A., Land, A.H., and Murchland, J.D. (1967). The cascade algorithm for
finding all shortest distances in a directed graph. Manag. Sci. 14:19-28.
Floyd, RW. (1962). Algorithm 97: shortest path. Communications of the ACM 5:345.
Gale, D. (1959). Math Review 20 # 2727 55.00
Gallai, T. (1958). Maximum-Minimum-Sätze über Graphen. Acta Math. Acad. Sci.
Hungar. 9:395-434.
Gallai, T. (1968). On directed paths and circuits. In: Theory of graphs. Proceed. Colloq.
Tihany 1966, Academic press, New York, 115-118.
Gallai, T., and Milgram, A.N. (1960). Verallgemeinerung eines graphentheoretischen
Satzes von Rédei. Acta. Sci. Math. 21:181-186.
Gallo, G., and Pallottino, S. (1986). Shortest path methods: a unifying approach.
Math. Prog. Study 26:38-64.
Giffier, R. (1963). Scheduling general production systems using schedule algebra.
Naval Res. Logistics Quart. 10:237-255.
Ghouila-Houri, A. (1960). Sur l'existence d'un flot ou d'une tension prenant ses valeurs
dans un groupe abelien. C. R. Acad. Sci. Paris 250:3931-3933.
Goldberg, M., and Moon, J.W. (1972). Cycles in k-strong tournaments. Pacific J.Math.
40:89-96.
Goldman, A.J., and Tiwari, P. (1986). Allowable processing orders in the accelerated
cascade algorithm. Discrete Appl. Math. 13:213-221.
Gondran, M. (1975). Path algebra and algorithms. In: Combinatorial Programming:
Methods and Applications. (B. Roy, ed.) NATO Adv. Study Inst. 19, Reidel, Dor-
drecht, 137-148.
Gondran, M. (1996a). Analyse MINMAX. C. R. Acad. Sci. Paris 323:1249-1252.
Gondran, M. (1996b). Analyse MINPLUS. C. R. Acad. Sci. Paris 323:371-375.
Gondran, M., and Minoux, M. (1984). Linear algebra in dioids: a survey of recent
results. Ann. of Discrete Mathematics 19:147-164.
Gondran, M., and Minoux, M. (1997). Valeurs propres et fonctions propres d'endomor-
phismes a diagonale dominante en analyse Min-Max. C. R. Acad. Sci. Paris 325:
1287-1290.
Hammer, P.L., and Nguyen, S. (1977). APOSS. A partial order in the solution space
of bivalent problems. In: Modern trends in cybernetics and systems. Proc. Third
Internat. congr. Bucharest, 1975. Springer, Berlin, pp. 869-883.
Hansen, P. (1976/77). A cascade algorithm for the logical closure of a set of binary
relations. Info. Proc. Lett. 5:50-54.
Hansen, P., Kuplinsky, J., and de Werra, D. (1997). Mixed graph colorings. Math.
Methods of O.R. 45:145-160.
Hoffman, A.J., and Winograd, S. (1972). Finding all shortest distances in a directed
network. Math. of Numerical Computation, IBM J. Res. Develop. 16:412-414.
Hu, T.C. (1967). Revised matrix algorithms for shortest paths. SIAM J. Appl. Math.
15:207-218.
Greene, C., and Kleitman, D.J. (1976). The structure of Sperner k-families. J. Com-
bin. Theory A 20:41-68.
Gries, D., Martin, A.J., van de Snepscheut, J.L.A., and Udding, J.T. (1989). An
algorithm for transitive reduction of an acyclic graph. Sci. Comput. Programming
12:151-155.
Klee, V., and Larman, D. (1979). Use of Floyd's algorithm to find shortest restricted
paths. Ann. of Discrete Math. 4:237-249.
Land, A.H., and Stairs, S.W. (1967). The extension of the cascade algorithm to large
graphs. Manag. Sci. 14:29-33.
Li, Hao (1998). A generalization of the Gallai-Roy theorem, preprint, Universite de
Paris-Sud and Graphs and Combinatorics (to appear).
Linial, N. (1978). Covering digraphs by paths. Disc. Math. 23:257-272.
Muller, H. (1981). Oriented hypergraphs, stability numbers and chromatic numbers.
Disc. Math. 34:319-320.
Naur, P. (1994). Proof versus formalization. BIT 34:148-164.
Papadimitriou, C., and Sideri, M. (1999). On the Floyd-Warshall algorithm for logic
programming. J. Logic Programming 41:129-137.
Poel, M., and Zwiers, J. (1993). Layering techniques for development of parallel sys-
tems: an algebraic approach, computer aided verification. Lecture notes in Comput.
Sci., Springer, Berlin, 663:16-29.
Preissmann, M., and de Werra, D. (1985). A note on strong perfectness of graphs.
Math. Prog. 32:321-326.
Rado, R. (1943). Theorems on linear combinatorial topology and general measure.
Ann. of Math. 44:228-270.
Robert, P., and Ferland, J. (1968). Generalisation de l'algorithme de Warshall. Revue
Française d'Aut. Info. et Rech. Oper. 2:71-85.
Rote, G. (1985). A systolic array algorithm for the algebraic path problem (shortest
paths; matrix inversion). Computing 34:191-219.
Roy, B. (1958). Sur quelques proprietes des graphes fortement connexes. C. R. Acad.
Sci. Paris 247:399-401.

Roy, B. (1959a). Contribution de la théorie des graphes à l'étude de certains problèmes
linéaires. C. R. Acad. Sci. Paris 248:2437-2439.
Roy, B. (1959b). Transitivité et connexité. C. R. Acad. Sci. Paris 249:216-218.
Roy, B. (1962). Cheminement et connexité dans les graphes: application aux problèmes
d'ordonnancement. METRA Série Spéciale No 1 (Mai 1962).
Roy, B. (1967). Nombre chromatique et plus longs chemins d'un graphe. Rev. Info. et
Rech. Opér. 5:129-132.
Roy, B. (1969). Graphe partiel s-connexe extremum. Revue Roumaine Math. Pures et
Appl. XIV:1355-1368.
Roy, B. (1969/70). Algèbre moderne et théorie des graphes. Dunod (Paris, Tome 1:1969,
tome 2:1970).
Saks, M. (1986). Some sequences associated with combinatorial structures. Disc.
Math. 59:135-166.
Shimbel, A. (1954). Structure in communication nets. Proc. Symp. on Information
Networks. Polytechnic Inst. of Brooklyn, 119-203.
Sridharan, S. (1993). On the strong path partition conjecture of Berge. Discrete Math-
ematics 117:265-270.
Szekeres, G., and Wilf, H.S. (1968). An inequality for the chromatic number of a
graph. Journal of Combinatorial Theory 4:1-3.
Tabourier, Y. (1973). All shortest distances in a graph. An improvement to Dantzig's
inductive algorithm. Discrete Mathematics 4:83-87.
Takaoka, T. (1998). Subcubic cost algorithms for the all pairs shortest path problem.
Algorithmica 20:309-318.
Thomassen, C. (1980). Hamiltonian-connected tournaments. J. Combin. Theory B28:
142-163.
Tomescu, I. (1968). Sur l'algorithme matriciel de B. Roy. Revue Française Informat.
Rech. Opér. 2:87-91.
Warren, H.S. (1975). A modification of Warshall's algorithm for the transitive closure
of binary relations. Comm. ACM 18:218-220.
Warshall, S. (1962). A theorem on Boolean matrices. J. of ACM 9:11-12.
Wongseelashote, A. (1976). An algebra for determining all path-values in a network
with applications to Kshortest paths problem. Networks 6:307-334.
Yoeli, M. (1961). A note on a generalization of Boolean matrix theory. American
Math. Monthly 68:552-557.
Yuval, G. (1975/76). An algorithm for finding all shortest paths using N^2.81 infinite-
precision multiplications. Information Processing Lett. 4:155-156.
Zhang, K.M., and Song, Z.M. (1991). Cycles in digraphs - A survey. Nanjing Daxue
Xuebao Ziran Kexue Ban, (Special Issue) 27:188-215.
Zhu, S.Y. (1985). A parallel computation of the transitive closure of a relation using
Warshall's method. J. Shanghai Jiaotong Univ. 19:101-107, 127.
II

PHILOSOPHY AND EPISTEMOLOGY


OF DECISION-AIDING
DECISION-AID BETWEEN TOOLS
AND ORGANISATIONS¹

Albert David
Evry-Val d'Essonne University, France
albert.david@cgs.ensmp.fr

Abstract: There has been a wealth of very varied literature on decisions, decision aid and
decision aiding tools since the initial work carried out by Barnard [1938] and
Simon [1947], which marked the sudden emergence of decision-related issues
in organizational theory. Whether the decisions concern strategy, finance,
marketing or « operations », decision aiding tools have signalled the waves of
rationalisation that have taken place in the history of management sciences and
organizational theory. Our aim here is not to go into the typology of the
different tools, but to explore two particular questions. Do decision aiding
tools have specific structural properties? Which concepts can be used to
analyse and understand the dynamics of their introduction into organizations
and the resulting learning processes? We will begin by asking what a tool is,
what aid it can offer, for which decisions. We will see that knowledge cannot
be produced without tools, that decision-making is a complex process and that
decision aiding is a prescriptive relationship. We will confront decision aiding
tools with two « functional» and two « critical» decision models. We will
then examine the structure and dynamics of managerial innovations in general.
Finally, we will analyse to what extent decision aiding tools are specific
managerial innovations, by their technical substratum, their management
philosophy and their simplified view of organizational relations. We will
conclude by putting into perspective conforming and exploring approaches,
methods for managing change and the nature of learning during the tool
contextualisation process.

Keywords: Decision-aid; Decision aiding tools; Management models; Organisation theory;


Epistemology

1 A first version of this paper was published in French with the title "L'aide à la décision entre
outil et organisation", Entreprise et Histoire, n° 13, 1996.
2 The research activity of Albert David is also with the Centre de Gestion Scientifique (Ecole
des Mines de Paris) and the LAMSADE (Paris-Dauphine University).

1. Which tools, what aid, which decisions?


1.1 Knowledge cannot be produced without tools
The term tool refers to « an object produced to act on materials, to do a
job »3. In comparison with instruments or machines, tools are designed to be
simpler and used directly « by hand» and are therefore, to a certain extent,
controlled by the user. Tools are hence seen as extending and developing
human capacities. Nonetheless, it should be noted that expressions such as
« management instruments» [see Soler, 1993, for example], « management
machines », [Girin, 1981], « management models» [see Hatchuel and
Moisdon, 1993, for example], « management devices» [see Moisdon, 1996,
for example] or « management apparatus» [Hatchuel and Weil, 1992] are
also to be found in management research literature. Whereas a distinction is
seldom made between « tool» and « instrument », the expression
« machine» refers to something that goes beyond and sometimes even
manipulates or enslaves its users, whereas the notions of management
devices or apparatus refer to tools systems that structure the organisation of
collective action. The distinction between model and tool is more delicate. In
a positivist approach to modelling, models can be seen as more prescriptive -
stating the truth - whereas tools are more open - helping to discover it. As we
will see, even the most formal models are now used in a constructivist
rationale, making the distinction between tool and model somewhat
artificial. « Any formalization of organized activity, [...] any system of
reasoning that formally links a certain number of variables within an
organisation, designed to provide information for the different acts of
management» [Moisdon, 1996], can be considered as management tools.
We will retain here that the term tool refers to an object, and therefore to
something that is at least partially separate from its user and which presents
at least a small degree of formalisation. Hence, a tool is not entirely related
to the context in which it is used, which means that it can always be formally
adapted. It can be noted that this idea of the tool as a means of acting, refers
back to the etymology and philosophical meaning of the verb « to inform »,
which means shaping, forming, giving structure and meaning. In other
words, constructing and using a tool involves producing and handling
knowledge. On the other hand, it can be maintained that knowledge cannot
be produced without tools, however simple, informal or inexplicit they may
be. Moisdon's definition, although it was drawn up mainly to refer to tools
that work at least partly on the basis of formal models - this is indicated by
the notion of « variable» - therefore also applies to a list, a double-entry

3 Definition taken from the Robert dictionary.



table, an organization chart, work groups, assessment interviews or


management by objectives contracts.

1.2 Decision-making is a complex process


What is a decision? And if we take a wider definition of management,
what is management apart from making decisions?
As the saying goes, «deciding is what you do when you don't know what
to do ». This is the traditional picture of a decision maker making a decision,
that is closing or, etymologically, «cutting off» the matter. When we do not
know what to do, we either make a decision or we put it off until later, for
example if we consider that we do not have sufficient information on the
characteristics or impact of such and such a scenario. With this conception of
decision-making, the focus is on the instant when the decision is made.
Unless he decides to put it off until later or to gather more information, the
decision maker is alone and exercises his own free will. Even the choice not
to decide is a decision in itself. In this case, the only aid that may be
necessary is psychological support, in the form of encouragement, for
instance. Nonetheless, it is clear that the decision maker's position owes
nothing to chance, as the decision he is making has a story behind it. In the
traditional model of the rational decision maker, the story begins with the
definition of the problem. The decision maker then examines the different
alternatives, then makes his choice. This moment of choice is what we call
the decision; what comes before it is simply a preparatory phase, what comes
after it is merely its management.
This conception of decisions has been subject to various criticisms that
can be summarised in the following points:
even if the moment when the choice is made can be pinpointed, it cannot
be isolated from the building up of alternatives or from the individual
and organisational context of the choice : the decision is a process, not a
point in time;
a decision is not a decree : the process does not end when the choice is
made;
a decision cannot be applied without being transformed or reinterpreted;
there is no first order reality, or single, omniscient decision maker, or
objective optimum [Roy, 1985];
decisions are not linear, nor do they have a single rationality or a single
purpose [Sfez, 1973];
the concepts of "creative ambiguity", translation, "organising by
chance", "overcoding", learning and exploration are central to attempts
to analyse the process of complex decision-making;

players' rationality is not only limited by their individual cognitive


capacities, but also by organisational systems that define and structure
all collective action, including the decision process;
organising implies choosing a specific means of producing knowledge;
in contraposition to Simon's proposal, it can be said that limiting
rationality is a condition necessary to action: organisation conditions
and structures action, but it is not possible to act without being
organised;
a distinction can be made between programmed decisions, non-
programmed but highly structured decisions and non-programmed, ill-
structured decisions [Simon, 1957; Le Moigne, 1974; Mintzberg et al.,
1976]. Programmed decisions «are programmed by automatism
(operational research), with human intervention limited to controlling
that the conditions under which the automatism is applied are correctly
fulfilled; non-programmed, little structured decisions concern partial
models and simulations in which the decision maker's opinion, aided by
the tools, is never eclipsed by the tools; in non-programmed, ill-
structured decisions, the most vital aspect is the understanding of the
problem and the decision maker's capacities are a determining factor »4;
essentially, decisions are not programmed or unprogrammed, structured
or unstructured: on the one hand, solutions can continually be found to
problems that had remained unresolved and, on the other, a decision may
become programmed simply because the players have decided to use
such and such a tool to make their choices, irrespective of whether the
modelling is realistic or not5.

1.3 Decision aiding is a prescriptive relationship


The question of decision aiding leads to a prescriptive relationship. In the
history of management techniques, prescription was strong when modelling
claimed to dictate choices based on universal rationality, independent of the
players. This was true for Taylor and his scientific organization of the
workplace, and also for operational research when it confused optimisation
of models and optimisation of reality. Prescription was weak whenever tools
were designed or used with aiding in mind rather than as a substitute for the
decision maker. In other words, recommendations were all the stronger when

4 H. Bouquin, article « Contrôle », Encyclopédie de gestion, p. 556.


5 For example, the Dow Jones Index is a simple arithmetic average calculated using a limited
number of shares ... but this in no way prevents the index from serving as a reference and
therefore as a reality for a certain number of financial markets.

tools were used in a closed, conforming perspective, and all the weaker
when the tools were used in an open, exploratory perspective6 .
The crisis in operational research that took place during the second half
of the 1970s, when a certain number of tool designers and managers using
them began to have misgivings about the performance of models, can be
analysed as a crisis in the prescription process [Hatchuel, 1996]. Similarly to
a number of other management instrument approaches, one of the major
changes in the field of decision-making, begun at the end of the 1960s [Roy,
1968], was the gradual abandonment of strong recommendations based on
universal rationality. The science of decisions gave way to a weaker form of
prescriptive action, embodied in the idea of a science of decision aiding
[Roy, 1992].
According to Roy [1985], «decision aiding is the activity of the person
who, through the use of clearly explicit but not necessarily completely
formalised models, helps obtain elements of responses to the questions posed
by a stakeholder of a decision process. These elements work towards
clarifying the decision and usually towards recommending, or simply
favouring, a behaviour that will increase the consistency between the
evolution of the process and this stakeholder's objectives and value
system ».
This definition appears to give decision aiding a relatively modest role.
This conception of aiding is, in reality, quite sophisticated. A decision aiding
tool is considered as a model which is «clearly explicit but not necessarily
completely formalised », which helps the person intervening to act in a more
coherent manner with respect to his own objectives and value system.
Naturally, the neutrality is only on the surface: we will see below that this
definition of decision aiding tends to direct the decision-making process in a
particular manner.
In this context, « deciding, or in a wider scope, intervening in a decision-
making process only very rarely implies finding a solution to a problem.
Often, it is a question of finding a compromise or making people accept
arbitration in a conflict situation» [Roy, 1993, p.21]. It can be seen that the
definition of « deciding» is very wide in this case. Nevertheless, it should be
noted that if too wide a definition is accepted, anything that serves to
improve efficiency in decision-making must be considered a decision aiding
tool. In this way, quality circles, a new structure, assessment interviews,

6 The strength or weakness of recommendations also depends on technical knowledge: it is


more difficult for a Marketing Department to «recommend» to the Engineering
Department the specifications for a car to be marketed five years later, than for a mechanic
to « recommend» changes to brake pads [David, 1990].

meetings between corporate management and top executives or a


graphological analysis become decision aiding tools in the same way as
performance indicators, scores or multicriteria methods. In which case,
should reading the Bible, installing a coffee machine or giving bonuses not
also be considered as decision aiding tools?
At this stage we run the risk of watering down the object of our analysis:
if all activities can be attached to a decision-making process, deciding is
nothing other than a permanent activity in organisations, and decision aiding
has no clear limits. The empirical nature of the definitions is very clear: the
term « decision aiding tools» is generally used to qualify formalised tools
coming, for example, from operational research, decision-making theory and
statistics. Similarly, the expression « strategic decision aiding tools » refers
to all the tools born from the wave of instrumentalisation of strategic
decisions [Allouche and Schmidt, 1995]: life curve models, BCG or ADL
matrixes, strategic planning tools, etc.

1.4 Confronting the tools with four decision models


First, let us consider two « functional» decision models: Simon's IDCE
canonical model and Courbon's regulation loop [1980]. Then we will
consider two more « critical» decision models: the garbage can model
[March and Olsen, 1972] and the overcoding theory [Sfez, 1973]7. How can
these four models help us to understand the nature and role of decision
aiding tools?
The « canonical» decision model proposed by Simon makes a distinction
between four interdependent phases: intelligence (I), design (D), choice (C)
and evaluation (E). For our purposes here, we will take this standard model
as a general analytical model, presupposing neither a single decision maker
nor a sequential process: for a given decision, the different phases of I, D, C
and E can co-exist at the same time in different parts of a multi-player
system. In this manner, our use of the model is not incompatible with the
above-mentioned idea that decision-making processes are neither linear, nor
have a single rationality or a single purpose.
In a more systemic perspective, Courbon [1980] viewed decisions as a
four-phase regulation loop :

7 We have taken these four models as examples, to confront functional and critical
approaches. For a more in-depth analysis of the types of rationality at work in decision
models, see, for example, Munier [1994].

Figure 1. The decision process as a regulation loop [Courbon, 1982]: real decision,
observation-measurement (control system), virtual decision (representation, the
understanding of the organisation), and putting into operation-modelling (steering system).

Real decisions, observations and measurements, virtual decisions and


putting into operation-modelling follow on from one another in this order,
but none of the four phases can be considered as the first or the last. Each
element is related to another term, framed in the diagram. The virtual
decision refers to the individual's or the organisation's representation of
things and consists in the « organisation's intelligence», that is its capacity to
build a true vision of how the system of which it is part operates and of the
problems to be solved, and hence to deduce possible solutions. At this stage,
we refer to a virtual decision as the problem has been formulated (e.g. our
sales are falling), an action is considered (e.g. we will increase customer
loyalty), but the real decision has not yet been made. The steering system
enables the virtual decision to be translated in operational terms: the
company can then design all the « levers» required to act (in our example,
offer discount coupons for renewed purchases). The real decision refers to
the action as such; it is the translation of the representation into action, via
the steering system. The control system serves to carry out the necessary
measurements and observations on the parts of the system on which the
decision is supposed to have an effect; it feeds the representation that

generated the decision, by confirming or refuting it (in our example, retail


panel data and information received from the sales force).
The four elements form a loop: the representations impact on the
steering system that enables the virtual decision to be transformed into a real
decision. The control system serves to measure the effectiveness of the
decision and provides information that in turn potentially changes
representations.
Although they postulate that there is a phase of choice - which other
more critical models contest - these representations of decision-making do
go beyond the idea that decision aiding only takes place at the time when the
choice is made. Understanding the situation, drawing up alternatives,
evaluation and, in Courbon's model, representations, steering systems and
control systems are also concerned.
Aiding during the «understanding» phase takes us back to the problem
of clarification present in Roy's definition: the decision makers must be
provided with elements that enrich their way of formulating the problem
with which they are faced and make it more pertinent. Aiding during the
design phase involves helping build alternatives for the choice. It is a
fundamental phase in the process and its instrumentation has only recently
been the subject of rationalisations [Hatchuel, 1994 and 1996; Weil, 1999].
Aiding in the choice as such concerns problems of detailed descriptions of
alternatives and the evaluation of their impact, problems of aggregation and
a more psychological aspect of support (a decision maker's anxiety at having
to make a decision and the inherent responsibility). Aiding in the evaluation
phase concerns, at the same time, difficulties in measuring the effects of the
decisions made and also the learning process, including changes in
representations, sparked off by the confrontation between expected effects
and measured effects.
Going back to Courbon's loop, aiding in the virtual decision matches
Simon's phase of understanding: it involves detecting a problem and
deciding that something must be done about it. Aiding in the «putting into
operation - modelling» phase concerns levers, that is means of action.
Elements such as management by objectives contracts, the structure of an
organisation or the use of dispute procedures are all means of making virtual
decisions operational, that is concrete and real. The «real decision» phase
of the loop is, in decision aiding terms, a «blind spot », as at that point the
ball is in the court of the « environment» receiving the decision. From the
decision maker's standpoint, nothing else happens until the control system
can « read» the effects of the decision. The control phase corresponds to
Simon's «evaluation» phase, in which the aim is to help the decision maker
design an appropriate control system and to interpret the information received from it in a pertinent manner.
Most decision aiding tools address one or several phases of these two models, especially Simon's choice and evaluation phases and Courbon's steering system and control system boxes. However, things become slightly more complicated if we consider that, in many cases, tools suggest problems, that the way in which virtual decisions are made operational has an impact on the way in which the effects of the real decision can be controlled, and that solutions change the representation or the understanding of the problem. It becomes clear that the introduction of a tool in an
organisation represents far more than simply «plugging in» a well-defined
aid procedure at a given time and place in the decision-making process.
Nonetheless, Simon's and Courbon's models do share a functionalistic
approach. They are not only descriptive models, as in both cases an effective
decision-making process is guaranteed by going through each of the four
phases and by respecting the coherence of the overall loop. If we now go on
to look at more critical models, we must change our perspective. To
simplify, the garbage can model holds that problems, solutions and decision
makers meet in an anarchical manner 8. Decision aiding, as defined by Roy,
consists in introducing a certain amount of method into Cohen, March and
Olsen's big garbage can. In the overcoding model, which is more
sophisticated, a decision has the same structure as a story. The notion of
chance - in the sense of the meeting of independent causal chains - is also
present, but Sfez analyses how the overcoding of a code - that is the
rationality of one player - by another code - that is by the rationality of
another player - can produce meaning and innovation. In this model, each
player can instrument his point of view as he wishes, but the decision in the
sense of a « story» can only be aided in two ways: (1) the fact that the
players know that a decision-making process has this structure can clarify
things for them and help them avoid errors that may have occurred if they
had interpreted what was happening to them with a traditional view; (2) co-
operation between the players could be improved at certain moments in the
process, but it is clear that the person initiating this improvement should then
also be considered as a player in the story, with his own rationality and his
own code. In other words, unless it is postulated that there is a supra-rational
co-ordinator, the introduction of such a player - for example a researcher-
player [Hatchuel, 1994] - is part of the decision-making process. This takes
us back to the difficult question of knowing to what extent a descriptive decision model can be used in a prescriptive approach. In other words, can a model fulfil a function of representing reality at the same time as being a guideline for action [David, 1998b]?

8 Although they are in the confined, restricted space defined by the garbage can, meaning that it is not total anarchy.
In the following pages, we must therefore consider decision aiding from a
functional standpoint in order to understand what aid is required by the
players in a decision-making process, but also from a critical standpoint to
take into account the fact that all aids to decision-making involve
intervention, with the sudden arrival of a new player in the process.

2. Structure and dynamics of managerial innovations


2.1 Tools and organisation: an isomorphic structure
Hatchuel and Weil [1992] demonstrated that all management tools 9
comprise three interactive elements: a technical substratum, which is the
abstraction on which the tool is grounded and which enables it to work, a
management philosophy, which is the spirit in which it is planned to use the
tool, and a simplified vision of organisational relations, which offers a basic
glimpse of the main players and their roles in relation to the tool. For
example, expert systems have a technical substratum comprising a rule base,
a knowledge base and an inference engine. The management philosophy
behind these systems is, at least at their outset, the automation of reasoning.
The simplified vision of organisational relations that they implicitly offer
includes experts (who possess knowledge), cognitive scientists (who extract
it) and decision makers (who use it, for example, to make diagnoses). We
have adopted a more general formulation of this conceptual framework
[David, 1998a and b], by introducing the terms of formal model, efficiency
model and organisational model. We can now go back to the difficulty,
mentioned at the beginning of this article, of making a distinction between
tools and models, by putting forward the following hypothesis :
Tools are the expression of a three-tier model: a formal
model, an efficiency model and an organisation model.
Hatchuel and Weil drew up this three-tier structure for managerial
techniques such as the scientific organisation of labour, operational research,
expert systems or computer-aided production management. The distinctive
characteristic of these techniques is that they primarily and explicitly
concern organisational knowledge rather than organisational relations. For
instance, when an expert system is designed, work begins by examining rules, knowledge and reasoning by inference, irrespective of the new relations between the players implicitly assumed by its implementation.

9 Hatchuel and Weil use the term « managerial technique ».
But there are other management tools that primarily and explicitly
concern relations between the players, such as a new structure, for example.
Still others address both relations and knowledge: a management by
objectives contract includes both a contractual relation and knowledge in the
form of objectives. We have therefore applied Hatchuel and Weil's analysis
to managerial innovations in general [David, 1996], distinguishing between
knowledge-oriented innovations, relations-oriented innovations and, as a
continuum of the two, mixed innovations.
In knowledge-oriented innovations, the technical substratum concerns
only the knowledge, and the simplified vision of the organisation concerns
only the relations between the players. In relations-oriented innovations, the
technical substratum is relational and the simplified vision of the
organisation concerns only the knowledge.
It can thus be seen that, implicitly or explicitly, a tool always has a dual
knowledge/relations base, either through its technical substratum or through
its simplified vision of the organisation. Hence, an organisation can be seen
both as a system of relations and as a system that produces knowledge.

[Figure: models and tools on one side, the organisation on the other, each articulating relations and knowledge]

Figure 2. Management tools and organisations have isomorphic relations/knowledge structures

Organisations and tools therefore stem from a limited rationality of the same nature. This will enable us to define the distance between tools and
organisations and help provide a better understanding of what happens when
an organisation adopts a tool - in particular, as we will see in the third part,
when it is a decision aiding tool.
2.2 Distance between tool and organisation and tool contextualisation process
The four possible starting points for the process of introducing
managerial innovations
As we have seen above, a managerial innovation can concern primarily
relations or primarily knowledge. In addition, at the start of the process of
introducing the innovation, details of relations and knowledge mayor may
not be fixed. For example, when management by objectives contracts are
introduced, they may only be defined in outline or, on the contrary, may
have a very elaborate definition from the start, with a list of indicators and
detailed organisation of the procedures for negotiation and discussion
between the persons signing the contracts.
The starting point for the process of introducing managerial innovations
can therefore be represented by a dot on a two-dimensional graph: the
horizontal axis indicates if the innovation concerns relations or knowledge -
or both -, the vertical axis indicates the level of precision to which the
innovation is defined at the start of the process, that is its degree of
formalisation. Figure 3 below indicates four standard situations. A purely
relational framework, such as the new structure decided in February 1990 at
the RATP [David, 1995], is shown in the top left-hand quarter of the
diagram. It is a relational configuration, initially only defined in outline. A
purely knowledge-oriented framework would be shown in the top right-hand
quarter. An example of this framework is a management decision to reduce
stock levels by half - without specifying how this is to be done. If, for
instance, players from marketing and engineering departments in the car
industry [David, 1990] decide to co-operate and if the exact composition of
the group, the frequency of meetings, reporting etc. are defined beforehand,
the co-operation is a relational procedure that can be placed in the bottom
left-hand quarter of the diagram. Finally, it is also possible to address in
detail how the knowledge is to be built up and handled. For example, if the
way in which the personnel is to be evaluated is fixed and if the different
criteria for judgement are precisely defined in advance, including the
aggregation procedures for evaluations on each criterion and, for example,
how the evaluation conditions affect people's careers, then this is a
knowledge-oriented procedure that can be shown in the bottom right-hand
side of the diagram:
[Figure: a two-by-two diagram with relations-knowledge on the horizontal axis and framework-detailed procedure on the vertical axis; the four quadrants are the relational framework, the knowledge-oriented framework, the relational procedure and the knowledge-oriented procedure]

Figure 3. The four possible starting points for the process of introducing managerial innovations

Internal contextualisation and distance between innovation and organisation
Whether a managerial innovation is relations or knowledge-oriented,
whether its initial degree of formalisation is weak or strong, a third variable
will also playa part: the innovation's degree of internal contextualisation.
By contextualisation we mean:
« a state or a specific process of reciprocal transformation of the innovation by the players, and the players by the innovation 10 »

10 This notion of contextualisation is therefore stronger than the more traditional notion of adoption and less ambiguous than that of codification, which refers both to the formalisation of the innovation in manuals and directions for use and to the concept of encoding used in the cognitive sciences in particular.

The internal degree of contextualisation can be defined as the « distance » that exists, at a given time in the history of an innovation in an organisation, between the innovation and the organisation. The greater the
distance between the innovation and the organisation adopting it, the smaller
the degree of internal contextualisation. On the contrary, the nearer the
innovation is to the organisation adopting it, the higher the degree of internal
contextualisation. In qualitative terms, the « distance» not only corresponds
to the gap between the way things operate at present and the way they are
imagined in the future, but also to the time it will take and the difficulties
that will be encountered before the innovation works effectively in the
organisation.
If, at the start of the process, we consider the organisation (relations and
knowledge) on the one hand, and the managerial innovation on the other,
and if we acknowledge that each of the two includes an incomplete vision of
the other (initially, the players have an incomplete vision of the innovation;
the innovations convey a simplified vision of the organisation), then the
contextualisation of a managerial innovation in an organisation can be seen
as a process of cross exploration. At the start, the technical substratum is
controlled or controllable at varying degrees by the players, the managerial
philosophy is understood and accepted at varying degrees by the players, and
the simplified vision of relations or knowledge is to a greater or lesser extent
schematic and removed from current relations and knowledge. All these
factors illustrate a certain « distance» between the innovation and the
organisation. If all goes well, the process will converge towards a full
integration of the innovation and the organisation, at the price of more or
less significant transformations on either side. At that point, the innovation is
fully contextualised, meaning that the technical substratum is working, the
management philosophy is well-adapted and the simplified vision of relation
and/or knowledge has become explicit and complete: it can then be said that
the distance between the innovation and the organisation is equal to zero.

Dynamics of managerial innovations


We have just seen that at the start of the contextualisation process, there
is a certain « distance» between the innovation and the organisation, which
depends both on the innovation itself and on the specific state of the
organisation that is about to adopt it. In principle, this distance will be
reduced until it is near to zero. This process is not necessarily regular or
convergent. There are three dimensions that influence the distance between
an innovation and an organisation: the degree of feasibility of the technical
substratum, the extent to which the management philosophy is pertinent, and
the compatibility between real relations and knowledge (whether they are
espoused or in-use theories, in Argyris and Schön's terms [1978]) and the
simplified vision of knowledge and/or relations conveyed by the innovation.
The process starts to move because tension is generated by the players
comparing the innovation and the organisation. It is this tension that initiates
an exploration process concerning the relations and/or the knowledge
currently stated or in use.
The implementation process for managerial innovations - and in
particular, decision aiding tools - can therefore be visualised in a double
diagram (Figure 4). The first takes into account on the horizontal axis
whether work is done on relations or on knowledge, and on the vertical axis,
the extent to which the definition of the innovation is detailed. The second
serves to visualise the degree of internal contextualisation compared with the
degree of formalisation. Whatever the starting point of the process may be,
the arrival point - unless there is an interruption - will always be in the
bottom right-hand part of the first diagram: whether it is a question of
introducing KOI (knowledge-oriented innovations) or ROI (relations-
oriented innovations), we are interested here in the conditions under which
knowledge is produced and how this is organised. In our analytical diagram,
the effectiveness of the introduction of new relations is therefore analysed
with respect to the pertinence of the knowledge that this produces.
Contextualisation processes for managerial innovation can hence be
visualised by a more or less tortuous path.

[Figure: on the left, the framework-detail / relations-knowledge diagram with the starting point; on the right, the degree of contextualisation (minimum to maximum) against the degree of formalisation]

Figure 4. Visualisation of the process of contextualisation of a managerial innovation (case of a knowledge-oriented innovation presented as a « framework »)

3. Decision aiding between tools and organisation


3.1 Evolution of formalised decision aiding tools
The history of decision aiding tools and their modes of existence is linked
to the history of techniques, to epistemological changes and to the history of
organisations. Just as an invention only becomes an innovation if society
gives it a practical value, a tool only becomes a decision aiding tool if it can
be given an organisational use, even if this use is imaginary.
In concrete terms, decision aiding tools have evolved in three ways:
- The programming process for decisions which were not programmed previously but were highly structured has not stopped: progress in combinatorics and the power of computers, for example, have helped find solutions to problems such as working out timetables which, if they are to match the experience of planning agents, must be carried out using a base of hundreds of thousands of candidate actions, combining mathematical programming and column generation algorithms [Jacquet-Lagreze, 1995].
- The man/machine or decision maker/model interface has been included in modelling, in particular with the development of interactive decision-aiding systems (IDAS); from the simple development of the conversational characteristics of tools, there has been a move towards explicitly taking into account movements backwards and forwards between the programmes' results and the decision makers' reasoning. Amongst the first interactive decision-making software was PREFCALC [Jacquet-Lagreze, 1984], which enabled users to indicate their preferences over a selection of actions, pre-evaluated on several criteria, and then to modify the utility functions calculated by the programme. Later came what were called « interactive methods », that is procedures which explicitly take into account the exploration process and the gradual construction of a structure of preferences [Vanderpooten, 1990]. In this case, it can be said that it was the human/machine interaction that allowed the gradual structuring of a non-programmed, weakly structured decision, or the gradual programming of a highly structured but non-programmed decision. At the time, the most common metaphor for this was the « chauffeur »: the decision maker entrusts the programme with the job of guiding him in exploring the problem (a minimal illustration of this kind of gradually adjusted preference model is sketched after this list).
- Variables that were not traditionally included in formalised models, such as the degree of decentralisation of a structure or the degree of autonomy of a category of players, have been explicitly included in the tools [Erschler and Thuriot, 1992; Saïdi-Kabeche, 1996].
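To make the idea of gradually constructing a structure of preferences more concrete, here is a minimal sketch, in Python, of a weighted-sum value model whose criterion weights the decision maker revises after seeing a first ranking. The actions, criteria, evaluations and weights are all invented for illustration; the sketch is only in the spirit of interactive tools such as PREFCALC, not a description of that software.

```python
# Illustrative sketch only: an additive value model re-ranked after each
# interactive revision of the weights; every figure below is invented.

# Actions pre-evaluated on several criteria, each score rescaled to [0, 1].
evaluations = {
    "a1": {"cost": 0.9, "quality": 0.4, "flexibility": 0.6},
    "a2": {"cost": 0.5, "quality": 0.8, "flexibility": 0.7},
    "a3": {"cost": 0.3, "quality": 0.9, "flexibility": 0.5},
}

def rank(evaluations, weights):
    """Rank actions by a simple weighted-sum (additive) value function."""
    total_weight = sum(weights.values())
    value = {
        action: sum(weights[c] * s for c, s in scores.items()) / total_weight
        for action, scores in evaluations.items()
    }
    return sorted(value.items(), key=lambda item: item[1], reverse=True)

# First pass with the weights initially proposed by the analyst ...
weights = {"cost": 0.5, "quality": 0.3, "flexibility": 0.2}
print(rank(evaluations, weights))

# ... second pass after the decision maker reacts to the first ranking.
weights["quality"] = 0.5
print(rank(evaluations, weights))
```

Each revision followed by a re-ranking stands for one loop of the movement backwards and forwards between the programme's results and the decision maker's reasoning described above.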
There have been two leaps forward in epistemological terms: first, the appearance of multicriteria approaches, which represent far more than simply introducing the optimisation of several criteria as a basis for constructing tools instead of just one; and second, a new awareness of the
fact that tools are not only concerned with conformation but also
exploration. The general rationale of these two changes is to make the use of
tools compatible with complex decision-making processes.

3.2 Structure of decision aiding tools


Do decision aiding tools fall within the province of specific formal
models (technical substratum), efficiency models (management philosophy)
and organisational models (simplified vision of the organisation)?
The technical substrata of decision aiding tools usually call on concepts
and methods of mathematical origin (probabilities, optimisation algorithms,
combinatory techniques, etc.) or, for tools of a simpler form, on particular
ways of organising thoughts, such as lists, double entry tables, tree structures or graphs 11. Many of these concepts and methods are made operational by
the increasingly powerful and rapid means of processing information and
calculations. Management philosophy has also changed, due to the new
possibilities offered by the technical substrata, but also due to the « needs »
of organisations: things such as measuring quality, tracing decision-making
processes, rewarding merit, decentralising decisions, managing variety,
automating diagnostics or anticipating risks can all be targets in successive
rationalisation processes, materialised each time by a series of new tools.
The simplified vision of the organisation - or the theory that the tool
implicitly makes of the organisation in which it is to be introduced - depends
on the general idea that the tool's prescribers have of the organisation, on
their idea of reality and objectivity (epistemology) and, consequently, on the
normative or exploratory role that they intend to give it.
We will now illustrate this with three examples of decision aiding tools :
performance indicators, scores and multicriteria analyses.
Performance indicators have a simple technical substratum: they are
usually lists or cross-entry tables, comprising different indicators that reflect the organisation's business. The level of abstraction is low 12. The management philosophy concerns steering and control: performance indicators are designed to give information on the state of the organisation and on the state of the environment, and also to monitor the internal and external impact of decisions. The simplified vision of the organisation concerns the players and their relationships: basically, it involves a manager, a controller and a co-ordination structure to fill in the table and draw up decisions from the results.

11 See Goody [1977] on the way in which lists, classifications and tables are a « graphical reasoning ».
12 On the other hand, constructing the indicators to be included in a performance indicator can often be very difficult.
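As a minimal illustration of this simple substratum, the sketch below builds such a cross-entry table and the control-oriented comparison with targets; it assumes the pandas library, and all indicators, figures and targets are invented.

```python
# Illustrative sketch only: a cross-entry table of invented indicators, of the
# kind a manager and a controller might fill in and then discuss.
import pandas as pd

indicators = pd.DataFrame(
    {"Q1": [92.0, 4.1, 12.0], "Q2": [95.0, 3.6, 9.0]},
    index=["on-time delivery (%)", "complaints per 1000 orders", "stock cover (days)"],
)
targets = pd.Series([96.0, 3.0, 10.0], index=indicators.index)

# The control side: measure the gap between the latest observations and the targets.
report = indicators.assign(target=targets, gap=indicators["Q2"] - targets)
print(report)
```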
A score has a more sophisticated technical substratum. Traditionally, it
calls on statistical techniques such as discriminant analysis. The method
serves to explain a variable Y using a certain number of so-called « explanatory » variables. The aim is to find one or several equations that
enable the best possible classification of individuals into the different
categories defined by Y, by using knowledge of their characteristics with
respect to the explanatory variables. A certain number of statistical indicators
serve to test the quality of the model. For instance, in decisions to grant bank
credits, a score will help establish a diagnosis of the way in which loan
contracts can be expected to run, whether good or bad. It can be noted that
this involves modelling client behaviour. The management philosophy will
be automatic decision-making if there is a normative approach, or
understanding of client behaviour and assistance in decision-making if there
is a more open approach. The simplified vision of the organisation is in part
linked to the management philosophy: automation of decisions can go hand
in hand with less qualified users in a centralised, and in principle controlled,
universe, whereas a more open approach concerns more modern
management, with more autonomous users and a hierarchy that designs and
puts into practice the general loan strategy in a decentralised universe.
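The mechanics described above can be sketched in a few lines. The example below assumes the scikit-learn library and entirely synthetic loan data (the variables, figures and acceptance threshold are invented); it fits a linear discriminant analysis and shows the two contrasting uses of the resulting score, as an automatic decision rule or as an input to the loan officer's judgement.

```python
# Illustrative sketch only: a credit score fitted on synthetic data with
# scikit-learn's linear discriminant analysis; nothing here is real data.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Explanatory variables for past clients: income, debt ratio, years as client.
good = rng.normal([3000, 0.25, 8], [800, 0.10, 4], size=(200, 3))
bad = rng.normal([2200, 0.45, 3], [800, 0.15, 3], size=(60, 3))
X = np.vstack([good, bad])
y = np.array([1] * len(good) + [0] * len(bad))  # 1 = contract ran well

model = LinearDiscriminantAnalysis().fit(X, y)

# Score a new application: estimated probability that the contract will run well.
applicant = np.array([[2600, 0.38, 2]])
p_good = model.predict_proba(applicant)[0, 1]

# Normative use: an automatic acceptance rule; more open use: hand the
# probability to the loan officer as one element of the diagnosis.
print(f"p(good) = {p_good:.2f}", "accept" if p_good > 0.6 else "refer to an officer")
```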
The technical substratum of multicriteria analysis calls on mathematics that is less formalised than that of a score, although it is in fact based on more sophisticated concepts. It includes notions such as criteria, coherent families of criteria, actions or scenarios, preference thresholds, veto thresholds and independence in the sense of preferences. Relations between actions concern dominance, outranking, indifference and incomparability 13.

13 For full explanations, see Roy [1985] and Roy, Bouyssou [1993].
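To give a flavour of these notions, the following sketch implements a deliberately simplified outranking test combining a concordance level and veto thresholds, in the spirit of, but not identical to, the ELECTRE methods presented in Roy [1985]; all criteria, weights, thresholds and performances are invented.

```python
# Illustrative sketch only: a crude concordance / veto outranking test,
# not the exact ELECTRE formulas; all data below are invented.

criteria = ["cost saving", "service quality", "environmental impact"]
weights = {"cost saving": 0.4, "service quality": 0.4, "environmental impact": 0.2}
veto = {"cost saving": 30, "service quality": 3, "environmental impact": 2}

# Performances of two actions; higher is better on every criterion here.
performance = {
    "a": {"cost saving": 70, "service quality": 6, "environmental impact": 5},
    "b": {"cost saving": 55, "service quality": 8, "environmental impact": 4},
}

def outranks(a, b, concordance_level=0.6):
    """Does a outrank b? A concordance test followed by a veto (discordance) test."""
    concordance = sum(
        weights[c] for c in criteria if performance[a][c] >= performance[b][c]
    )  # the weights sum to 1, so this is already the concordance index
    vetoed = any(
        performance[b][c] - performance[a][c] > veto[c] for c in criteria
    )
    return concordance >= concordance_level and not vetoed

for x, y in [("a", "b"), ("b", "a")]:
    print(f"{x} outranks {y}:", outranks(x, y))
```

With other data both tests can fail in both directions, which is precisely the situation of incomparability mentioned above.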
The management philosophy is expressed, at a first level, in the four issues defined by Roy: choice, ranking, sorting or simply description of
possible actions. But more fundamentally, multicriteria analysis is intended
to aid decision-making by (1) highlighting objective and less objective issues, (2) separating robust conclusions from fragile ones, (3) dispelling
certain forms of misunderstanding in communication, (4) avoiding the trap of false reasoning and (5) highlighting incontrovertible results once
understood [Roy, 1993]. It is quite clear that this is not just a question of
techniques designed to enable reasoning to be based on several criteria
instead of a single one. Admittedly, the management philosophy of the
multicriteria approach borrowed from operational research the idea of
working on a model to help select solutions - as reflected in the four issues
mentioned above - but the origin of these methods should first be sought in
social choice theory. As several authors have noted [for example, Pomerol
and Barba-Romero, 1993; Munier, 1993], there is a conceptual identity
between the issue of the aggregation of the opinions of judges on actions and
that of the aggregation of the evaluations of actions on multiple criteria.
Social choice theories (Borda, Condorcet, Arrow, etc.) were drawn up by
researchers working in the field of political sciences on, to simplify, issues
of democracy such as drafting procedures for the aggregation of individual
votes that best reflected collective demands, and hence the general interest.
This concern can be found explicitly in the multicriteria approach, where it is
a question of introducing as much reason as possible in a decision-making
process, whilst respecting the players' free play. The democratic, honest ideal
conveyed by the multicriteria approach can also be compared with the
principle of isonomy - lack of prejudice - proposed by Hatchuel [1994] to
guarantee the scientific nature of a researcher's intervention in a company
and with the principle of « low normativeness» which, according to
Lautmann [1994], qualifies the Crozierian approach to organisations in which everything takes place as though the player who aids in decision-making is « the ally of the ideal reformers of the system, [...] whether such reformers exist or not » 14.

14 Lautmann, 1994, p. 187: « A low form of normativeness is inserted, which is to address the issue to the sincere, free-thinking decision-maker who is not the sociologist's double but who makes a couple with him ». There is an obvious parallel with Roy's « decision-maker/researcher » couple.

The simplified vision of the organisation may appear not to exist, due to the apparently very general way in which the problem is formulated. No one would disagree that deciding means choosing, sorting, ranking or simply describing potential actions evaluated using a series of criteria. In reality, similarly to performance indicators or scores, the simplified vision of the organisation concerns relations between players. In multicriteria analysis, it focuses on two elements:
- first, a multicriteria table is not simply a multivariate table. Its aim is to assess the performance of actions on dimensions - the criteria - that offer
rigorous comparisons in terms of preferences. The selected criteria
consist in a real theory of the effectiveness of actions;
- second, the procedures for the aggregation of the criteria are the
analytical version of a real negotiation in a multiplayer or multi-
institutional context: the way of comparing actions and of reaching the proposal stage is similar to a process used to seek a compromise. It
represents a theory of the effectiveness of modes of relations between
players.
Given, as we have seen, that an organisation is made up of relations and
knowledge, theories of the effectiveness of action and theories of the
effectiveness of relations between players go to make up a theory of
organisations. Moreover it can be noted that the greatest successes of the
multicriteria approach concern two types of organisational worlds :
- in-house operational research departments, which use the multicriteria
approach, first because the cognitive nature of the problem requires the
use of a tool and second, because the role of this type of department is,
in principle, to provide unbiased studies that are as objective and
rigorous as possible (for example, the research departments at the RATP
used the Electre methods to rank stations in order of priority for
renovation [Roy, Present and Silhol, 1986]);
- multi-institutional co-operation, for example in the field of
environmental management [Maystre, Pictet and Simos, 1994], very
close to the management philosophy (need for a tool to help go beyond
purely political play) and the simplified vision of organisational relations
(responsible citizens in a process of dialogue) described above.

3.3 The dynamics of decision aiding tools


The path followed by the contextualisation of performance indicators, a
score and a multicriteria analysis can be seen on the graph in Figure 4. As
for most knowledge-oriented innovations, the starting point will generally be
on the right-hand side of the diagram, as these tools are explicitly interested
in knowledge and implicitly in organisational relations. Then, it will depend
on the way in which the process is steered and the initial distance between
the tool and the organisation. If it is a management model (Figure 5), the
formalisation and the contextualisation will take place at the same time,
whereas if it is a technocratic model (Figure 6), with « detailed design before
delivery to users », most of the formalisation is done beforehand, at the risk of running into difficulties at the contextualisation stage.

[Figure: in the management model, the path on the framework-detail / relations-knowledge diagram and on the degree-of-contextualisation diagram advances jointly; annotation: a more or less important degree of transformation of organisational relations and formalisation is necessary]

Figure 5. Management model for steering change

[Figure: in the technocratic model, the degree of formalisation is pushed from framework to detail before the degree of contextualisation moves from minimum to maximum; axes: relations-knowledge and framework-detail]

Figure 6. Technocratic model for steering change 15

15 For a detailed explanation of these models, together with the political model and the
conquest model that are not mentioned here, see David [1996, 1998a].

The difference, as far as those steering the process are concerned, is that
in the management model, the tool's implicit relational dimension is
explicitly taken into account in the organisation of the change. For example,
corporate management may ask for performance indicators to be designed in
each entity of the organisation. This is a framework that the players
concerned must then try to draft themselves, in detail. If the management
supports the process and agrees to assess the results only at a later stage, this
guarantees that the formalisation and the contextualisation take place at the
same time. This result will also be obtained if the outside players
(researchers-players, for example) play this role of mediation between the
tool and the organisation.

Conclusion: decision aiding tools between conformation and exploration
There have been changes in the use of decision aiding tools since the end
of the 1960s, with a move from a strongly prescriptive vision of their role to
a far more open one. To use other terms, there is a move from conforming
tools to exploring tools 16. This does not mean that conforming tools really
succeed in making the players' behaviour « conform », as it has been
recognised for a long time that there is a gap between prescribed work and
real work, or to quote Hatchuel's sophisticated formulation [1994], that the
introduction of a tool, however normative it may be, always results in
crossed learning between the prescriber and those targeted by the
prescription. In other words, the idea that a tool was more or less
constraining was mostly in the minds of the prescribers. For example, tools
used in decision-making theory such as decision trees, despite their
apparently normative nature, are in the great majority of cases used with an
exploratory approach to different fields of knowledge. There are two
exceptions, however. On the one hand, when a tool really prescribes a
decision, the organisation can decide to follow the advice given by the tool,
in which case the contextualisation process mostly goes from the
organisation to the tool. On the other hand, the contextualisation process can
fail, with the tool proving to be neither conforming nor exploring. For
instance, if performance indicators are introduced in too technocratic or
centralised a manner, there is a risk that the people designated to provide the
necessary information will do so badly or too late, if they do not feel involved or, on the contrary, feel that there is a threat to their autonomy. Although the tool exists formally, it will fail either to control the business or to explore the conditions under which actions could be effective.

16 The terms « conforming » and « exploring » are borrowed from Moisdon [1997]. The terms « constraining » and « enabling » can be found in particular in Landry [1993].
It is therefore clear that the nature of learning due to the introduction of a
tool, in particular a decision aiding tool, varies depending on whether the
tool is designed with a conforming or an exploring approach, depending on
the model for steering the contextualisation process and depending on the
time at which the tool goes from the design stage to a more autonomous
stage of current use. The three aspects are related: the more the approach is
one of conformation and the greater the chances that the contextualisation is
steered in a technocratic, centralised manner, the more the tool is likely to be
delivered to the users without the co-operation of designers-users that would
enable a sufficiently high level of crossed learning to ensure that the tool
retained part of its exploratory capacities. There is then a risk that the tool
becomes completely autonomous, to such an extent that the players will be
unable to challenge their representations of reality if the context changes. On
the contrary, the more the tool is designed with an exploring angle and the
greater the chances that the contextualisation process is management-
oriented and decentralised, the more the tool will involve a process of
« simultaneous engineering» and result in far richer crossed learning. But if
the players attain this degree of clear-sightedness, up to what point will the
tool still be necessary?
There is a long continuum of tools/use couples ranging from normative
tools to disposable tools. It is no longer a question of comparing real
organisations and organisations that are implicitly driven by tools, but of
drawing up an appropriate theory for steering change, that is, an intervention
theory. The question remains open as to the « right distance» from the
organisation at which the decision aiding tools should be designed and
inserted and how to steer contextualisation. If it is too near or too far, or if
the model for steering the change is out of place, it can fail to create tension
that generates learning.
Finally, we have shown [David, 1996] that methods for the design and
implementation of tools can be considered as tools in their own right, with a
technical substratum, a management philosophy and a simplified vision of
the organisation. The very notion of crossed learning, when viewed in the
context of management philosophy for multicriteria analysis as explained
above, refers back, at a higher level, to the question of the design,
deliberation and steering of projects in a democratic world. In the case of the
business world, shared design of managerial innovations can go as far as
challenging the hierarchy of employment, competence and remuneration. In
the case of a public multi-institutional world, shared design can, implicitly or
explicitly, question the hierarchy of the institutions that decide on the construction of society.

References
Allouche, J. and Schmidt, G. (1995), Les outils de la décision stratégique, tomes 1 et 2, La Découverte.
Argyris, C. and Schön, D. (1978), Organizational Learning: A Theory of Action Perspective, Addison-Wesley.
Barnard, C. (1938), The Functions of the Executive, Cambridge, Harvard University Press.
Boulaire, C., Landry, M. and Martel, J.M. (1996), « L'outil quantitatif d'aide à la décision comme jeu et enjeu », Revue INFOR.
Bouquin, H. (1996), article « Contrôle », Encyclopédie de gestion, Economica.
Courbon, J.C. (1982), « Processus de décision et aide à la décision », Economies et Sociétés, série Sciences de Gestion n° 3, tome XVI, n° 12, décembre, p. 1455-1476.
David, A. (1988), Négociation et coopération pour le développement des produits nouveaux chez un grand constructeur automobile - Analyse critique et rôle des outils d'aide à la décision, PhD dissertation, Université Paris-Dauphine, septembre.
David, A. and Giordano, J.L. (1990), « Représenter c'est s'organiser », Gérer et Comprendre, Annales des Mines, juin.
David, A. (1995), RATP : la métamorphose - Réalités et théorie du pilotage du changement, InterEditions.
David, A. (1996), « Structure et dynamique des innovations managériales », Centre de Gestion Scientifique, cahier de recherche n° 12, juin.
David, A. (1998a), « Model implementation: a state of the art », EURO Conference, Brussels. To be published in European Journal of Operational Research, 2001.
David, A. (1998b), « Outils de gestion et pilotage du changement », Revue Française de Gestion, septembre-octobre.
Erschler, J. and Thuriot, C. (1992), « Approche par contrainte pour l'aide aux décisions d'ordonnancement », in Les nouvelles rationalisations de la production, de Terssac, G. and Dubois, P. (Editeurs), Editions Cépaduès, Toulouse, pp. 249-266.
Girin, J. (1981), « Les machines de gestion », Ecole Polytechnique.
Goody, J. (1977), La raison graphique, Editions de Minuit.
Hatchuel, A. and Weil, B. (1992), L'expert et le système, Economica. English translation published in 1995, Experts in Organizations, W. de Gruyter.
Hatchuel, A. (1994), « Apprentissages collectifs et activités de conception », Revue Française de Gestion, juin-juillet-août.
Hatchuel, A. (1994), « Les savoirs de l'intervention en entreprise », Entreprise et Histoire, n° 7, pp. 59-75.
Hatchuel, A. (1996), « Coopération et conception collective : variété et crises des rapports de prescription », working paper, Ecole des Mines de Paris.
Hatchuel, A. and Molet, H. (1986), « Rational modelling in understanding human decision making: about two case studies », European Journal of Operational Research, n° 24, p. 178-186.
Jacquet-Lagreze, E. (1995), « Optimisation sous contraintes et programmation linéaire », dossier Les techniques de l'aide à la décision, cahiers de l'ANVIE, novembre 1995.
Landry, M., Banville, C. and Oral, M. (1996), « Model legitimisation in operational research », European Journal of Operational Research, 92, 443-457.
Lautmann, J. (1994), « L'analyse stratégique et l'individualisme méthodologique », in : L'analyse stratégique, Colloque de Cerisy autour de Michel Crozier, Seuil.
Le Moigne, J.L. (1974), Les systèmes de décision dans les organisations, PUF.
March, J.G. and Simon, H.A. (1958), Organizations, New York, Wiley and Sons (traduction française : Dunod, 1964).
Maystre, L., Pictet, J. and Simos, J. (1994), Méthodes multicritères Electre. Description, conseils pratiques et cas d'application à la gestion environnementale, Presses polytechniques et universitaires romandes.
Mintzberg, H., Raisinghani, D. and Théorêt, A. (1976), « The Structure of "Unstructured" Decision Processes », Administrative Science Quarterly (June).
Moisdon, J.C. and Hatchuel, A. (1987), « Décider, c'est s'organiser », Gérer et Comprendre, Annales des Mines, décembre.
Moisdon, J.C. and Hatchuel, A. (1993), « Modèles et apprentissage organisationnel », numéro spécial « Instrumentation de gestion et conduite de l'entreprise », Cahiers d'économie et sociologie rurales, n° 28.
Moisdon, J.C. (sous la direction de) (1997), Du mode d'existence des outils de gestion, Editions Seli Arslan.
Munier, B., article « Décision », Encyclopaedia Universalis.
Munier, B. (1994), « Décision et cognition », Revue Française de Gestion, juin-juillet-août.
Pomerol, J.C. and Barba-Romero, S. (1993), Choix multicritères dans l'entreprise, Hermès.
Roy, B. (1968), « Il faut désoptimiser la recherche opérationnelle », Bulletin de l'AFIRO, n° 7, p. 1.
Roy, B. (1985), Méthodologie multicritère d'aide à la décision, Economica.
Roy, B. (1992), « Science de la décision ou science de l'aide à la décision », Revue Internationale de Systémique, Vol. 6, n° 5, 497-529.
Roy, B. and Bouyssou, D. (1993), Aide multicritère à la décision - Méthodes et cas, Economica.
Roy, B., Present, M. and Silhol, D. (1986), « A programming method for determining which Paris Metro stations should be renovated », European Journal of Operational Research, 24: 318-334.
Sabherwal, R. and Robey, D. (1993), « An Empirical Taxonomy of Implementation Processes Based on Sequences of Events in Information System Development », Organization Science, vol. 4, n° 4, November.
Saïdi-Kabeche, D. (1996), Planification et pilotage de la production dans les systèmes productifs multicentriques, thèse de doctorat, Ecole des Mines de Paris.
Sardas, J.C. (1993), « Dynamiques de l'acteur et de l'organisation - A partir d'une recherche intervention sur la gestion du risque bancaire », thèse de doctorat, Ecole des Mines de Paris.
Sfez, L. (1973), Critique de la décision, Presses de la Fondation nationale des sciences politiques.
Simon, H.A. (1947), Administrative Behaviour, Macmillan.
Soler, L.G. (1993), Foreword to the special issue « Instrumentation de gestion et conduite de l'entreprise », Cahiers d'économie et sociologie rurales.
Thépot, J. (1995), « La modélisation en sciences de gestion ou l'irruption du tiers », Revue Française de Gestion, janvier-février.
Vanderpooten, D. (1990), « L'approche interactive dans l'aide multicritère à la décision », thèse de doctorat, Université Paris-Dauphine.
Weil, B. (1999), « Gestion des savoirs et organisation de la conception des produits industriels », thèse de doctorat, Ecole Nationale Supérieure des Mines de Paris.
TALKING ABOUT THE PRACTICE OF MCDA

Valerie Belton
University of Strathclyde, Glasgow, United Kingdom
val@mansci.strath.ac.uk.

Jacques Pictet
Bureau AD, Lausanne, Switzerland
Jpictet@fastnet.ch

Abstract: Most of the literature on MCDA is concerned with the development of aggregation methods and underlying theory. A few key papers address issues
of philosophy and process. Even application-oriented papers are relatively
rare, and these generally concentrate on the particular issue addressed and the
use of MCDA to inform decision-making.
In this paper, we seek to focus attention on more general issues relevant to the
practice of MCDA. We draw on the broader OR / MS and Management
literature. The form of the paper - a dialogue - has been chosen to reflect the
context in which a MCDA practitioner might frequently have to confront these
issues, namely a conversation with a potential client.

Key words: MCDA; MCDM; Nature of organisations; Organisational interventions; Consultancy; Practice

Setting the scene


The formal part of a one-day seminar entitled 'Bringing Consultancy into the 21st Century' has just ended; delegates are relaxing and reflecting in the
bar. We join the conversation of two delegates who have just met for the first
time. One is a senior manager in a public administration (C), the other a
young partner in 'Better decisions plc' (I). As the discussion develops, many
issues of relevance to the practice of MCDA emerge.
C: "That was an interesting seminar. The speaker touched on many issues
relevant to my present situation. Did you have the same impression?"

I: "Yes, definitely ... but perhaps not on the same points! What interested
you most?"

The situation
The real world is a mess
C: "There were different aspects, but the first was when she spoke about
dealing with messy situations. It brings to mind the situation we are facing at
work right now."
I: "What is that, if I may ask?"
C: "Sure... I am a senior manager in a public administration. The current
organisation was formed some years ago, in a merger of three independent
bodies, all operating in the area of food standards. Each of these had its own
culture, sphere of influence, working processes, etc., which they were keen
to protect. To gain approval for the merger, these bodies were given
guarantees about the future, mainly in terms of dedicated resources.
However, changes in priorities and financial cutbacks have led to the
questioning of these guarantees and the search for a more effective use of
limited resources. To achieve this, the vice-director has come up with a new
management structure, which he refers to as "management by project". He
intends to employ consultants to advise him on how to implement his
scheme. The idea seems to be good, but I wonder whether he has a clear
understanding of the possible consequences of implementing his ideas. He
used to work in a private company and thinks that the public sector should
follow the same path. Possibly he is right, but I am not sure that he has
thought through the broader repercussions for the organisation, in particular for working relationships."
I: "In what sense?"

Aspects of complexity
C: "Well, firstly, it makes people feel insecure about the future. Like
many public organisations we have been through various changes such as
this one over the years and almost inevitably undesirable things happen:
people have to apply for their own jobs, they get displaced or even fired.
You never know where you will be at the end of it."
I: "Yes, I've heard of such situations. How does your boss expect his new
policy to work?"
C: "The general idea is to make resources more 'flexible'. This means
that part of the money and people's time will be taken away from their
'department' or 'sector' and allocated to a central resource pool. The
management will decide how best to use these resources by comparing the
projects proposed by the various sectors. Potentially, people will have to
move from one sector to the other, or work on projects put forward by a
sector other than their own. I've already heard some people muttering that
they are experts in area X and it would not be making best use of their
expertise to have to become part of a project team in area Y. Others have
commented on the dangers of fragmenting teams that have been built up
carefully and operated effectively over a number of years. Not to mention
the nightmare of managing individuals' time effectively in such a matrix
structure."
I: "What you are saying is that it is difficult, if not impossible, to
differentiate the problem from the people connected with it 1. I guess it is the
reason why situations are messy."
C: "Not only is it impossible to differentiate the problem from the people involved, everyone involved sees the problem differently - through their own 'frame' 2. However, that is only one important reason. Based on my
experience, I can suggest a couple of additional aspects. A second point
relates to the interdependence between the various people involved, their
objectives, and other related issues. You cannot expect these elements to
form nice little piles waiting for someone to take care of them. Usually, they
are scattered in complete chaos; moreover, they are moving all the time and it is difficult to know where to draw the boundary 3. This leads on to a third
factor contributing to the 'mess' - time itself. Situations are continually
evolving as both the external and the internal contexts change. Furthermore,
people are usually quite poor at keeping track of all these changes and tend
to reconstruct the past according to their beliefs, when asked for
explanations. "

There have been many terms used to capture the complexity of real life
situations. Rittel and Webber [1973] referred to 'wicked problems', Ackoff
[1981] to 'messes' and Schön [1987] differentiated between the 'swamp'
and the 'high ground'. Real life situations are messy for many reasons,
including the following:
- They are dynamic
- Issues cannot be considered in isolation
- Different people perceive the same issue differently
- Different people have different values and objectives

1 Melese [1987].
2 Russo and Schoemaker [1990].
3 Roy [1985] considers this activity more as an art than a science.

Making sense
I: "That's a lot of food for thought! It sounds as if your boss is trying to
achieve a 'mission impossible'. It makes you wonder how anyone ever
manages to make sense of any situation, never mind decide on and
implement change. But of course, the way someone makes sense of the
situation will have a strong influence on how they might try to change it."
C: "Yes. Do you remember when the speaker mentioned various images
- or metaphors - of the organisation? She talked about visions originating
from different academic disciplines: the organisation as a body, a brain or a
machine 5. Moreover, she suggested that a person's mental representation of
an organisation has strong implications for the way they conceptualise
interventions in the organisation. I can easily imagine that consultants tend
to use the models of their adopted metaphor: the surgeon uses a scalpel on a
body, the psychologist therapy on a mentally ill patient, the engineer a
screwdriver on a machine, etc.
I am quite sure that my boss thinks of the administration as a big
computer, which is dedicated to processing information. To a certain extent,
it is so, but I would feel more comfortable with a model that is less
restrictive. To me, each of these visions is describing only one dimension of
a whole."
I: "I don't know a lot about this; but I would tend to describe an
organisation as a "system" - it seems less restrictive than the metaphors you
mention. However, if you use this term there is a danger of people
interpreting it in the sense of cybernetics - that is, as a controllable system.
A more meaningful description is that of a 'negotiated system'."
C: "Yes, it seems a reasonable expression ... and it corresponds well to
my perception of my own institution. I am quite convinced that you cannot
oblige someone to do something they don't want to do. They might not
disobey openly, but they may deliberately attempt to sabotage the process or
through inaction allow the situation to rot 6."

4 Many authors write about ways of understanding, interpreting and making sense of
situations. Weick [1995] brings together much of this discussion in an ongoing
conversation on sensemaking. Weick's view of sensemaking is encapsulated in the well-
known quotation "how can I know what I think until I see what I say". See also Dery
[1983] and Eden [1987].
5 Morgan [1989].
6 Bernoux [1985], following Crozier and Friedberg [1977], specifies that in an organisation,
unlike technical systems, the people are not obliged to co-operate. Thus, there is a need for
some kind of agreement among them. See also Eden [1989].

I: "That's why some authors insist so strongly on commitment 7. If you


don't have it, it's worthless engaging the best consultants, using the best
methods to advise on the best decision. Nothing will ever be implemented if
those involved have not "bought in" to the recommended way forward, or if
it is, it will be at such a cost - in terms of effort - that it makes the whole
exercise a joke!"
C: "Fortunately I have encountered such situations only with respect to
very limited issues."
I: "One of the things that particularly interested me was how the nature of
organisations and the relationships of people within them has changed over
time, as well as the ways in which we perceive and intervene in them."
C: "Yes. I wonder sometimes if consultants are aware of that. More and
more, I have the impression that they concentrate on the type of intervention
they propose, without explicit reference to the nature of the organisation as they, or anyone else, see it."

There are many different ways to view organisations, the way they work
and how decisions are taken within them. The following models are quite
influential:
- Simon [1976] changed the view of rationality within organisations,
shifting from substantive to procedural rationality and introducing the
notion of 'satisficing' (see also Bourgine and Le Moigne [1992]).
- Crozier [1977] founded the school of thought known as the sociologie
des organisations. He analysed the way actors behave in order to increase
their power and influence on the system, highlighting the importance of
negotiated decisions (see also Bemoux [1985]).
- Mintzberg's model [1982] allows for different structures of organisation,
explained by the relative dominance of the different components (X, Y or
Z).
- Morgan [1989] proposes a number of metaphors for organisations and
suggests that one's understanding of how an organisation works and how
to change it is influenced by the image adopted.

Inter-organisational working
I: "I don't know, perhaps you are right... I wonder what the implications
are for activities involving multiple organisations. In my own work, inter-
organisational structures are as important nowadays as intra-organisational

ones: working groups, steering committees and panels are my everyday bread and butter. This raises many interesting questions about the nature of collaboration, the relationships between the organisations and their representatives 8 and the outside legitimisation process 9."

7 Eden [1992].
8 Eden et al. [1996].
9 About the activity of convincing outsiders - both within the represented and not represented organisations - to accept the recommendation, see Pictet [1996]. See also Landry et al. [1996].
C: "Yes, I have seen that increasing in my own activity over the years:
even though we have the legal power to decide on our own, the tendency and
the pressure is to involve a bigger circle of interested parties, including the
public, in the discussion."

Interventions

MCDA as a specific type of intervention


I: "Coming back to your case, how do you think the consultants will
intervene? How will it be decided how to allocate the limited resources to
projects? How will people then be allocated to projects? How will they be
persuaded to adopt the new system?"
C: "Well, I think that persuading people to adopt the new system will be
a very difficult task, given all the issues I described earlier. However, for
part of it he has talked about using a methodology called 'Multiple criteria
decision aid'. Cryptic, isn't it?"
I: "Well, as it happens, not for me. That's exactly my field of
competence! "
C: "What a coincidence!"

What's in a name?
I: "I have to admit that the name tends to put off clients who feel it
sounds terribly academic ... and, in any case, the academics themselves can't
decide what to call it!"
C: "Yes, that's a problem. Names are very important, as any marketing
expert will tell you. It's quite difficult to know what you are letting yourself
in for the first time you employ a new bunch of consultants. I don't know
how some people expect to get work when they can't convey clearly what
type of consultancy they are offering. Why can't the academics decide?"
I: "It is a long story. Historically, the first methods emerged under the
name of Multiple Criteria Decision Making (MCDM) - since they are

8 Eden et al. [1996].


9 About the activity of convincing outsiders - both within the represented and not represented organisations - to accept the recommendation, see Pictet [1996]. See also Landry et al. [1996].

concerned with helping people make better decisions through taking account of multiple factors, that makes some sense. However, one author argued that this name was potentially misleading, as it engendered confusion between the real-world, ongoing process of decision-making, which has to take into account many factors, and the contribution of these methods. So he proposed instead Multiple Criteria Decision Aid (MCDA) - with the emphasis on aiding or supporting the broader process of decision-making10.
For the broader field, the same author proposed, using a similar argument, that if there were to be a science, it could only be a science about how to help people decide, and not a science of the decision itself11."
C: "That's very interesting. But is it just that there are different views on
the name, or are there more fundamental differences between MCDM and
MCDA?"
I: "Hmm. That question would take a long time to answer fully. In fact,
there are many more than two approaches to MCDMlMCDA. These differ,
for example, theoretically in the assumptions they make about the way in
which preferences can be measured, and in the mathematics they employl2.
However, perhaps of even greater significance is the extent to which beliefs
about the nature of models and social intervention influences the way
consultants seek to intervene in a problem situation - this cuts across the
MCDMlMCDA division. For example, one of the most successful MCDA
practitioners bases his work on a theory drawn from the MCDM camp, but
applies this in a manner consistent with the philosophical basis of MCDA.
It's interesting to note - referring back to what we were just saying about
marketing - that he refers to his work as "Decision Conferencing" rather
than MCDN3. On a slightly more frivolous level, I am convinced that the
divide originates partly from the use of different languages: exponents of one
side use the French language as a way to resist!"

Magic potions and pink pills


C: "That reminds me of a well-known cartoon about warriors with big
noses and silly hats resisting the Roman Empire ... What's its name again?
Oh yes, Asterix! So, are these people using a magic potion?"
I: "Your analogy might be more meaningful than you can imagine! I had
a boss once who was always challenging me to prove that MCDA was more
than just a pink pill for people faced with a difficult decision! It can be

10 Roy [1985].
11 Roy [1992, 1994].
12 Roy and Vanderpooten [1996]. Pictet and Belton [2000] try to reduce this gap.
13 Phillips [1990].

difficult to convince potential clients of the benefits they can expect from
MCDA, particularly as these are often intangible, such as shared
understanding. But that takes us back to the nature of consultancy. You
made a comment that you don't know how people can expect to get work
when they are not able to describe what kind of consultancy they are offering
- it sounded as if you were speaking from bitter experience - have you had
someone try to sell you a magic potion which didn't work?"

Models of consultation
C: "No, not really. I was really thinking back to the comments made by
the speaker about different models of consultancyI4 and wondering which
applied in my current situation. One was the client as a purchaser of
expertise from the consultant; I thought that in this case the client would
need to know what they wanted before they could engage a consultant. This
seems to best describe the mode our vice-director is operating in, but as I
said earlier, I'm not sure he has a good enough understanding of our
organisation to know what is needed. In my view, the second model, the
notion of a partnership between the client and consultant, would be better
suited to our needs at the moment. In that model, the consultant helps the
client find out what their problem is and to work towards a solution. I can't
remember the third model ... "
I: "It was the doctor-patient analogy, whereby a consultant is employed
to diagnose what is wrong with the organisation and prescribe a solution. I
remembered that one because I was wondering whether it was an old-
fashioned relationship in which the doctor knew what was best for the
patient or a more modern one in which the patient is allowed to exercise
their own choice in selecting an appropriate treatment. I was trying to match
my own practice to one of the models."
C: "And did you succeed? Anyway, tell me more about yourself and
what you do ... we've been focusing on my problems. You said you were a
MCDA specialist?"
I: "Yes, but a rather junior one. I work with a very small company
"Better Solutions pIc" - just four of us - and I spend most of my time selling
MCDA to potential clients."
C: "Do you mean to imply that you spend all your time selling MCDA
rather than doing MCDA - and I'm still wanting to know what it is!"
I: "I do manage to spend sometime doing, but as I said earlier the
"selling" is a real challenge. At the moment I'm working on a project which

14 Schein [1988].

is advising a University on the selection of a company to develop what they call a Managed Learning Environment, a sophisticated IT system which
integrates existing management and teaching systems to give staff and
students quick and easy access to any information they might need."
C: "So you are also an IT specialist?"
I: "Well, in a way I am because we make use of technology in our
approach to decision aid and we are constantly adapting our software to deal
with new problems. However, we don't offer our clients technical expertise
in IT, we are not "subject experts" in that sense ... in our situation the
technical knowledge comes from within the company. Our expertise is in the
process of managing that knowledge alongside the knowledge of the context
to help the administration agree on the solution that's best for them. They
had six responses to their invitation to tender15 and are now looking to select
the one which best matches their needs. My last project was evaluating
alternative designs for a new civic building, who knows what the next might
be?"
C: "OK ... so your expertise is more oriented towards directing the
process than contributing to the content of a problem? Where does that place
it with respect to our speaker's three models?"
I: "There are two interesting questions there! Actually, there's quite of a
debate in my professional community regarding the first - the extent to
which we intervene in content, as opposed to providing only process
facilitation. Many people argue that MCDA should be just the latter. .. but
for me that highlights two dilemmas. Firstly, I was trained as someone with a
technical expertise in modelling ... I find the whole issue of facilitation,
particularly the idea of working with inter-organisational groups, quite scary.
Where do I get the skills required to do that? And secondly, as we were
discussing earlier on - it's difficult to convey what you're offering to a client
- isn't that even more difficult when you are focusing on process rather than
content? Even though we don't consider ourselves as expert in IT solutions,
or tender evaluation as I said earlier - we do have a good understanding of
the issues and now have a fair bit of experience of working with companies
on those kinds of problems. I guess there's a sense of security in being able
to offer a standard approach ... "
C: "A standard approach? Can that work? I'd say that almost every
problem I face is different. .. that mess I described to you earlier is nothing
like the last one we were in ... How do you find organisations that have your
standard problems?"

15 About government procurement, see Roy and Bouyssou [1993]; Bana e Costa et al. [to be published]; Pictet and Bollinger [2000].

There are different ways in which someone can seek to help through
intervening in a problem situation.

Schein [1988] identifies three models of consultation:


- the purchase of expertise model - the organisation purchases information or a service to fulfil a need which cannot be met internally,
- the doctor-patient model - a consultant is invited to carry out a "health check" on an organisation, or to diagnose the cause of a problem and to prescribe treatment,
- the process consultation model - the consultant works with a client to help them address an issue.

Eden and Sims [1979] discuss three approaches to helping:


- to coerce a client into using methods and solutions devised by the helper,
- to develop empathy with a client in order to represent their vision of the problem,
- to negotiate a definition of the problem together with a client and then to try to help them solve it.

"Selling" MCDA
I: "Well, that's a sensitive issue and it relates to your second question ...
there are many organisations facing issues of the type I described ... but there
is a problem finding enough who are willing to bring us in at the point at
which our expertise becomes relevant. We have good contacts, who we refer
to as our product champions, in a couple of organisations. They know what
we do and how we can help. These people are purchasing our expertise in
the use of MCDA to support decision making. But it is difficult getting into
new organisations. It comes back again to the question of what we're trying
to sell. Is it ourselves? Is it a process? Is it a method? You hit the nail on the
head earlier."
C: "Suppose I asked you to help me to make sense of the problem I
described earlier ... could you do that?"

The nature of models


I: "Well, it certainly does seem to be messy. I used to think - my
mathematical training you know - that the role of a consultant was to
provide an objective view of the issue - to somehow capture the "truth' or
'reality' - that is, to be able to tell you what options are open to you and

which solution would best match your objectives. However, as I'm learning
from my experience and as your problem clearly demonstrates, there are
many different perspectives on the issue - and it sounds as if you don't yet
know what the objectives are."
C: "Does that mean that your models would be useless?"
I: "No ... I think they are still potentially useful. But you have to look at
them in a different light. They don't reflect a tangible 'reality' - it's not as if
there is something physical out there, like a manufacturing process, that we
are trying to capture in a simplified form. Our models are more about
intangibles - values, preferences, priorities - but it's not as if the model is
making explicit, or trying to simplify, something which already exists in someone's mind16... it's more about helping them (or more often helping a
group of people ... so there's more than one mind to worry about!) to
discover what is important to them - to learn about their values (and about
each other's values) - essentially to construct their preferences and to
facilitate their thinking."
C: "OK - that's an important aspect of our problem - but not actually
knowing what we want is only part of the whole issue."
The use of models is central to MCDA. However, the nature and meaning
of models can be perceived differently.

Roy [1992] describes three paths taken to give meaning to knowledge produced in Operational Research / Decision Aid (OR-DA), equally applicable in the more specific context of MCDA:
- the path of realism and the quest for a description for discovering,
- the axiomatic path and the quest for norms for prescribing,
- the constructivist path and the quest for working hypotheses for
recommending.

Landry [1995] describes three different perspectives on modelling:


- The objective view - models as representations of an objective reality,
- The subjective view - models as capturing an individual's mental
representations,
- The constructivist view - modelling as a process to help someone make
sense of a situation.

Phillips [1984] defined the important notion of a requisite model - one which is developed to the extent that no new intuitions about the situation come to light.

16 Roy [1989].

MCDA in the broader context of problem structuring


I: "I appreciate that... and that highlights one of the limitations of our
models... they focus on preference modelling, but don't help with
understanding the broader context, which needs to happen first. This is
another dilemma for us - one which again relates back both to the issue of
marketing and to the model of consultancy. On the one hand, we haven't
learned the skills to tackle the broader issues, to function as the true process
consultants, described by the second model of consultancy. However, I've
been reading some interesting journal articles recently about linking MCDA
and so-called "soft" OR methods - this seems to be a very powerful
combination, an approach such as cognitive mapping17 can be used to surface
and capture material relating to the broad issue then, if it turns out to be
appropriate, MCDA can be used to explore and evaluate detailed options
which are identified. It seems that the key elements of the MCDA model
emerge almost naturally from the mapping process18."
C: "That sounds interesting, now there are two things for you to explain
to me - MCDA and cognitive mapping. Your concern there seems to relate
back to what you said earlier about finding the emphasis on process 'scary'.
What was the other side of the dilemma?"
I: "Well, I suppose you could sum it up by the question 'Can you live by
MCDA alone?' As you pointed out, most issues are 'bigger' than MCDA - although most do have a multicriteria element to them. Even the tender
evaluation problems are broader than simply choosing the best... at some
stage those involved need to specify their requirements, their objectives,
determine how the invitation to tender should be phrased, be aware of the
legal issues, be prepared to respond to challenges to their decision, and so
on. Actually, we developed a small piece of software that helps to choose the appropriate procedure, monitors the deadlines and provides the legal documents.
We are fortunate to have established the contacts I mentioned who call us
in at the appropriate stage of the process. But it could be helpful for an
organisation to be supported throughout the process and helpful for us as a
means of gaining an earlier entry to an organisation - thereby generating
more business. Let's face it: if they have engaged other consultants to help
with the broader process, they are unlikely to call us in when it comes to
making the decision."
C: "I think you are quite right there. However, doesn't that present
another dilemma? You said that MCDA could be used '... if it turns out to

17 Eden [1988]; Eden and Ackermann [1998].


18 Ackermann and Belton [1994, 1999]; Belton et al. [1997]; Bana e Costa et al. [1999].

be appropriate'. What if it doesn't? What if it emerges that something else is
needed?"
I: "That just adds further to the dilemma, doesn't it? I don't think it's
possible for a single person, or even a small team, to cover all the areas of
expertise that might be required. The ethical solution could be to build up a
network of contacts with companies having complementary expertise.
However, I fear that a more common response is to try to fit the problem to
your solution method ... you know the saying 'The danger, if all you have is
a hammer, is to see nails everywhere'19".
C: "Yes - and it relates back to what I was saying earlier about
consultants not being sensitive to the nature of the organisation. Can I pick
you up on something else you touched on earlier, working with groups of
people, how do you do that?"

MCDA for groups


I: "Well, as I've already told you, MCDA is not a complete approach in
the sense that it incorporates formal approaches to deal with all aspects of a
problem. I think it is best referred to as a set of tools that can enrich a
consultant's toolbag. One of the things it says little about is the way of
dealing with people. For instance, some academics insisted that MCDA is
appropriate only for individual decision-makers. Nowadays, the need for
participative decision-making seems to be acknowledged by most of the
scientific community, but there are many different approaches20. The
decision conferencing approach I mentioned earlier engages all participants
simultaneously in the construction of a shared model, whereas other
approaches seek individual opinions and then 'average' them in some way.
However, I know that there are people who argue that it is important to think
about group problem solving quite differently to individual problem
solving21 ."
C: "I'm particularly interested in this aspect because not only do we have
to be concerned about the participation of stakeholders within the
organisation, but nowadays, as I said earlier, there is increasing emphasis on
public consultation and participation. Organising effective and meaningful
public participation is a significant issue for us. I remember reading
something about bringing democracy to public administration using an

19 B. Roy in Colasse and Pave [1997]. To a certain extent, it is very similar to the story of a drunkard looking for his keys below a streetlight to take advantage of the light, even though he knows he lost them somewhere in the dark [Roy, 1985].
20 Belton and Pictet [1997].
21 Sims [1979].

electronic voting device, but I wonder whether it is a good thing or not. On the one hand, it allows one to gather many opinions, but it could threaten the
power of the existing hierarchy - to which I belong by the way."
I: "There are many interesting points in what you say. First, as you may
know, there is no perfect voting procedure - there is even a theorem that
proves it22 • Then, I'm concerned with this so-called egalitarian vision of the
decision - that is one in which everyone's view has an equal 'weight'23. It
seems very far from organisational reality and remains a 'win-loose' game. I
rather prefer the consensus seeking approach, which is based on the
assumption that the people are involved as peers in seeking a solution that
everyone can accept, more of a 'win-win' game 24 ."
C: "All these aspects are, in my point of view, very important for my
everyday work. Unfortunately, we do not discuss them within the
administration and, when outsiders are brought in, they tend to present the
latest management trend as the obvious solution25 • As a matter of interest,
how you would actually go about applying MCDA in a real-world situation
- how might you involve all the interested parties - from a practical
perspective, I mean?"
I: "Well, there is a lot to think about. I think that it is very important that
the process is interactive, that people are active rather than passive
participants and would thus try to get together the key actors in a decision
workshop. At the start of the workshop you need to begin by establishing a
clear agenda - setting expectations for time together. Of course, this can be
renegotiated with the group if necessary. In the other hand, the practicalities
are very important. Thinking back to the magic potion analogy, I often see
myself as a magician when I see all the props I have to bring with me: there
are the ovals (for cognitive mapping), the cards (for certain weighting
techniques), sticky dots for voting, flipcharts, coloured pens, computer,
projector ... Thank goodness, they are getting smaller and more portable
nowadays. Then you can't forget the actual environment - the room layout
and so on26 • And of course, who is invited to be there."
C: "It sounds like quite a show!"
I: "Hmm... maybe, but one that is interactively choreographed rather
than rehearsed, and one in which the audience plays a key role. Part of my

22 Arrow [1951].
23 For a discussion, see Eden [1992]. For the related concept of 'procedural justice', see Eden
and Sims [1979].
24 Maystre and Bollinger [1999].
25 Edwards and Peppard [1994].
26 Huxham [1990].

role as a facilitator is to ensure that everyone participates in and contributes to the process."
c: "What about the participants - who decides who will be there - I
presume space is a restriction?"
I: "It is important that all key stakeholders are represented. But it relates
also to another dimension of your mess that struck a chord with me. The
question of who actually decides - or, from my perspective, who is the client?
Suppose you did engage me to help with your problem - whom exactly
would I be working for? Would it be you? Your boss? A team of senior
managers?"
C: "That's an interesting question. If you manage to convince me that
you could help, then I would have to convince my colleagues that you have
something to offer - casting me in the role of your product champion.
However, it is a shared problem - at the end of the day we have to agree a
way forward as a group."
I: "And who would be paying me?"
C: (laughs) "Of course, we must not forget that! We would have to get
your engagement cleared with the Senior Manager my group reports to. She
is also the person who would have to argue the case with the senior
management team, including the vice-director, for resources to fund any
recommendations we come up with. What prompted you to ask this ... have
you had some difficulties in the past?"
I: "Not exactly ... we have be lucky enough not to have insolvent clients
so far. Actually, my question was more directed to the issue that the people
paying the bill might also want to be part in the process27 • But perhaps it isn't
a major issue within your organisation?"
C: "No, the management controls the money as well ... to a certain
extent."
I: "Another very practical issue, which can cause a lot of headache, is the
contract itself. The more I practice, the more I get concerned about the
number of issues to be specified. It sometimes seems as though the contracts
are getting longer than the reports themselves! But this relates back to our
earlier discussion about setting expectations - often it is very difficult to
know how an intervention will evolve. This can make it difficult to write a
clearly specified proposal and agree a contract - not all organisations are
willing to commit resources on the basis of trust."
C: "Which brings us back yet again to the difficulty of selling your
skills ... Well, it has been very nice meeting you, but I'm afraid I have to go

27 A distinction is often necessary between the demandeur (client) and the decideur
(decision-maker) [Roy, 1985].

now. However, a final question - how do you go about making new contacts?"
I: "Well, now you mention it, I guess serendipity often plays a hand! You
know, a chance conversation in a bar about someone's problem ... I enjoyed
meeting you too, the conversation has prompted me to consider parts of my
activity I hadn't thought about before ... "
C: "Yes, me too. It's a shame that there is no forum where people on the
both sides of the fence could exchange their opinions, problems, etc."
I: "Yes. I guess there are good reasons for that."
C: "Maybe. I guess that if I want something like that I'll have to organise
it myself! Bye now."
I: "Bye. Take care."

Afterword
This conversation, although completely fictional, summarises some of the
major issues a practitioner faces in her or his activity. Our aim in writing the
paper was to bring them to the attention of academics, particularly those who
are more often preoccupied with theory. As we have written elsewhere28, it is
our view that MCDA is a practical subject, which is worthless unless it is
applied, and so research and theoretical developments must be grounded in
practice. The development of theory, its implementation and evaluation in
practice should form a continuous loop, as proposed by Kolb29. It may not be the case that the theory is developed and the practice effected by the same people. However, it is essential that practitioners and theoreticians collaborate and communicate, in order that each is aware of the other's preoccupations, and through synergy to achieve the full potential for MCDA
as a management tool.

References
Ackermann F., Belton V., 1994, "Managing corporate knowledge experience with SODA and V.I.S.A", British Journal of Management 5, pp. 163-176.
Ackermann F., Belton V., 1999, Mixing methods: Balancing equivocality with precision,
Management Science Theory, Method and Practice Series 99/4, University of Strathclyde,
Glasgow.
Ackoff R. L., 1981, "The art and science of mess management", Interfaces 11, pp. 20-26.
Arrow K. J., 1951, Social choice and individual values, Wiley, New York.
Bana e Costa C. A., Antunes Ferreira J. A., Correa E. C., 1999, "A multicriteria methodology
supporting bid evaluation in public call for tenders" (to be published).

28 Pictet and Belton [1997].


29 Kolb [1984).

Bana e Costa C. A., Ensslin L., Correa E. C., Vansnick J.-C., 1999, "Decision support systems
in action: integrated application in a multicriteria decision aid process", European Journal
of Operational Research 112 (2), pp. 315-335.
Belton V., Ackermann F., Shepherd I., 1997, "Integrative support from problem structuring through to alternative evaluation using COPE and V.I.S.A", Journal of Multicriteria
Decision Analysis 6, pp. 115-130.
Belton V., Pictet J., 1997, "A framework for group decision using a MCDA model: Sharing,
aggregating or comparing individual information?", Journal of decision systems 6 (3),
pp. 283-303.
Bernoux P., 1985, La sociologie des organisations: Initiation, Seuil, Paris.
Bourgine P., Le Moigne J.-L., 1992, "Les 'bonnes decisions' sont-elles optimales ou adequates ?", in Bourcier D., Costa J.-P. (Eds), L'administration et les nouveaux outils d'aide a la decision, Editions STH, Paris.
Colasse B., Pave F., 1997, "Entretien avec Bernard Roy: La recherche operationnelle entre
acteurs et realites", Gerer et comprendre / Annales des Mines 47, pp. 16-27.
Crozier M., Friedberg E., 1977, L'acteur et le systeme, Seuil, Paris.
Dery D., 1983, "Decision-making, problem-solving and organizational learning", Omega 11,
pp.321-328.
Eden C., 1987, "Problem-solving or problem-finishing?", in Jackson M., Keys P. (Eds), New
directions in management science, Gower, Hants.
Eden C., 1988, "Cognitive mapping", European Journal of Operational Research 36, pp. 1-
13.
Eden C., 1989, "Operational research as negotiation", in Jackson M., Keys P., Cropper S.
(Eds), Operational research and the social sciences, Plenum, New York.
Eden C., 1992, "A framework to think about group decision support systems", Group
Decision and Negotiation 1, pp. 199-218.
Eden C., Ackermann F., 1998, Making strategy: The journey of strategic management, Sage,
London.
Eden C., Huxham C., Vangen S., 1996, The dynamics of purpose in multi-organisational collaborative groups: Achieving collaborative advantage for social development, Management Science Theory, Method and Practice Series 96/3, University of Strathclyde, Glasgow.
Eden C., Sims D., 1979, "On the nature of problems in consulting practice", Omega 7 (2),
pp.119-127.
Edwards C., Peppard J. W., 1994, "Business process redesign: hype, hope or hypocrisy?",
Journal of Information Technology 9, pp. 251-266.
Huxham C., 1990, "On trivialities of process", in Eden C. and Radford J. (Eds), Tackling Strategic Problems, Sage, London.
Kolb D. A., Rubin I. M., McIntyre J. M., 1984, Organisation psychology: An experimental approach to organisational behaviour, Prentice-Hall, Englewood Cliffs.
Landry M., 1995, "A note on the concept of problem", Organization Studies 16 (2), pp. 315-
343.
Landry M., Banville C., Oral M., 1996, "Model legitimisation in operational research",
European Journal of Operational Research 92, pp. 443-457.
Maystre L. Y., Bollinger D., 1999, Aide a la negociation multicritere, Presses polytechniques
et universitaires romandes, Lausanne.
Melese J., 1987, "Interventions systemiques dans les organisations", Revue internationale de
systemique 1 (4), pp. 457-470.

Mintzberg H., 1982, Structure et dynamique des organisations, Editions d'organisation, Paris.
Morgan G., 1989, Images de l'organisation, Presses de l'Universite Laval et Editions Eska,
Quebec.
Phillips L. D., 1984, "A theory of requisite decision models", Acta Psychologica 56, pp. 29-
48.
Phillips L. D., 1990, "Decision analysis for group decision support", in Eden C. and Radford J.
(Eds), Tackling Strategic Problems, Sage, London, pp. 142-150.
Pictet J., 1996, Depasser l'evaluation environnementale, Presses polytechniques et universitaires romandes, Lausanne.
Pictet J., Belton V., 1997, "MCDA: What Message?", Newsletter of the European Working Group on Multicriteria Aid for Decisions 11, pp. 1-3.
Pictet J., Belton V., 2000, "ACIDE : Analyse de la compensation et de l'incomparabilite dans la decision. Vers une prise en compte pratique dans MAVT", in AMCD - Aide Multicritere a la Decision (Multiple Criteria Decision Aiding), Colorni A., Paruccini M., Roy B. (Eds), Joint Research Centre, EUR Report, The European Commission.
Pictet J., Bollinger D., 2000, "Aide multicritere a la decision. Aspects mathematiques du droit des marches publics", Baurecht / Droit de la construction 2/00, pp. 63-65.
Rittel H. W. J., Webber M. M., 1973, "Dilemmas in a general theory of planning", Policy Sciences 4, pp. 155-169.
Roy B., 1985, Methodologie multicritere d 'aide a la decision, Economica, Paris. (English
version: Roy B., 1996, Multicriteria methodology for decision aiding, Kluwer, Dordrecht.)
Roy B., 1989, "Main sources of inaccurate determination, uncertainty and imprecision in
decision models", Mathematical Computer Modelling 12 (10111), pp. 1245-1254.
Roy B., 1992, "Science de la decision ou science de I'aide Ii la decision ?", Revue
internationale de systemique 6 (5), pp. 497-529. (English version: Roy B., 1993, "Decision
science or decision-aid science?", European Journal of Operational Research 66, pp. 184-
203.)
Roy B., 1994, "On operational research and decision aid", EURO Gold Medal Speech,
European Journal of Operational Research 73, pp. 23-26.
Roy B., Bouyssou D., 1993, Aide multicritere a la decision: Methodes et cas, Economica,
Paris.
Roy B., Vanderpooten D., 1996, "The European school of MCDA: Emergence, basic features
and current works", Journal ofMulti-Criteria Decision Analysis 5, pp. 22-38.
Russo J. E., Schoemaker P. J. H., 1990, Decision traps: Ten barriers to brilliant decision-making and how to overcome them, Fireside, New York.
Schein E. H., 1988, Process consultation (Volume 1): its role in organization development, Addison Wesley, USA.
Schon D. A., 1987, Educating the reflective practitioner: towards a new design for teaching and learning in the professions, Jossey-Bass, San Francisco.
Simon H., 1976, "From substantive to procedural rationality", in Models of bounded
rationality (volume 2), 1982, The MIT Press, Cambridge Mass., pp. 424-443.
Sims D., 1979, "A framework for understanding the definition and formulation of problems in teams", Human Relations 32, pp. 909-921.
Weick K. E., 1995, Sensemaking in Organisations, Sage, Thousand Oaks.
MULTI-CRITERIA DECISION-AID
IN A PHILOSOPHICAL PERSPECTIVE *

Jean-Louis Genard
ULB, FUSL, ISA "La Cambre", Belgium
jgenard@ulb.ac.be

Marc Pirlot
Faculte Polytechnique de Mons, Belgium
marc.pirlot@fpms.ac.be

Abstract In this essay we explore an avenue of reflection on the epistemological status of models and of recommendations deriving from a decision-aid
process conceived in a constructivist perspective. After a brief pre-
sentation of a philosophical framework, that of Habermas's theory of
orders of validity, which enables us to talk of the "true" and the "good"
by defining procedural-type criteria of validity, we attempt to situate
decision-aid within this philosophical perspective.

Keywords: Decision-aid; Decision-making; Hermeneutics; Orders of validity; Rationality; Habermas

1. Introduction
According to B. Roy (Roy 1985, English version, p. 10), decision aiding
is the activity of the person who, through the use of explicit but not nec-
essarily completely formalized models, helps obtain elements of responses
to the questions posed by a stakeholder of a decision process. In his arti-
cle "Decision science or decision-aid science", Roy distinguishes in par-
ticular between the realist and constructivist approaches to decision-aid
(Roy 1992, English version: Roy 1993). According to the first approach,

* An initial version of this text was presented on 12 October 1995 at the 42nd European Multi-Criteria Decision-Aid Working Group Days in Namur, Belgium.

the decision-aid activity is carried out on the assumption that a clearly defined "problem" exists, considered as an objective reality, independent
both of the intervening parties and of the analyst, and which can be iso-
lated from its context. The objective or objectives to be optimised in the
decision share these same characteristics. This means that the models on
which the aid process is based are conceived as needing to describe this
problem as faithfully as possible. In a situation of multiple and conflict-
ing objectives, this approach postulates that it is possible, by applying
standards of rationality and modelling both the decision-maker's pref-
erences and his value system (themselves considered as having a stable
existence), to formulate a global objective that synthesises the different
viewpoints, hence giving a meaning to the notion of optimal solution.
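To fix ideas, one classical way of giving such a global objective a mathematical form (a generic illustration rather than a construct drawn from Roy's text) is the additive model of multi-attribute value theory, in which the viewpoints are synthesised by a weighted sum of partial value functions:
$$
V(a) \;=\; \sum_{i=1}^{n} w_i \, v_i\bigl(g_i(a)\bigr),
\qquad w_i \ge 0, \quad \sum_{i=1}^{n} w_i = 1,
$$
where $g_i(a)$ denotes the evaluation of alternative $a$ on the $i$-th criterion, $v_i$ a partial value function and $w_i$ a trade-off weight; the "optimal" solution is then simply the alternative maximising $V(a)$. The realist approach presupposes that the functions $v_i$ and the weights $w_i$ pre-exist in the decision-maker's mind and merely have to be elicited; it is precisely this presupposition that the constructivist approach, described below, calls into question.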
In the constructivist approach, the model as constructed, the con-
cepts and the procedures are not envisaged as required to reflect a well-
defined reality, existing independently of the actors. First and foremost
they constitute a communication and reflection tool: these models and concepts should allow the participants in the decision process to carry forward their thinking and to talk about the problem.
In his writings, Roy definitely positions himself in the constructivist
paradigm and goes quite far (especially in Roy 1992 and Roy 1993) in
denying any relevance to the concept of a science of the decision. For
him, the concept of a decision science cannot be separated from the real-
ist approach since the object of a decision science cannot be anything else (in his
view) than the quest for an objectively best decision. The authors of this
paper, essentially agree with the constructivist perspective for decision-
aid but it is on Roy's very conception of science that they would express
some reservations. In their view, as they will argue in the sequel, the gap
between scientific activity in the natural sciences and the construction of
a model and a recommendation in a decision-aid process is not so deep.
The purpose of this essay is thus to explore an avenue of reflection on
the epistemological status of models and of recommendations deriving
from a decision-aid process conceived in a constructivist perspective.
Initially, in the next section, we contrast the notions of model in
the realist and the constructive perspective; we discuss the question
of its role and usefulness mainly from the constructive point of view.
The intertwinement in decision-aid models of factual elements and value
judgments is stressed.
In section 3 we present briefly a philosophical framework, that of
Habermas's theory of orders of validity, which specifies procedural-type
criteria of validity for the "true" and the "good". We shall then attempt,
in section 4, to situate decision-aid and the models thereof within this
philosophical perspective. Our feeling is that, even if the decision-aid

(and operational research) models and recommendations cannot pretend to the status of "true" or "good", the way in which they are put
together, in the constructivist perspective, appears to us to reproduce
the procedural conditions that would lead to validity if they were to be
implemented after everyone interested in the decision had taken part in
the process. In other words the validity criteria are of the same type as
for the "true" or the "good" but restricted to the limited universe of the
participants to the decision process. This leads us to the conclusion that
the sort of validity that can be expected is only "local and partial"; the
model and the recommendations are nevertheless by no means arbitrary:
their validity is guaranteed by specific procedural requirements.
In section 5, we try to give a more precise idea of how the concern
for (internal) validity permeates the whole construction process and, in
particular, intervenes in the selection of a model. We also argue that
an internally valid model can be a tool for a decision-maker wanting to
submit a decision to external validation.
Section 6 discusses the role of theoretical results and axiomatic char-
acterisations of methods. After having distanced ourselves from a nor-
mative usage of axioms, we argue that a theoretical understanding of the
descriptive power of the methods helps the analyst to drive the decision-
aid process to building models that are likely to provide a reliable image
of the decision situation, integrating all relevant factual elements as well
as reflecting the decision-maker's way of thinking and system of values.
We finish by showing that similar, yet not identical, questions arise in
other fields (like statistics); this leads us to the conclusion that the final
word has not been said on what a model is and that it could be fruitful
to further investigate this notion in an interdisciplinary spirit.

2. Problem, model and validation in O.R. and decision-aid
2.1. An old debate
As Dery, Landry and Banville (who more or less share the construc-
tivist views) stress (Dery et al. 1993), the construction of models is an
activity of production of knowledge. The philosophical position that one
takes on the nature of knowledge therefore has a clear and considerable impact on the modelling activity and the notion of the validity of a model.
The opposition between the realist and constructivist approaches is
not unrelated to the old debate on the academic application of opera-
tional research techniques (see, for example, the articles by Ackoff (Ack-
off 1979a, Ackoff 1979b), recently republished as "Influential Papers" to
mark the 50th anniversary of the Journal of the Operational Research

Society). We are all too aware of the failure of the "solutions" that
Operational Research (OR) technicians seek to impose in the name of
"optimality". In a concrete problem, a deep understanding and dialogue
are essential between analyst(s) and decision-maker(s) - many essentials
can escape an outside consultant. In particular, who are the intervening
parties and the decision-makers, and what are the objective(s)? The raw
data, when these exist, require interpretation. Here the lack of precision,
the indeterminate state or the inaccessibility of certain data can lead to
the use of one model rather than another. English OR in particular has
been very sensitive to these questions and Rosenhead (Rosenhead 1989)
goes as far as to speak of revolution (in Kuhn's meaning of the term),
calling into question the dominant OR paradigm, i.e. the scientistic
and objectivist conception of decision-making problems (what Roy (Roy
1992), calls the "path of realism"). We will not enter into this debate,
but we will quote (Rosenhead 1989), p. 6, who expresses clearly the
extent to which the decision-making situation is not a given fact:
The clarity of the well-structured problem is simply unavailable, and an
OR approach which asserts otherwise does violence to the nature of the
situation.

2.2. The cognitive and communicational value of models
In all conceptions that depart from the realist path, it is important to
cast light on what is the role of the model, and what is its cognitive and
communicational value. It is clear that, seen from these viewpoints, con-
ceiving the validity of a model in terms of conformity with a particular
reality is, to say the least, insufficient. And moreover, the calling into
question of the empiricist interpretation of the concept of the model is
not limited to operational research or decision-aid; even in a science as
exact as physics, empiricism comes under serious attack from, for exam-
ple, historicist theories like those of Kuhn (Kuhn 1962, Kuhn 1977). In
other words, we are far from unanimity on the concept of the validity of
a model. Readers interested by the concept of validity and by the valida-
tion or legitimisation process in operational research and in decision-aid
can refer to Landry 1998, Landry et al. 1996, Landry et al. 1983 and Le
Moigne 1986. In Dery et al. 1993, they will also find a brief overview of
different epistemological positions on the notion of model and conditions
of validity. In it are described, alongside the empiricist vision and its
falsificationist variant, the instrumentalist conception of knowledge (the
criterion of validity of knowledge being that it is useful) and historicist
and sociological conceptions. When it comes to the practice of valida-
tion of operational research models, Landry and his co-authors (Landry

et al. 1983) defend a validation on several levels: conceptual, logical, experimental, operational and data. These various levels cover both the
communicational and cognitive aspects of the model and its practical
and operational aspects. The idea that the validation of a model should
cover all these aspects appears to be widely accepted by operational
research methodologies, even if, in practice, validation procedures are
probably far from being scrupulously respected. The fact remains that
the question as to the epistemological status of models constructed in
operational research and in decision-aid has not until now received a
clear reply.
If models, and in particular formal or mathematical decision-aid mod-
els, are not representations of an existing reality, one could go as far as
to question their very utility. One naive position upheld in Bouyssou et
al. 2000 is that formal models have a certain number of advantages that
appear crucial in complex organisational or social processes:
• They contribute to communication between the intervening parties
in a decision-making or evaluation process by providing them with
a common language;

• They are instruments in structuring the problem; the process of developing them forces the intervening parties to make explicit
those aspects of "reality" that are important to the decision or
evaluation;
• They lend themselves naturally to "what-if" types of questions,
thereby contributing to the development of robust recommenda-
tions and increasing the intervening parties' degree of confidence in
the decisions.

2.3. True, good, just


In a multi-criteria context (and probably all contexts are), the situ-
ation is complicated by the fact that we need, in a certain manner, to
manage and to move beyond the potential contradictions between value
systems or, to put it in simpler language, a decision can be reached only
by making compromises. Decision-making in general and even more
where several criteria are explicitly taken into account therefore needs
to position itself not only in relation to the sphere of what is true (even
if only to say that it has nothing to do with "what is true"), but in
particular to the spheres of what is "good" or "just", given the ease
with which a decision is described as "good" (or bad). Decision-aid is
also concerned by these two spheres of values as it needs to be able to
handle, on the one hand, "factual" evaluation data (even where these

"facts" are known through the judgements of experts) and, on the other
hand, the preferences of the decision-maker or the value judgements of
the intervening parties (to which the category of "true" does not apply).
This does not, however, mean that the validity criteria of decision-aid
are identical to those of decision-making. We shall attempt to clarify
this point later.

3. Habermas's theories on the orders of validity


In general, texts of an epistemological nature on decision-aid refuse
a certain number of methods, or rather, methodological presuppositions
either because of the excessively objectivist conception of the scien-
tific concepts (realism, optimisation, etc.) or-and the two are often
linked-owing to insufficient interactions or dialogue between analysts
and decision-makers, with a preference, instead, for a constructivist
approach. These comments are of course pertinent, but in our view
they merit being confronted with an additional difficulty, deriving from
the fact that non-reducible forms of rationality are inextricably woven
into any decision-making processes and into any interactions between
decision-makers and analysts.
In order to throw light on this affirmation, we refer to a thesis that
is well known in the human sciences, a thesis associated with the names
of M. Weber first of all, and then J. Habermas, to which we should also
add that of K. Popper.

3.1. Polytheism of values


It is M. Weber, drawing his inspiration almost certainly from the
break-up of metaphysics that Kant's work had announced (and in partic-
ular his three critiques), who was the first to theorise on the dissociation
of spheres of validity, speaking of a value polytheism (Weber 1919).
His general hypothesis is that, contrary to early cultural representa-
tions, which were at once:
• metaphysical and religious (based on a transcendent first principle,
outside the world or human experience);
• substantial (enouncing imperatives for concrete, everyday life);
• unified (not making any dissociation between spheres of validity);
the cultural representations specific to modernity will progressively be-
come:
• secular (explanations will refer to immanent principles-empirical
sciences, moral and political humanism)

• formal (abstract principles, procedures, freedoms, "laws of nature")

• and differentiated, that is that the spheres of the True, the Good
and the Beautiful (the targets of Kant's three critiques) will in
future obey forms of validation and argument having differentiated
underlying logics.

It is this final point that Habermas was to pick up and amplify, but
from a somewhat different angle from that of M. Weber.
Whereas Weber interpreted his hypothesis of the polytheism of values within an irrationalist - or more exactly decisionist - perspective as regards the political-ethical sphere (the choice of values being ultimately unde-
cidable, with reason applying solely to the field of science), Habermas
seeks, on the contrary, to rehabilitate a rationalist perspective, by ad-
mitting not one, but many forms of rationality and validation.
Here is what he writes (Habermas 1970, p. 285):
I will defend the idea that there exist at least four equally original preten-
sions to validity, and that these four pretensions, that is intelligibility,
truth, justness and sincerity form a whole that we can call rationality.

Habermas introduces an important distinction between these four pretensions. For him, in any communication, the pre-condition of intelligi-
bility must first be fulfilled before the three other pretensions to validity
can apply.

3.2. Beyond intelligibility


The earlier remarks on decision-aid processes have not treated these
four pretensions to validity in a differentiated fashion. The presentation
of the model in the constructive approach mainly as a communication
tool has brought us close to the hermeneutic paradigm. The questions
that the actors pose and the problems that they encounter are always
already interpretations. Through these they give meaning to the envi-
ronment of which they are an active part and to the actions that they
undertake within it. Seen from this viewpoint, the analyst's work would
appear to be to produce a "second" level interpretation, which depends
both on the "first" level interpretations and on specific methodological
contributions which make it possible to place these under a different
light.
This work probably takes place under a double horizon. It is first
of all the horizon of "translation", a paradigm that is abundantly anal-
ysed in the hermeneutic tradition (e.g. Gadamer 1960, p. 230 et seq.).
Such "translation" would nonetheless appear to involve a technicisation

of language, and hence to depart, at least partially, from the environ-


ment of "natural language". But, by reason of the very structure of
the request, this horizon of decision-aid is also that of clarification, or
what Rorty calls the "recomposition of beliefs" (Rorty 1991, p. 105 et
seq.), a paradigm that he also proposes for research in general, from an
anti-essentialist position.
In the context of decision-aid, the only way of ensuring the validity
of the modelling would appear to be through asking questions like "is
that what you meant?", "have I understood you correctly?", "does this
feel like you?". In other words, the validation of the model presupposes
the analyst's ability to step back from technical language to natural lan-
guage. It being understood that the acceptance of the model presumes
that the person(s) at the source of the initial discourse accept(s) the ex-
istence of a distance from the translated text (but which we can assume
to be implicit in the request for aid).
The insistence on the hermeneutic paradigm, as well as reference to
the idea of "translation", have thus tended to give the impression that
the essential questions in the interaction between the analyst and the
decision-maker(s) concern above all the demand for intelligibility. Whilst
these questions are without doubt essential, it seems to us that decision-
aid goes further than this initial pretension. It is this that we have
tried to evoke by speaking both of "translation" and of "clarification".
What we find, across the different analysis methods, are pretensions to
validity which go beyond the mere translation of the decision-maker's
preferences, thereby taking us out of pure hermeneutics. In fact, the
dialogue with the decision-maker contributes to rendering these preten-
sions explicit and reconstructing them from the angle of the "good"
decision. This question becomes that of knowing what the expression
"good decision" in fact covers.

3.3. Specific forms of rationality


Leaving to one side for the moment the requirement of intelligibility,
the importance of which we have stressed, it is interesting to define more
precisely the status that Habermas confers on the three other pretensions
to validity, so as to seek to clarify what type of requirement for validity
the decision-aid process would fulfil.
The results of Habermas's analyses can be summarised in the table
below.
In Habermas's eyes, this differentiation is the result of a gradual learning process, contemporaneous with the process of rationalisation that has marked the history of modern and contemporary societies.

Table 1. Spheres of values and the corresponding pretensions to validity

Sphere of values | Corresponding autonomised activity | Corresponding world | Characteristic statements | Pretension to validity
True | Science | Objective | Descriptive or observational statements (we see that) | Truth
Good | Morality / law / politics | Social | Regulatory or prescriptive statements (you must; one ought) | Normative justness
Beautiful | Art | Subjective | Expressive statements (I feel) | Sincerity

An apprenticeship process that is integrated today in the way we construct our experiences.
Corresponding to each of these spheres are therefore different forms of
discursiveness, argumentation, calling into question. For this reason, the
statements specific to each sphere must be validated by specific paths:
When the intelligibility of a statement becomes problematic, we pose questions like: What are you wanting to say? How should I understand you? What does that mean? The answers given to these questions we call interpretations. When it is the truth of the propositional content of a statement that is problematic, we ask questions like: are things like you say they are? Why is it like this and not otherwise? To these questions we reply with assertions and explanations. When the justness of the norm underlying the act of verbalisation causes a problem, we pose questions like: Why are you doing this? Why did you not act differently? Are you entitled to do that? Ought you not to act differently? We answer with justifications. Finally, when we wish to cast doubt, in an interactive context, on the sincerity of a person opposite us, we ask questions like: is he trying to deceive me? Is he mistaken about himself? True, we do not address this sort of question to persons who do not appear to us to be worthy of faith, but to third parties. At most the interlocutor whom we suspect of being short on sincerity can be "questioned" or, in an analytic dialogue "led to think" (Habermas 1970, pp. 286-287).
Manifestly such questions are tackled in the dialogue between the
decision-aid specialist and the decision-maker. Beyond the requirements
of intelligibility, this process makes use of intertwined forms of rational-

ity, none of which, however, appears to be able to account for the process
as a whole.

3.4. Habermas's positions
Before going further and addressing the more specific question of the
relationship between these theories of Habermas and the questions posed
by decision-aid, we would like to draw the reader's attention to three
points that seem to us to be important.
1 Through his proposals, Habermas seeks to distance himself from
two positions that were very widespread during the 20th century:
• a positivist or scientistic rationalism which tends to reduce
normative questions to factual, technical or similar questions,
that can then be decided according to the forms of rationality
specific to the sphere of the true (a trend which was very certainly present in decision-sciences from the outset and which is still very likely present today);
• decisionism according to which ultimate choices of values are in any event irrational and boil down therefore to subjective and non-argumentable preferences (a position which would refuse
any ambition to produce more just norms). This was the
position of M. Weber himself (Weber 1919).
2 Very generally, Habermas adheres to what he calls a consensus
theory of truth. For him, the truth of a proposition is intrinsically
linked to the ability to justify it in a discussion. This means that
he rejects theories which define truth as conformity with an ob-
ject considered as given. This consensual theory of truth applies,
for him, very certainly both to the sphere of truth itself and to
that of normative justness. However, given the nature of the types
of rationality specific to each of these spheres, he insists on the
fact that if the results of practical discussions can lead to chang-
ing social reality, theoretical discussions cannot be directed against
reality (nature) itself, but only against false affirmations about re-
ality (Habermas 1970, p. 296). In this way he gives an intrinsi-
cally constraining dimension to the objective world that gives it
its specificity, but without thereby adhering to a realist vision
of scientific theories.
3 With regard to the validation of regulatory statements (sphere of
the "good"), his reflections were to lead him to a rehabilitation
of discursive practice, argumentation, and public debate. For him,

a norm can be validated only via a public discussion that meets
a certain number of procedural demands leading to the "submis-
sion" to the best argument. These demands include, for example:
freedom of speech, absence of threat, equal access to the right to
speak, etc.

3.5. Habermas versus Kant


In a certain way, Habermas rejoins the Kantian "criterion" of universal
applicability as it appears in the categorical imperative: act solely in
such a way that you can at the same time wish that it become a universal
law (Kant 1797, p. 136). Whilst clearly situating himself within the
Kantian tradition, Habermas nonetheless distances himself from Kant
on a number of points. We will emphasise more particularly two of
these.
On the one hand, unlike the philosopher of Königsberg, Habermas sets
out to base normative validation on effective discussions, i.e. on inter-
subjectivity in practice. In Habermas's eyes, Kant remains the prisoner
of a philosophical tradition which leads him to conceive the categorical
imperative and the use of the golden rule on the model of a monologal
experience of decentration. Rather than have normative validity arise
out of the effective meeting of the interests involved, Kant seeks to think
himself into the place of the "other". For Habermas, on the contrary,
normative validation requires the effective participation of the actors in
question. This is mainly for two reasons:
• because it is only the effective meeting of the interests involved
that can create the conditions for true decentration, that is, the
shift from instrumental or strategic rationality to communicational
rationality;
• because effective discussion, and the joint arriving at an agreement
or a decision, is a participative learning process which confers on
this agreement a legitimacy which it cannot hope to achieve by
other paths that, compared with it, appear authoritarian.
On the other hand, via his consensual approach to truth, Habermas is
proposing a "weakened" conception of the Kantian demand of universal
applicability, in so far as those questions where standards are involved
clearly do not always call for the agreement of a "universal audience"
but simply that of the actors concerned. In other words, the question of
normative validity becomes contextualised.
If therefore we follow Habermas, the validation of a decision involving
normative dimensions (sphere of the good) requires reflection about (but
also, as soon as we pursue an ideal of normative justness, with) all the
players concerned by the decision and hence the possibility of calling into
question the context (in particular the institutional context) in which
this decision is built.

4. Orders of validity and decision-aid


Let us now examine, in the light of Habermas' theories, certain ques-
tions posed by decision-aid. These questions can be asked at several
distinct levels.
An initial level relates to the products of decision-aid. In order to sim-
plify things, we will consider in general the situation where the process
takes place between an analyst who masters the modelling and reso-
lution methods and technologies and a decision-maker who knows the
concrete context and will take responsibility for the decision. In what
conditions will a model developed in this context and the recommenda-
tions that are formulated be judged as valid? What we have here is an
"internal" validity, "not applicable to third parties", that is to say that
concerns only the analyst and the decision-maker. The more difficult
cases, at the same "internal" level (for example when the decision has to
be taken by a committee or if the information is held by distinct groups
of intervening parties), will hardly be envisaged here.
At the "external" level we can also pose the question of the validity
of the model and the recommendations. To what extent do these take
on a value in respect of third parties, that is persons who have not taken
part in the decision-aid process? We will revert to this question later.

4.1. An inextricable mixture of facts and judgments
It goes without saying that by reason of the situation in which the
decision-aid work takes place, the analyst finds himself confronted with
contexts in which are mixed up together questions of intelligibility (due
for example to the technical nature of the vocabulary), data that has the
pretension to be factual, statements that are intended to be normative
(for example hierarchies of values) to which are added questions as to
the sincerity of the interlocutors (do they really think this or are they
saying it for strategic reasons?). In this way the analyst finds himself
confronted with a series of positions containing multiple pretensions to
validity, all to be taken into account in developing his recommendations
for the decision-maker. This means that the decision-aid work cannot
be considered from the angle of truth or normative justness only. For
example, in his relations with the decision-makers, the analyst can be
confronted with questions as to his sincerity, questions in respect of
which his techniques, methods and other models give him little or no
validation instrumentality, referring him back to processes that refer to
"common sense" or what Habermas calls the experienced world.
However, our concern here is essentially the inextricable involvement
in the process of questions relating to pretensions to truth and norma-
tive justness, questions that have the property of posing themselves in
a discursive and dialogal manner (which is not the case when it comes
to the pretension to sincerity since one can doubt whether the fact of
asking someone whom one suspects of lying whether or not he is sincere
can convince us of anything whatsoever). The interaction that comes
into being between the analyst and the decision-maker of course in-
terferes with the initial context in which the expectation of a decision
emerges. On top of which, in a certain way, the analyst injects his own
demands for validity in the decision-making process. For example, the
consistency of a particular procedure, the possibility of applying or not
a particular procedure to a particular situation, the congruence between
the techniques used and the decision-maker's ways of reasoning, and so
on. From this interaction between analyst and decision-maker(s) will
come statements that will generally take the form of prescriptions or
recommendations, statements the form of which is more characteristic
of the sphere of normative justness.

4.2. Neither true nor just


In fact, the question that we would like to raise is that of the sta-
tus of these statements. To what type of validity can they really lay
claim? If it is indeed likely that the request made of the analyst by the
decision-maker(s) corresponds to one or more expectations of validity,
the question arises: What type of expectation can in fact be legitimately
honoured by the analyst?
By way of reply, we would advance two elements:
1 It does not appear to us first of all that one can attribute to them
a pretension to truth, for at least two reasons:
• their aim is not (except if we fall back again into the mis-
takes of truth as conformity with reality, for example in a
realist vision in which the right decision were to pre-exist the
decision-making process) to translate or to transcribe ade-
quately what takes place in reality, which would then permit
an empirical confrontation.
• the decision-aid process very certainly includes statements of
fact having a pretension to truth (for example, the cost of
a product), but imbedded in regulatory or evaluatory state-
ments, in the form of preferences and value hierarchies. More-
over, given that its intended purpose is to produce recom-
mendations or prescriptions, decision-aid expresses itself, as
we have said, in forms whose pretension to validity is taken
rather from the second sphere. To accord these forms any pre-
tension to truth would be tantamount to reducing normative
rationality to scientific rationality. This is the mistake com-
mitted, for example, by technocratic political theories.
2 Even though they take normative forms, one cannot either, it
seems to us, recognise that the recommendations to which the
decision-aid processes lead meet the requirements of a pretension
to normative validity since, manifestly, the interaction between
the decision-makers and analysts does not guarantee the procedu-
ral conditions required for the construction of a normatively just
decision. For this, the analyst would have to adopt a critical posi-
tion and, for example, allow himself to demand that the debate be
opened up to actors that are excluded from it, whilst being con-
cerned by it. This is a question that appears largely outside the
concerns at the heart of decision-aid practice.

4.3. A validity that is hypothetical and conditional
The models and recommendations coming from the decision-aid pro-
cess can therefore in all probability not be recognised a priori as having
a "external" validity. Indeed, recognising such validity would be tanta-
mount to unjustifiably extending a pretension to validity to a situation
in which an audience is "concerned" but has not been canvassed for
its opinion. In reality, the "validity" of decision-aid models and results
remains essentially hypothetical and conditional, with these two terms
making reference at one and the same time:
• to the value systems of the decision-makers, the normative validity
of which is generally not called into question, but at most subjected to
clarification processes which can, it is true, influence preferences;
• to their acceptability or admissibility by the decision-maker who
will be required to take the final decision and hence assume a
responsibility that is not that of the analyst;
• to their admissibility by the analyst whose role is to guarantee the
congruence of the model with the available data and the decision
maker's way of reasoning and system of values.
4.4. Procedural conditions of validity


The decision-aid process probably includes intrinsically a hermeneu-
tic dimension, the validation of which no doubt runs up against the
specific difficulties involved in validating hermeneutic procedures. How,
indeed, can we be sure that someone else's understanding can be con-
sidered as a "good" understanding, or as a "valid" understanding? In
reality, the validation of the understanding procedure takes place in the
discursive exchange with the decision-maker, an exchange supported by
methodological processes, and in which validation takes the form of the
construction, falteringly and step-by-step, of an agreement. First and
foremost this is a task of clarification and elucidation, within a system
of expectations that the process can, moreover, contribute to partially
reconstructing.
The construction of models during a decision-aid process has then to
beat out a path for itself which is in conformity with these requirements
of conditionality and admissibility, at the same time avoiding the traps
of abusive interpretations of the pretensions to both truth and norma-
tive justness, that is, either of slipping into realism or into a conception
of truth as conformity with reality, or deceiving oneself as to the pre-
scriptive dimension of the proposals and conferring on them a normative
justness which the use of decision-aid procedures clearly does not guar-
antee.
Such conclusions, which in fact refer to the process viewed as a whole,
appear to us to be imperative. They may well appear disappointing. At
least they are a call to modesty. The fact remains, however, that the
communications with the decision-maker, as well as confrontation with
the various logical systems underlying the proposed models (we come
back to this item, rather extensively, in section 6 below) can most cer-
tainly contribute to meeting certain requirements of validity, which we
would happily qualify as "partial" or "local", but within the context and
within the limits that we have sought to map out. Partial or local, be-
cause limited to the parties intervening in the process (and not discussed
by the entire public concerned). Partial or local also because based on
imprecise, uncertain and incomplete data and on interpretation of the
information gathered on preferences. One can suppose that this work of
validation, even if partial or local, will affect the validity of the decision
itself, if only via the potentially justificatory effect on it of this work of
clarification and elucidation. In other words, the modesty that we claim
for our constructivist position does not imply any relativism.
5. Description and interpretation of a constructivist approach
Since validity is to be sought for in the process, not in the results or
the model, we examine more in detail the decision-aid process, mainly
from the point of view of the analyst, in order to identify the validity
requirements specific to such process.
Let us place ourselves in a constructivist approach, with reference
for example to Roy (Roy 1992, pp. 513-514), where this approach is
defended in opposition to the path of realism:

Taking the constructivist path consists of considering concepts, models,
procedures and results as keys that are capable (or not) of opening cer-
tain locks that may (or may not) be suitable for organising and carrying
forward a situation. Here, concepts, models, procedures and results are
envisaged as tools that are useful for elaborating and developing convic-
tions as well as communicating about the foundations of these convic-
tions. The objective is not to discover a truth that exists outside the
actors involved in the process, but to construct a "set of keys" which
will open doors to them and enable them to move forward, to progress
in accordance with their own objectives and value systems.

5.1. Constructing a model of global preference


Let us stress here that the model is conceived as a communication
tool and let us note that this approach is not limited to multi-criteria
problems. Cases in which several viewpoints (several objectives, con-
tradictory interests) have to be taken into account do, however, pose
special problems. Let us place ourselves in the simple case of interac-
tion between an analyst and a decision-maker. What we need to do is
to help the decision-maker to arbitrate between values or interests that
underlie different viewpoints. This arbitrage can happen informally in
an interaction in natural language or, on the contrary, by constructing
a formalised model. Let us suppose that we envisage using a formal
model. By formal model we understand here an explicit representation
of the partial (single-criterion) preferences, together with a process of
aggregating these partial preferences. Let us observe that such mod-
els are not simple communication tools, but have operational properties
in that they serve to implement procedures which, in a particular de-
cisional situation, operate a "synthesis" of the decision-makers' partial
preferences. Let us imagine that, at a certain stage in the interaction,
the analyst proposes a methodology leading to a representation of the
partial preferences and a procedure for aggregating these. Very gener-
ally, the analyst will propose, for example, evaluating the alternatives
on scales relative to each viewpoint, and then aggregating these eval-
uations in order to construct a representation of the alternatives on a
global scale. Or alternatively, the analyst will propose considering each
pair of alternatives in turn and weighing the arguments in favour of the
global preference of one over the other. The options taken at this stage
are practical and methodological in nature, but have implications as to
the information that one can hope to draw from the model. The first
option moves in the direction of constructing a global utility function (or
value function), the second, in the direction of a method operating on
the basis of pair-wise comparisons, for example, an outranking method.
In the first case, if the required information can be supplied in a reliable
manner, the global utility obtained will suggest a total ranking of the
alternatives, in the second case, the structure obtained after aggregating
partial preferences, for example an outranking relation, will be generally
further from a ranking.
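To fix ideas, here is a minimal sketch, in Python, of the two routes just described; the alternatives, scales and weights are invented for illustration only and do not come from the text. The first route aggregates the evaluations into a global score, from which a complete ranking always follows; the second compares alternatives pairwise by weighing the viewpoints for and against, yielding a binary relation that need not be a ranking.

# Minimal illustration of the two modelling routes discussed above.
# All alternatives, scales and weights are fictitious.
alternatives = {            # evaluations on three viewpoints (higher is better)
    "a": (7, 2, 5),
    "b": (4, 6, 4),
    "c": (5, 5, 3),
}
weights = (0.5, 0.3, 0.2)   # importance coefficients / trade-offs

# Route 1: a global evaluation on a single scale; a complete ranking follows.
def global_score(evals):
    return sum(w * e for w, e in zip(weights, evals))

ranking = sorted(alternatives, key=lambda a: global_score(alternatives[a]), reverse=True)
print("Ranking induced by the global score:", ranking)

# Route 2: pairwise comparison, weighing the viewpoints in favour of each
# alternative; the resulting relation need not be complete or transitive.
def pairwise_preferred(x, y):
    pros = sum(w for w, ex, ey in zip(weights, alternatives[x], alternatives[y]) if ex > ey)
    cons = sum(w for w, ex, ey in zip(weights, alternatives[x], alternatives[y]) if ey > ex)
    return pros > cons

relation = [(x, y) for x in alternatives for y in alternatives
            if x != y and pairwise_preferred(x, y)]
print("Pairwise preference relation:", relation)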
If the decision-maker decides to reflect within the framework of the
proposed modelling methodology, the analyst will guide the process to
the point where all the inputs and parameters of a procedure for aggre-
gating partial preferences have been fixed. One can be led to reconsider
the choice of approach at any time if the information that is being asked
for cannot be obtained in a sufficiently natural manner or if the "way of
speaking" about the problem used by the analyst in the framework of
the methodology appears to the decision-maker to violate the "nature
of things" or his way of seeing them. Note that a formal model may
imply consequences that were not explicitly discussed with the decision-
maker during the elaboration of the model; if the analyst is aware of
such consequences he may ask the decision-maker whether he feels in
agreement with these and this may improve the confidence in the model
or on the contrary question its validity. In any event, if we arrive finally
at the end of the construction of the model, it is generally by means of
an iterative process, by trial and error, and the parameters of the model
can only be considered as finally fixed in a given decisional situation (in
particular with a fixed set of alternatives) when, in the prescription of
the model itself, nothing appears unacceptably counter-intuitive to the
decision-maker and the latter is sufficiently convinced of the solidity of
the result (for a presentation of the many aspects requiring validation
during and after the elaboration of a model, see Oral and Kettani 1993).
The description of the construction process that has been sketched out
above shows fairly clearly that the validity of the model lies in proce-
dural demands for genuine dialogue between the decision-maker and the
analyst ("Do we understand ourselves?" "Is there not perhaps a misun-
derstanding as to the meaning of the words, the concepts?", etc.). The
model is finally judged as being valid when the decision-maker and the
analyst find "nothing more to re-say", in particular when the decision-
maker does not have the feeling that the results drawn from the model
or the process itself violate his value system or his perceived structure
of preferences. To demand that the decision-maker not feel any contra-
diction with his value system appears to us to be difficult to determine
unequivocally in terms of spheres of validity, in particular because the
interaction itself is a process of clarification and learning which proba-
bly contributes to a partial reconstruction of the perception of reality,
of normative expectation, and even subjective preferences. In itself the
process can force a process of decentration and of taking on board of
points of view which would not otherwise have been taken into account.
Here we measure the extent to which, whilst maintaining a strong de-
mand for validity, we move away from a realist conception of this. The
"agreement" between the final model and the decision-maker's values
does not make the model into a representation of these pre-existing val-
ues or preferences. There is no pretension that the preferences should
take a particular form (ranking, etc.); the model constructed in this way
has no pretension to universality. Simply, the decision-maker admits the
pertinence of the model as constructed and of the prescription in the
particular decision-making situation and considers that these do not run
counter to his values; the model is hypothetical in nature.

5.2. Consensus within science


This does not take us as far away from the kind of validity, in the sphere of
the true, that is required in the exact sciences as might appear. In Habermas's
theory, truth is the result of a consensus: if, based on the known facts
and, as the case may be, on specially designed experiments, a scientific
community is of the opinion that it can accept a theory or not seriously
fault it, this theory will be considered as true. In order to be considered
as true, a theory does not necessarily have to be capable of explaining all
the facts in its domain; rather those that it is capable of explaining have
to constitute a set of arguments that are strong enough to convince the
scientific community to accept it, and there must not exist important
facts that are too contradictory with the theory and for which another
alternative theory is at hand which can also sufficiently explain these
and other facts possibly explained by the first theory. This type of
situation has been studied in detail, in particular by Kuhn, with respect
to the emergence of the theory of relativity, a theory which has both
encompassed and gone beyond Newtonian mechanics. This does not
mean that the classical mechanics were wrong, but that their area of
validity is limited to speeds that are low in respect to the speed of light.
Nor has it changed the concept of time in its everyday use. Another
aspect, illustrated by the particle and wave interpretations of quantum
mechanics, is that two descriptions of the same reality can exist side
by side (the wave-particle duality) and both prove pertinent in different
contexts.

5.3. Internal validation


These examples move away from the concept of a truth as conformity
and plead, it seems to us, for a consensus-based concept of truth. They
bring us closer to the concept of validity as used for decision-aid models
which have to be accepted by the parties, this acceptance being based
on an examination of the behaviour of the model in real and imaginary
situations and either consolidating participants' trust in it or provoking
a revision of the model. The behaviour of a decision-aid model may be
judged inadequate where it contradicts either the "data" (for example
the evaluations of alternatives by recognised experts) or the decision-
maker's preferences or value system. We are not far from the experi-
mental method and also the thought experiments dear to Einstein. One
difference is that the validity is tested in the limited circle of the partic-
ipants in the process and not opened up to the criticism of the scientific
community. A second point is that the "facts" that can enter into contra-
diction with the model are of two orders: objective facts (the evaluations
or objective characteristics of the alternatives) or the decision-maker's
value judgements. From the analyst's viewpoint, these two orders of
"facts" both represent constraints; they intervene in the various levels
of validation (conceptual, logical, experimental, operational and data)
that have to be handled by the analyst (according to Landry et
al. 1983).

5.4. Validating in the sphere of the good


What has just been said is concerned with internal validation. What
then would constitute a normative validation (in the sphere of the good)
of the model and of the course of action that it prescribes? Let us imag-
ine a head of company who is required to choose between two items
of equipment, one more productive but more likely to cause accidents.
Here we have a conflict of underlying values that has to be "resolved" if
we want to come to a decision. According to Habermas, the only way to
do this that can be qualified as just is for the persons involved to take
part in the decision and that this be taken at the end of a discussion,
based on a consensus acquired through submission to the best argument.
Even though the concept of "best argument" and its implementation in
practice raise serious difficulties, we can imagine situations in which forms
which come close to normative validity are achieved. Let us note in any
event that, for Habermas (unlike Rawls), the effective participation in
the discussion of the persons concerned is essential. How can one situate
preference aggregation methodologies as against demands for normative
validity? It is difficult to maintain that a discussion on values necessarily takes
place during the prescription construction process. Here one is not ar-
guing in terms of the values themselves, one is drawing assistance from a
mathematical model which is based generally on indicators that express
the importance of criteria in order to give concrete form to the influ-
ence of the decision-maker's values in the decision-making situation in
question. One can imagine, it seems to us, that the process of reflection
of persons availing of such a tool (changing the value of the indicators,
stepping back, refining, etc.) is not the same as an argued debate based
on antagonistic values. This questioning does not in any way prevent
us from recognising that such methods have the virtue of carrying the
debate forward.
In a genuinely multi-decision-maker framework, it would seem to us
also that these methodologies can succeed in constructing global pref-
erences only for sub-groups of decision-makers sharing the same values
(or interests). In a context of groups of decision-makers having opposing
values, the inevitable clash of underlying options and values seems to us
to escape any procedure other than animating or facilitating discussion.
Consensus-based modelling appears out of the question if, for example,
the decision-makers do not ascribe similar importance to the individual
criteria (cf. Roy 1985, English version, p. 274).

5.5. Model and external validity


This having been said, if a decision-maker (but it is a concern that
belongs to the decision-maker alone) desires that the decision be vali-
dated in the sphere of the good, he must then involve himself in a public
discussion with all the actors concerned and argue his positions, in order
to win the consent of the various parties. In this context, a model that
has first been developed in an interaction between this decision-maker
and an analyst can have a role to play. The job of the latter is to render
explicit the data, as well as the value judgements, on which the decision-
maker is basing his position. If the model has been drawn up precisely
with the intention of revealing this as clearly as possible, without seek-
ing to conceal the decision-maker's real reasons, then the model could
indeed contribute to bringing about the climate of true dialogue between
the "decision-maker" and the other parties that will be a crucial part of
the value (in the sphere of the good) of the decision that will be taken.
In developing the model, particular attention can be paid to supporting
the decision-maker's position as well as possible, to putting together the
"file" that will be the most convincing for all the parties concerned. This
is a form of "external" validity for the model; it is here too that qualities
such as the transparency of the model can reveal themselves, as this can
contribute to establishing the climate of trust which is necessary for a
true public discussion.

6. What is a good methodology?


In the constructivist perspective described above, the essential quality
of a methodology is to make it possible, in a wide variety of situations,
to arrive at a prescription to which the decision-maker can give his placet.
This prescription may or may not be supported by a formal model of
the decision maker's preferences; we are chiefly interested here in those
models that are. Very likely the quality of a methodology depends on the
facility with which the decision-maker "enters" into the logic underlying
the construction of the prescribed course of action, on the attractiveness
of the concepts and the ways of reasoning about the preferences that
are proposed in the methodology. Of course the attractiveness of a
methodology is probably strongly influenced by cultural or philosophical
factors. By way of recent illustration, the reader can consult an exchange
of reactions and counter-reactions to an article by A. Schärlig (Schärlig
1996, Zionts 1997, Roy and Vincke 1998, Marchant and Pirlot 1999).
It should be clear also that different methodologies could equally well
be applied in a given decision situation. Since they may be based on
different types of models of preference, they could possibly differ in
the recommendations derived from their respective models and even lead
to contradictory ones (note that this may also happen when the same
methodology is applied at different periods of time to the same decision
situation). Although this may seem disturbing at first glance, it is not
in contradiction with the notion of "local and partial" validity proposed
above, since validity lies in the process, not in the result. We have men-
tioned however in section 4.3 that the analyst may have doubts about the
validity of a model if it appears that the latter is not sufficiently "congru-
ent" with what is known about the decision situation and the preferences
of the decision-maker. Since we do not believe that any model will be
able to adequately represent the decision-maker's preferences in all cir-
cumstances, we think that there is here a specific aspect of validation
that deserves discussion. In effect, the analyst should be aware of a suf-
ficiently large variety of models (and associated methodologies) and able
to identify the model most appropriate in a given decision situation. In
this view, the availability of knowledge about the hypotheses underly-
ing the relevant application of the models and their properties is crucial
(this point was raised in Bouyssou et al. 1993). We discuss in the rest of
this paper the role of axiomatic and theoretical results in decision-aid.

6.1. Axiomatic results are ancillary


Let us say from the start that we do not consider that axioms should
be used to impose a methodology or a model on the basis of normative
rationality principles. In our view, the role of the axioms is to shed some
light on the descriptive power of a model. In order to clarify the various
conceptions of the role of theory and axiomatics, we refer again to Roy's
writings on the question.
Alongside the paths of realism and constructivism, Roy discusses (see
Roy 1992 and Roy 1993) a third path that he calls the "quest for pre-
scribing norms" and which he associates with the "axiomatic path".
The term "norms" does not refer here to the sphere of the good, rather
the norms in question are intended as reasoning rules of the rational
decision-maker. Roy warns rightly against the use of theoretical results,
based on apparently irrefutable hypotheses, leading to the imposition of
a particular model, a chosen procedure or a particular form of voting.
We agree with him totally, whilst emphasising a point on which Roy
perhaps places insufficient emphasis (at least in the French version (Roy
1992), the English version (Roy 1993) being more precise on this point):
the axiom-based approach does not constitute in itself an approach to
decision-aid. Its role is an auxiliary one that can serve anyone of the
three approaches, realist, constructivist or normative. Depending on the
approach also, different types of axiomatic results will be looked for and
they will be used differently.

6.2. Theory for helping to understand


The theoretical results tell us nothing about "the world" but about
formal models or procedures (aggregation and exploitation procedures
relating to mathematical objects). One may invoke one or the other re-
sult, interpret and use it in any approach (realistic, constructivist, etc.)
that makes use of formal procedures, in order to gain a good under-
standing of the properties of these procedures and possibly elect for one
procedure as against another. The theoretical results are, however, a
tool for gaining mastery over procedures which are of concern mainly
to the analyst. If we recognise that the requirement of communication
is crucial in the decision-aid process, axiomatic results, at least in their
precise and technical form, have no place here as they would generally
appear as being technocratic, unless the decision-maker himself is able
to perceive their precise scope. In any event, they cannot prevail against
the decision-maker's perceptions of the decision-making situation and of
his own preferences. In particular, where the decision-maker formulates
preferences which, given what is known (for example, evaluations of al-
ternatives), appear to the analyst to be incompatible with one form of
rationality (often translated into axioms, such as monotonicity, transi-
tivity of preferences, etc.), and where the decision-maker sticks to his
viewpoint after the analyst has drawn his attention to what he considers
to be incoherent, it is the analyst's task to question the data, the model
and/or the axioms. In any event it would be totally aberrant to force
the decision-maker to adopt a way of thinking which seems foreign to
him; this would inevitably lead to a breakdown of dialogue and of the
whole decision-aid process.
In general therefore, whilst remaining transparent for the decision-
maker, the theoretical results may, during an aid process, lead the an-
alyst to pose questions (to himself) as to the pertinence of a formal
procedure. One typical case would be for example the use of a proce-
dure making full use of the cardinality of a numerical representation,
whereas it would seem that the decision-maker considers this informa-
tion as ordinal. What we have here are demands for formal consistency
and "faithfulness of translation" that can be borne only by the analyst.
It is in these demands that the analyst places the essential of what,
for him, makes up the validity of the model (the "logical validity" us-
ing the terminology of Oral and Kettani 1993). Where the analyst is
careful to avoid a normative and dogmatic attitude to rationality or a
choice of model, the theoretical results, by developing the consequences
of the axioms to their logical conclusion, suggest means of directing the
decision-maker's attention to the implications of his affirmations. For
example, in cases of decision-making under uncertainty, one can invoke
"money-pump" type arguments. Of course, we repeat, it is the affir-
mations of the decision-maker which, in the final instance, carry the
day, possibly forcing the analyst to change the bases on which he has
constructed the decision-aid model.
Apart from suggesting to the analyst ways of testing the pertinence
of the use of a particular procedure in a particular decision-making pro-
cess, axiomatic characterisations can fulfil other roles. More positively,
they can be advanced in order to demonstrate to the decision-maker the
internal consistency, or the limitations, of a particular approach. For ex-
ample, it seems to us that Arrow's theorem (interpreted in the context of
decision-making in the presence of multiple criteria: see Bouyssou 1992
and Perny 1992) enables us to understand and accept the following fact:
one cannot expect that the comparison of two alternatives, based mainly
on ordinal considerations, not take into account the other available alter-
natives (independence of irrelevant alternatives) and at the same time,
that the global preference be systematically transitive.

6.3. One example


In order to illustrate what we have said about the possibility of using
axiomatic results in different approaches, let us consider the following
result, which is well known and fundamental in utility theory (or, more
precisely, the theory of value functions).
If the global preference is a ranking, we can find a numeric representa-
tion of the alternatives (a global utility) that represents this ranking.
If, in addition, certain conditions are fulfilled (for example independence
in the sense of preferences), the global utility can be obtained
as the sum of partial utilities representing the partial preferences, in a
unique manner (up to changes of origin and unit).
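In symbols (our notation, not the authors'): writing $u_i$ for the partial utility attached to criterion $i$, the additive form asserts that, under the stated conditions, an alternative $x$ is globally at least as good as $y$ if and only if

$$\sum_{i=1}^{n} u_i(x_i) \geq \sum_{i=1}^{n} u_i(y_i),$$

the functions $u_i$ being unique up to a common change of unit and changes of origin.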
This type of result can be used in many different ways. For example,
in a realist approach, one habitually presupposes that preferences define
a ranking of alternatives and one will use the above result to affirm that
one can find this ranking by reconstituting the global utility starting
from partial utilities, themselves obtained as a representation of the
partial preferences. The uniqueness of representation is guaranteed (if
the conditions are fulfilled) and it is that which confers on this approach
an image of objectivity and necessity. This having been said, the theorem
does not pretend that the reconstitution of the global utility in a realistic
approach is the only way to "reveal" the global preference. Let us say
merely that the result suggests an intuitively attractive approach for
doing this: we are accustomed (culturally) to weighted sum evaluations
as this has been and remains a current scholarly evaluation method.
The "devil", however, lies not in the axiom set, but in the hypothe-
sis, postulated as self-evident, that the global preferences define a com-
plete ranking of alternatives (see here Zionts' response to Schärlig in
Zionts 1997). However, experimental psychology studies, for example,
have shown that one cannot generally expect a high level of stability
of "global preferences" (which undermines their pretension to existence
in a strong sense) nor a consistency that is as strong as a ranking.
This phenomenon of inadequacy can perhaps be the cause of a break-
down of dialogue between the analyst and the decision-maker in a decision-aid
process with regard to the realist approach aimed at unveiling the global
utility.
This having been said, and the reader will already have read this
between the lines above, there is nothing to prevent a constructivist
approach aimed at constructing a global utility or a global evaluation
function. The result remains pertinent: it can be interpreted as a pos-
sibility result of an approach consisting of constructing a global evalu-
ation function in an additive fashion. It therefore suggests a particular
construction strategy consisting of collecting information ("preference
fragments") from which it is then possible to construct a partial utility
function for each criterion and to assess the trade-offs, and then sum
these functions, weighted by the trade-offs. Here the theoretical result
serves both to guarantee the consistency of the approach and to define
which "fragments" are needed (see, for example, Fishburn 1967 for a
description of 24 methods of constructing an additive utility function;
these methods apply in different contexts and are based on the gath-
ering of preference fragments). One may even, in this context, consider
abandoning the hypothesis that the global preference is a ranking by
interpreting (exploiting) the global evaluation function more leanly, for
example by using a threshold: given the imprecise nature of the matter
we are dealing with, one can consider that one alternative is globally
preferable to another if the difference between the evaluations of them
exceeds a threshold.
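Purely by way of illustration, the following Python fragment sketches this construction strategy on invented data: partial utility functions and trade-offs are assumed to have already been elicited, a global additive value is computed, and the result is then exploited "leanly" through a preference threshold. None of the numbers or names below come from the text.

# Fictitious additive value model exploited with a preference threshold.
partial_utilities = [                       # one elicited partial utility per criterion
    lambda cost: 1 - cost / 100.0,          # cheaper is better, cost on a 0-100 scale
    lambda quality: quality / 10.0,         # quality rated on a 0-10 scale
]
tradeoffs = [0.6, 0.4]                      # weights elicited together with the utilities
threshold = 0.05                            # differences below this are not significant

def global_value(alternative):
    return sum(w * u(g) for w, u, g in zip(tradeoffs, partial_utilities, alternative))

def strictly_preferred(x, y):
    # x is declared globally preferable to y only if the difference in additive
    # value exceeds the threshold (the "lean" interpretation mentioned above).
    return global_value(x) - global_value(y) > threshold

x, y = (40, 7), (35, 6)                     # (cost, quality) of two alternatives
print(global_value(x), global_value(y))     # roughly 0.64 and 0.63
print(strictly_preferred(x, y), strictly_preferred(y, x))  # False, False

With the threshold, the relation obtained is no longer a complete ranking: in this toy example the two alternatives, whose global values are very close, are simply not distinguished.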

7. Conclusions
It seems to us that Habermas's theory of orders of validity supplies
a framework which allows us to situate in an illuminating manner the
different decision-aid approaches from an epistemological viewpoint in
the broad sense of the term. In particular, comparing the realist and
constructivist approaches, it seems to us that one of the strengths of con-
structivism is to make it easier to maintain a true dialogue with
the decision-maker(s). This way of conceiving the validity of a model,
based on genuine dialogue and critical discussion, far from being specific
to the decision-aid process and opposed to the model validation meth-
ods of the sciences, represents a continuation of the validation models
that are valid in all the spheres defined by Habermas, as also in day-to-
day discursive practice. The particular context of decision-aid simply
reduces the "public" discussion to its simplest expression, a form of di-
alogue, and means that the resulting model is generally inapplicable to
third parties. On the other hand, the types of "realities" reflected by the
model are fairly particular, consisting as they do of the decision-maker's
own evaluations and preference judgements. The model constructed in
this way therefore remains hypothetical.
In order to reassure (or further disturb) the reader who is afraid that,
with these concepts, the practice of decision-aid moves too far away from
scientific rigour, we shall end by mentioning two positions that show that
similar problems are echoed in other disciplines.

7.1. Models in statistics ...


In statistics, first of all, McCullagh and Nelder, in the introduction to
(McCullagh and Nelder 1993, p. 6), seek to answer the question "What
is a good model? ". According to them the first principle that can guide
the analyst ("the modeller") is:
... all models are wrong; some, though, are better than others and we
can search for the better ones. The second principle is not to fall in
love with one model, to the exclusion of alternatives. Data will often
point with equal emphasis at several possible models and it is important
that the analyst accepts this.

Even in a context where the concept of "reality" appears less disputable
than in that of decision-making, the uniqueness "of the good model" is
not guaranteed and the realist concept of "best model" is abandoned in
favour of a contextualised concept in which the word "best" refers solely
to the methodology and is, in any event, neither ontological nor strongly
normative.

7.2. ... and simulation models


In (Kleindorfer et al. 1998), the authors cast a philosophical light on
the validation of simulation models, adopting a mid-way position be-
tween realism and relativism, recognising the importance both of the
empirical verification of the results of the simulation model (confronta-
tion with "reality") and that of communication with the "client" (who
plays a major role in constructing a valid model and in its credibility in
the eyes of the client). The validation process is compared to a court
judgement; the proof of the pertinence of the model lies in the jury be-
ing convinced. The philosophers whom the authors call on to back them
are part of the modern current of hermeneutics (Gadamer, Bernstein
and Rorty). Rather than the model of justness, we have evoked that
of translation, coupled with that of clarification. This ought to draw
attention to the fact that, in decision-aid, of Habermas's four preten-
sions to validity (intelligibility, truth, normative justness and sincerity),
the first very certainly occupies an important place. In the dialogue
with the decision-maker, the first challenge, hermeneutic par excellence,
is to create the conditions of mutual understanding. As Rorty would
no doubt suggest, decision-aid work is a modest attempt to (re)compose
or (re)build systems of beliefs, which can impart meaning to and create
trust in commitments and practice. However, the fact remains that this
hermeneutic work is also a work of mutual learning within which the
other pretensions to validity, in particular those to truth and normative
justness, obviously also become explicit. It is these pretensions that the
decision-aid processes can contribute to meeting, at least within the hy-
pothetical, partial and "local" limits that have been evoked earlier in this
article. Following Habermas's criticisms of the hermeneutic current, in
particular those inspired by Gadamer, we should probably consider that
it is precisely this contribution that prevents decision-aid from falling
into relativism.

7.3. What is a model?


These viewpoints also appear to us to indicate that the final word has
not yet been said on what a model is. In particular the idea of model,
which comes from the physical sciences, demands to be considerably
generalised and fine-tuned if we want to include under this concept the
models devised in numerous activities such as data analysis or decision-
aid. A better understanding of this concept appears to us all the more
useful and interesting given the fact that it is based on the thinking of
a number of contemporary philosophers and has a crucial impact on the
practice of a large number of disciplines.

Acknowledgments
The authors would like to thank Bernard Roy for the interest he
showed for this work and for his pertinent comments on the original
version. They are indebted to Denis Bouyssou for his many comments
on the second version of this work, which have helped clarify certain
points that were poorly or insufficiently developed. His remarks together
with those of two anonymous referees helped us to improve the overall
readability of the paper. The authors thank Michael Lomax for the
quality of his translation into English of the original text.

References
R.L. Ackoff (1979) "The future of operational research is past," J. Opl Res. Soc. 30
(2): 93-104.
R.L. Ackoff (1979) "Resurrecting the future of operational research," J. Opl Res.
Soc. 30 (3): 189-199.
D. Bouyssou (1992) "On some properties of outranking relations based on a concordance-
discordance principle," in A. Goicoechea, L. Duckstein and S. Zionts (eds.), Mul-
tiple criteria decision making. Berlin: Springer 93-106.
D. Bouyssou, P. Perny, M. Pirlot, A. Tsoukias, P. Vincke (1993) "A Manifesto for the
new MCDM era," Journal of Multicriteria Decision Analysis, 2: 125-127.
D. Bouyssou, T. Marchant, P. Perny, M. Pirlot, A. Tsoukias, P. Vincke (2000) Eval-
uation and decision models: a critical perspective. Dordrecht: Kluwer Acad. Publ.
R. Dery, M. Landry, C. Banville (1993) "Revisiting the issue of model validation in
OR: an epistemological view," European Journal of Operational Research, 66 (2):
168-183.
P.C. Fishburn (1967) "Methods of estimating additive utilities," Management Science,
13 (7): 435-453.
H.G. Gadamer (1960) Wahrheit und Methode. Tübingen: J.C.B. Mohr. French transl.:
Vérité et méthode (1976). Paris: Seuil.
J. Habermas (1970) Zur Logik der Sozialwissenschaften. French transl.: Logique des
sciences sociales et autres essais (1987). Paris: PUF. English transl.: On the logic
of social sciences (1990). Boston: MIT Press.
I. Kant (1797) Die Metaphysik der Sitten. French transl.: Fondements de la métaphysique
des moeurs (1974). Paris: Delagrave.
G.B. Kleindorfer, L. O'Neill, R. Ganesham (1998) "Validation in simulation: various
positions in the philosophy of science," Management Science, 44 (8):1087-1099.
T.S. Kuhn (1962) The structure of scientific revolutions. Chicago: University of Chicago
Press.
T. S. Kuhn (1977) The essential tension. Chicago: University of Chicago Press.
M. Landry (1998) "L'aide à la décision comme support à la construction du sens dans
l'organisation," Systèmes d'Information et Management, 3: 5-39.
M. Landry, C. Banville, M. Oral (1983) "Model validation in Operations Research",
European Journal of Operational Research, 14: 207-220.
M. Landry, C. Banville, M. Oral (1996) "Model legitimation in Operations Research,"
European Journal of Operational Research, 92: 443-457.
J.-L. Le Moigne (1980) "Les sciences de la décision: sciences d'analyse ou sciences de
génie? Interprétations épistémologiques," in R. Nadeau et M. Landry (eds.), L'aide
à la décision: Nature, instruments et perspectives d'avenir. Québec: Les Presses de
l'Université Laval.
Th. Marchant, M. Pirlot (1999) "Modern decisive wives don't wear corsets," Journal
of Multi-Criteria Decision Analysis, 8: 237-238.
P. McCullagh, J.A. Nelder (1983) Generalised linear models. London: Chapman and
Hall.
M. Oral, O. Kettani (1993) "The facets of the modeling and validation process in
operations research," European Journal of Operational Research, 66 (2): 216-234.
P. Perny (1992) "Sur le non-respect de l'axiome d'indépendance dans les méthodes
de type ELECTRE," Bruxelles: Cahiers du CERO 34: 211-232.
R. Rorty (1982) Consequences of pragmatism. Minnesota: University of Minnesota
Press. French transl.: Conséquences du pragmatisme (1993). Paris: Seuil.
R. Rorty (1991) "Objectivity, relativism and truth", Philosophical papers 1. Cam-
bridge: Cambridge University Press. French transl.: Objectivisme, relativisme et vérité
(1994). Paris: Presses Universitaires de France.
J. Rosenhead (ed.) (1989) Rational analysis for a problematic world. Chichester: Wi-
ley.
B. Roy (1985) Méthodologie multicritère d'aide à la décision. Paris: Economica.
English version: Multicriteria methodology for decision aiding (1996). Dordrecht:
Kluwer Acad. Publ.
B. Roy (1992) "Science de la décision ou science de l'aide à la décision?", Revue
Internationale de Systémique, 6 (5): 497-529.
B. Roy (1993) "Decision science or decision-aid science?", European Journal of Op-
erational Research, 66 (2): 184-203.
B. Roy, Ph. Vincke (1998) "The case of the vanishing optimum revisited again,"
Journal of Multi-Criteria Decision Analysis, 7: 351.
A. Schärlig (1996) "The case of the vanishing optimum," Journal of Multi-Criteria
Decision Analysis, 5: 160-164.
M. Weber (1919) Politik als Beruf. French transl.: Le savant et le politique (1963).
Paris: Union Générale d'Éditions, Collection 10/18, Plon.
S. Zionts (1997) "The case of the vanishing optimum revisited," Journal of Multi-
Criteria Decision Analysis, 6: 247.
III
THEORY AND METHODOLOGY OF MULTI-CRITERIA DECISION-AIDING
A CHARACTERIZATION
OF STRICT CONCORDANCE RELATIONS

Denis Bouyssou
CNRS - LAMSADE, Universite de Paris Dauphine, France
bouyssou@lamsade.dauphine.fr

Marc Pirlot *
Faculté Polytechnique de Mons, Belgium
marc.pirlot@fpms.ac.be

Abstract Based on a general framework for conjoint measurement that allows for
intransitive preferences, this paper proposes a characterization of "strict
concordance relations". This characterization shows that the originality
of such relations lies in their very crude way to distinguish various levels
of "preference differences" on each attribute.

Keywords: MCDM; Conjoint measurement; Nontransitive preferences; Outranking methods; Concordance relations

1. Introduction
A basic problem in the field of Multiple Criteria Decision Making
(MCDM) is to build a preference relation on a set of alternatives evalu-
ated on several attributes on the basis of preferences expressed on each
attribute and inter-attribute information such as weights or trade-offs.
B. Roy proposed several outranking methods (see Roy, 1968; Roy,
1996b; Roy and Bouyssou, 1993; Vincke, 1992; Vincke, 1999; Bouys-
sou, 2001) as alternatives to the dominant value function approach (see
Fishburn, 1970; Keeney and Raiffa, 1976; Wakker, 1989). In outranking
methods, the construction of a preference relation is based on pairwise

* Corresponding author: Marc Pirlot, Faculté Polytechnique de Mons, rue de Houdain 9, 7000
Mons, Belgium
comparisons of the alternatives. This preference relation may either be
reflexive as in the ELECTRE methods (see Roy, 1991) (it is then inter-
preted as an "at least as good" relation) or asymmetric as in TACTIC (see
Vansnick, 1986) (it is then interpreted as a "strict preference" relation).
Most outranking methods, including ELECTRE and TACTIC, make use of
the so-called concordance-discordance principle which consists in accept-
ing a preferential assertion linking an alternative a to an alternative b
if:

• Concordance Condition: a majority of the attributes supports this
assertion and if,

• Non-Discordance Condition: the opposition of the other attributes
is not "too strong".

In this paper we restrict our attention to outranking methods such as
TACTIC aiming at building a crisp (i.e. nonfuzzy) asymmetric preference
relation. Based on a general framework for conjoint measurement that
allows for intransitive preferences (see Bouyssou and Pirlot, 2000), we
propose a characterization of "strict concordance relations", i.e. asym-
metric binary relations resulting from the application of the concordance
condition in such methods. This characterization shows that the essen-
tial distinctive feature of these relations lies in their very crude way to
distinguish various levels of "preference differences" on each attribute.
This paper is organized as follows. In section 2, we briefly recall
some notions on outranking relations and define "strict concordance re-
lations". Section 3 presents our general framework for conjoint measure-
ment that allows for intransitive preferences. This framework is used in
section 4 to characterize strict concordance relations. A final section dis-
cusses our findings and indicates directions for future research. Through-
out the paper, unless otherwise mentioned, we follow the terminology of
Bouyssou, 1996 concerning binary relations.

2. Outranking methods leading to an asymmetric relation
2.1. TACTIC (Vansnick, 1986)
Consider two alternatives $x$ and $y$ evaluated on a family $N = \{1, 2, \ldots, n\}$
of attributes. A first step in the comparison of $x = (x_1, x_2, \ldots, x_n)$
and $y = (y_1, y_2, \ldots, y_n)$ is to know how they compare on each attribute.
In TACTIC, it is supposed that evaluations on an attribute can be com-
pared using an asymmetric binary relation $P_i$ that is a strict semiorder
(i.e. an irreflexive, Ferrers and semi-transitive relation). The asymmetry
of $P_i$ implies that one and only one of the following propositions is true:
$x_i P_i y_i$ or $y_i P_i x_i$ or $x_i I_i y_i$ (i.e. $Not[x_i P_i y_i]$ and $Not[y_i P_i x_i]$).
When comparing $x$ to $y$, the following subsets of attributes play a
vital part in TACTIC:

$$P(x, y) = \{i \in N : x_i P_i y_i\}, \quad I(x, y) = I(y, x) = \{i \in N : x_i I_i y_i\} \quad \text{and} \quad P(y, x) = \{i \in N : y_i P_i x_i\}.$$

Since $P_i$ is asymmetric, we have $P(x, y) \cap P(y, x) = \emptyset$. Note that, by
construction, $I(x, y) = I(y, x)$, $P(x, y) \cap I(x, y) = \emptyset$ and
$P(x, y) \cup I(x, y) \cup P(y, x) = N$.
In its concordance part, TACTIC declares that x is preferred to y (xPy)
if the attributes in P(x, y) are "strictly more important" than the at-
tributes in P(y, x). Since it appears impractical to completely assess an
importance relation between all disjoint subsets of attributes, TACTIC
assigns a weight to each attribute and supposes that the importance of
a subset of attributes is derived additively. More precisely, if $w_i > 0$ is
the weight assigned to attribute $i \in N$, we have in the concordance part
of TACTIC:

$$xPy \Leftrightarrow \sum_{i \in P(x,y)} w_i > \rho \sum_{j \in P(y,x)} w_j \qquad (1)$$

where $\rho \geq 1$ is a concordance threshold.
The preceding analysis based on concordance does not take into ac-
count the magnitude of the preference differences between the evalua-
tions of x and y on each attribute besides the distinction between "posi-
tive", "negative" and "neutral" differences. This may be criticized since,
if on some $j \in P(y, x)$ the difference of preference in favor of $y$ is "very
large", it may be risky to conclude that $xPy$ even if the attributes in
$P(x, y)$ are strictly more important than the attributes in $P(y, x)$. This
leads to the discordance part of the method. The idea of very large
preference differences is captured through a strict semiorder $V_i \subseteq P_i$ on
each attribute $i \in N$, and the discordance part of the method forbids
having $xPy$ whenever $y_j V_j x_j$, for some $j \in P(y, x)$. In summary, we have
in TACTIC:

$$xPy \Leftrightarrow \left\{
\begin{array}{l}
\sum_{i \in P(x,y)} w_i > \rho \sum_{j \in P(y,x)} w_j \\
\text{and} \\
Not[y_j V_j x_j] \text{ for all } j \in P(y, x)
\end{array}
\right. \qquad (2)$$
where $P_i$ and $V_i$ are strict semiorders such that $V_i \subseteq P_i$, $w_i > 0$ and
$\rho \geq 1$. We refer to Vansnick, 1986 for a thorough analysis of this method
including possible assessment techniques for $P_i$, $V_i$, $w_i$ and $\rho$.
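As a reading aid only, the following Python sketch implements rule (2) on invented data. The relations $P_i$ and $V_i$ are encoded here by a preference threshold and a larger veto threshold per attribute, which is one simple way, among others, of obtaining strict semiorders with $V_i \subseteq P_i$; the weights, thresholds and evaluations are all hypothetical.

# Sketch of the concordance-discordance rule (2) of TACTIC (fictitious data).
weights = [0.4, 0.35, 0.25]        # w_i > 0
rho = 1.0                          # concordance threshold rho >= 1
pref_thresholds = [1, 1, 1]        # x_i P_i y_i  iff  x_i - y_i > pref_thresholds[i]
veto_thresholds = [4, 4, 4]        # y_j V_j x_j  iff  y_j - x_j > veto_thresholds[j]

def P_i(i, a, b):                  # strict preference on attribute i
    return a - b > pref_thresholds[i]

def V_i(i, a, b):                  # "very large" preference difference on attribute i
    return a - b > veto_thresholds[i]

def tactic_prefers(x, y):
    concordant = sum(w for i, w in enumerate(weights) if P_i(i, x[i], y[i]))
    discordant = sum(w for i, w in enumerate(weights) if P_i(i, y[i], x[i]))
    veto = any(V_i(i, y[i], x[i]) for i in range(len(weights)) if P_i(i, y[i], x[i]))
    return concordant > rho * discordant and not veto

x, y = (10, 8, 3), (7, 9, 9)
print(tactic_prefers(x, y), tactic_prefers(y, x))   # False, False: a veto blocks xPy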
Simple examples show that, in general, a relation P built using (1)
or (2) may not be transitive and may even contain circuits. The use
of such a relation P for decision-aid purposes therefore calls for the
application of specific techniques, see Roy, 1991; Roy and Bouyssou,
1993; Vanderpooten, 1990.

2.2. Strict concordance relations


Relation (1) is only one among the many possible ways to implement
the concordance principle in order to build an asymmetric relation. The
following elements appear central in the analysis:

• an asymmetric relation $P_i$ on each $X_i$ allowing to partition $N$ into
$P(x, y)$, $P(y, x)$ and $I(x, y)$,
• an asymmetric importance relation $\rhd$ between disjoint subsets
of attributes, allowing to compare $P(x, y)$ and $P(y, x)$, which is
monotonic (with respect to inclusion), i.e. such that:

$$[A \rhd B,\ C \supseteq A,\ B \supseteq D,\ C \cap D = \emptyset] \Rightarrow [C \rhd D].$$
This motivates the following, inspired by Fargier and Perny, forthcoming:
Definition 1 (Strict concordance relations)
Consider a set Y ⊆ X1 × X2 × ... × Xn of alternatives evaluated on a
set N = {1, 2, ... ,n} of attributes. A binary relation P on Y is said to
be a strict concordance relation if there are:

• an asymmetric binary relation ▷ between disjoint subsets of N that


is monotonic and,

• an asymmetric binary relation Pi on each Xi (i = 1,2, ... ,n),


such that, for all x, y E Y:

xPy <=> P(x, y) ▷ P(y, x),   (3)

where P(x, y) = {i ∈ N : xiPiyi}.


It should be clear that any binary relation built using (1) is a strict
concordance relation.
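A minimal sketch of definition 1 itself (our illustration, not the authors' code) is given below: the importance relation ▷ is passed in as an arbitrary Boolean predicate on disjoint subsets of attributes, which makes explicit that it need not be derived from additive weights as in (1). The particular Pi and ▷ used in the example are assumptions chosen only so that asymmetry and monotonicity hold.

```python
def strict_concordance(x, y, prefers_i, important):
    """x P y iff the set of attributes favouring x is 'more important' than the set favouring y."""
    n = len(x)
    p_xy = frozenset(i for i in range(n) if prefers_i(i, x[i], y[i]))
    p_yx = frozenset(i for i in range(n) if prefers_i(i, y[i], x[i]))
    return important(p_xy, p_yx)

# Assumed ingredients for illustration only.
prefers_i = lambda i, a, b: a > b          # P_i: plain ">" on each attribute

def important(A, B):
    # A set-based rule: A beats B iff A contains attribute 0 and is strictly larger.
    # On disjoint sets it is asymmetric, and it is monotonic with respect to inclusion.
    return 0 in A and len(A) > len(B)

print(strict_concordance((3, 1, 2), (1, 2, 1), prefers_i, important))   # True with these data
```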
The above definition only requires the asymmetry of the relations Pi. Although this is at variance with what is done in most outranking

methods (Pi generally being strict semiorders), this additional generality


will prove to have little impact in what follows. We defer to section 5
the discussion of a possible introduction of discordance in our analysis.
We already noticed with TACTIC that P may be a strict concordance relation without being transitive or free of circuits. This does not imply that, for a given number of attributes and a given set of alternatives, any asymmetric relation is a strict concordance relation. The purpose of this paper is to provide a characterization of such relations when the set of alternatives is rich, i.e. when Y = X = X1 × X2 × ... × Xn (Bouyssou, 1996 studies the, simpler, case in which the number of attributes is not fixed).

3. A general framework for nontransitive conjoint measurement
In the rest of this paper, we always consider a set X = ∏_{i=1}^n Xi with n ≥ 2; elements of X will be interpreted as alternatives evaluated on a set N = {1, 2, ..., n} of attributes. Unless otherwise stated, in order to avoid unnecessary complications, we suppose throughout that X is finite. When J ⊆ N, we denote by XJ (resp. X-J) the set ∏_{i ∈ J} Xi (resp. ∏_{i ∉ J} Xi). With customary abuse of notation, (xJ, y-J) will denote the element w ∈ X such that wi = xi if i ∈ J and wi = yi otherwise (when J = {i} we simply write X-i and (xi, y-i)).
Let >- be a binary relation on X interpreted as "strict preference". The absence of strict preference is denoted by ∼ (i.e. x ∼ y <=> Not[x >- y] and Not[y >- x]) and we define ≿ on X letting x ≿ y <=> [x >- y or x ∼ y].
We define the following binary relations on XJ with J ⊆ N:

xJ >-J yJ iff (xJ, z-J) >- (yJ, z-J), for some z-J ∈ X-J,

where xJ, yJ ∈ XJ (when J = {i} we write >-i instead of >-{i}).
If, for all xJ, yJ ∈ XJ, xJ >-J yJ implies (xJ, z-J) >- (yJ, z-J) for all z-J ∈ X-J, we say that >-
is independent for J. If >- is independent for all nonempty subsets of
attributes we say that >- is independent. It is not difficult to see that a
binary relation is independent if and only if it is independent for N \ {i},
for all i E N, see e.g. Wakker, 1989.
We say that attribute i ∈ N is influent (for >-) if there are xi, yi, zi, wi ∈ Xi and x-i, y-i ∈ X-i such that (xi, x-i) >- (yi, y-i) and Not[(zi, x-i) >- (wi, y-i)], and degenerate otherwise. It is clear that a degenerate
attribute has no influence whatsoever on the comparison of the elements
of X and may be suppressed from N.

We say that attribute i ∈ N is essential (for >-) if >-i is not empty.


It should be clear that any essential attribute is influent. The converse
does not hold however. It will not be supposed here that all attributes
are essential.
We envisage in this section relations >- that can be represented as:

x >- y <=> F(p1(x1, y1), p2(x2, y2), ..., pn(xn, yn)) > 0   (M)

where the pi are real-valued functions on Xi² that are skew symmetric (i.e. such that pi(xi, yi) = -pi(yi, xi), for all xi, yi ∈ Xi) and F is a real-valued function on ∏_{i=1}^n pi(Xi²) being nondecreasing in all its arguments and odd (i.e. such that F(x) = -F(-x), abusing notation in an obvious way). We summarize some useful properties of model (M) in the following:

Proposition 1 If >- satisfies model (M) then:
i. >- is asymmetric and independent,
ii. [xi >-i yi for all i ∈ J ⊆ N] => [xJ >-J yJ].

Proof of proposition 1
i. The asymmetry of >- follows from the skew symmetry of all pi and the oddness of F. Since pi(xi, xi) = 0, the independence of >- follows.
ii. Observe that xi >-i yi is equivalent to F(pi(xi, yi), 0) > 0 (using obvious notation). Since F(0) = 0, the nondecreasingness of F leads to pi(xi, yi) > 0. The desired property easily follows using the nondecreasingness of F. □
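As an illustration of model (M) (ours, under assumed marginal value functions), note that taking pi(xi, yi) = ui(xi) - ui(yi) and F = Σ yields the additive utility model, which is therefore one, highly compensatory, instance of (M):

```python
# Sketch of model (M): x > y iff F(p_1(x_1,y_1), ..., p_n(x_n,y_n)) > 0,
# with each p_i skew symmetric and F odd and nondecreasing.
u = [lambda a: a, lambda a: 2 * a, lambda a: a ** 3]   # assumed marginal value functions

def p(i, a, b):
    return u[i](a) - u[i](b)      # skew symmetric: p_i(a, b) = -p_i(b, a)

def F(values):
    return sum(values)            # odd and increasing in every argument

def prefers(x, y):
    return F([p(i, x[i], y[i]) for i in range(len(x))]) > 0

print(prefers((2, 1, 1), (1, 2, 1)))   # 1 + (-2) + 0 = -1, so False
print(prefers((1, 2, 1), (2, 1, 1)))   # True: the converse comparison holds, as asymmetry requires
```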

Two conditions, inspired by Bouyssou and Pirlot, 2000, will prove useful for the analysis of model (M). Let >- be a binary relation on a set X = ∏_{i=1}^n Xi. This relation is said to satisfy:

ARC1i if
[(xi, a-i) >- (yi, b-i) and (zi, c-i) >- (wi, d-i)] => [(xi, c-i) >- (yi, d-i) or (zi, a-i) >- (wi, b-i)],

ARC2i if
[(xi, a-i) >- (yi, b-i) and (yi, c-i) >- (xi, d-i)] => [(zi, a-i) >- (wi, b-i) or (wi, c-i) >- (zi, d-i)],

for all xi, yi, zi, wi ∈ Xi and all a-i, b-i, c-i, d-i ∈ X-i. We say that >- satisfies ARC1 (resp. ARC2) if it satisfies ARC1i (resp. ARC2i) for all i ∈ N.
Condition ARC1i (Asymmetric inteR-attribute Cancellation) suggests that >- induces on Xi² a relation that compares "preference differences" in a well-behaved way: if (xi, yi) is a larger preference difference than (zi, wi) and (zi, c-i) >- (wi, d-i), then we should have (xi, c-i) >- (yi, d-i), and vice versa. The idea that the comparison of preference differences is central to the analysis of conjoint measurement models was powerfully stressed by Wakker, 1988; Wakker, 1989.
Condition ARC2i suggests that the preference difference (xi, yi) is linked to the "opposite" preference difference (yi, xi). It says that if the preference difference between zi and wi is not larger than the preference difference between xi and yi, then the preference difference between wi and zi should be larger than the preference difference between yi and xi. Taking xi = yi, zi = wi, a-i = c-i and b-i = d-i shows that ARC2i
The following lemma shows that these two conditions are independent
and necessary for model (M).

Lemma 1
i. Model (M) implies ARC1 and ARC2,
ii. In the class of asymmetric relations, ARC1 and ARC2 are inde-
pendent conditions.

Proof of lemma 1
i. Suppose that (xi, a-i) >- (yi, b-i) and (zi, c-i) >- (wi, d-i). Using model (M) we have:

F(pi(xi, yi), (pj(aj, bj))_{j≠i}) > 0 and F(pi(zi, wi), (pj(cj, dj))_{j≠i}) > 0,

abusing notation in an obvious way.
If pi(xi, yi) ≥ pi(zi, wi) then, using the nondecreasingness of F, we have F(pi(xi, yi), (pj(cj, dj))_{j≠i}) > 0 so that (xi, c-i) >- (yi, d-i). If pi(zi, wi) > pi(xi, yi) we have F(pi(zi, wi), (pj(aj, bj))_{j≠i}) > 0 so that (zi, a-i) >- (wi, b-i). Hence ARC1 holds.
Similarly, suppose that (xi, a-i) >- (yi, b-i) and (yi, c-i) >- (xi, d-i). We thus have:

F(pi(xi, yi), (pj(aj, bj))_{j≠i}) > 0 and F(pi(yi, xi), (pj(cj, dj))_{j≠i}) > 0.
If pi(xi, yi) ≥ pi(zi, wi), the skew symmetry of pi implies pi(wi, zi) ≥ pi(yi, xi). Using the nondecreasingness of F we have F(pi(wi, zi), (pj(cj, dj))_{j≠i}) > 0 so that (wi, c-i) >- (zi, d-i). Similarly, if pi(zi, wi) > pi(xi, yi) we have, using the nondecreasingness of F, F(pi(zi, wi), (pj(aj, bj))_{j≠i}) > 0 so that (zi, a-i) >- (wi, b-i). Hence ARC2 holds.
ii. It is easy to build asymmetric relations violating ARC1 and ARC2.
Using theorem 1 below, it is clear that there are asymmetric relations
satisfying both ARC1 and ARC2. We provide here the remaining two
examples.

1 Let X = {a, b, c} x {x, y, z} and let >- on X be empty except that


(a, x) >- (b, y) and (a, x) >- (c, z). Relation >- is asymmetric. Since Not[(a, x) >- (b, z)] and Not[(a, x) >- (c, y)], >- violates ARC1. Condition ARC2 is trivially satisfied.

2 Let X = {a,b} x {x,y} and >- on X be empty except that (a,x) >-
(a, y). It is clear that >- is asymmetric but not independent, so
that ARC2 is violated. Condition ARC1 is trivially satisfied. 0

In order to interpret conditions ARC1 and ARC2 in terms of preference differences, we define the binary relations ≿i and ≿i* on Xi² letting, for all xi, yi, zi, wi ∈ Xi,

(xi, yi) ≿i (zi, wi) <=> [for all a-i, b-i ∈ X-i, (zi, a-i) >- (wi, b-i) => (xi, a-i) >- (yi, b-i)]

and

(xi, yi) ≿i* (zi, wi) <=> [(xi, yi) ≿i (zi, wi) and (wi, zi) ≿i (yi, xi)].

It is easy to see that ≿i (and, hence, ≿i*) is transitive by construction and that the symmetric parts of these relations (∼i and ∼i*) are equivalence relations (the hypothesis that attribute i ∈ N is influent meaning that ∼i has at least two distinct equivalence classes). Observe that, by construction, ≿i* is reversible, i.e. (xi, yi) ≿i* (zi, wi) <=> (wi, zi) ≿i* (yi, xi).
The consequences of ARC1i and ARC2i on relations ≿i and ≿i* are noted in the following lemma; we omit its straightforward proof.

Lemma 2
i. ARC1i <=> [≿i is complete],
ii. ARC2i <=> [for all xi, yi, zi, wi ∈ Xi, Not[(xi, yi) ≿i (zi, wi)] => (yi, xi) ≿i (wi, zi)],
iii. [ARC1i and ARC2i] <=> [≿i* is complete].

For the sake of easy reference, we note a few useful connections between ≿i, ≿i* and >- in the following lemma.

Lemma 3 For all x, y ∈ X and all zi, wi ∈ Xi,

i. [x >- y and (zi, wi) ≿i (xi, yi)] => (zi, x-i) >- (wi, y-i),
ii. [(zi, wi) ∼i (xi, yi) for all i ∈ N] => [x >- y <=> z >- w],
iii. [x ≿ y and (zi, wi) ≿i* (xi, yi)] => (zi, x-i) ≿ (wi, y-i),
iv. [(zi, wi) ∼i* (xi, yi) for all i ∈ N] => [x >- y <=> z >- w] and [y >- x <=> w >- z].
Proof of lemma 3
i. is obvious from the definition of ≿i and ii. is immediate from i.
iii. Suppose that x ∼ y, (zi, wi) ≿i* (xi, yi) and (wi, y-i) >- (zi, x-i). By hypothesis, we have Not[(yi, y-i) >- (xi, x-i)]. Since (wi, y-i) >- (zi, x-i), this implies Not[(yi, xi) ≿i (wi, zi)]. Since ARC1 and ARC2 hold, we know that ≿i* is complete so that (wi, zi) >-i* (yi, xi), a contradiction. Part iv. is immediate from ii. and iii. □

For finite or countably infinite sets X conditions ARC1, ARC2 com-


bined with asymmetry allow to characterize model (M). We have:

Theorem 1 Let >- be a binary relation on a finite or countably infinite set X = ∏_{i=1}^n Xi. Then >- satisfies model (M) iff it is asymmetric and satisfies ARC1 and ARC2.
Proof of theorem 1
Necessity results from lemma 1 and proposition 1. We establish sufficiency below.
Proof of theorem 1
Necessity results from lemma 1 and proposition 1. We establish suf-
ficiency below.
Since ARC1i and ARC2i hold, we know from lemma 2 that ≿i* is complete so that it is a weak order. This implies that ≿i is a weak order and, since X is finite or countably infinite, there is a real-valued function qi on Xi² such that, for all xi, yi, zi, wi ∈ Xi, (xi, yi) ≿i (zi, wi) <=> qi(xi, yi) ≥ qi(zi, wi). Given a particular numerical representation qi of

≿i, let pi(xi, yi) = qi(xi, yi) - qi(yi, xi). It is obvious that pi is skew symmetric and represents ≿i*.
Define F as follows:

F(p1(x1, y1), p2(x2, y2), ..., pn(xn, yn)) = f(g(p1(x1, y1), p2(x2, y2), ..., pn(xn, yn))) if x >- y; 0 if x ∼ y; -f(-g(p1(x1, y1), p2(x2, y2), ..., pn(xn, yn))) otherwise,

where g is any function from ℝⁿ to ℝ increasing in all its arguments and odd (e.g. Σ) and f is any increasing function from ℝ into (0, +∞) (e.g. exp(·) or arctan(·) + π/2).
The well-definedness of F follows from part iv. of lemma 3 and the definition of the pi's. It is odd by construction.
To show that F is nondecreasing, suppose that pi(zi, wi) > pi(xi, yi), i.e. that (zi, wi) >-i* (xi, yi). If x >- y, we know from part i. of lemma 3 that (zi, x-i) >- (wi, y-i) and the conclusion follows from the definition of F. If x ∼ y, we know from part iii. of lemma 3 that Not[(wi, y-i) >- (zi, x-i)] and the conclusion follows from the definition of F. If y >- x we have either (wi, y-i) >- (zi, x-i) or (zi, x-i) ≿ (wi, y-i). In either case, the conclusion follows from the definition of F. □

Following Bouyssou and Pirlot, 2000, it is not difficult to extend this result to sets of arbitrary cardinality adding a, necessary, condition implying that the weak orders ≿i* have a numerical representation. It should be observed that model (M) seems sufficiently general to contain as particular cases most conjoint measurement models including: additive utilities (see Krantz et al., 1971; Wakker, 1989), additive differences (see Tversky, 1969; Fishburn, 1992) and additive nontransitive models (see Bouyssou, 1986; Fishburn, 1990b; Fishburn, 1990a; Fishburn, 1991; Vind, 1991). We show in the next section that it also contains
strict concordance relations.
It should be observed that in model (M), the function pi does not necessarily represent ≿i*. It is however easy to see that we always have:

(xi, yi) >-i* (zi, wi) => pi(xi, yi) > pi(zi, wi).   (4)

Hence |pi(Xi²)| is an upper bound for the number of equivalence classes of ≿i*.

4. A characterization of strict concordance relations
Our main result in this section says that all strict concordance relations (definition 1) can be represented in model (M) with relations ≿i* having at most three equivalence classes and vice versa.
Theorem 2 The following are equivalent:
i. >- has a representation in model (M) with all relations ≿i* having at most three distinct equivalence classes,
ii. >- is a strict concordance relation.
Proof of theorem 2
ii => i. Given equation (4), the claim will be proven if we build a representation of >- in model (M) with functions pi taking only three distinct values. Define pi as:

pi(xi, yi) = 1 if xiPiyi; 0 if xiIiyi; -1 if yiPixi.

Since Pi is asymmetric, the function pi is well-defined and skew-symmetric. Define F letting:

F(p1(x1, y1), p2(x2, y2), ..., pn(xn, yn)) = 1 if x >- y; -1 if y >- x; 0 otherwise.

Since, by hypothesis, [P(x, y) = P(z, w) and P(y, x) = P(w, z)] => [x >- y <=> z >- w], it is easy to see that F is well-defined. It is clearly odd. The monotonicity of ▷ implies that F is nondecreasing in all its arguments.
i => ii. Define Pi letting, for all xi, yi ∈ Xi, xiPiyi <=> (xi, yi) >-i* (yi, yi).
Suppose that xiPiyi and yiPixi so that (xi, yi) >-i* (yi, yi) and (yi, xi) >-i* (xi, xi). Since >- is independent, we have (yi, yi) ∼i* (xi, xi) so that (yi, xi) >-i* (yi, yi). The reversibility of ≿i* leads to (yi, yi) >-i* (xi, yi), a contradiction. Hence, Pi is asymmetric.
Two cases arise:
• If attribute i ∈ N is degenerate then >-i = ∅. Hence ≿i* has only one equivalence class and Pi is empty. We clearly have [xiIiyi and ziIiwi] => (xi, yi) ∼i* (zi, wi).
• If attribute i ∈ N is influent, we claim that Pi is non empty and that ≿i* has exactly three equivalence classes. Indeed, ≿i

being complete, there are zi, wi, xi, yi ∈ Xi such that (xi, yi) >-i (zi, wi). Since ≿i* is complete, this implies (xi, yi) >-i* (zi, wi). If (xi, yi) >-i* (yi, yi) then xiPiyi. If not, then (yi, yi) ≿i* (xi, yi) and, ≿i* being a weak order, we obtain (yi, yi) >-i* (zi, wi). Using the definition of ≿i*, this clearly implies (wi, zi) >-i* (yi, yi). Since >- is independent, we have (yi, yi) ∼i* (zi, zi). Thus (wi, zi) >-i* (zi, zi) so that wiPizi. Therefore Pi is not empty.
Since ≿i* has at most three distinct equivalence classes and xiPiyi <=> (xi, yi) >-i* (yi, yi) <=> (yi, yi) >-i* (yi, xi), we conclude that ≿i* has exactly three distinct equivalence classes. Therefore, xiPiyi implies that (xi, yi) belongs to the first equivalence class of ≿i*. This implies [xiPiyi and ziPiwi] => (xi, yi) ∼i* (zi, wi). Similarly, it is easy to prove that [xiIiyi and ziIiwi] => (xi, yi) ∼i* (zi, wi).
Therefore, [P(x, y) = P(z, w) and P(y, x) = P(w, z)] implies [(zi, wi) ∼i* (xi, yi), for all i ∈ N]. From part iv. of lemma 3 we obtain:

[P(x, y) = P(z, w) and P(y, x) = P(w, z)] => [x >- y <=> z >- w].   (5)

Using the nondecreasingness of F it is easy to prove that:

[P(x, y) ⊆ P(z, w) and P(y, x) ⊇ P(w, z)] => [x >- y => z >- w].   (6)

Consider any two disjoint subsets A, B ⊆ N and let:

A ▷ B <=> [x >- y, for some x, y ∈ X such that P(x, y) = A and P(y, x) = B].

Equations (5) and (6) show that ▷ is asymmetric and monotonic. In view of (5), it is clear that:

x >- y <=> P(x, y) ▷ P(y, x).   □


The binary relation >- is said to be coarse on attribute i ∈ N (Ci) if,

[(xi, yi) >-i (yi, yi) or (yi, yi) >-i (yi, xi)]  =>  [Not[(zi, wi) >-i (xi, yi)] and Not[(yi, xi) >-i (wi, zi)]],

for all xi, yi, zi, wi ∈ Xi.
Intuitively, a relation is coarse on attribute i E N if as soon as a given
preference difference is larger than a null preference difference then it

cannot be beaten and its "opposite" cannot beat any preference differ-
ence. Similarly, if a preference difference is smaller than a null preference
difference, then it cannot beat any preference difference and its "oppo-
site" cannot be beaten. It is not difficult to find relations >- satisfying
Ci but not Cj for j ≠ i. We say that >- is coarse (C) if it is coarse on all i ∈ N.

Proposition 2 We have:
i. C, ARC1 and ARC2 are independent conditions,
ii. if >- satisfies ARC1 and ARC2 then [C holds] <=> [≿i* has at most three equivalence classes, for all i ∈ N].

Proof of proposition 2
i. Using a nontrivial additive utility model, it is easy to build examples of relations satisfying ARC1 and ARC2 and violating C. The two examples used in the proof of part ii. of lemma 1 show that there are asymmetric relations >- satisfying C and ARC1 (resp. ARC2) but violating ARC2 (resp. ARC1).
ii. Suppose that ARC1 and ARC2 hold. Let us show that [≿i* has at most three equivalence classes, for all i ∈ N] => C. Suppose that C is violated with (xi, yi) >-i (yi, yi). We have either (zi, wi) >-i (xi, yi) or (yi, xi) >-i (wi, zi), for some zi, wi ∈ Xi. Since >-i ⊆ >-i* and ≿i* is a reversible weak order, it is easy to see that either case implies that ≿i* has at least five equivalence classes. The case (yi, yi) >-i (yi, xi) is similar.
Let us now show that C => [≿i* has at most three equivalence classes, for all i ∈ N]. Suppose that (xi, yi) >-i* (yi, yi) so that either (xi, yi) >-i (yi, yi) or (yi, yi) >-i (yi, xi). In either case, C implies, for all zi, wi ∈ Xi, (xi, yi) ≿i (zi, wi) and (wi, zi) ≿i (yi, xi) so that (xi, yi) ≿i* (zi, wi). Therefore if (xi, yi) >-i* (yi, yi) then (xi, yi) ≿i* (zi, wi) for all zi, wi ∈ Xi. Similarly, it is easy to prove that (yi, yi) >-i* (xi, yi) implies (zi, wi) ≿i* (xi, yi) for all zi, wi ∈ Xi. This implies that ≿i* has at most three equivalence classes. □

Combining theorem 2 with proposition 2 therefore leads to a charac-


terization of strict concordance relations. We have:

Theorem 3 Let >- be a binary relation on a finite set X = ∏_{i=1}^n Xi. The following are equivalent:
i. >- is asymmetric and satisfies ARC1, ARC2 and C,
ii. >- is a strict concordance relation.

It is interesting to observe that this characterization uses two condi-


tions (ARC1 and ARC2) that are far from being specific to concordance
methods. In fact, as shown in Bouyssou and Pirlot, 2000, these condi-
tions can be considered as the building blocks of most conjoint mea-
surement models. The specificity of strict concordance relations lies in
condition C which imposes that only a very rough differentiation of pref-
erence differences is possible on each attribute. Clearly, C should not
be viewed as a condition with normative content. In line with Bouyssou
et al., 1993, it is simply used here as a means to point out the specifici-
ties of strict concordance relations. It is easy, but not very informative,
to reformulate C in terms of >-. We leave to the reader the easy proof
of the following:

Proposition 3 If >- satisfies ARC1 and ARC2 then C holds if and only if, for all i ∈ N, all xi, yi ∈ Xi, all x-i, y-i ∈ X-i and all z, w ∈ X,

[(xi, x-i) >- (yi, y-i) and Not[(yi, x-i) >- (yi, y-i)]]  or  [Not[(yi, x-i) >- (xi, y-i)] and (yi, x-i) >- (yi, y-i)]

implies

[z >- w => (xi, z-i) >- (yi, w-i)]  and  [(yi, w-i) >- (xi, z-i) => w >- z].

5. Discussion and remarks


5.1. Strict concordance relations and
noncompensatory preferences
It has long been thought (see Bouyssou, 1986; Bouyssou and Vansnick,
1986) that the notion of noncompensatory preferences, as defined in
Fishburn, 1976, provided the adequate framework for the characteri-
zation of strict concordance relations. We think that the framework
provided by model (M) is more general and adequate for doing so.
P.C. Fishburn's definition of noncompensatory preferences (see Fishburn, 1976) starts with an asymmetric binary relation >- on X = ∏_{i=1}^n Xi. Let >-(x, y) = {i : xi >-i yi} and ∼(x, y) = {i : xi ∼i yi}. It is clear that, for all x, y ∈ X, >-(x, y) ∩ >-(y, x) = ∅, ∼(x, y) = ∼(y, x) and >-(x, y) ∩ ∼(x, y) = ∅. Note that, in general, it is not true that >-(x, y) ∪ ∼(x, y) ∪ >-(y, x) = N since the relations ≿i might be incomplete.

Definition 2 (Fishburn, 1976)
The binary relation >- is said to be noncompensatory (in the asymmetric sense) if:

[>-(x, y) = >-(z, w) and >-(y, x) = >-(w, z)]  =>  [x >- y <=> z >- w],   (NC)

for all x, y, z, w ∈ X.

Hence, when >- is noncompensatory, the preference between x and Y


only depends on the subsets of attributes favoring x or y. It does not
depend on preference differences between the various levels on each at-
tribute besides the distinction between "positive", "negative" and "neu-
tral" attributes. Some useful properties of noncompensatory preferences
are summarized in the following:

Proposition 4 If an asymmetric relation >- is noncompensatory, then:

i. >- is independent,
ii. xi ∼i yi for all i ∈ N => x ∼ y,
iii. xj >-j yj for some j ∈ N and xi ∼i yi for all i ∈ N \ {j} => x >- y,
iv. all influent attributes are essential.

Proof of proposition 4
i. Since ∼i is reflexive by construction, the definition of noncompensation implies that >- is independent for N \ {i}. Hence, >- is independent.
ii. Suppose that xi ∼i yi for all i ∈ N and x >- y. Since >- is noncompensatory and ∼i is reflexive, this would lead to x >- x, contradicting the asymmetry of >-.
iii. By definition, xi >-i yi <=> [(xi, z-i) >- (yi, z-i) for all z-i ∈ X-i]. Since ∼i is reflexive, the desired conclusion follows from the definition of noncompensation.
iv. Attribute i ∈ N being influent, there are xi, yi, zi, wi ∈ Xi and x-i, y-i ∈ X-i such that (xi, x-i) >- (yi, y-i) and Not[(zi, x-i) >- (wi, y-i)]. In view of NC, it is impossible that xi ∼i yi and zi ∼i wi. Hence attribute i is essential. □

It is not difficult to see that there are strict concordance relations


violating all conditions in proposition 4 except independence. Examples
of such situations are easily built using a strict concordance relation
defined by:

xPy <=> Σ_{i ∈ P(x,y)} wi > Σ_{j ∈ P(y,x)} wj + c   (7)

where c > 0 and wi > 0 for all i ∈ N. Letting wj < c on some attributes easily leads to the desired conclusions (e.g. an attribute such that wj < c is not essential but may well be influent).
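The following tiny numerical check (ours) instantiates (7) with two attributes, assumed weights w = (8, 3) and threshold c = 5, so that w2 < c: the second attribute can never establish a preference on its own, hence it is not essential, but it can block a conclusion, hence it is influent.

```python
# Assumed weights and threshold (any choice with w2 < c illustrates the point).
w, c = (8, 3), 5

def P(x, y):
    """Rule (7): x P y iff the total weight supporting x exceeds the weight supporting y by more than c."""
    pro_x = sum(w[i] for i in range(2) if x[i] > y[i])
    pro_y = sum(w[i] for i in range(2) if y[i] > x[i])
    return pro_x > pro_y + c

print(P((1, 1), (0, 0)))   # True : both attributes support x, 11 > 5
print(P((1, 0), (0, 1)))   # False: 8 > 3 + 5 fails, so attribute 2 changes the comparison (influent)
print(P((1, 1), (1, 0)))   # False: attribute 2 alone (weight 3) never exceeds c = 5 (not essential)
```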
Hence basing the analysis of concordance relations on condition NC
leads to a somewhat narrow view of concordance relations. Noncompen-
sation implies that all influent attributes are essential, whereas this is
not the case for strict concordance relations.
When >- is noncompensatory, it is entirely defined by the partial pref-
erence relations on each attribute and an asymmetric importance rela-
tion between disjoint subsets of attributes. We formalize this idea below
using a strengthening of NC including an idea of monotonicity (see also
Fargier and Perny, forthcoming).

Definition 3
The binary relation >- is said to be monotonically noncompensatory (in the asymmetric sense) if:

[>-(x, y) ⊆ >-(z, w) and >-(y, x) ⊇ >-(w, z)]  =>  [x >- y => z >- w],   (MNC)

for all x, y, z, w ∈ X.

It is clear that MNC => NC. We have:

Proposition 5 The following are equivalent:


i >- is a strict concordance relation in which all attributes are essen-
tial,

ii >- is an asymmetric binary relation satisfying MNC.

Proof of proposition 5
i => ii. Since each attribute is essential, it is easy to see that {i} ▷ ∅ so that Pi = >-i. The conclusion therefore follows.
ii => i. Letting Pi = >-i and defining ▷ by:

[A ▷ B] <=> [x >- y, for some x, y ∈ X such that >-(x, y) = A and >-(y, x) = B]

easily leads to the desired conclusion. □

Therefore, all asymmetric relations satisfying MNC are strict concor-


dance relations and the converse is true as soon as all attributes are
supposed to be essential. In our nontransitive setting, assuming that
all attributes are essential is far from being an innocuous hypothesis.
It implies that the relations Pi used to show that >- is a strict concordance relation must coincide with the relations >-i deduced from >- by independence. Equation (7) shows that this is indeed restrictive.
Therefore, it seems that the use of NC or MNC for the analysis of
strict concordance relations:

i leads to a somewhat narrow view of strict concordance relations


excluding all relations in which attributes may be influent without
being essential,

ii does not allow to point out the specific features of strict concor-
dance relations within a general framework of conjoint measure-
ment (conditions NC and MNC are indeed quite different from the
classical cancellation conditions used in most conjoint measure-
ment models, and most importantly, the additive utility model
(see Krantz et al., 1971; Debreu, 1960; Fishburn, 1970; Wakker,
1989)),

iii amounts to using very strong conditions (see the simple proof of
proposition 5).

5.2. Transitivity of partial preferences


Our definition of strict concordance relations (3) does not require
the relations Pi to possess any remarkable property besides asymmetry.
This is at variance with what is done in most outranking methods which
use relations Pi being strict semiorders. It might be thought that this
additional condition might lead to an improved characterization of strict
concordance relations. However, it is shown in Bouyssou and Pirlot,
2001 that the various conditions that can be used to decompose the
functions Pi in model (M) so as to consider preference differences which
are governed by an underlying weak order (as in the case of semiorders)
are independent from ARC1 and ARC2. These additional conditions
are furthermore independent from C. Therefore there is little hope to
arrive at a more powerful characterization adding the hypothesis that
Pi are strict semiorders.

5.3. Transitivity of concordance relations and Arrow's theorem
One advantage of the use of conditions NC and MNC is that they allow
to clearly understand the conditions under which ~ may possess "nice
transitivity properties". This is not surprising since NC (resp. MNC) is
very much like a "single profile" analogue of Arrow's Independence of
Irrelevant Alternatives (see Arrow, 1963) (resp. the NIM condition used
in Sen, 1986). Therefore, as soon as the structure of X is sufficiently
rich, imposing nice transitivity properties on a noncompensatory relation
~ leads to a very uneven distribution of "power" between the various
attributes (see Fishburn, 1976; Bouyssou, 1992).
It is not difficult to see that similar results hold with strict concor-
dance relations. We briefly present below one such result as an example,
extending to our case a single profile result due to Weymark, 1983. Other
results in Fishburn, 1976; Bouyssou, 1992; Perny and Fargier, 1999 can
be reformulated in a similar way.
Proposition 6 Let >- be a nonempty strict concordance relation on a finite set X = ∏_{i=1}^n Xi. Suppose that >- has been obtained using, on each i ∈ N, a relation Pi for which there are ai, bi, ci ∈ Xi such that aiPibi, biPici and aiPici. Then, if >- is transitive, it has an oligarchy, i.e. there is a unique nonempty O ⊆ N such that, for all x, y ∈ X:
• xiPiyi for all i ∈ O => x >- y,
• xiPiyi for some i ∈ O => Not[y >- x].

Proof of proposition 6
We say that a nonempty set J ⊆ N is:
• decisive if, for all x, y ∈ X, [xiPiyi for all i ∈ J] => x >- y,
• semi-decisive if, for all x, y ∈ X, [xiPiyi for all i ∈ J] => Not[y >- x].
Hence, an oligarchy O is a decisive set such that all {i} ⊆ O are semi-decisive.
Since >- is a strict concordance relation, it is easy to prove that:

[P(x, y) = J, P(y, x) = N \ J and x >- y, for some x, y ∈ X] => J is decisive,

and

[P(x, y) = J, P(y, x) = N \ J and Not[y >- x], for some x, y ∈ X] => J is semi-decisive.
Since >- is nonempty, we have, for all x, y ∈ X:

[xiPiyi for all i ∈ N] => x >- y,

so that N is decisive.
Since N is finite, there exists (at least) one decisive set of minimal cardinality. Let J be one of them. We have [xiPiyi for all i ∈ J] => x >- y. If |J| = 1, then the conclusion follows. If not, consider i ∈ J and use the elements ai, bi, ci ∈ Xi such that aiPibi, biPici and aiPici to build the following alternatives in X:

       {i}    J \ {i}    N \ J
  a     ci      aj         bl
  b     ai      bj         cl
  c     bi      cj         al

J being decisive, we have b >- c. If a >- c, then J \ {i} is decisive, violating the fact that J is a decisive set of minimal cardinality. We thus have Not[a >- c] and the transitivity of >- leads to Not[a >- b]. This shows that {i} is semi-decisive. Therefore all singletons in J are semi-decisive.
The proof is completed observing that J is necessarily unique. In fact, suppose that there are two sets J and J' with J ≠ J' satisfying the desired conclusion. We use the elements ai, bi ∈ Xi such that aiPibi to build the following alternatives in X:

       J      J' \ J    N \ [J ∪ J']
  d    aj       bk          al
  e    bj       ak          al

We have, by construction, d >- e and Not[d >- e], a contradiction. □

5.4. A possible definition of the degree of compensation of a binary relation
Within the general framework of model (M), our results show that the relations ≿i* seem central to understanding the possibility of trade-offs between attributes.
We therefore tentatively suggest that the "degree of compensation" of an asymmetric binary relation >- on a finite set X = X1 × X2 × ... × Xn satisfying ARC1 and ARC2 should be linked to the number ci* of distinct equivalence classes of ≿i* on each attribute. We have ci* ≤ 3, for all i ∈ N, if and only if >- is a strict concordance relation (see theorem 2). Letting |Xi| = ni, ci* can be as large as ni × (ni - 1) + 1 when >- is representable in an additive utility model or an additive difference model.
A reasonable way of obtaining an overall measure of the degree of compensation of >- consists in taking:

c** = max_{i=1,2,...,n} ci*.

This leads to c** ≤ 3 iff >- is a strict concordance relation.
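A computational sketch of this proposal (ours) is given below for small finite product sets: it builds ≿i and ≿i* directly from their definitions, counts the equivalence classes of the symmetric part of ≿i*, and returns c**. Completeness of ≿i* (i.e. ARC1 and ARC2) is assumed, so that grouping each pair by comparison with a class representative is legitimate.

```python
from itertools import product

def compensation_degree(levels, pref):
    """levels: list of per-attribute level tuples; pref: set of ordered pairs (x, y) with x > y."""
    n = len(levels)
    others = lambda i: list(product(*(levels[j] for j in range(n) if j != i)))
    insert = lambda i, xi, rest: rest[:i] + (xi,) + rest[i:]

    def geq_i(i, p1, p2):
        # (x_i, y_i) >=_i (z_i, w_i): every comparison won by (z_i, w_i) is also won by (x_i, y_i).
        (xi, yi), (zi, wi) = p1, p2
        return all((insert(i, zi, a), insert(i, wi, b)) not in pref
                   or (insert(i, xi, a), insert(i, yi, b)) in pref
                   for a in others(i) for b in others(i))

    def geq_i_star(i, p1, p2):
        return geq_i(i, p1, p2) and geq_i(i, (p2[1], p2[0]), (p1[1], p1[0]))

    c_star = []
    for i in range(n):
        classes = []                                   # equivalence classes of the symmetric part
        for p in product(levels[i], repeat=2):
            for cl in classes:
                if geq_i_star(i, p, cl[0]) and geq_i_star(i, cl[0], p):
                    cl.append(p)
                    break
            else:
                classes.append([p])
        c_star.append(len(classes))
    return max(c_star)

# Assumed example: two attributes, preference generated by a weighted majority rule.
levels = [(0, 1), (0, 1)]
w = (2, 1)
X = list(product(*levels))
pref = {(x, y) for x in X for y in X
        if sum(w[i] for i in range(2) if x[i] > y[i]) > sum(w[i] for i in range(2) if y[i] > x[i])}
print(compensation_degree(levels, pref))   # 3: this relation is a strict concordance relation
```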


An aggregation technique can produce a whole set of binary relations on a finite set X = X1 × X2 × ... × Xn depending on the choice of various parameters. We suggest to measure the degree of compensation of an aggregation technique (always producing asymmetric binary relations satisfying ARC1 and ARC2) as the maximum value of c** taken over the set of binary relations on X that can be obtained with this technique.
Since an additive utility model can be used to represent lexicographic
preferences on finite sets, the choice of the operator "max" should be
no surprise: using "min" would have led to a similar measure for meth-
ods based on concordance and methods using additive utilities and it
is difficult to conceive an "averaging" operator that would be satisfac-
tory. Using such a definition, aggregation methods based on concordance
have the minimal possible measure (i.e., 3), whereas the additive utility
model has a much higher value (the precise value depends on ni and n).
It should finally be noted that our proposals are at variance with Roy,
1996a who uses a more topological approach to the idea of compensation.
The validation of our proposals and their extension to sets of arbitrary
cardinality clearly call for future research.

5.5. Discordance
An immediate generalization of definition 1 is the following:
Definition 4 (Strict concordance-discordance relations)
A binary relation P on X is said to be a strict concordance-discordance
relation if there are:
• an asymmetric binary relation ▷ between disjoint subsets of N that is monotonic and,
• asymmetric binary relations Pi and Vi such that Vi ⊆ Pi on each Xi (i = 1, 2, ..., n),
such that, for all x, y ∈ X:

xPy <=> [P(x, y) ▷ P(y, x) and (Not[yjVjxj], for all j ∈ P(y, x))],   (8)

where P(x, y) = {i ∈ N : xiPiyi}.

The only attempt at a characterization of discordance effects in out-


ranking methods we are aware of is Bouyssou and Vansnick, 1986. It is
based on an extension of NC allowing to have x >- y and N ot[z >- w]
when >- (x, y) = >- (z, w) and >- (y, x) = >- (w, z). This analysis, based on
NC, is therefore subject to the criticisms made in section 5.1 (let us also
mention that such an analysis cannot be easily extended to outranking
methods producing binary relations that are not necessarily asymmet-
ric, e.g. ELECTRE I; in that case, discordance effects may well create
situations in which x >- y and w >- z while P(x,y) = P(z,w) and
P(y, x) = P(w, z), through destroying what would have otherwise been
indifference situations x "" y and z "" w). Furthermore, the above-
mentioned extension of NC is far from capturing the essence of discor-
dance effects, i.e. the fact that they occur attribute by attribute, leaving
no room for possible interactions between negative preference differences.
The prevention of such interactions has led to the introduction of rather
ad hoc axioms in Bouyssou and Vansnick, 1986.
It is not difficult to see that strict concordance-discordance relations always satisfy ARC1 and ARC2 with relations ≿i* having at most 5 distinct equivalence classes (compared to strict concordance relations, the
two new classes correspond to "very large" positive and negative prefer-
ence differences). However, model (M) is clearly not well adapted to pre-
vent the possibility of interactions between very large negative preference
differences, as is the case for discordance effects. Simple examples show
that if the class of relations >- satisfying ARC1 and ARC2 with relations ≿i* having at most 5 equivalence classes contains all strict concordance-
discordance relations, it contains many more relations. This clearly calls
for future research. We nevertheless summarize our observations in the
following:

Proposition 7
i. If >- is a strict concordance-discordance relation then >- satisfies model (M) with all relations ≿i* having at most 5 distinct equivalence classes.
ii. There are relations >- satisfying model (M) with all relations ≿i* having at most 5 equivalence classes which are not strict concordance-discordance relations.

Proof of proposition 7
i. Given the properties of model (M), the claim will be proven if we build a representation of >- in model (M) with functions pi taking only five distinct values. Define pi as:

pi(xi, yi) = 2 if xiViyi; 1 if xiPiyi and Not[xiViyi]; 0 if xiIiyi; -1 if yiPixi and Not[yiVixi]; -2 if yiVixi.

Since Vi and Pi are asymmetric and Vi ⊆ Pi, the function pi is well-defined and skew-symmetric. Define F letting:

F(p1(x1, y1), p2(x2, y2), ..., pn(xn, yn)) = 1 if x >- y; -1 if y >- x; 0 otherwise.

Using the definition of a strict concordance-discordance relation, it is routine to show that F is well-defined, odd and nondecreasing.
ii. Using an additive utility model, it is easy to build examples of relations having a representation in model (M) with all relations ≿i* having at most 5 equivalence classes which are not strict concordance-discordance relations. □

5.6. Discussion
The main contribution of this paper was to propose a characterization
of strict concordance relations within the framework of a general model
for nontransitive conjoint measurement. This characterization allows
to show the common features between various conjoint measurement
models and to isolate the specific feature of strict concordance relations,
i.e. the option not to distinguish a rich preference difference relation on
each attribute. It was shown to be more general than previous ones
based on NC or MNC.
Although we restricted our attention to asymmetric relations, it is not difficult to extend our analysis, using the results in Bouyssou and Pirlot, 2000, to cover the reflexive case studied in Fargier and Perny, forthcoming, in which:

xSy <=> [S(x, y) ⊵ S(y, x)]

where S is a reflexive binary relation on X, Si is a complete binary relation on Xi, ⊵ is a reflexive binary relation on 2^N and S(x, y) = {i ∈ N : xiSiyi}.

Further research on the topics discussed in this paper could involve:

• the extension of our results to cover the case of an homogeneous


Cartesian product, which includes the important case of decision
under uncertainty. "Ordinal" models for decision under uncer-
tainty (e.g. lifting rules) have been characterized in Perny and
Fargier, 1999 using variants of NC and MNC. It appears that our
analysis can be easily extended to cover that case, see Bouyssou
et al., 2000.

• a deeper study of discordance effects within model (M). Such a


work could possibly allow for a characterization of strict concordan-
ce-discordance relations in our conjoint measurement framework.

• a study of various variants of model (M) following the approach in


Bouyssou and Pirlot, 2000; Bouyssou and Pirlot, 2001.

Acknowledgments
We wish to thank Patrice Perny for his helpful comments on an earlier
draft of this text. The usual caveat applies.

References
Arrow, K.J. (1963). Social choice and individual values. Wiley, New York, 2nd edition.
Bouyssou, D. (1986). Some remarks on the notion of compensation in MCDM. European
Journal of Operational Research, 26:150-160.
Bouyssou, D. (1992). On some properties of outranking relations based on a concordance-
discordance principle. In Duckstein, L., Goicoechea, A., and Zionts, S., editors,
Multiple criteria decision making, pages 93-106. Springer-Verlag, Berlin.
Bouyssou, D. (1996). Outranking relations: Do they have special properties? Journal
of Multi-Criteria Decision Analysis, 5:99-111.
Bouyssou, D. (2001). Outranking methods. In Floudas, C. and Pardalos, P., editors,
Encyclopedia of optimization. Kluwer.
Bouyssou, D., Perny, P., and Pirlot, M. (2000). Nontransitive decomposable conjoint
measurement as a general framework for MCDM and decision under uncertainty.
Communication to EURO XVII, Budapest, Hungary, 16-19 July.
Bouyssou, D., Perny, P., Pirlot, M., Tsoukias, A., and Vincke, Ph. (1993). A manifesto
for the new MCDM era. Journal of Multi-Criteria Decision Analysis, 2:125-127.
Bouyssou, D. and Pirlot, M. (2000). Non transitive decomposable conjoint measure-
ment: General representation of non transitive preferences on product sets. Work-
ing Paper.
Bouyssou, D. and Pirlot, M. (2001). 'Additive difference' models without additivity
and subtractivity. Working Paper.
Bouyssou, D. and Vansnick, J.-C. (1986). Noncompensatory and generalized noncom-
pensatory preference structures. Theory and Decision, 21:251-266.

Debreu, G. (1960). Topological methods in cardinal utility theory. In Arrow, K.J., Karlin, S., and Suppes, P., editors, Mathematical methods in the social sciences, pages 16-26. Stanford University Press.
Fargier, H. and Perny, P. (forthcoming). Modélisation des préférences par une règle de concordance généralisée. In Roy, B., et al., editors, AMCDA, Selected papers from the 49th and 50th meetings of the EURO Working Group on Multicriteria Aid for Decisions. European Union.
Fishburn, P.C. (1970). Utility theory for decision-making. Wiley, New York.
Fishburn, P.C. (1976). Noncompensatory preferences. Synthese, 33:393-403.
Fishburn, P.C. (1990a). Additive non-transitive preferences. Economic Letters, 34:317-
321.
Fishburn, P.C. (1990b). Continuous nontransitive additive conjoint measurement.
Mathematical Social Sciences, 20:165-193.
Fishburn, P.C. (1991). Nontransitive additive conjoint measurement. Journal of Math-
ematical Psychology, 35:1-40.
Fishburn, P.C. (1992). Additive differences and simple preference comparisons. Jour-
nal of Mathematical Psychology, 36:21-31.
Keeney, R.L. and Raiffa, H. (1976). Decisions with multiple objectives: Preferences
and value tradeoffs. Wiley.
Krantz, D.H., Luce, R.D., Suppes, P., and Tversky, A. (1971). Foundations of mea-
surement, volume 1: Additive and polynomial representations. Academic Press, New
York.
Perny, P. and Fargier, H. (1999). Qualitative decision models under uncertainty with-
out the commensurability assumption. In Laskey, K. and Prade, H., editors, Pro-
ceedings of Uncertainty in Artificial Intelligence, pages 188-195. Morgan Kaufmann
Publishers.
Roy, B. (1968). Classement et choix en présence de points de vue multiples (la méthode ELECTRE). RIRO, 2:57-75.
Roy, B. (1991). The outranking approach and the foundations of ELECTRE methods.
Theory and Decision, 31:49-73.
Roy, B. (1996a). Les logiques compensatoires et les autres. Research paper # 16,
LAMSADE, Universite de Paris-Dauphine.
Roy, B. (1996b). Multicriteria methodology for decision aiding. Kluwer, Dordrecht.
Original version in French: "Methodologie multicritere d'aide a la decision", Eco-
nomica, Paris, 1985.
Roy, B. and Bouyssou, D. (1993). Aide multicritere a la decision: Methodes et cas.
Economica, Paris.
Sen, A.K (1986). Social choice theory. In Arrow, KJ. and Intriligator, M.D., editors,
Handbook of mathematical economics, volume 3, pages 1073-1181. North-Holland,
Amsterdam.
Tversky, A. (1969). Intransitivity of preferences. Psychological Review, 76:31-48.
Vanderpooten, D. (1990). The construction of prescriptions in outranking methods.
In Bana e Costa, C.A., editor, Readings in multiple criteria decision aid, pages
184-215. Springer Verlag, Berlin.
Vansnick, J.-C. (1986). On the problems of weights in MCDM (the noncompensatory
approach). European Journal of Operational Research, 24:288-294.
Vincke, Ph. (1992). Multi-criteria decision aid. Wiley, New York. Original version in French: "L'aide multicritère à la décision", Éditions de l'Université de Bruxelles - Éditions Ellipses, Brussels, 1989.
Vincke, Ph. (1999). Outranking approach. In Gal, T., Stewart, T., and Hanne, T., ed-
itors, Multicriteria decision making, Advances in MCDM models, algorithms, theory
and applications, pages 11.1-11.29. Kluwer.
Vind, K. (1991). Independent preferences. Journal of Mathematical Economics, 20:119-
135.
Wakker, P.P. (1988). Derived strength of preference relations on coordinates. Eco-
nomic Letters, 28:301-306.
Wakker, P.P. (1989). Additive representations of preferences - A new foundation of
decision analysis. Kluwer, Dordrecht.
Weymark, J. (1983). Arrow's theorem with quasi-orderings. Public Choice, 42:235-
246.
FROM CONCORDANCE / DISCORDANCE
TO THE MODELLING
OF POSITIVE AND NEGATIVE REASONS
IN DECISION AIDING

Alexis Tsoukias
LAMSADE - CNRS, Universite Paris Dauphine, France
tsoukias@lamsade.dauphine.fr

Patrice Perny
LIP6, Universite Paris 6, France
Patrice.Perny@lip6.fr

Philippe Vincke
SMG - ISRO, Universite Libre de Bruxelles, Belgium
pvincke@ulb.ac.be

Abstract The principle of concordance / discordance was introduced by B. Roy


in his very early work on Multiple Criteria Decision Analysis. Although
such a principle is grounded by strong evidence from real life decision
situations, the way in which it has been implemented in existing MCDA
methods allows only for its partial and limited use. Indeed, the principle
lacks a theoretical frame enabling a more general use in decision analy-
sis. The paper presents a possible generalisation of this principle under
the concepts of positive and negative reasons. For this purpose, a new
formalism (a four-valued logic) is suggested. Under such a formalism
the concordance test is seen as the evaluation of the existence of positive
reasons supporting the sentence "x is at least as good as y", while the
discordance test can be viewed as the evaluation of the existence of neg-
ative reasons against the same sentence. A number of results obtained
in preference modelling and aggregation shows the potentiality of this
approach.

Keywords: Concordance/discordance principle; Preference modelling; Positive and


negative reasons; Four-valued logic

1. Introduction
Consider a Parliament. The government has the support of the ma-
jority of seats, although not a very strong one. Suppose now that a law
on a very sensitive issue (such as education, religion, national defence,
minority rights etc.) is introduced for discussion by the government.
Several political, social and ethical issues are involved. Suppose finally
that the opposition strongly mobilises, considering that the law is a ma-
jor attack against "something". Massive demonstrations are organised,
an aggressive media campaign is pursued, etc. It is quite reasonable
that the government will try to find a compromise on some aspects of
the law in order to improve its "acceptability". Note, however, that
such a compromise concerns aspects argued by the minority and not the
majority.
Which decision rule is the government using to choose an appropriate
law proposal in such a situation? A law proposal x is considered "better"
than proposal y iff it meets the majority will and does not mobilise the
minority aversion. It should be observed that the minority is considered
here as an independent decision power source. Such a "decision rule"
is a regular practice in all mature democracies. Although the minority
does not have the power to impose its political will, it has the possibility
of expressing a "veto", at least occasionally. Such a "negative power"
may not necessarily be codified somewhere, but is accepted. Actually, it
is also a guarantee of the democratic game. When the present majority
becomes a minority it will be able to use the same "negative power".
Consider now the Security Council of the United Nations. Here, a
number of nations are officially endowed with a veto power such that
resolutions taken with a majority of votes (even the highest ones) can
be withdrawn if such a veto is used. We observe that in this case the
decision rule "x is better than y if it is the case for the majority and no
veto is used against x" is officially adopted. Again we observe that the
countries having a veto power do not have a "positive power" (impose a
decision), but only a "negative" one.
Finally, consider the very common situation where the faculty has to
deliberate on the admission of candidates to a course (let's say a manage-
ment course). Then consider two candidates: the first, x, having quite
good grades, systematically better than the second, y, but with a very
bad grade in management science; then candidate y, who is systemati-
cally worse than x, but has an excellent grade in management science.
Several faculty members will claim that, although candidate y is not
better than candidate x, it is also difficult to consider x better than y
due to their inverse quality concerning the key class of the course, man-
agement science. The same faculty members will also claim that the two
candidates cannot be considered indifferent because they are completely
different. These members are intuitively adopting the same decision rule
as in the previous two examples: candidate x is better than candidate y
iff (s)he has a majority of grades in (her)his favour and is not worse in
a number of key classes. For an extensive discussion on the question of
grades in decision support see Bouyssou et al., 2000.
If we consider a class grade of a candidate as (her)his value on a
criterion, the reader will observe that in the above decision rule there
exist criteria having a "negative power". Such a "negative power" is not
compensated by the "positive power" of the majority of criteria. It acts
independently and only in a negative sense.
We could continue with several other real life examples going from
vendor rating to bid selection and loan allowance. In all such cases it is
frequent to find the intuitive decision rule: alternative x is better than
alternative y iff there is a majority of "reasons" supporting x wrt to y
and there is no strong opposition to x wrt to y.
In order to be more formal we will use a large preference relation of
the type "x is at least as good as y" (denoted S (x, y), also known as
"outranking" relation) such that:
S(x, y) <=> C(x, y) ∧ ¬D(x, y)   (1)

where:
C(x, y) means there is a majority of reasons supporting x wrt to y;
D(x, y) means there is a strong opposition to x wrt to y;
∧ and ¬ being the conjunction and negation operators respectively.

We use the predicate C(x, y) in order to verify a concordance test


concerning x wrt to y and the predicate D(x, y) in order to verify a
discordance test concerning x wrt to y.
As we saw, this is a widely used empirical decision rule. The legitimate
questions are: how can such a rule be used in a decision support method?
Under which conditions can it be applied and what type of results should
we expect? On which theoretical grounds can such a rule be formalised
as a general principle?
In this paper we will try to contribute to the discussion on the above
questions. Section 2 introduces, in general terms, the methods adopting
the concordance / discordance principle in the area of Multiple Criteria
Decision Analysis. Such methods are well known under the name of out-
ranking methods. A critical discussion on a number of problems arising
from such methods is introduced in this section. Then section 3 sug-
gests a generalisation of the concordance /discordance principle under

the positive and negative reasons approach. Such an approach suggests


a general frame under which different problems of preference modelling
and aggregation can be viewed. In this section we introduce a number
of theoretical results based on the use of new formalisms extending the
expressive power of first order languages. Several open questions are
also introduced.

2. Concordance / Discordance in MCDA


2.1. Crisp outranking relations
The use of the concordance / discordance principle in decision support
methods dates back to the seminal paper of Roy, 1968, where it was first
introduced, beginning the well known ELECTRE family of the Multiple
Criteria Decision Analysis methods (see Roy, 1991).
The idea is simple. Consider formula 1. If we are able to associate a
criterion on which alternatives can be compared to each "reason" (for
the concept of criterion and of coherent family of criteria under this
perspective, see Roy and Bouyssou, 1993; Vincke, 1992b), then C(x,y)
represents the existence of a "significant" coalition of criteria for which
"x is at least as good as y" and D(x, y) represents the existence of a
"significant opposition" against this proposition. To give an example,
due to Roy, 1968, we can use the following definitions:

C(x, y) <=> Σ_{j ∈ J^S_xy} wj ≥ γ   (2)

D(x, y) <=> ∃ j ∈ {1, ..., n} : gj(y) - gj(x) > vj   (3)

where:

gj, j = 1, ... , n are the criteria, to be maximised;


wj are importance coefficients associated to each criterion;
J^S_xy represents the set of criteria for which x is at least as good as y; more precisely, J^S_xy = {j ∈ {1, ..., n} : gj(y) - gj(x) ≤ qj}, where qj is the indifference threshold attached to criterion gj;
γ is a majority threshold;
vj is a veto threshold on criterion j.
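The sketch below (ours, in the spirit of the definitions above) computes the resulting crisp outranking relation for one ordered pair; the thresholds qj and vj and all data are assumptions introduced for illustration.

```python
def outranks(gx, gy, w, q, v, gamma):
    """S(x, y): the concordant coalition is heavy enough and no criterion opposes a veto against x."""
    n = len(gx)
    J = [j for j in range(n) if gy[j] - gx[j] <= q[j]]   # criteria where x is at least as good as y
    C = sum(w[j] for j in J) >= gamma                    # concordance test, weights assumed to sum to 1
    D = any(gy[j] - gx[j] > v[j] for j in range(n))      # discordance (veto) test
    return C and not D

gx, gy = (12.0, 7.0, 5.0), (10.0, 8.0, 11.0)   # assumed evaluations on three criteria
w = (0.45, 0.35, 0.20)
q = (1.0, 1.0, 1.0)    # indifference thresholds
v = (8.0, 8.0, 8.0)    # veto thresholds
print(outranks(gx, gy, w, q, v, gamma=0.7))    # True: concordance 0.80, largest opposing gap 6 < 8
```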

In this case a sufficiently strong, let us say positive, coalition is any


subset of criteria of which the sum of the importance coefficients is at
least γ. A sufficiently strong, let's say negative, coalition is any single
criterion provided it is endowed with veto power. The relation S is better

known as an "outranking relation" (see also Ostanello, 1985; Vincke,


1999). A large part of the so-called "outranking methods" is based on
this principle with a number of possible variations, since C(x, y) and
D(x, y) can be defined using a large variety of formulas.
Besides such variations in the definition of C and/or D it should
be noted that various sophistications of classical concordance and non-
discordance rules have been proposed to extend their ability to discrim-
inate or their descriptive power. In this respect, let us observe that:

• defining concordance and discordance tests in terms of all or noth-


ing conditions is not always adequate (see e.g. Perny and Roy,
1992). As we shall see in the next subsection, in some situations,
it is worthwhile to consider a "concordance" and a "discordance"
index for each ordered pair of alternatives, opening the way to the
establishment of a "valued outranking relation" .
• representing the strength of coalitions of criteria by an additive
and/or decomposable measure is not necessarily adequate. As
shown in Fargier and Perny, 2001; Grabisch and Perny, 2001, in
some situations, preferences require non-additivity to be repre-
sentable by a concordance rule.
Readers aware of social choice theory will recognise in the above for-
mula a variation of a Condorcet type majority rule. From such a per-
spective it should be noted that:

• the binary relation S defined in this way can only be guaranteed


to be reflexive (on this point see Bouyssou, 1996);
• in other terms the relation S is not an ordering relation (neither
completeness nor transitivity can be guaranteed) and, from an
operational point of view, can be of little help on its own;
• from the above reasons it appears necessary, once the relation S
is established, to use a so-called "exploitation procedure", which
is an algorithm that transforms such a relation into an ordering
relation (at least a partial order).

Concerning "exploitation procedures" and more generally outrank-


ing methods, see Vanderpooten, 1990; Vincke, 1992a; Bouyssou, 1992a;
Bouyssou, 1992b; Bouyssou and Perny, 1992; Bouyssou and Pirlot, 1997;
Marchant, 1996; Pirlot, 1995, for a more detailed and formal discussion.
We are not going to further analyse the so-called outranking methods, al-
though we will briefly discuss three remarks corresponding to important
research directions concerning such methods.

1 It is clear that the importance parameters and the concordance


and discordance thresholds are strongly related (see also Roy and
Mousseau, 1996). Actually both concepts are used in order to
establish what the coalitions of criteria enabling to confirm the
sentence "x is at least as good as y" are or to confirm its negation.
In fact, consider a three-criteria setting where the importance parameters are fixed at w1 = 0.45, w2 = 0.35 and w3 = 0.2. If we fix the concordance threshold at 0.7, it is equivalent to claiming that only criteria c1 and c2 can form a winning positive coalition (except for unanimity), therefore both c1 and c2 are strictly necessary for such coalitions. If we fix the concordance threshold to 0.6, it is equivalent to claiming that the winning positive coalitions now include (besides the previous ones) the one formed by c1 and c3. Only criterion c1 is strictly necessary now. Therefore, such parameters are just convenient numerical representations of a more complex issue concerning the "measurement" of the strength of each coalition of criteria with respect to the sentence "x is at least as good as y" (a small enumeration of these winning coalitions is sketched after this list).
2 All MCDA methods based on the use of "outranking relations" are
based on a two step procedure: the first establishing the outranking
relation itself through any of its many variants and the second
transforming the outranking relation into an ordering relation. Up
to now, there is no way of establishing whether a specific formula
of outranking must correspond to a specific form of "exploitation
procedure". Any combination of the two steps appears legitimate
provided it satisfies the requirements of the decision process and
the client's concerns.
3 Recently Bouyssou and Vincke, 1997; Bouyssou et al., 1997; Pirlot,
1997; Bouyssou and Pirlot, 1999; Greco et al., 2001, showed that
the precise way by which the outranking relations are defined can
be seen as an instance of non transitive, non additive conjoint
measurement. More has to be done in this direction, but a unifying
frame with other approaches in MCDA is now possible and has to
be thoroughly investigated.
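The enumeration announced in point 1 can be reproduced mechanically; the short sketch below (ours) lists, for a given concordance threshold, the coalitions whose total weight is sufficient.

```python
from itertools import combinations

w = {'c1': 0.45, 'c2': 0.35, 'c3': 0.20}   # the importance parameters of the example above

def winning_coalitions(threshold):
    names = list(w)
    return [set(coal) for r in range(1, len(names) + 1)
            for coal in combinations(names, r)
            if sum(w[c] for c in coal) >= threshold]

print(winning_coalitions(0.7))   # only {c1, c2} besides unanimity: both c1 and c2 are necessary
print(winning_coalitions(0.6))   # {c1, c3} is added: only c1 remains necessary
```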

2.2. Fuzzy Outranking relations


The concordance test defined by (2) relies on a simple definition of
the set J^S_xy of criteria concordant with the assertion x is at least as good
as y. It supposes implicitly that we are able to decide clearly whether
a criterion is concordant with the assertion or not. As recalled above, a
criterion gj is considered concordant with respect to proposition S(x, y)

iff the score difference gj(y) - gj(x) does not exceed an (indifference)
threshold qj. However, fixing a precise value for qj is not easy and
the concordance test (2) can be artificially sensitive to modifications of
criterion values, especially when the scale of criterion gj is continuous
(see Perny and Roy, 1992 and Perny, 1998 for a precise discussion on
this topic). A useful solution to overcome this difficulty was proposed
by B. Roy a long time ago (see Roy, 1978). The idea was to define a concordance index c_j(x, y), valued in the unit interval, and defined from the quantities g_j(x) and g_j(y) for each criterion g_j and each pair (x, y). By convention, c_j(x, y) = 1 means that the criterion g_j is fully concordant with the assertion S(x, y), whereas c_j(x, y) = 0 means that criterion g_j is definitely not concordant with this assertion. There is, of course, the possibility of considering intermediate values between 1 and 0, which makes the construction more expressive. It leaves room for a continuum of intermediary situations between concordance and non-concordance. As an example, we recall the definition of the concordance indices proposed by Roy in the Electre III method (Roy, 1978):

c_j(x, y) = \begin{cases} 1 & \text{if } g_j(y) - g_j(x) \leq q_j(g_j(x)) \\ 0 & \text{if } g_j(y) - g_j(x) \geq p_j(g_j(x)) \\ \frac{p_j(g_j(x)) - (g_j(y) - g_j(x))}{p_j(g_j(x)) - q_j(g_j(x))} & \text{otherwise} \end{cases}

where:

• q_j is an indifference threshold, which is a real-valued function such that, for any pair of alternatives (x, y), q_j(g_j(x)) is the maximal value of a score difference of type g_j(y) - g_j(x) that could be compatible with indifference between x and y;

• p_j is a preference threshold, which is a real-valued function such that, for any pair of alternatives (x, y), p_j(g_j(y)) is the minimal positive value of a score difference of type g_j(x) - g_j(y) that could be compatible with the preference of x over y. The condition ∀z ∈ ℝ, q_j(z) < p_j(z) is assumed.

Such a concordance index is pictured in figure 1.


Note that similar ideas apply to defining concordance indices with respect to a strict preference P(x, y) (see Brans and Vincke, 1985) and to indifference I(x, y) (Perny, 1998). In any case, the concordance index can be interpreted as the membership degree of criterion j to the concordant coalition J^S_{xy} (or J^P_{xy}, J^I_{xy}).
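A minimal sketch of this piecewise-linear index, assuming constant thresholds q < p for simplicity (the function name and the numerical values are ours):

```python
def concordance_index(gx, gy, q, p):
    """Graded concordance of one criterion with "x at least as good as y":
    equal to 1 while g(y) - g(x) stays below the indifference threshold q,
    equal to 0 once it reaches the preference threshold p, linear in between."""
    diff = gy - gx
    if diff <= q:
        return 1.0
    if diff >= p:
        return 0.0
    return (p - diff) / (p - q)

print(concordance_index(50, 51, q=2, p=10))  # 1.0
print(concordance_index(50, 56, q=2, p=10))  # 0.5
print(concordance_index(50, 62, q=2, p=10))  # 0.0
```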
Coming back to outranking relations, the concordant coalition with respect to S(x, y) must be seen as a fuzzy subset of {1, ..., n} characterised by the membership function μ_{J^S_{xy}}(j) = c_j(x, y). Thus, the concordance test

Figure 1. Valued outranking indices in Electre III (the concordance index is plotted as a function of g_j(y))

must be modified to take this sophistication into account. Two main


ideas were suggested by Roy:
1 adapting the Electre I concordance test (2) so as to use con-
cordance indices. A simple solution derived from the Electre IS
method (Roy and Skalka, 1984) is given by the following concor-
dance test:

c(x, y) = \frac{\sum_{j} w_j\, c_j(x, y)}{\sum_{j} w_j} \geq \gamma \qquad (5)

2 interpreting the concordance test in a multi-valued logic. This is


the option used in Electre III (Roy, 1978). This amounts to defin-
ing the level to which the concordance test is fulfilled. Consistently
with the previous propositions the truth value c(x, y) E [0,1] re-
turned by the concordance test can be defined, for example, by:

c(x, y) = \frac{\sum_{j \in J^{S}_{xy}} w_j\, c_j(x, y)}{\sum_{j} w_j} \qquad (6)

The same ideas apply to the discordance test whose role is to check
whether some criteria are strongly conflicting with the proposition S(x, y).
The classical test, with the veto threshold, is not always convenient. This
is particularly true when the criterion scale is continuous; it does not seem appropriate to declare that a given criterion g_j should have a right of veto over S(x, y) when g_j(y) - g_j(x) > v_j(g_j(x)) but should entirely lose this right as soon as the inequality no longer holds. A continuous
transition seems preferable. For this reason, discordance indices mea-
suring the extent to which criterion j is strongly opposed to a statement
S(x, y) were introduced in (Roy, 1978).

D_j(x, y) = \begin{cases} 0 & \text{if } g_j(y) - g_j(x) \leq p_j(g_j(x)) \\ 1 & \text{if } g_j(y) - g_j(x) \geq v_j(g_j(x)) \\ \frac{(g_j(y) - g_j(x)) - p_j(g_j(x))}{v_j(g_j(x)) - p_j(g_j(x))} & \text{otherwise} \end{cases}

where p_j(x) < v_j(x) for all x.

Figure 2. The discordance index D_j(x, y) in the Electre III method (plotted as a function of g_j(y))

Thus, the discordant coalition can also be seen as a fuzzy subset of {1, ..., n} characterised by the membership function μ_{J^D_{xy}}(j) = D_j(x, y).
The discordance test must therefore be modified to take this sophistica-
tion into account. Consistently with the concordance tests introduced
above, two main ideas can be put forward:
1 adapting the Electre I and Electre III concordance test (2) so as to
use discordance indices. A simple solution inspired by the Electre
III method (Roy, 1978) is given by the following discordance test:

D(x, y) \iff 1 - \prod_{j \in J^{\delta}_{xy}} (1 - D_j(x, y)) > 0 \qquad (7)

where J^{\delta}_{xy} = \{j \in \{1, \ldots, n\} : D_j(x, y) > \delta\}, and \gamma, \delta \in (0,1) are the overall concordance and discordance thresholds respectively.
Note that the test is defined in such a way that the presence of at least one fully discordant criterion g_j (such that D_j(x, y) = 1) is sufficient to make the discordance test positive;
2 interpreting the discordance test in a multi-valued logic. This is
the option implicitly used in Electre III (Roy, 1978). This amounts
to defining the level to which the discordance test is fulfilled. Con-
sistent with the previous proposition the truth value d(x, y) E [0,1]
returned by the discordance test can be defined, for example, by:

d(x, y) = 1 - \prod_{j \in J^{\delta}_{xy}} (1 - D_j(x, y)) \qquad (8)

Note that this formulation avoids possible discontinuities due to the use of the cutting threshold δ.

When the concordance test is (5) and the discordance test is (7) the
construction of the outranking relation S is obviously defined by equa-
tion (1). When the concordance test is (6) and the discordance test is
(8) the equation (1) must be interpreted in a multivalued logic. This
leads to defining the overall outranking index s(x,y) E [0,1] for any pair
(x, y) of alternatives as a non-decreasing function of c(x, y) and a non-
increasing function of d(x, y). As an example, B. Roy uses the following
equality in Electre III:

s(x, y) = c(x, y)\,(1 - d(x, y)) \qquad (9)

The reader is referred to Perny and Roy, 1992 and Perny, 1998 for a
more general and systematic construction of outranking relations in the
framework of fuzzy set theory.
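The whole two-step construction of equations (6), (8) and (9) can be summarised in a few lines. The sketch below assumes constant thresholds per criterion, an additive normalised weighting and an illustrative discordance cut; all function and parameter names are ours.

```python
def fuzzy_outranking(x, y, criteria, delta=0.5):
    """Overall outranking index s(x, y) = c(x, y) * (1 - d(x, y)),
    with c built as in equation (6) and d as in equation (8).
    Each criterion is a dict with a weight w, thresholds q < p < v and a score function g."""
    total_weight = sum(c["w"] for c in criteria)
    concordance, product = 0.0, 1.0
    for c in criteria:
        diff = c["g"](y) - c["g"](x)                  # advantage of y over x on this criterion
        cj = 1.0 if diff <= c["q"] else 0.0 if diff >= c["p"] else (c["p"] - diff) / (c["p"] - c["q"])
        Dj = 0.0 if diff <= c["p"] else 1.0 if diff >= c["v"] else (diff - c["p"]) / (c["v"] - c["p"])
        concordance += c["w"] * cj
        if Dj > delta:                                # only strongly discordant criteria enter the product
            product *= 1.0 - Dj
    c_xy = concordance / total_weight
    d_xy = 1.0 - product
    return c_xy * (1.0 - d_xy)

criteria = [{"w": 0.6, "q": 1, "p": 5, "v": 15, "g": lambda a: a[0]},
            {"w": 0.4, "q": 2, "p": 6, "v": 20, "g": lambda a: a[1]}]
print(fuzzy_outranking((10, 8), (9, 12), criteria))   # 0.8
```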

2.3. Problems
The use of the so-called outranking methods in MCDA is now largely
acknowledged and several empirical validations may be found in the
literature (see Roy and Bouyssou, 1993; Vincke, 1992b; Bouyssou et al.,
2000). It is nevertheless possible to note a number of significant open
questions.

• The definition of "outranking" makes use of a concordance and


a non-discordance test, which both have to be verified in order
to establish that the outranking relation holds. If any of the two
tests fails for a given ordered pair of alternatives, the conclusion
is that the outranking relation does not hold for this ordered pair.
However, the reader can note that there is a big semantic difference between a situation where a majority of criteria supports that "x is at least as good as y" but a veto applies, and a situation where there is neither a majority nor a veto.
In other words, when comparing two alternatives x and y the use
of the concordance / discordance principle introduces four different
epistemic situations:

- concordance and non-discordance;


- concordance and discordance;
- non-concordance and non-discordance;
- non-concordance and discordance.

but only two valuations are possible (either the outranking relation
holds or it does not).
• The definition of the overall outranking relation, at least as it
usually appears in outranking methods, implicitly imposes that
the criteria to be aggregated should at least be weak orders.
If the preference models of the criteria to be aggregated are pseudo-orders (preference structures allowing a numerical representation using thresholds), there is no way to use such specific information in the establishment of the outranking relation. Only in the case where the outranking relation is a fuzzy binary relation is it possible to use the specific information included in pseudo-orders
when these are represented as fuzzy relations themselves (see Roy,
1991). If the preference models of the criteria under aggregation
are partial orders then it is possible that the absence of preference
or indifference at the single criterion level could lead to a "non out-
ranking", not due to conflicting preferences, but due to ignorance.
There is however no way to distinguish such situations.
• As already mentioned by Vincke, 1982, each preference aggregation
step leads to a result which is (from a relational point of view)
poorer than the original information. This is obvious, since the
aggregation procedure eliminates some information. Moreover, as
already reported by Bouyssou, 1996, an outranking relation is not
necessarily a complete relation (not even a partial order). From
this point of view, there is a problem if such an approach has to be used in the presence of a hierarchy of criteria. If at each layer we keep
the result of the aggregation as it is and then we aggregate at the
next layer, we will very soon obtain an (almost) empty relation.
On the other hand, if at each layer, after aggregation, we transform
the outranking relation into a weak order (so that we can correctly
apply the aggregation procedure again), we introduce a bias in each
aggregation step the consequences of which are unknown. While
in usual situations of decision support the use of an exploitation
procedure can be discussed with the client, this is not possible
in a hierarchical aggregation problem and the above problem can
become severe.

From the above discussion it is clear that the principle of concordance


/ discordance, as it is applied in the so-called outranking methods, can
be used locally (only in preference aggregation). On the other hand,
it cannot be applied for broader classes of modelling purposes since it
lacks a sufficient abstraction level. Besides the above criticism, it should

be noted that there is no single-criterion preference model based on the


principle of concordance / discordance.

3. Positive and negative reasons


The discussion in the previous section cannot conceal the fact that the
concordance / discordance principle is based on a solid empirical ground.
When comparing two alternatives under one or more criteria we are often
led to consider what is "for" and what is "against" a preference among
the two alternatives separately. Quite often "for" is not the complement
of what is "against" and vice-versa. It is quite difficult to justify a
preference by just saying "there is nothing against it". When decisions
have been elaborated, the concordance / discordance principle is in fact
deeply rooted in common sense. We therefore claim that it is not the
principle itself that has to be argued, but the way in which it has been
implemented up to now.
We will hereafter present a general approach trying to improve the
abstraction level of such a principle. The idea is very simple. When
comparing two alternatives consider the "positive reasons" (which may
support a preference) and the "negative reasons" (which may be against
the preference) independently. If these "positive" and "negative" rea-
sons can be modelled in a formal way, such an approach will lead to
a general preference model which can be used at any moment of the
decision aiding process: single criterion preference modelling, preference
aggregation, measurement, classification, etc. For this purpose it will be
necessary to introduce a specific formalism. The following is based on
results published in Tsoukias and Vincke, 1995; Tsoukias and Vincke,
1997; Tsoukias and Vincke, 1998; Tsoukias and Vincke, 2001; Perny and
Tsoukias, 1998; Ngo The and Tsoukias, 2001.

3.1. The formalism


Hereafter, we briefly present the basic concepts of the logic formalism
we use in the paper. The basic property of such a logic is to explicitly
represent situations of hesitation due either to lack of information (miss-
ing or uncertain) or to excess information (ambiguous or contradictory).
A detailed presentation of the DDT logic can be found in Tsoukias, 1996. A detailed presentation of the continuous extension of DDT intro-
duced at the end of the subsection can be found in Perny and Tsoukias,
1998.

The DDT Logic. The DDT logic, which is a four-valued first order
language, is based on a net distinction between the "negation" (which

represents the part of the universe verifying the negation of a predicate)


and the "complement" (which represents the part of the universe which
does not verify a predicate) since the two concepts do not necessarily
coincide. The four truth values represent four epistemic states of an
agent towards a sentence (a) that is:

- a is true (t): there is evidence that it is true and there is no


evidence that it is false;
a is false (f): there is no evidence that it is true and there is
evidence that it is false;
a is unknown (u): there is neither evidence that it is true nor that
it is false;
a is contradictory (k): there is both evidence that it is true and
that it is false.

The logic is based on a solid algebraic structure which is a Boolean


algebra on a bilattice of the set of its truth values (k and u are incom-
parable on one dimension of the bilattice and t and f are incomparable
on the other dimension of the bilattice). The logic extends the one in-
troduced by Belnap, 1977 and uses results from Ginsberg, 1988; Fitting,
1991.
The logic introduced deals with uncertainty. A set A may be defined,
but the membership of an object a to the set may be unsure either
because the information is not sufficient or because the information is
contradictory.
In order to distinguish between these two principal sources of uncer-
tainty the knowledge of the "membership" of a in A and of the "non-
membership" of a in A are evaluated independently since they are not
necessarily complementary. Under this perspective from a given knowl-
edge we have two possible entailments, one, positive, about membership
and one, negative, about non-membership. Therefore, any predicate is
defined by two sets, its positive and its negative extension in the uni-
verse of discourse. Since the negative extension does not necessarily
correspond to the complement of the positive extension of the predi-
cate we can expect that the two extensions possibly overlap (due to the
independent evaluation) and that there exist parts of the universe of
discourse that do not belong to either of the two extensions. The four
truth values capture these situations.
Under such a logic, for any well formed formula a, we may use the
following sentences:

- ¬a (not a, the negation);
- ⌐a (perhaps not a, the weak negation);
- ∼a (the complement of a, ∼a ≡ ¬⌐¬⌐a);
- Δa (presence of truth for a);
- Δ¬a (presence of truth for ¬a);
- Ta (the true extension of a);
- Ka (the contradictory extension of a);
- Ua (the unknown extension of a);
- Fa (the false extension of a).
Between Ta, Ka, Ua, Fa on the one side and Δa and Δ¬a on the other side the following hold:

Ta \Leftrightarrow \Delta a \wedge \neg\Delta\neg a \qquad (10)
Ka \Leftrightarrow \Delta a \wedge \Delta\neg a \qquad (11)
Ua \Leftrightarrow \neg\Delta a \wedge \neg\Delta\neg a \qquad (12)
Fa \Leftrightarrow \neg\Delta a \wedge \Delta\neg a \qquad (13)
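In code, the four epistemic states are simply the four combinations of two independent booleans; the sketch below is a direct transcription of equations (10)-(13) (names are ours).

```python
def epistemic_state(delta_a: bool, delta_not_a: bool) -> str:
    """Truth value of a sentence a from the presence of truth for a and for not-a."""
    if delta_a and not delta_not_a:
        return "t"   # true
    if delta_a and delta_not_a:
        return "k"   # contradictory
    if not delta_a and not delta_not_a:
        return "u"   # unknown
    return "f"       # false
```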

A continuous extension. The DDT logic introduced above dis-


tinguishes four possible interpretations of a formula a, namely "true", "false", "contradictory", "unknown", all defined from the two conditions Δa and Δ¬a reflecting the presence of truth for a and ¬a respectively. However, this presence of truth cannot always be thought of as an all-or-nothing concept. Following the example of the concordance and discordance concepts, introducing intermediary states between the "full presence of truth" and the "full absence of truth" can be useful. We can imagine a continuum of situations between these extremal situations, enabling us to differentiate a multitude of information states between Δa and ¬Δa, and between Δ¬a and ¬Δ¬a. For this reason, the conditions Δa and Δ¬a will be represented by real values b(a) and b(¬a) respectively, chosen in the unit interval in order to reflect the "strength" or the "credibility" of the two arguments. From these two values, a degree of truth, con-
tradictory, unknown and false can be defined in the same spirit as what
has been done in equations (10-13). As an example, we mention here
a possible solution proposed and justified by Perny and Tsoukias, 1998
(for an alternative approach see Fortemps and Slowinski, 2001):

t(a) = \min(b(a),\ 1 - b(\neg a)) \qquad (14)
k(a) = \max(b(a) + b(\neg a) - 1,\ 0) \qquad (15)
u(a) = \max(1 - b(a) - b(\neg a),\ 0) \qquad (16)
f(a) = \min(1 - b(a),\ b(\neg a)) \qquad (17)

and therefore:

t(a) + k(a) = b(a)
f(a) + k(a) = b(\neg a)
t(a) + u(a) = 1 - b(\neg a)
f(a) + u(a) = 1 - b(a)

Using these equations, any formula a is represented by the truth matrix v(a):

v(a) = \begin{pmatrix} t(a) & k(a) \\ u(a) & f(a) \end{pmatrix} \qquad (18)

with t(a) + k(a) + u(a) + f(a) = 1 for any proposition a. Thus, the set of all possible values is represented by the continuous bi-lattice shown in figure 3.

Figure 3. The continuous bi-lattice (with extreme points t, k, u and f)

Note that, by construction, there is a one-to-one correspondence between the points of this bi-lattice and the matrices defined by equations (18) and (14)-(17).
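Equations (14)-(17) translate just as directly into code; the sketch below (names ours) maps a pair of credibilities to the four graded values, which always sum to 1.

```python
def graded_values(b_a: float, b_not_a: float):
    """Degrees of truth, contradiction, ignorance and falsity of a sentence a
    from the credibilities b(a) and b(not a), following equations (14)-(17)."""
    t = min(b_a, 1.0 - b_not_a)
    k = max(b_a + b_not_a - 1.0, 0.0)
    u = max(1.0 - b_a - b_not_a, 0.0)
    f = min(1.0 - b_a, b_not_a)
    return t, k, u, f

print(graded_values(0.8, 0.5))   # (0.5, 0.3, 0.0, 0.2): partly true, partly contradictory
print(graded_values(0.2, 0.1))   # (0.2, 0.0, 0.7, 0.1): mostly unknown
```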

3.2. Applications in preference modelling


We can now use the formalism introduced above for preference modelling and decision support purposes. Given a set A and a binary relation S modelling the concept "at least as good as", we are allowed to write formulas of the type:
- ΔS(x, y): there is (presence of) truth in claiming that x is at least as good as y;
- Δ¬S(x, y): there is (presence of) truth in claiming that x is not at least as good as y;
- ¬ΔS(x, y): there is no (presence of) truth in claiming that x is at least as good as y;
- ¬Δ¬S(x, y): there is no (presence of) truth in claiming that x is not at least as good as y.
Clearly, from equations (10)-(13), we obtain:

TS(x, y) \Leftrightarrow \Delta S(x, y) \wedge \neg\Delta\neg S(x, y) \qquad (19)
KS(x, y) \Leftrightarrow \Delta S(x, y) \wedge \Delta\neg S(x, y) \qquad (20)
US(x, y) \Leftrightarrow \neg\Delta S(x, y) \wedge \neg\Delta\neg S(x, y) \qquad (21)
FS(x, y) \Leftrightarrow \neg\Delta S(x, y) \wedge \Delta\neg S(x, y) \qquad (22)

which enable us to establish the true, contradictory, unknown and false extensions of the relation S respectively.

Combining such extensions with the extensions of the inverse relation S⁻¹ we obtain the PC preference structure (see Tsoukias and Vincke, 1995; Tsoukias and Vincke, 1997) where ten different basic preference relations can be defined (P, H, K, I, J, U, R, T, V, L), enabling us to clearly distinguish different types of hesitation and incomparability when two alternatives are compared. In such a way we are able, for instance, to distinguish two situations of hesitation between strict preference (TP(x, y) ≡ TS(x, y) ∧ FS⁻¹(x, y)) and indifference (TI(x, y) ≡ TS(x, y) ∧ TS⁻¹(x, y)): one where S⁻¹ is contradictory (TH(x, y) ≡ TS(x, y) ∧ KS⁻¹(x, y)) and one where S⁻¹ is unknown (TK(x, y) ≡ TS(x, y) ∧ US⁻¹(x, y)). We are also able to distinguish the situation where both S and S⁻¹ are unknown (ignorance) from the situation where both S and S⁻¹ are false (conflict). The reader will find more details on the semantics of the ten basic relations in Tsoukias and Vincke, 1997.
We are now able to use such results in order to model generalised concordance and discordance conditions on a single criterion. The concept is very simple: we associate with the concordance condition the formula

Figure 4. PQI Interval Orders (interval configurations corresponding to P(x, y), Q(x, y) and I(x, y))

"presence of truth in x is at least as good as y" (ΔS(x, y)), and the discordance condition with the formula "presence of truth in x is not at least as good as y" (Δ¬S(x, y)). We will show how such an approach applies to the problem of comparing alternatives represented through intervals.
Consider two alternatives x and y whose values are known under the form of an interval: [l(x), r(x)], [l(y), r(y)], with ∀x, l(x) < r(x), l(x) and r(x) representing the left and right extremes respectively. It is well known that it is possible to compare x to y using the interval order preference structure such that: P(x, y) if r(x) > l(x) > r(y) > l(y), and I(x, y) otherwise (see figure 4).
However, we may intuitively consider that the case where r(x) >
r(y) > l(x) > l(y) represents a situation of hesitation between preference
and indifference. Moreover, we may wish to distinguish the case r(x) >
r(y) > l(y) > l(x) from the case r(y) > r(x) > l(x) > l(y) since the
two intervals are inversely included (see also figure 4). Such a preference
structure was first studied in Tsoukias and Vincke, 2001; Ngo The et
al., 2000, under the name of PQI interval order (Q representing the hesitation between P and I).
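This case analysis can be made explicit in a few lines of code (strict inequalities as in the text; ties are lumped with indifference for simplicity; the function name is ours):

```python
def pqi_relation(lx, rx, ly, ry):
    """Compare the intervals [lx, rx] and [ly, ry] following the PQI interval order:
    strict preference when one interval lies entirely to the right of the other,
    hesitation (Q) when it is only shifted to the right, indifference otherwise."""
    if lx > ry:
        return "P(x, y)"
    if ly > rx:
        return "P(y, x)"
    if rx > ry and lx > ly:
        return "Q(x, y)"
    if ry > rx and ly > lx:
        return "Q(y, x)"
    return "I(x, y)"          # inversely included (or identical) intervals

print(pqi_relation(10, 14, 2, 6))    # P(x, y)
print(pqi_relation(5, 14, 2, 8))     # Q(x, y)
print(pqi_relation(4, 9, 2, 12))     # I(x, y)
```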

We are able to show that using the PC preference structure and the
positive / negative reasons approach we can model such situations of
hesitation due to the presence of an interval representation in a positive
way.

Definition 1 (see Tsoukias and Vincke, 2001) A PQI preference structure on a finite set A is a PQI interval order iff there exist two real-valued functions l and r such that ∀ x, y ∈ A:
i) r(x) > l(x);
ii) P(x, y) ⟺ r(x) > l(x) > r(y) > l(y);
iii) Q(x, y) ⟺ r(x) > r(y) > l(x) > l(y);
iv) I(x, y) ⟺ r(x) > r(y) > l(y) > l(x) or r(y) > r(x) > l(x) > l(y).

Theorem 1 (see Tsoukias and Vincke, 2001) A PQI preference structure on a finite set A is a PQI interval order iff there exists a partial order I_l such that:
i) I = I_l ∪ I_r ∪ I_0, where I_0 = {(x, x), x ∈ A} and I_r = I_l⁻¹;
ii) (P ∪ Q ∪ I_l)P ⊂ P;
iii) P(P ∪ Q ∪ I_r) ⊂ P;
iv) (P ∪ Q ∪ I_l)Q ⊂ P ∪ Q ∪ I_l;
v) Q(P ∪ Q ∪ I_r) ⊂ P ∪ Q ∪ I_r.

On this basis we can present the following characterisation result:

Definition 2 (see Ngo The and Tsoukias, 2001) A PC preference structure, having characteristic relation S on a finite set A, is a PQI interval order iff there exist l, r : A → ℝ such that ∀ x, y ∈ A:
i) r(x) > l(x);
ii) ΔS(x, y) ⟺ r(x) ≥ l(y) ∧ [r(x) < r(y) ∨ l(x) ≥ l(y)];
iii) Δ¬S(x, y) ⟺ l(x) < l(y) ∧ r(x) < r(y).

Theorem 2 (see Ngo The and Tsoukias, 2001) Definitions 1 and 2 are
equivalent.

The consequence of this theorem is that:
- P = TP ⟺ TS ∧ FS⁻¹;
- Q = TH ⟺ TS ∧ KS⁻¹;
- I = TI ∪ TK ∪ TK⁻¹ ⟺ (TS ∧ TS⁻¹) ∨ (TS ∧ US⁻¹) ∨ (US ∧ TS⁻¹).

Theorem 3 (see Ngo The and Tsoukias, 2001) A PC preference structure is a PQI interval order iff:
- i) I = I_0 = {(x, x), x ∈ A};
- ii) ∀x, y: TS(x, y) ∨ TS(y, x);
- iii) ∀x, y, z: FS(x, y) ∧ ¬TS(y, z) ⇒ FS(x, z);
- iv) ∀x, y, z: US(x, y) ∧ (KS ∨ FS)(y, z) ⇒ ¬TS(x, z);
- v) ∀x, y, z: US(x, y) ∧ US(y, z) ⇒ US(x, z);
- vi) ∀x, y, z: KS(x, y) ∧ FS(y, z) ⇒ FS(x, z);
- vii) ∀x, y, z: KS(x, y) ∧ KS(y, z) ⇒ (KS ∨ FS)(x, z);
- viii) ∀x, y, z: KS(x, y) ∧ US(y, z) ⇒ (KS ∨ US)(x, z);
- ix) ∀x, y, z: US(x, y) ∧ US(z, y) ⇒ ¬FS(x, z) ∧ ¬FS(z, x).

Consider again the two alternatives represented by the intervals. Con-


sider the examples presented in figure 4. It could be claimed that the
second case of Q(x, y) is more an indifference than an ambiguous prefer-
ence. In fact the two intervals are almost included in one another. The
intuitive reasoning is that hesitation between preference and indifference begins only when one interval is "sufficiently to the left" of the other (and ends, as usual, when it is completely to the left of the other). Such reasoning corresponds to the use of an "intermediate point" for each interval, which we denote m(x), such that ∀x, r(x) > m(x) > l(x).
We can give the following definition to such a structure.

Definition 3 (see Vincke, 1988; Tsoukias and Vincke, 1998) A PQI preference structure on a finite set A is a double threshold order iff there exist three real-valued functions l, m and r such that ∀ x, y ∈ A:
i) r(x) > m(x) > l(x);
ii) P(x, y) ⟺ l(x) > r(y);
iii) Q(x, y) ⟺ r(y) > l(x) > m(y);
iv) I(x, y) ⟺ m(y) > l(x) and m(x) > l(y).

Theorem 4 (see Vincke, 1988; Tsoukias and Vincke, 1998) A PQI preference structure on a finite set A is a double threshold order iff:
- ∀x, y, z, w: Q(x, y) ∧ I(y, z) ∧ Q(z, w) → P(x, w) ∨ Q(x, w);
- ∀x, y, z, w: Q(x, y) ∧ I(y, z) ∧ P(z, w) → P(x, w);
- ∀x, y, z, w: P(x, y) ∧ I(y, z) ∧ P(z, w) → P(x, w);
- ∀x, y, z, w: P(x, y) ∧ Q⁻¹(y, z) ∧ P(z, w) → P(x, w).

This is a well-known preference structure, known under the name of


double threshold order, which was first studied in Roy and Vincke, 1984
(see also Roy and Vincke, 1987; Vincke, 1988). Pseudo-orders described
in section 2 are particular cases of double threshold orders.
We can again introduce the positive / negative reasons approach as
follows.

Definition 4 (see Tsoukias and Vincke, 1998) A PC preference structure is a double threshold order iff there exist three real-valued functions l, m and r such that ∀ x, y ∈ A:
1. r(x) > m(x) > l(x);
2. ΔS(x, y) ⟺ r(x) > l(y);
3. Δ¬S(x, y) ⟺ l(y) > m(x).

Theorem 5 (see Tsoukias and Vincke, 1998) Definitions 3 and 4 are equivalent.
The consequence of this theorem is that:
- P = TP ⟺ TS ∧ FS⁻¹;
- Q = TH ⟺ TS ∧ KS⁻¹;
- I = TI ⟺ TS ∧ TS⁻¹.
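A compact way to see this correspondence is to derive the relation between two alternatives directly from the positive and negative reasons of Definition 4. The sketch below is only illustrative: l, m, r stand for the three functions of the definition and all names are ours.

```python
def double_threshold_relation(x, y, l, m, r):
    """Derive P, Q or I between x and y from Delta S and Delta not-S
    as given in Definition 4, via the DDT states of S(x, y) and S(y, x)."""
    def state(a, b):
        pos, neg = r(a) > l(b), l(b) > m(a)       # Delta S(a, b), Delta not-S(a, b)
        if pos and not neg:
            return "T"
        if pos and neg:
            return "K"
        return "U" if not neg else "F"
    s_xy, s_yx = state(x, y), state(y, x)
    if s_xy == "T" and s_yx == "F":
        return "P(x, y)"
    if s_xy == "T" and s_yx == "K":
        return "Q(x, y)"
    if s_xy == "T" and s_yx == "T":
        return "I(x, y)"
    return (s_xy, s_yx)                            # symmetric and remaining cases
```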

Theorem 6 (see Tsoukias and Vincke, 1998) A binary relation S characterises a PC preference structure which is a double threshold order iff:
1. ∀x, y: (TS(x, y) ∨ TS(y, x)) ∧ ¬US(x, y) ∧ ¬US(y, x);
2. ∀x, y, z, w: TS(x, y) ∧ TS(z, w) → TS(x, w) ∨ TS(z, y);
3. ∀x, y, z, w: TS(x, y) ∧ KS(z, w) → TS(x, w) ∨ ¬FS(z, y);
4. ∀x, y, z, w: KS(x, y) ∧ KS(z, w) → ¬FS(x, w) ∨ ¬FS(z, y).
The previous results illustrate the potential role of ΔS and Δ¬S in preference modelling, as a medium between criterion values and preference relations. Firstly, they can be used to derive a compact representation of complex preference structures (PQI interval orders, double threshold orders) using intervals of criterion values. Conversely, note that they provide a more expressive language to compare alternatives described by imprecise criterion values. As a last illustration, we assume that any x ∈ A is represented by an interval [l(x), r(x)] (representing the set of plausible values for g(x), for a given criterion function g) and we introduce the following non-conventional preference structure:

\Delta S(x, y) \Leftrightarrow r(x) \geq l(y) \qquad (23)
\Delta\neg S(x, y) \Leftrightarrow r(y) > l(x) + v \qquad (24)

where v represents the maximal difference of type g(y) - g(x) which is compatible with S(x, y) (a kind of veto threshold). Such a construction can be seen as an alternative to definitions 2 and 4, aiming at defining a preference structure in which negative arguments remain very close to the original ideas of discordance and veto. Using equations (19)-(22) we obtain:

TS(x, y) \Leftrightarrow r(x) \geq l(y) \text{ and } r(y) \leq l(x) + v \qquad (25)
KS(x, y) \Leftrightarrow r(x) \geq l(y) \text{ and } r(y) > l(x) + v \qquad (26)
US(x, y) \Leftrightarrow r(x) < l(y) \text{ and } r(y) \leq l(x) + v \qquad (27)
FS(x, y) \Leftrightarrow r(x) < l(y) \text{ and } r(y) > l(x) + v \qquad (28)
Note that the four belief states attached to the predicate S(x, y) corre-
spond to four complementary and significantly distinct situations of the
two intervals. Interestingly, TS(x, y) corresponds to a situation where
outranking is intuitively justified whereas this is just the contrary for
F S(x, y). Similarly, KS (x, y) corresponds to a natural conflicting situ-
ation due to the simultaneous possibilities of:
- finding two possible values for x and y such that x is better than
y,
- finding two possible values for x and y such that y is much better
than x
Finally, US(x, y) also seems justified because, on the one hand, there is no possibility for x to receive a better evaluation than y but, on the other hand, y cannot be strongly better than x.
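Equations (23)-(28) can be condensed into a few lines of code; the sketch below (names ours) returns the belief state attached to S(x, y) for two intervals and a veto-like threshold v.

```python
def interval_belief_state(lx, rx, ly, ry, v):
    """Belief state of "x outranks y" for intervals [lx, rx], [ly, ry]:
    the positive reason is r(x) >= l(y), the negative one is r(y) > l(x) + v."""
    pos = rx >= ly            # Delta S(x, y), equation (23)
    neg = ry > lx + v         # Delta not-S(x, y), equation (24)
    if pos and not neg:
        return "T"            # outranking clearly supported (25)
    if pos and neg:
        return "K"            # conflicting evidence (26)
    if not pos and not neg:
        return "U"            # ignorance (27)
    return "F"                # outranking clearly rejected (28)

print(interval_belief_state(10, 14, 12, 13, v=5))   # T
print(interval_belief_state(10, 14, 12, 25, v=5))   # K
```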

Using the continuous bi-lattice. Suppose now that we want to compare alternatives described by fuzzy intervals of criterion values. For the sake of simplicity we denote by X a generic alternative identified with a fuzzy interval of the real line and characterised by the possibility distribution μ_X, taking its values in the unit interval. For any fuzzy interval X, we define:

S(X) = {x ∈ ℝ : μ_X(x) > 0}  (the support of X)
C(X) = {x ∈ ℝ : μ_X(x) = 1}  (the core of X)
l⁻(X) = inf{x ∈ S(X)}
l⁺(X) = inf{x ∈ C(X)}
r⁻(X) = sup{x ∈ C(X)}
r⁺(X) = sup{x ∈ S(X)}

Note that, since C(X) ⊆ S(X), we have l⁻(X) ≤ l⁺(X) ≤ r⁻(X) ≤ r⁺(X). Moreover, since X is a fuzzy interval, μ_X is necessarily increasing on [l⁻(X), l⁺(X)] and decreasing on [r⁻(X), r⁺(X)]. Thus, we can define L(X) and R(X), the left and right boundaries of X, as fuzzy subsets of X characterised by the following possibility distributions:

μ_{L(X)}(x) = μ_X(x) if x ∈ [l⁻(X), l⁺(X)], and 0 otherwise;
μ_{R(X)}(x) = μ_X(x) if x ∈ [r⁻(X), r⁺(X)], and 0 otherwise.
Considering two fuzzy intervals X and Y, we want to evaluate the outranking S(X, Y). For this, we need to extend the comparison models introduced above for classical intervals. Let us observe that the comparison of two non-fuzzy intervals x and y was based on the relative positions of the borders l(x), r(x), l(y), r(y). Hence, in the fuzzy case, we have to compare the fuzzy borders L(X), R(X), L(Y), R(Y). Interpreting a fuzzy interval X (resp. Y) of the real line as the fuzzy set of possible values for an alternative x (resp. y), we define the quantity ≥(X, Y) (resp. <(X, Y)) as the necessity of the event x ≥ y (resp. x < y; see Dubois and Prade, 1988; Perny and Roubens, 1998). We obtain:

\geq(X, Y) = 1 - \sup_{(x, y) \in S(X) \times S(Y) : x < y} \min\{\mu_X(x), \mu_Y(y)\}

<(X, Y) = 1 - \sup_{(x, y) \in S(X) \times S(Y) : x \geq y} \min\{\mu_X(x), \mu_Y(y)\}

These equations suggest a natural extension of ΔS(x, y) and Δ¬S(x, y) compatible with fuzzy intervals. For example, equations (23)-(24) can be extended to the case of fuzzy intervals by:

b(S(X, Y)) = \geq(R(X), L(Y))
b(\neg S(X, Y)) = <(R(X), L(Y) + v)

where L(Y) + v is the fuzzy set defined by ∀x ∈ ℝ, μ_{L(Y)+v}(x) = μ_{L(Y)}(x - v).
Thus, we obtain a basis for computing t(S(X, Y)), k(S(X, Y)), u(S(X, Y)),
f(S(X, Y)) using equations (10-13). These four values make it possi-
ble to evaluate the level of confidence we may have in the outranking
S(X, Y) as well as the level of conflict between the pros and cons. It
provides a compact representation of the relative position of X and Y
while keeping high descriptive possibilities.
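For fuzzy quantities described by sampled membership functions, these necessity degrees can be approximated by a crude discretisation; the sketch below is only illustrative and all names are ours.

```python
def necessity_geq(mu_x, mu_y, grid):
    """Necessity of "x >= y": 1 minus the highest joint possibility of a pair with a < b."""
    worst = max((min(mu_x(a), mu_y(b)) for a in grid for b in grid if a < b), default=0.0)
    return 1.0 - worst

def necessity_lt(mu_x, mu_y, grid):
    """Necessity of "x < y": 1 minus the highest joint possibility of a pair with a >= b."""
    worst = max((min(mu_x(a), mu_y(b)) for a in grid for b in grid if a >= b), default=0.0)
    return 1.0 - worst
```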
Similarly, the PQI interval order structure (see definition 2) might be extended by defining:
- b(S(X, Y)) from the quantities ≥(R(X), L(Y)), <(R(X), R(Y)) and ≥(L(X), L(Y)),
- b(¬S(X, Y)) from <(L(X), L(Y)) and <(R(X), R(Y)).
Note however that such a generalisation is not straightforward, due to the complexity of condition ii) used in the definition. This is left for further investigation.

3.3. Application in multiple criteria aggregation


Moving to multiple criteria, we should bear in mind that each criterion is now equipped with a PC preference structure (which also contains all classic preference structures) where positive and negative reasons are explicitly considered. Consider formula (1). Instead of computing whether S(x, y) holds or not, we explicitly compute ΔS(x, y) (equivalent to the concordance concept) and Δ¬S(x, y) (equivalent to the discordance concept). To give an example we could write:

\Delta S(x, y) \Leftrightarrow \mu_1(J^{\Delta S}_{xy}) \geq \gamma \qquad (29)
\Delta\neg S(x, y) \Leftrightarrow \mu_2(J^{\Delta\neg S}_{xy}) \geq \delta \qquad (30)

where:
- J^{ΔS}_{xy} is the set of criteria g_j ∈ G for which ΔS_j(x, y) holds;
- J^{Δ¬S}_{xy} is the set of criteria g_j ∈ G for which Δ¬S_j(x, y) holds;
- μ_1 and μ_2 represent two functions (μ_1, μ_2 : 2^G → ℝ) "measuring" the "strength" or "importance" of criteria coalitions;
- γ and δ are thresholds representing a "security level" for the decision-maker.
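Under the simplifying (and purely illustrative) assumption that μ_1 and μ_2 are additive over two separate sets of importance parameters, the tests (29)-(30) can be sketched as follows (all names are ours):

```python
def aggregate_reasons(x, y, criteria, gamma=0.6, delta=0.4):
    """Evaluate Delta S(x, y) and Delta not-S(x, y) at the aggregation level.
    Each criterion carries a positive importance w_pos, a negative importance w_neg,
    and the single-criterion conditions delta_S and delta_not_S."""
    positive = sum(c["w_pos"] for c in criteria if c["delta_S"](x, y))
    negative = sum(c["w_neg"] for c in criteria if c["delta_not_S"](x, y))
    return positive >= gamma, negative >= delta
```

Any non-additive capacity could of course replace the two sums, and the thresholds γ and δ play the role of the "security levels" mentioned above.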
For the same reasons as those mentioned in the preference modelling
section, it is worth considering a fuzzy extension of the aggregation
method proposed above. A natural proposition for this derives directly
from fuzzy concordance and discordance indices introduced in section
2.2. The most natural is:

b(S(x, y)) = c(x, y) \qquad (31)
b(\neg S(x, y)) = d(x, y) \qquad (32)

More generally, we propose (see also Perny and Tsoukias, 1998) a


fuzzy extension of (29-30) by setting:

b(S(x, y))
b(¬S(x, y))

The following remarks can be made.

1 In the previous examples, for a given (x, y), ΔS (resp. Δ¬S) is evaluated with respect to the criteria for which ΔS_j (resp. Δ¬S_j) holds. It could be the case that the criteria for which ¬Δ¬S_j (resp. ¬ΔS_j) holds could also be considered. It is necessary to study what the consequences of such choices are, but a priori it is up to the will and intuition of

the decision-maker and the analyst to define the most appropriate


formula.
2 Functions μ_1 and μ_2 should "measure" the importance of each coalition. We are not necessarily limited to the use of additive representations. Moreover, μ_1 and μ_2 neither have to be of the same shape nor have to use the same information. It is sufficient
to consider that a couple of importance parameters is associated
to each criterion, one applying when the criterion belongs to a
"positive" coalition, the other applying when the criterion belongs
to a "negative" coalition. This is already the case with the veto
concept. A veto condition implies that a criterion becomes very
important, but only in negative terms. The "positive" importance
of such a criterion remains the same.
3 It is now possible to have a homogeneous preference model for all levels of modelling and decision support. We have positive and negative reasons for each single criterion, we have the same when such criteria are aggregated, and if we have to go up a hierarchy of criteria and/or decision makers we have an elegant way of keeping positive and negative reasons distinguished until the final level. Under such a perspective we cannot totally prevent the problem of losing information as the aggregations are repeated, but we have better control of the process and clearer justifications for the final recommendation.

What can we obtain as a final result? The two relations ΔS and Δ¬S are just two binary relations for which nothing, except reflexivity, can
be imposed. What happens if a final result in the form of an ordering
relation is expected? As an example, in the following we will briefly
present three procedures (suggested in Tsoukias, 1997; the first also
studied and axiomatised in Greco et al., 1998).

1 Given the two binary relations ΔS and Δ¬S compute the following score for each alternative:

\sigma(x) = |\{y : \Delta S(x, y)\}| + |\{y : \Delta\neg S(y, x)\}| - |\{y : \Delta S(y, x)\}| - |\{y : \Delta\neg S(x, y)\}|

then order the alternatives by decreasing value of σ.


Such a score generalises the "net flow" procedure introduced by Brans and Vincke, 1985. It has the nice property of introducing three possible situations for any couple (x, y) (a minimal sketch of this scoring procedure is given after this list):
- strict preference (when σ(x) = 2 and σ(y) = -2);
- weak preference (when σ(x) = 1 and σ(y) = -1);
- indifference (when σ(x) = 0 and σ(y) = 0).
2 Given the two binary relations ΔS and Δ¬S compute, for all alternatives, the following scores:

\sigma^+(x) = |\{y : \Delta S(x, y)\}| + |\{y : \Delta\neg S(y, x)\}|
\sigma^-(x) = |\{y : \Delta S(y, x)\}| + |\{y : \Delta\neg S(x, y)\}|

construct two pre-orders, the first one based on the decreasing value of σ⁺, the second one based on the increasing value of σ⁻, and intersect the two resulting pre-orders. The intersection is a partial pre-order generalising the classical ranking procedure based on leaving and entering flows (see Brans and Vincke, 1985).
3 Construct the pre-ordering relation ⪰* obtained as the transitive closure of the relation ⪰ defined by:

⪰(x, y) \Leftrightarrow \Delta S(x, y) \wedge \Delta\neg S(y, x)

4 Generalise such operations to the fuzzy case.
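The first procedure, announced above, amounts to counting positive and negative reasons; a minimal sketch (names ours), where delta_S and delta_not_S are boolean functions over ordered pairs of alternatives:

```python
def sigma_scores(alternatives, delta_S, delta_not_S):
    """Net-flow-like score: reasons in favour of x (x over others, others rejected
    against x) minus the symmetric reasons against x."""
    return {x: sum(delta_S(x, y) for y in alternatives if y != x)
               + sum(delta_not_S(y, x) for y in alternatives if y != x)
               - sum(delta_S(y, x) for y in alternatives if y != x)
               - sum(delta_not_S(x, y) for y in alternatives if y != x)
            for x in alternatives}

# scores = sigma_scores(A, dS, dnS); ranking = sorted(A, key=scores.get, reverse=True)
```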


It should be noted that very little research has been carried out insofar
as the formal properties of such procedures are concerned (with the
exception of the first one). Some preliminary propositions can be found
in Perny and Tsoukias, 1998 that are valid both in the crisp and fuzzy
cases; some alternative options have been proposed in Fortemps and
Slowinski, 2001.

4. Conclusions
Following the example of concordance and discordance concepts in-
troduced by B. Roy in multiple criteria aggregation methods, we have
presented some elements of a new and non-conventional approach to
preference modelling. The main originality of this approach is to con-
sider positive and negative arguments independently with respect to a
given assertion about preferences. Hence, the various possible combina-
tions of these two independent views on preferences provide a richer set
of situations than usual. This potential richness is represented by a lat-
tice of truth values from which a new multi-valued logic is constructed.
In the context of preference modelling, the increased expressive power
of the resulting language should be useful. Among others, it enables the
description of the possible belief states of a decision-maker facing a com-
plex situation due to imprecise evaluations or conflicting criteria. Due

to the high expressivity of this language, the construction of preference


models, the definition of aggregation procedures and the conception of
choices, ranking or sorting procedures must be revisited. Throughout the
paper, examples are given suggesting possible options to initiate some
work in this direction. We think it is worth continuing this preliminary
investigation.

Acknowledgements
We would like to thank the referees for several useful comments. Ob-
viously we want to express our deep gratitude to Bernard Roy who really
inspired us with his pioneering work in decision aiding.

References
Belnap N.D., (1977), "A useful four-valued logic", in G. Epstein, J.M. Dunn, (eds.),
Modern Uses of Multiple-Valued Logic, D. Reidel, Dordrecht, 8-37.
Bouyssou D., (1992a), "On some properties of outranking relations based on a concor-
dance - discordance principle", in A. Goicoechea, L. Duckstein and S. Zionts, (eds.),
Multiple criteria decision making, Springer-Verlag, Berlin, 93-106.
Bouyssou D., (1992b), "Ranking methods based on valued preference relations: A
characterization of the net flow method", European Journal of Operational Re-
search, vol. 60, 61-67.
Bouyssou D., (1996), "Outranking relations: do they have special properties?", Jour-
nal of Multi-Criteria Decision Analysis, vol. 5, 99-111.
Bouyssou D., Perny, P., (1992), "Ranking method for valued preference relations:
A characterization of a method based on leaving and entering flows", European
Journal of Operational Research, vol. 61, 186-194.
Bouyssou D., Pirlot M., (1997), "Choosing and ranking on the basis of fuzzy preference
relations with the 'Min in Favor'", in G. Fandel, T. Gal, (eds.), Multiple criteria
decision making, Springer Verlag, Berlin, 115-127.
Bouyssou D., Pirlot M., (1999), "Conjoint measurement without additivity and transi-
tivity", in N. Meskens, M. Roubens, (eds.), Advances in Decision Analysis, Kluwer
Academic, Dordrecht, 13-29.
Bouyssou D., Vincke Ph., (1997), "Ranking alternatives on the basis of preference
relations: a progress report with special emphasis on outranking relations" , Journal
of Multiple Criteria Decision Analysis, vol. 6, 77-85.
Bouyssou D., Pirlot M., Vincke Ph., (1997), "A general model of preference aggrega-
tion", in M.H. Karwan, J. Spronk, J. Wallenius, (eds.), Essays in decision making,
Springer Verlag, Berlin, 120-134.
Bouyssou D., Marchant Th., Perny P., Pirlot M., Tsoukias A., Vincke Ph., (2000),
Evaluation and Decision Models: a critical perspective, Kluwer Academic, Dor-
drecht.
Brans J.P., Vincke Ph., (1985), "A preference ranking organization method", Man-
agement Science, vol. 31, 647-656.
Dubois D., Prade H., (1988), Possibility Theory, Plenum Press, New York.
Fargier H., Perny P., (2001), "Modélisation des préférences par une règle de con-
cordance généralisée", in A. Colorni, M. Paruccini, B. Roy (eds.), Selected Pa-
pers from the 49th and 50th meetings of the EURO Working Group on MCDA, EUR-
Report, to appear.
Fitting M.C., (1991), "Bilattices and the semantics of Logic Programming", Journal
of Logic Programming, vol. 11, 91-116.
Fortemps Ph., Slowinski R., (2001), "A graded quadrivalent logic for ordinal prefer-
ence modelling", Fuzzy Optimization and Decision Making, vol. 1, no. 1.
Grabisch M., Perny P., (2001), "Agrégation multicritère", in B. Bouchon-Meunier, C.
Marsala (eds.), Utilisation de la logique floue, Hermès, Paris, to appear.
Ginsberg M., (1988), "Multivalued logics: A uniform approach to reasoning in artificial
intelligence", Computational Intelligence, vol. 4, 265-316.
Greco S., Matarazzo B., Slowinski R., Tsoukias A., (1998), "Exploitation of a rough
approximation of the outranking relation in Multi-Criteria Choice and Ranking",
in T.J. Stewart and R.C. van der Honert (eds.), Trends in Multi-Criteria Decision
Making, Springer Verlag, LNEMS 465, Berlin, 45-60.
Greco S., Matarazzo B., Slowinski R., (2001), "Conjoint measurement and rough set
approach for multicriteria sorting problems in presence of ordinal criteria", in A.
Colorni, M. Paruccini, B. Roy, (eds.), AMCDA: Selected papers from the 49th and
50th meetings of the EURO MCDA Working Group, EUR-report, Ispra, 114-141.
Marchant Th., (1996), "Valued relations aggregation with the Borda method", Jour-
nal of Multiple Criteria Decision Analysis, vol. 5, 127-132.
Ngo The A., Tsoukias A., Vincke Ph., (2000), "A polynomial time algorithm to detect
PQI interval orders", International Transactions in Operational Research, vol. 7,
609-623.
Ngo The A., Tsoukias A., (2001), "A new axiomatisation of PQI interval orders",
submitted.
Ostanello A., (1985), "Outranking relations", in G. Fandel, J. Spronk, Multiple Cri-
teria Decision Methods and Applications, Springer Verlag, Berlin, 41-60.
Perny P., (1998), "Multicriteria filtering methods based on concordance and non-
discordance principles", Annals of Operations Research, vol. 80, 137-165.
Perny P., Roubens M., (1998), "Preference Modelling", in Handbook of Fuzzy Sets
and Possibility Theory, Operations Research and Statistics, R. Slowinski (ed.), D.
Dubois et H. Prade (series eds.), Kluwer Academic Publishers, 3-30.
Perny P., Roy B., (1992), "The use of fuzzy outranking relations in preference mod-
elling", Fuzzy Sets and Systems, vol. 49, 33-53.
Perny P., Tsoukias A., (1998), "On the continuous extension of a four valued logic
for preference modelling", in Proceedings of the IPMU 1998 conference, 302-309.
Pirlot M., (1995), "A characterization of 'min' as a procedure for exploiting valued
preference relations and related results", Journal of Multiple Criteria Decision
Analysis, vol. 4, 37-56.
Pirlot M., (1997), "A common framework for describing some outranking methods",
Journal of Multiple Criteria Decision Analysis, vol. 6, 86-92.
Roy B., (1968), "Classement et choix en présence de points de vue multiples (la
méthode ELECTRE)", Revue Française d'Informatique et de Recherche Opération-
nelle, vol. 8, 57-75.

Roy B., (1978), "ELECTRE III: Un algorithme de classement fondé sur une représentation
floue des préférences en présence de critères multiples", Cahiers du CERO, vol. 20,
1, 3-24.
Roy B., (1991), "The outranking approach and the foundations of ELECTRE meth-
ods", Theory and Decision, vol. 31, 49-73.
Roy B., Bouyssou D., (1993), Aide Multicritere d la Decision: Methodes et cas, Eco-
nomica, Paris.
Roy B., Mousseau V., (1996), "A theoretical framework for analysing the notion of
relative importance of criteria", Journal of Multi-criteria Decision Analysis, vol.
5,145-159.
Roy B., Skalka J.-M., (1984), "ELECTRE IS: Aspects methodologiques et guide
d'utilisation", Technical Report, Document du LAMSADE N°30, Universite Paris
Dauphine, Paris.
Roy B., Vincke Ph., (1984), "Relational systems of preferences with one or more
pseudo-criteria: some new concepts and results", Management Science, vol. 30,
1323-1335.
Roy B., Vincke Ph., (1987), "Pseudo-orders: definition, properties and numerical rep-
resentation", Mathematical Social Sciences, vol. 14, 263-274.
Tsoukias A., (1996), "A first-order, four valued, weakly paraconsistent logic", Cahier
du LAMSADE n. 139.
Tsoukias A., (1997), "Sur la généralisation des concepts de concordance et discor-
dance en aide multicritère à la décision", Mémoire présenté pour l'obtention de
l'habilitation à diriger des recherches, Université Paris-Dauphine, also appeared as
Document du LAMSADE n. 117.
Tsoukias A., Vincke Ph., (1995), "A new axiomatic foundation of partial comparabil-
ity", Theory and Decision, vol. 39, 79-114.
Tsoukias A., Vincke Ph., (1997), "Extended preference structures in MCDA", in J.
Climaco (ed.), Multi-criteria Analysis, Springer Verlag, Berlin, 37-50.
Tsoukias A., Vincke Ph., (1998), "Double Threshold Orders: a new axiomatisation",
Journal of Multiple Criteria Decision Analysis, vol. 7, 285-301.
Tsoukias A., Vincke Ph., (2001), "A characterization of PQI interval orders", to ap-
pear in Discrete Applied Mathematics.
Vanderpooten, D., (1990), "The construction of prescriptions in outranking methods",
in Bana e Costa C.A., (ed.), Readings in Multiple Criteria Decision Aid, Springer-
Verlag, Berlin, 184-215.
Vincke Ph., (1982), "Aggregation of preferences: a review", European Journal of Op-
erational Research, vol. 9, 17-22.
Vincke Ph., (1988), "P,Q,I preference structures", in J. Kacprzyk, M. Roubens, eds.,
Non conventional preference relations in decision making, LNEMS 301, Springer
Verlag, Berlin, 72-81.
Vincke Ph., (1992a), "Exploitation of a crisp relation in a ranking problem", Theory
and Decision, vol. 32, 221-240.
Vincke Ph., (1992b), Multicriteria Decision Aid, J. Wiley, New York.
Vincke Ph., (1999), "The Outranking Approach", in T. Gal, Th. Stewart, Th. Hanne,
(eds.), Multicriteria Decision Making: Advances in MCDM Models, Algorithms,
Theory and Applications, Kluwer Academic, Dordrecht.
EXPLORING THE CONSEQUENCES
OF IMPRECISE INFORMATION IN
CHOICE PROBLEMS USING ELECTRE

Luis C. Dias
INESC and Faculty of Economics, University of Coimbra, Portugal
Idias@pombo.inescc.pt

Joao Climaco
INESC and Faculty of Economics, University of Coimbra, Portugal
jciimaco@pombo.inescc.pt

Abstract ELECTRE I, IS and its variants are well-known methods to help Deci-
sion Makers (DMs) choose one action (alternative) from a discrete set.
Here, we consider the case where the DMs have difficulties in fixing pre-
cise values for all the parameters required by ELECTRE, such as the
importance of the criteria, the veto thresholds or the performances of
the actions. We indicate how to obtain robust conclusions when the
DMs specify a set of multiple acceptable combinations of values for the
parameters. We argue that robust conclusions should be studied at the
outranking relation level, and then suggest some approaches to enrich
such conclusions (introducing a tolerance) and to exploit them.

Keywords: Multi-criteria choice; Imprecision; Robust conclusions; Outranking;


ELECTRE

1. Introduction
We owe to Bernard Roy and his colleagues the ELECTRE methodology (see Roy, 1991; Roy and Bouyssou, 1993) and the concept of robustness analysis in multicriteria decision aid (Roy, 1998; Roy and Bouyssou, 1993). This paper addresses robustness analysis in the context of
ELECTRE methods (ELECTRE I and its variants) for choosing one
action from a discrete set considering multiple evaluation criteria.
Using ELECTRE requires setting the value of its input parameters
(we use this word in a broad sense), such as the performance of the

actions (alternatives), the importance and veto power of the criteria, etc.
Providing precise figures for all the parameters is often hard for Decision
Makers (DMs): some data may be missing, uncertain, contradictory or
imprecise, some modeling options are subject to arbitrariness, and some
parameters reflect the values of the DMs, which they may find difficult
to express and that may change over time (see also Roy, 1989; French,
1995).
To deal with these difficulties, we consider a setting where multiple combinations of values for the parameters are accepted, instead of a single precise one. This type of information is often named "imprecise" (e.g.
Miettinen and Salminen, 1999), "partial" (e.g. Charnetski and Soland,
1978) or "poor" (e.g. Bana e Costa and Vincke, 1995). The multiple
combinations of parameter values can be either enumerated or defined by
mathematical constraints (provided by the DMs or inferred from holistic
comparisons as in Mousseau, 1993).
Robustness analysis concerns the study of the results that are valid
for the multiple combinations of input values. We deem that this type
of analysis is useful from the very beginning of the decision process.
Firstly, it produces information (which conclusions are robust? which
results are more affected by the imprecision?) that may guide the DMs
in revising the information they provide, progressively narrowing the
range of acceptable values for the parameters. Secondly, it allows the
DMs to avoid the questions they consider difficult, or at least to postpone
these questions until they feel more familiar with the problem and more
confident about the answers. In group decisions, accepting multiple
combinations of parameter values may foster cooperation among the
DMs, as they may accept multiple opinions and may reach consensus
on the conclusions that are robust (see Dias and Climaco, 2000b, in the
context of sorting problems). This type of analysis may also be used
in the context of an interactive aggregation/ disaggregation approach to
decision aid (see Dias et al., 2000, in the context of sorting problems).
This paper discusses how to choose the most preferred action consid-
ering imprecise information, within the general context of ELECTRE
methods. Previous research concerning choice problems has focused on
the multiattribute value function model (for a recent review see Dias and
Climaco, 2000a), with few exceptions. For instance, Roy and Bouyssou
(1993) present a real-world study using ELECTRE IS where a robust-
ness analysis was conducted considering a discrete set of combinations
of input values; Miettinen and Salminen (1999) present an approach to
find the criteria weights (if any exist) that make a given action the best in terms of the min procedure (Pirlot, 1995).

In the next section we overview ELECTRE's way of constructing and


exploiting an outranking relation. In Section 3 we discuss our method-
ology of finding robust conclusions concerning the outranking relation
using optimization. The optimization problems range from very simple
ones (ELECTRE I) to the more complicated ones that may appear if
credibility indices are to be computed. If few robust conclusions are
found, then we suggest in Section 4 how to obtain richer (though more
fragile) results by introducing a tolerance. Section 5 addresses the ex-
ploitation of robust conclusions. We provide an example in Section 6
and we end with a concluding section.

2. Choosing with ELECTRE: brief reminder


Consider a set of m actions A = {a_1, ..., a_m} described by their performances at n criteria. Let g_j(a_i) denote the performance of a_i at the j-th criterion. The first phase of ELECTRE consists in building an outranking relation defined over A. For each ordered pair of actions (a_x, a_y), the method finds whether a_x outranks a_y (i.e. a_x is at least as good as a_y) or not. We will use the following notation for the parameters:

• Δ_j represents the advantage of a_x over a_y at the j-th criterion: Δ_j = g_j(a_x) - g_j(a_y) if the j-th criterion is to be maximized, and Δ_j = g_j(a_y) - g_j(a_x) if it is to be minimized;
• k_j ≥ 0 denotes the importance coefficient (weight) of the j-th criterion, such that Σ_{j=1}^{n} k_j = 1;
• s denotes the concordance threshold; and
• v_j denotes the veto threshold of the j-th criterion.

The conditions for stating that a_x outranks a_y (denoted a_x S a_y) depend on the method:

ELECTRE I (Roy, 1968; Roy and Bouyssou, 1993):

a_x S a_y \Leftrightarrow c(a_x, a_y) = \sum_{j : \Delta_j \geq 0} k_j \geq s \;\wedge\; \forall j,\ \Delta_j \geq -v_j.

ELECTRE IS (Roy and Skalka, 1984; Roy and Bouyssou, 1993): besides the previous parameters, let q_j and p_j denote the indifference and preference thresholds, respectively, of the j-th criterion. Then,

a_x S a_y \Leftrightarrow c(a_x, a_y) = \sum_{j} k_j c_j(a_x, a_y) \geq s \;\wedge\; \forall j,\ \Delta_j \geq -v_j + q_j w_j(a_x, a_y),

where

c_j(a_x, a_y) = \begin{cases} 0 & \text{if } \Delta_j < -p_j \\ (p_j + \Delta_j)/(p_j - q_j) & \text{if } -p_j \leq \Delta_j < -q_j \\ 1 & \text{if } \Delta_j \geq -q_j \end{cases}

and

w_j(a_x, a_y) = \frac{1 - c(a_x, a_y) - k_j}{1 - s - k_j}, \quad \text{or} \quad w_j(a_x, a_y) = \frac{1 - c(a_x, a_y)}{1 - s} \ \text{(original version)}.

A third option is to use the credibility index \sigma as defined for the ELECTRE III method:

a_x S a_y \Leftrightarrow \sigma(a_x, a_y) \geq s \quad \text{(see Roy and Bouyssou, 1993, for details)}.

Note that we may generalize and write

a_x S a_y \Leftrightarrow r(a_x, a_y) \geq 0, \qquad (1)

where
r(a_x, a_y) = \min\{c(a_x, a_y) - s,\ \Delta_j + v_j\ (j = 1, \ldots, n)\} (ELECTRE I),
r(a_x, a_y) = \min\{c(a_x, a_y) - s,\ \Delta_j + v_j - q_j w_j(a_x, a_y)\ (j = 1, \ldots, n)\} (ELECTRE IS), or
r(a_x, a_y) = \sigma(a_x, a_y) - s (ELECTRE based on credibility indices).
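A direct transcription of the ELECTRE I variant of (1) might look as follows (a sketch only; names are ours, and criteria are assumed to be maximised unless stated otherwise):

```python
def outranks_electre1(gx, gy, weights, vetoes, s, maximize=None):
    """Return True iff a_x outranks a_y: the weights of the criteria where a_x is not
    worse reach the concordance threshold s and no criterion raises a veto."""
    n = len(gx)
    maximize = maximize or [True] * n
    delta = [gx[j] - gy[j] if maximize[j] else gy[j] - gx[j] for j in range(n)]
    concordance = sum(weights[j] for j in range(n) if delta[j] >= 0)
    return concordance >= s and all(delta[j] >= -vetoes[j] for j in range(n))
```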

The following relations can be obtained from the outranking relation:
a_x P a_y (a_x is preferred to a_y) ⟺ a_x S a_y ∧ ¬(a_y S a_x);
a_x I a_y (a_x is indifferent to a_y) ⟺ a_x S a_y ∧ a_y S a_x;
a_x R a_y (a_x is incomparable to a_y) ⟺ ¬(a_x S a_y) ∧ ¬(a_y S a_x).
The exploitation of the outranking relation finds a subset of actions, as
small as possible, yet containing the most preferred one. This "kernel"
K ⊆ A is such that actions outside of K are outranked by at least
one action in K (hence justifying its exclusion) and no action in K is
outranked by another action in K that could exclude the former. If the
graph representing the outranking relation contains cycles, it is necessary
to regard each cycle as a single action (or a class of indifferent actions)
to guarantee the existence of a kernel K satisfying those two conditions
(for details see Roy and Bouyssou, 1993).

3. Robust conclusions concerning the


outranking relation
Roy (1998) presented a framework defining the concept of robust
conclusion as a formalized premise that is true for all the acceptable
combinations of parameter values. He also introduced the notions of
approximately robust conclusion (if it is true "almost everywhere") and
pseudo-robust solution (if not perfectly formalized). In a different frame-
work, Vincke (1999) defines the concepts of robust solutions and robust
methods. For Vincke, a robust solution is one that is "close" (in some

formalized manner) to all the solutions that correspond to admissible


combinations of parameter values.
Robustness analysis contrasts with sensitivity analysis, conducted after obtaining a result, which determines how much each parameter may vary (often changing only one at a time) without leading to a different result. Although useful, these sensitivity analyses require an initial value
for each parameter and focus on the result found initially, hence ignor-
ing other interesting conclusions that might have been found with other
initial values. This is different from global sensitivity analysis, which ascertains how the variability in a model's output may be apportioned to the variability of the model's inputs (see Saltelli et al., 1999).
Let T denote the set of all acceptable combinations of values for the
parameters (performances, importance coefficients, veto thresholds, ... ).
We will use Roy's definition of a robust conclusion, although there is a
further distinction (introduced in Dias and Climaco, 1999) that we deem
useful when using decision aid methods:
- An absolute robust conclusion is a premise intrinsic to one of the
actions, which is valid for every combination in T (e.g. "the value of
action ax is greater than 0.7" in an additive aggregation model).
- A (relative) binary robust conclusion is a premise concerning a pair
of actions, which is valid for every combination in T (e.g. "ax outranks
a y " in ELECTRE).
- A (relative) unary robust conclusion is a premise concerning one
action but relative to others, which is valid for every combination in T
(e.g. "ax belongs to the kernel" in ELECTRE).
We will illustrate the methodology we advocate through the following
fictitious example:
Example 1. Four actions are being compared according to five crite-
ria (to be maximized) using ELECTRE I. Table 1 displays their perfor-
mances, as well as the veto thresholds chosen for the criteria. Consider

Table 1. Performances and veto thresholds for Example 1

         g1   g2   g3   g4   g5
    a1   40   20   20   30   30
    a2   30   30   30   20   10
    a3   20   40   20   40   20
    a4   20   10   40   10   40
    vj   40   25   15   40   40

that the importance of the last two criteria has been fixed (k4 = 0.1 and k5 = 0.2), hence there is imprecise information concerning only the

[Figure 1. Outranking relations (arrows denote outranking) and kernels for the combinations in Ts.]

remaining three criteria. Suppose that this information leads to the following constraints defining T:

k1 ∈ [0.1, 0.4], k2 ∈ [0.1, 0.4], k3 ∈ [0.1, 0.2], k1 + k2 + k3 = 0.7.   (2)
Roy has suggested testing the robustness of a conclusion in a finite
number of sample combinations Ts from T (Roy and Bouyssou, 1993;
Roy, 1998): a few values are chosen for each parameter (e.g. maxi-
mum, central and minimum); then, the sample consists of the admissible
combinations of such parameter values. However, when the constraints
defining T inter-relate the parameter values, the set of admissible com-
binations taken as a sample may not be representative enough. An
alternative sample could consist of the extreme points of T (assuming it is a polytope), together with one or more points in its interior, e.g. Ts = {(.1, .4, .2), (.4, .1, .2), (.2, .4, .1), (.4, .2, .1), (.275, .275, .15)}. If we choose the latter sample Ts and compute ELECTRE I's results for each point we will find the outranking relations and kernels in Figure 1 for a concordance level s = 0.55.
Now, considering the final results of the method (i.e. the kernel K), there are only two (unary robust) conclusions that hold for all the combinations in Ts: "a4 ∈ K" and "a2 ∉ K". However, if we look for binary robust conclusions concerning the outranking relation, we will
find that a2 always outranks a4, whereas a4 never outranks a2! This
paradox, resulting from the intransitivity of the outranking relation and
the rules for forming the kernel, forced us to ponder which kind of robust conclusion the analysis should focus on. This example should not be dis-
missed on the grounds that it deals with a discrete set of combinations.
Indeed, a discrete T may correspond to a set of future scenarios or to a
group of DMs that decides to form T as the union of the combinations
of values that each of them deems worth considering. (Note: if we considered T instead of Ts, "a4 ∈ K" and "a2 ∉ K" would hold for every combination except (.15, .35, .2), which lies on the boundary of T, whereas "a2 S a4" and "not a4 S a2" would always hold.)
In our opinion, searching for binary robust conclusions concerning the outranking relation fits well with ELECTRE's philosophy of confronting actions in pairs. Moreover, this has the advantage of being
more transparent to the DMs to the extent that the concept of outrank-
ing is easier to understand than the concept of kernel. It also avoids
the embarrassment of electing a winner (a4 in the previous example) in
a situation where there is another action (a2) that is preferable to it
(outranks it without being outranked) according to all the combinations
of values for the parameters.
Given an ordered pair of actions (ax, ay), from (1), considering r (ax, ay)
as a function of a combination of parameter values, we may test the ro-
bustness of conclusions concerning the outranking relation as follows:

min{r(ax, ay, t) : t ∈ T} ≥ 0;   (3)

max{r(ax, ay, t) : t ∈ T} < 0.   (4)


Therefore, we can uncover robust conclusions by maximizing and minimizing r(ax, ay, t) (i.e. finding its range) in the domain T, for the multiple ordered pairs of actions (note that the range of r(ax, ay, t) bears no information concerning the range of r(ay, ax, t)).
Maximizing and minimizing r(.) is often straightforward, even if this
function corresponds to credibility indices, provided that the criteria
weights can be varied independently of the thresholds and all constraints
are linear. This is a reasonable assumption given the different nature of
the parameters, but if it is violated the optimization does not become impossible: it only becomes harder to perform. The difficulty of the problems to be solved ranges from linear programming (ELECTRE I and the original version of ELECTRE IS) to the optimization of non-linear
quasiconcave functions (more recent version of ELECTRE IS and credibility indices). For details concerning the latter see Dias and Climaco (1999).
Back to Example 1, since only the importance coefficients are variable,
we would maximize and minimize the concordance indices c(.), subject
to (2). The range for the concordance indices concerning the pairs where
the outranking is not vetoed is presented in Table 2.
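For instance, the (a1, a2) entry of the table below can be obtained by solving two small linear programs; a possible sketch using scipy.optimize.linprog (the use of scipy and all variable names are our own choices, not part of the original study):

    from scipy.optimize import linprog

    # a1 is at least as good as a2 on g1, g4 and g5, so c(a1, a2) = k1 + k4 + k5
    # = k1 + 0.3 (k4 = 0.1 and k5 = 0.2 are fixed); only (k1, k2, k3) vary.
    obj = [1.0, 0.0, 0.0]                                  # objective: k1
    bounds = [(0.1, 0.4), (0.1, 0.4), (0.1, 0.2)]          # box part of (2)
    A_eq, b_eq = [[1.0, 1.0, 1.0]], [0.7]                  # k1 + k2 + k3 = 0.7

    low = linprog(obj, A_eq=A_eq, b_eq=b_eq, bounds=bounds).fun + 0.3
    high = -linprog([-c for c in obj], A_eq=A_eq, b_eq=b_eq, bounds=bounds).fun + 0.3
    print(round(low, 3), round(high, 3))                   # 0.4 0.7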

Table 2. Concordance ranges for Example 1

    ax \ ay   a1          a2          a3          a4
    a1        -           [0.4, 0.7]  [0.5, 0.8]  veto
    a2        [0.3, 0.6]  -           [0.3, 0.6]  [0.6, 0.7]
    a3        [0.4, 0.7]  [0.4, 0.7]  -           veto
    a4        [0.3, 0.4]  [0.3, 0.4]  veto        -

Thus, the following conclusions concerning S are robust, in that they hold for every combination of weights satisfying the constraints, given the fixed veto thresholds and s = 0.55:
- a1 and a3 do not outrank a4;
- a2 outranks a4;
- a4 does not outrank any other action.

The binary relations SR and NR, corresponding to the robust conclusions, define an interval outranking relation, bounded from below by SR and bounded from above by the complement of NR, such that, if St denotes the outranking relation corresponding to a combination t of parameter values, then SR ⊆ St ⊆ complement(NR) for every t ∈ T.
4. Enriching the robust conclusions
This section addresses the problem (that may arise frequently) of deal-
ing with relations SR and NR that are too poor (i.e. they hold for few
ordered pairs). The ideal way of enriching these relations (and thereby
increasing the number of robust conclusions) is to reduce the impreci-
sion concerning the inputs by asking the DMs for more information. This
elicitation should proceed in an interactive manner, so that the results
of the analysis at a given iteration may prompt the discussion concern-
ing the following one. For instance, DMs may deem that SR (or NR)
should apply to a given pair (ax, ay ), based on their capacity to judge
the actions holistically. They may also wish to know the combination of
parameter values leading to a given minimum or maximum r(.). Then,
if they consider the values unacceptable, they could insert a constraint to make that combination infeasible.
Richer conclusions may also be obtained if conditions (3) and (4) are somehow relaxed. Of course, in this case the conclusions will be more fragile and the argument for accepting them becomes less compelling. We next propose two different types of relaxation.
Type 1: Ignoring a small fraction of the volume of T
The DMs may regard as robust a conclusion that holds for almost all of the combinations in T. Assuming that T is a compact set with non-empty interior, we may formalize this by stating that the DMs could accept as robust a conclusion that holds for a large proportion of the volume of T. Such a conclusion might be called "approximately robust", according to the definition of Roy (1998).
Let Vol(t ∈ T : r(ax, ay, t) ≥ 0) denote the volume of the part of T where ax S ay, and let Vol(T) denote the volume of T. Let ε denote a tolerance as regards the relative volumes where each conclusion holds, such that 0 < ε < 0.5. Then, we may define the following relations to generalize (3) and (4) (when T is a discrete set, these relations can be defined by counting the proportion of the elements of T that yield ax S ay):

• ax SV(ε) ay ⇔ Vol(t ∈ T : r(ax, ay, t) ≥ 0) / Vol(T) ≥ 1 - ε;

• ax NV(ε) ay ⇔ Vol(t ∈ T : r(ax, ay, t) ≥ 0) / Vol(T) ≤ ε.

These relations are more general and flexible than SR and NR: they coincide with SR and NR when ε = 0 and become richer (i.e. hold for a larger number of pairs of actions) as ε increases.
Table 3 presents the relative volumes of T where each outranking occurs concerning Example 1. If the DMs regard as robust any conclusion that holds for 95% of the acceptable combinations (setting ε = 0.05), then the following conclusions would be added to the list presented in the previous section:
- a2 does not outrank a1;
- a2 does not outrank a3.
The conclusion "a1 outranks a3" would also be accepted if the DMs increased ε up to 0.2. Of course, the DMs should be given an idea of the combinations in the region they are neglecting, which might contain the values that they consider the most adequate.
The volume of T may be computed analytically in the case of a polyhedral T (e.g. Lasserre, 1983; Büeler et al., 1998). An alternative approach is to compute approximate volumes using Monte Carlo simulation (e.g. Charnetski and Soland, 1978).
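For Example 1 this is a few lines of code; the sketch below (the sampling scheme and all names are our own) estimates the proportion of T in which a1 outranks a2, a pair for which no veto can be raised:

    import random

    def sample_T():
        """Draw (k1, ..., k5) from T defined by (2), uniformly with respect to
        the (k1, k3) parameterization, by rejection sampling."""
        while True:
            k1 = random.uniform(0.1, 0.4)
            k3 = random.uniform(0.1, 0.2)
            k2 = 0.7 - k1 - k3
            if 0.1 <= k2 <= 0.4:
                return (k1, k2, k3, 0.1, 0.2)      # k4 and k5 are fixed

    s, n, hits = 0.55, 100_000, 0
    for _ in range(n):
        k = sample_T()
        if k[0] + k[3] + k[4] >= s:    # c(a1, a2) = k1 + k4 + k5; no veto here
            hits += 1
    print(hits / n)                    # should be close to the 60% of Table 3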
Table 3. Percentage of combinations for which outranking occurs (Example 1)

    ax \ ay   a1    a2    a3    a4
    a1        -     60%   80%   veto
    a2        5%    -     5%    100%
    a3        40%   60%   -     veto
    a4        0%    0%    veto  -

A different idea to relax the conditions for robustness, with a similar rationale, consists in "contracting" the set T by replacing each linear constraint of the form
ai1 t1 + ... + aik tk ≥ βi (t1, ..., tk denote the parameters; i indexes the constraint)
by a constraint
ai1 t1 + ... + aik tk ≥ βi + ε (ε represents a positive tolerance value).
This places more emphasis on the more central combinations of T, which is particularly indicated when the DMs reason in terms of strict inequalities (e.g. k1 > k2, which the relaxation converts into k1 ≥ k2 + ε), but it requires the DMs to code all the constraints in a similar manner so that they may attribute a meaning to the value of ε.

Type 2: Introducing a tolerance when comparing r(.) with 0

The simplest relaxation of all is to consider a small non-negative tolerance ε when comparing r(.) with zero. This amounts to generalizing (3) and (4) by writing

• ax SZ(ε) ay ⇔ min{r(ax, ay, t) : t ∈ T} ≥ -ε;

• ax NZ(ε) ay ⇔ max{r(ax, ay, t) : t ∈ T} < ε.


This type of relaxation can be readily applied whether T is a discrete set or not. The relations SZ(ε) and NZ(ε) coincide with SR and NR when ε = 0 and become richer as ε increases. However, there is an important difference with respect to Type 1: the relations SZ(ε) and NZ(ε) are not guaranteed to be mutually exclusive. In fact, there will appear pairs of actions (ax, ay) such that ax SZ(ε) ay and ax NZ(ε) ay as soon as ε exceeds the threshold
εK = min over all ordered pairs (ax, ay) of max{ -min{r(ax, ay, t) : t ∈ T}, max{r(ax, ay, t) : t ∈ T} }.
Therefore, this type of relaxation fits naturally into the four-valued logic framework of Tsoukias and Vincke (1997). Considering an ordered pair of actions (ax, ay), the statement that ax outranks ay may be:

• "true", if ax SZ(ε) ay ∧ ¬(ax NZ(ε) ay);

• "false", if ¬(ax SZ(ε) ay) ∧ ax NZ(ε) ay;

• "unknown", if ¬(ax SZ(ε) ay) ∧ ¬(ax NZ(ε) ay);

• "contradictory", if ax SZ(ε) ay ∧ ax NZ(ε) ay.

Concerning Example 1, if we consider ε = 0.055 (which is 10% of s and lower than εK = 0.15), we would accept an outranking if the minimum concordance was not lower than s - ε = 0.495 and we would reject that outranking if the maximum concordance was lower than s + ε = 0.605 (or if a veto occurred). Hence, we would reach the same conclusions as when we used a Type 1 relaxation with ε = 0.2.

We believe that both types of relaxation are adequate and provide a compelling rationale (if ε is small) for accepting or rejecting an outranking. The second type of relaxation may even be combined with the first type. It is very easy to perform (after the ranges of r(.) have been computed) and places the emphasis on the output rather than on the inputs. It also allows contradiction, which enables a richer analysis.
It is important to note that these relaxations are intended to be used in an interactive manner, where the DMs may experiment with the different types and with different values for ε, with the objective of acquiring insight and of being able to provide new information. Let us also note that we have used the function r(.) to allow a more general presentation of our approach. A possible drawback is that r(.) is the minimum between different aspects concerning concordance and discordance (veto) when using ELECTRE I or IS, which can make the value of ε somewhat difficult to interpret in the definitions of SZ(ε) and NZ(ε) (relaxation of Type 2). However, this is not important, because:

• SR, NR and their relaxations can be redefined to deal with concordance and discordance separately;

• the performances can be normalized to be comparable, as in the original version of ELECTRE I;

• the function r(.) may be defined to deal with discordance and/or concordance in terms of relative deviation, e.g.
5. Exploiting the robust conclusions


The most important goal of decision aid is perhaps the insight it gen-
erates. It may even happen that the best action becomes obvious once
the DMs have learned enough about the situation and their preferences.
In this perspective, finding robust conclusions (possibly relaxed ones)
concerning the outranking relation S yields the most important bene-
fit. However, the DMs often need a structured approach (exploitation
procedure) to select an action.
The exploitation of the robust conclusions in the context of a choice
problem may be conducted by various means. A very important aspect is
that this exploitation should not be isolated from the construction of the
outranking relation. Instead, the exploitation of the robust conclusions
and the identification of the results exhibiting higher variability (i.e.
the pairs of actions for which the range of r(.) is wider) should prompt
the DMs to revise the information that defines T, possibly reducing
the amount of imprecision, which in turn leads to a new set of robust
conclusions and a new iteration of the exploitation process, and so forth.
The literature on outranking relations offers some ideas to address our
exploitation problem, as the following list demonstrates:

Exploiting the relations S and N

Let us first consider that a Type 2 relaxation has been chosen, meaning that the robust conclusions allow us to consider outrankings as "true", "false", "unknown" or "contradictory". Greco et al. (1997) propose the use of a score-based net flow procedure. Each action ax ∈ A would get the score
snf(ax) = #{ay ∈ A : ¬(ax NZ(ε) ay)} - #{ay ∈ A : ¬(ay NZ(ε) ax)} + #{ay ∈ A : ¬(ay SZ(ε) ax)} - #{ay ∈ A : ¬(ax SZ(ε) ay)}.
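A direct transcription of this score (with SZ and NZ represented as sets of ordered pairs; the data structures are our own choice) could read:

    def snf(a, actions, SZ, NZ):
        """Score-based net flow of the statement above; SZ and NZ are sets of
        ordered pairs (x, y) such that x SZ(eps) y, respectively x NZ(eps) y."""
        return (sum(1 for b in actions if (a, b) not in NZ)    # "a over b" not false
                - sum(1 for b in actions if (b, a) not in NZ)  # "b over a" not false
                + sum(1 for b in actions if (b, a) not in SZ)  # "b over a" not true
                - sum(1 for b in actions if (a, b) not in SZ)) # "a over b" not true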
Tsoukias and Vincke (1997) suggest that the "true" and "not false" relations could be separately exploited by some procedure to produce two rankings, which would be combined afterwards.
Considering now that a Type 1 relaxation is being used, or even no relaxation at all, the robust conclusions allow us to consider the outranking of a given action over some other as "true" (if S is robust), "false" (if N is robust), or "unknown" (remaining cases). In this case, we are in the presence of an interval outranking relation bounded by SR (or its relaxation) and by the complement of NR (or of its relaxation), with the former contained in the latter. This means that the exploitation procedure of ELECTRE II (e.g. see Roy and Bouyssou, 1993: 409-415) can be used, considering SR as the strong outranking relation and the complement of NR as the weak one.
The two procedures outlined above can also be used, since they do not
differentiate "unknown" from "contradictory" outrankings.
Notice that although we are interested in choice problems, all these
procedures provide a ranking of the actions.

Exploiting valued outranking relations

It is straightforward to use a relaxation of Type 1 to define a valued (fuzzy) outranking relation. Given an ordered pair (ax, ay) of actions, the credibility of the statement "ax outranks ay" is equal to the proportion of T's volume where such an outranking occurs (this idea of associating volumes with a valued relation is also present in Bana e Costa and Vincke, 1995). The credibility is maximum when ax SR ay, it is minimum when ax NR ay, and it has an intermediate value in the remaining cases. There are many methods to exploit binary valued relations, namely the net-flow procedure (Bouyssou, 1992), the min procedure (Pirlot, 1995), and ELECTRE III's distillation algorithms (Roy and Bouyssou, 1993).
Exploiting a single outranking relation
A different idea is to work with a single outranking relation SM and then exploit it to find a kernel according to the ELECTRE I/IS methods. This outranking relation could be SR or the complement of NR, if one of these relations is rich enough to exploit. Otherwise, the DMs could consider the relaxation of SR or the complement of the relaxation of NR. If a relaxation of Type 1 is used, then SR ⊆ SM ⊆ complement(NR). However, note that the relation SM might not correspond to a combination of parameter values in T.
Another possibility is to consider a "central" combination tC ∈ T and to exploit the outranking relation SM that this combination yields (SR ⊆ SM ⊆ complement(NR)). The central combination tC ∈ T may be computed by following one of two approaches: tC may be chosen as a combination (there may be several) maximizing the minimum slack among the inequality constraints defining T; or tC may be chosen as the centroid of T (Solymosi and Dombi, 1986), which is very easily computed when T is defined by a ranking of the values for the parameters.
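For instance, when T is given only by a ranking of the weights (say k1 ≥ k2 ≥ ... ≥ kn ≥ 0 with the weights summing to one), the centroid works out to the well-known rank-order centroid weights; a minimal sketch under that assumption:

    def rank_order_centroid(n):
        """Centroid of T = {k1 >= k2 >= ... >= kn >= 0, sum(k) = 1}: the average
        of the n vertices (1,0,...,0), (1/2,1/2,0,...,0), ..., (1/n,...,1/n)."""
        return [sum(1.0 / j for j in range(i, n + 1)) / n for i in range(1, n + 1)]

    print([round(w, 3) for w in rank_order_centroid(4)])   # [0.521, 0.271, 0.146, 0.062]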
The main objective of using these exploitation techniques should be
to prompt the DMs to revise their inputs and provide more information.
Hence, experimenting with several of these techniques could enable a
richer analysis. The exploitation can also be used to put an end to the
analysis, particularly when SR is close to the complement of NR.

6. Illustrative example
As an illustration, let us consider the choice of a machine to sort
packages, a problem faced by the French postal service (presentation
based on Roy and Bouyssou, 1993: 501-541). In that study, ELECTRE IS was used to compare 9 actions according to 12 criteria (Table 4).

Table 4. Performances (to be maximized) and thresholds for the example by Roy and Bouyssou (the preference thresholds pj coincide with qj)

         g1    g2    g3    g4    g5    g6    g7    g8    g9      g10   g11   g12
    a1   75    69    68    70    82    72    86    74    -15.23  83    76    29
    a2   81    60    82    70    66    52    86    60    -15.7   83    76    71
    a3   77    60    82    50    66    60    86    60    -15     83    82    71
    a4   73    57    82    90    75    61    93    60    -15.55  83    71    29
    a5   76    46    55    90    48    46    93    60    -36.68  83    50    14
    a6   75    63    68    90    98    63    78    61    -22.9   100   68    57
    a7   73    63    68    70    98    86    78    61    -19.58  100   74    57
    a8   77    31    41    50    59    79    71    60    -15.47  67    76    86
    a9   96    69    41    70    49    60    57    60    -13.99  83    50    86
    qj   5     5     5     5     5     8     10    0     1       10    5     10
    vj   50    50    40    100   40    25    100   50    5       100   30    50
    kj   3/39  2/39  5/39  3/39  3/39  5/39  2/39  2/39  5/39    1/39  5/39  3/39

The criteria weights used in the original example are depicted in the kj row of Table 4. These weights were chosen to satisfy the following system, which reflects the opinion of the DMs:
(i) k10 < k2 = k7 = k8 < k1 = k4 = k5 < k3 = k6 = k9 = k11,
(ii) k10 ≤ k12 ≤ k11,
(iii) k1 = k2 + k10,
(iv) k11 = k1 + k2,
(v) kj ≥ 0 (j = 1, ..., 12).
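As a quick check, the weights listed in the kj row of Table 4 (taken up to their common normalizing factor, which leaves (i)-(v) unaffected) do satisfy this system:

    # Weights of the kj row of Table 4, up to their common normalization;
    # conditions (i)-(v) are invariant under such a positive rescaling.
    k = dict(enumerate([3, 2, 5, 3, 3, 5, 2, 2, 5, 1, 5, 3], start=1))

    ok = (k[10] < k[2] == k[7] == k[8] < k[1] == k[4] == k[5]
              < k[3] == k[6] == k[9] == k[11]                      # (i)
          and k[10] <= k[12] <= k[11]                              # (ii)
          and k[1] == k[2] + k[10]                                 # (iii)
          and k[11] == k[1] + k[2]                                 # (iv)
          and all(w >= 0 for w in k.values()))                     # (v)
    print(ok)   # True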
The original study set s = 0.7, although it admitted that s ∈ [0.63, 0.73] when performing a robustness analysis a posteriori. The outranking relation from the original study is depicted in Figure 2. Based on this relation, a5, a6 and a8 can obviously be excluded, while a1 justifies the exclusion of a2, a3 and a4, which form an indifference class. Hence, the kernel is K = {a1, a7, a9}.
In our example, we will proceed by considering the imprecise information defined by the constraints above and see what conclusions may be drawn. We will use the same values as the original study for the thresholds associated with the criteria (rows qj and vj in Table 4). The set T is defined by the constraints (i)-(v), the constraint k1 + ... + k12 = 1 (which is not restrictive), and the bounds s ∈ [0.63, 0.73].
Let us now define r(.) to account for discordance as a ratio to the veto
thresholds, in order to attribute some meaning to inter-criteria compar-
isons of discordance:
Figure 2. Outranking relation for the example by Roy and Bouyssou (a5 does not
appear since it is outranked by every other action).

Table 5. Ranges for r(.)

    ax \ ay  a1            a2            a3            a4            a5
    a1       -             [-.08,.04]    [-.12,.01]    [.05,.18]     [.18,.31]
    a2       [-.28,.05]    -             [.13,.25]     [-.03,.12]    [.18,.31]
    a3       [-.16,.01]    [.18,.31]     -             [.1,.25]      [.08,.31]
    a4       [-.09,.09]    [.01,.12]     [-.03,.08]    -             [.27,.37]
    a5       [-4.34,-3.82] [-3.95,-3.57] [-4.29,-3.82] [-4.18,-3.71] -
    a6       [-1.21,-.8]   [-1.10,-.75]  [-1.14,-.83]  [-.81,-.59]   [.2,.33]
    a7       [-.09,.08]    [-.24,.01]    [-.48,-.17]   [-.26,0]      [.13,.25]
    a8       [-.23,-.07]   [-.31,-.15]   [-.38,-.19]   [-.31,-.15]   [-.09,.06]
    a9       [-.67,-.15]   [-.31,-.14]   [-.44,-.23]   [-.38,-.19]   [-.02,.13]

    ax \ ay  a6            a7            a8            a9
    a1       [-.05,.23]    [-.09,.16]    [-.23,-.14]   [-.38,-.26]
    a2       [-.12,.06]    [-.69,-.45]   [-.3,-.03]    [-.11,.08]
    a3       [.01,.15]     [-.60,-.23]   [.01,.23]     [-.17,.01]
    a4       [-.05,.16]    [-.67,-.25]   [-.33,-.22]   [-.42,-.29]
    a5       [-2.81,-2.32] [-3.47,-2.98] [-3.90,-3.56] [-4.10,-3.79]
    a6       -             [-.57,-.17]   [-1.05,-.74]  [-1.15,-.91]
    a7       [.18,.31]     -             [.01,.18]     [-.48,-.24]
    a8       [-.2,-.1]     [-.2,-.1]     -             [-.12,.06]
    a9       [-.54,-.40]   [-1.20,-.56]  [-.42,-.03]   -

According to our approach, we have to find the maximum and minimum of r(ax, ay, t), subject to t ∈ T, for all ordered pairs (ax, ay) ∈ A × A (Table 5). There are many robust conclusions that may be drawn from these results. In particular, we may note that a5 never outranks any other action and is always outranked by a1 to a7. The action a6 never
outranks any other action (except a5) and is always outranked by a3 and a7. Obviously, a5 and a6 are not contenders for the best action and can be deleted.
Figure 3 represents the relation SR through thick arrows and the complement of NR through segmented arrows. Hence, a thick arrow may be read as "always outranks", a segmented arrow may be read as "may outrank", and the absence of an arrow indicates "never outranks". These relations could be exploited by any of the techniques from the previous section. As an example, the exploitation of the relation SR according to the rules of ELECTRE I/IS leads to the kernel K = {a1, a7, a9}, which is equal to the original study's.
To continue this example, let us suppose that the DMs were invited to think about the doubtful outrankings and would answer that they were expecting a1 to outrank a7. At this point, they could learn that the combination yielding the minimum r(a1, a7) was k1 = k4 = k5 = 0.08, k2 = k7 = k8 = k10 = 0.04, k3 = k6 = k9 = k11 = k12 = 0.12, and s = 0.73. Analyzing this information, suppose that the DMs would state that k2 should not be less than k12, which corresponds to a new constraint on T (in this case, if they stated that a1 S a7, then this could also be coded as a linear constraint). The ranges for the r(.) functions would become smaller, as shown in Table 6.
By now it is clear that a1 S a7, although the DMs did not require it directly. Let us also suppose that the DMs would accept a Type 2 relaxation with a tolerance ε = 0.03. Figure 4 depicts the conclusions corresponding to SZ(0.03) and NZ(0.03). The decision process could then continue, either asking the DMs for more information (e.g. is a1 preferred to a4 or indifferent to it?), or exploiting the relations obtained. Action a1 would have the highest net-flow score and would appear at the top of ELECTRE II's exploitation ranking. It would also belong to the

Figure 3. Relations "always outranks" and "may outrank".


Table 6. Ranges after additional constraint (a5 and a6 deleted)

    ax \ ay  a1           a2           a3           a4           a7           a8           a9
    a1       -            [-.04,.04]   [-.09,.01]   [.05,.17]    [.03,.16]    [-.19,-.14]  [-.34,-.26]
    a2       [-.28,.02]   -            [.13,.24]    [-.03,.10]   [-.69,-.47]  [-.17,-.03]  [-.06,.08]
    a3       [-.16,-.02]  [.18,.30]    -            [.1,.24]     [-.60,-.25]  [.07,.23]    [-.13,.01]
    a4       [-.09,.07]   [.06,.12]    [.01,.08]    -            [-.58,-.25]  [-.29,-.22]  [-.39,-.29]
    a7       [-.09,.06]   [-.18,.01]   [-.43,-.17]  [-.26,-.02]  -            [.07,.18]    [-.42,-.24]
    a8       [-.23,-.10]  [-.31,-.17]  [-.38,-.20]  [-.31,-.17]  [-.20,-.11]  -            [-.12,.04]
    a9       [-.67,-.20]  [-.31,-.16]  [-.44,-.25]  [-.38,-.20]  [-1.20,-.61] [-.42,-.06]  -

Figure 4. Relations "always outranks" and "may outrank" after additional con-
straint and accounting for a tolerance of 0.03.

kernel if either SZ(0.03) or the complement of NZ(0.03) were exploited by the usual process in ELECTRE I/IS.
This example illustrates how it is possible to work with imprecise information as a means to obtain robust conclusions. In this case, the robust conclusions, or slight relaxations of them, are rich enough to advance towards their exploitation. The exploitation led to results very similar to those of the original study, but easier to justify, since we did not need to fix precise values for the parameters for which only imprecise information was available. However, the use of these results to elicit further information from the DMs (hence constraining T) would probably be even more interesting.

7. Concluding remarks
Instead of bulldozing the difficulties and hesitations of the DMs through a quest for the right combination of values for the parameters, we deem that imprecision should be accepted from the very beginning of the decision aid process. This makes it possible to alleviate the DMs' cognitive burden at the beginning, postponing the most difficult questions to a stage when
they are more familiar with the problem at hand and the decision aid
method.
The analysis proposed here explores the consequences of the imprecise information that the DMs are able (or willing) to provide. We focus on exploration rather than aggregation, avoiding the computation of averages, median values, and other usual aggregation means. The exploration allows us to discover which conclusions are robust and to identify which conclusions are more affected by the imprecision. This is particularly important as regards the questions that can be posed to the DMs when more information is needed.
This paper does not propose a precise method. Rather, we feel that
the array of tools to be used will depend on the problem at hand. A
decision support system implementing several of these tools would hence
be quite helpful. Most of the approaches proposed here have the pos-
sible drawback of demanding some computational effort (optimization
or volume computation). However, the DMs will not perceive this for
two reasons. On the one hand, today's low cost personal computers are
sufficiently powerful to solve these problems in acceptable time for the
problem dimensions usually found in practice. On the other hand, the
DMs need only understand the results, and not the algorithmic details
of their computation. The concept of a robust conclusion such as never
outranks or always outranks is, of course, easy to apprehend. We might
say that this is a case of using "hard" tools for a "softer" decision aid.

References
Bana e Costa, C.A. and Ph. Vincke (1995), Measuring credibility of compensatory preference statements when trade-offs are interval determined, Theory and Decision 39, 127-155.
Bouyssou, D. (1992), Ranking methods based on valued preference relations: a characterization of the net-flow method, European Journal of Operational Research 60, 61-68.
Büeler, B., A. Enge and K. Fukuda (1998), Exact volume computation for polytopes: a practical study, in Polytopes - Combinatorics and Computation, DMV-Seminars, Birkhäuser Verlag (to appear).
Charnetski, J.R. and R.M. Soland (1978), Multiple-attribute decision making with partial information: the comparative hypervolume criterion, Naval Research Logistics 25, 278-288.
Dias, L.C. and J.N. Climaco (1999), On computing ELECTRE's credibility indices under partial information, Journal of Multi-Criteria Decision Analysis 8, 74-92.
Dias, L.C. and J.N. Climaco (2000a), Additive aggregation with variable interdependent parameters: the VIP Analysis software, Journal of the Operational Research Society 51, 1070-1082.
Dias, L.C. and J.N. Climaco (2000b), ELECTRE TRI for Groups with Imprecise Information on Parameter Values, Group Decision and Negotiation 9, 355-377.
Dias, L., V. Mousseau, J. Figueira and J. Climaco (2000), An Aggregation/Disaggregation approach to obtain robust conclusions with ELECTRE TRI, Cahier du LAMSADE, No. 174, Université Paris-Dauphine. (To appear in EJOR).
French, S. (1995), Uncertainty and imprecision: modelling and analysis, Journal of the Operational Research Society 46, 70-79.
Greco, S., B. Matarazzo, R. Slowinski and A. Tsoukias (1997), Exploitation of a rough approximation of the outranking relation, Cahier du LAMSADE, No. 152, Université Paris-Dauphine.
Lasserre, J.B. (1983), An analytical expression and an algorithm for the volume of a convex polyhedron in R^n, Journal of Optimization Theory and Applications 39, 363-377.
Miettinen, K. and P. Salminen (1999), Decision-aid for discrete multiple criteria decision making problems with imprecise data, European Journal of Operational Research 119, 50-60.
Mousseau, V. (1993), Problèmes liés à l'évaluation de l'importance relative des critères en aide multicritère à la décision: réflexions théoriques, expérimentation et implémentation informatique, PhD Thesis, Université Paris-Dauphine.
Pirlot, M. (1995), A characterization of 'Min' as a procedure for exploiting valued preference relations and related results, Journal of Multi-Criteria Decision Analysis 4, 37-56.
Roy, B. (1968), Classement et choix en présence de points de vue multiples (la méthode ELECTRE), Revue Informatique et Recherche Opérationnelle, 2e année, No. 8, 57-75.
Roy, B. (1989), Main sources of inaccurate determination, uncertainty and imprecision in decision models, Mathematical and Computer Modelling 12, 1245-1254.
Roy, B. (1991), The outranking approach and the foundations of ELECTRE methods, Theory and Decision 31, 49-73.
Roy, B. (1998), A missing link in OR-DA: robustness analysis, Foundations of Computing and Decision Sciences 23, 141-160.
Roy, B. and D. Bouyssou (1993), Aide multicritère à la décision: méthodes et cas, Economica, Paris.
Roy, B. and J.M. Skalka (1984), ELECTRE IS: aspects méthodologiques et guide d'utilisation, Document du LAMSADE, No. 30, Université Paris-Dauphine.
Saltelli, A., S. Tarantola and K. Chan (1999), A role for sensitivity analysis in presenting the results from MCDA studies to decision makers, Journal of Multi-Criteria Decision Analysis 8, 139-145.
Solymosi, T. and J. Dombi (1986), A method for determining the weights of the criteria: the centralized weights, European Journal of Operational Research 26, 35-41.
Tsoukias, A. and Ph. Vincke (1997), Extended preference structures in multicriteria decision aid, in Climaco, J. (ed.), Multicriteria Analysis, Springer, 37-50.
Vincke, Ph. (1999), Robust solutions and methods in decision aid, Journal of Multi-Criteria Decision Analysis 8, 181-187.
MODELLING IN DECISION AIDING

Daniel Vanderpooten
LAMSADE - Université Paris-Dauphine, France
vdp@lamsade.dauphine.fr

Abstract This paper focusses on the central role of modelling in decision aid-
ing. All stages of the modelling process require choices which cannot
be totally rationalized. We believe, however, that adopting a certain
perspective (accepting ambiguity, favoring flexibility, ... ) in relation to
some specificities of the decision context may prove helpful to guide the
modelling process and to motivate some technical choices.

Keywords: Decision aiding; Model; Alternatives; Criteria; Preferences

1. Introduction
The most remarkable characteristic of the contributions of Bernard
Roy is that they address all aspects of decision aiding, including:

• theoretical achievements,

• development of methods,

• applied works, and

• epistemological reflection.

These contributions, which are reported in several parts of the present


book, have led to the development of a general methodology for deci-
sion aiding whose initial foundations are given in Roy, 1976 and which
is presented in detail in Roy, 1985. A central feature underlying this
methodology is a certain perspective on decision aiding as developed in
Roy, 1993. Modelling is of major importance in this methodology.
The purpose of this paper is to emphasize the central role of mod-
elling in the activity of decision aiding. Building a model is a delicate
matter insofar as it involves, at different stages, choices that cannot
be completely justified. Some choices seem more relevant than others
considering, e.g., a better capacity of capturing some phenomena or an easier subsequent solution process. It is often the case, however, that
these two types of arguments are conflicting. Moreover, different ways
of modelling a situation are usually possible. The resulting models may
be logically equivalent but use different formal representations. For in-
stance, a set of alternatives can be represented equivalently using linear
programming or networks. Arguments in favor of a specific model may
then include ease of understanding of the representation, simplicity of
resolution, and possibility of evolutions of the model. One can even
consider acceptable non-equivalent representations of the same problem.
Since any model is a partial representation of the situation under study,
one may quite reasonably decide to focus on different aspects depending
on the model. Each model imposes its specific assumptions, while other
assumptions could have been advocated, possibly leading to different
results.
In spite of these irreducible difficulties, we believe that some helpful
guidelines can be given to support the modelling process. Many of these
were originally introduced in the methodology proposed in Roy, 1985. In
the first place, the way of designing a model is strongly influenced by the
way in which decision aiding is conceived. Therefore, we first recall three
basic attitudes to decision aiding (section 2) before defining a model
and modulating the definition according to each attitude (section 3).
We then distinguish different types of models intervening in decision
aiding and comment on each type of model (section 4). Key-points when
designing a model are then presented (section 5), followed by conclusions
(section 6).

2. Basic attitudes to decision aiding


Establishing the scientific character of a discipline is a complex and
controversial question. Many schools of thought, often taking "hard sci-
ences" as a reference, propose to characterize the scientific status of a
discipline through its historical development or by using methodologi-
cal criteria like falsifiability. From inductivists to Feyerabend, through
Popper, Lakatos or Kuhn, there is no universal acceptance of science
(Chalmers, 1982).
The scientific status of an activity like decision aiding is particularly
complex since it involves, at different levels, several disciplines (mathe-
matics, economics, computer science, sociology, psychology) whose sci-
entific status are appraised quite diversely. To clarify the scientific status
of decision aiding, it is important to distinguish three main attitudes (see
also Bell et al., 1988 and Roy, 1993):
1 The descriptive attitude refers to an observable reality. This attitude postulates a pre-existing preference structure in the decision
maker's mind and the existence of one or several optimal solutions
for any preference structure. In this perspective, decision aiding
consists of describing as faithfully as possible this structure in or-
der to derive the corresponding optimal solution.

2 The normative attitude is based on principles or norms that any


rational decision maker should strive to respect. In this attitude,
decision aiding consists of selecting norms and formalizing them
within an axiomatic framework which, if sufficiently constrained -
but not too much - by the previous selection, allows the determi-
nation of a rational prescription.

3 The constructivist attitude aims to fit into a decision process where


the decision maker's preferences may be ill-structured, conflicting
and unstable. In this attitude, decision aiding consists of using
concepts, models, procedures and results - i.e. building a 'set of
keys' (Roy, 1993) - in order to found recommendations.

The descriptive and normative attitudes try to justify the advocated


solutions in relation to a reality or a rationality. Rather than trying
to provide a formal justification for the proposed recommendations, the
constructivist attitude focuses on the process leading to these recom-
mendations. The descriptive and normative attitudes emphasize an in-
strumental rationality, whereas the constructivist attitude emphasizes a
procedural rationality. While the two first attitudes aim at founding a
'decision science', the third attitude argues for a 'decision aid science'
(Roy, 1993).
Roy, 1993 argues convincingly in favor of the constructivist attitude.
Giving scientific foundations to the decision aiding process requires draw-
ing on rigorous concepts, models, procedures and results whose relevance
is acknowledged by a large scientific community. Many works have been
undertaken in this perspective within what is sometimes referred to as
the European School (Roy and Vanderpooten, 1996).
It is important to observe that adopting a constructivist attitude
does not exclude using concepts, models, procedures or results devel-
oped within a descriptive or normative perspective.
When modelling a set of alternatives or the decision maker's prefer-
ences, one necessarily describes something. It is thus quite natural to
use the same tools as those used when adopting a descriptive attitude.
The main difference however is that we are not aiming to capture a real-
ity but to represent working hypotheses that are judged satisfactory and
which may evolve as we gain a better understanding of the situation under study.
Axiomatization, which forms the scientific basis of a normative at-
titude, can also be used to analyze procedures (or at least some sim-
plified versions) implemented in a constructivist perspective. In this
spirit Bouyssou et al., 1993 suggest that a systematic analysis of pro-
cedures and algorithms for decision aiding should be undertaken. An
axiomatic analysis aims at proving that a procedure, and any solution
derived from this procedure, verify a list of desirable properties called
axioms. A stronger theoretical result is an axiomatic characterization
showing that a procedure is the only one possible which verifies a given
list of axioms. One strives to use only interpretable axioms (which may
be difficult to achieve, since one often needs at least one rather technical
axiom in order to 'complete' an axiomatic characterization).

3. Definition of a model
The activity of decision aiding, when claiming to be founded on sci-
entific bases, requires an abstraction of the decision situation under the
form of models.
As defined by Roy, 1985 'a model is a schema which, for a certain
family of questions, is considered as a representation of a class of phe-
nomena that an observer has more or less carefully removed from their
environment to help in an investigation and to facilitate communication'.
A model is thus necessarily a partial representation of the situation to
be studied which emphasizes aspects that are judged to be relevant. As
outlined by Roy, 1999, 'a model is more a caricature of real-life situations
than an impoverished or approximate photograph of it'.
In the widest possible sense, we define a mathematical model as a
formal representation of a set of hypotheses which are judged to be
relevant in order to shed light on the situation under study.
The perception of the role of models in decision aiding, and correla-
tively the way of elaborating them, are closely related to the attitude (see
section 2). Therefore, depending on the underlying attitude, a model
may appear as:
1 a formal approximation of the reality under study (descriptive
model)
2 a formal translation of norms of rationality (normative model)
3 a formal framework for reasoning (constructive model)
Moreover, we shall introduce the following distinction between closed
and open models.
A model is said to be closed when its defining assumptions lead to a mathematically well-posed problem. Solving this problem gives rise to
one or several solutions that are considered as results of the model.
A model is said to be open when its defining assumptions do not aim or
are not sufficient to characterize a mathematically well-posed problem.
Thus, an open model corresponds to a formal framework supporting
reflection, investigation and/or negotiation.

4. Decision aiding based on models


Formal models used in decision aiding usually include, often within
the same entity, two categories of (sub-)models:

• a model of alternatives which gathers and formalizes hypotheses


delimiting the set of potential alternatives or feasible solutions,

• a preference model which gathers and formalizes hypotheses to


appreciate the attractiveness of each alternative.

4.1. Pre-modelling stage


Before building a model, it is necessary to investigate the decision con-
text specificities in order to choose some basic modelling options which
will deeply influence the subsequent selection of a formal framework.
This pre-modelling stage includes :

• Identification of the various actors (decision makers and stakehold-


ers). This step is extremely important in order to inventory the
various viewpoints and alternatives to be taken into account.

• Evaluation of the degree of conflict. Depending on the variety and


number of viewpoints and actors, the analyst must decide whether
the model should incorporate multiple criteria and negotiation as-
pects.

• Choice of a problem formulation (or problematic). As introduced


in Roy, 1981 (see also Roy, 1985, chap 6 and Bana e Costa, 1996),
four reference problem formulations can be envisaged when stat-
ing a decision problem. The first three problem formulations are
oriented towards making a recommendation. These are:

- the choice problem formulation where the decision problem


is formulated as a problem of choosing a 'best' alternative or
a limited subset of the 'best' alternatives,
- the ranking problem formulation where the decision problem is formulated as a problem of ranking all or a subset of alter-
natives,
- the sorting problem formulation where the decision problem
is formulated as a problem of assigning alternatives to pre-
defined categories.
While the first two problem formulations aim at comparing the
relative value of alternatives, the third one is oriented towards the
intrinsic value of each alternative. This shows, in the latter case,
that alternatives can be evaluated independently.
The fourth problem formulation, the description problem formu-
lation, aims to clarify a decision situation by describing the al-
ternatives and their consequences without proceeding to a recom-
mendation. This often appears as a prerequisite to the three other
problem formulations but may also be an end in itself.
• Choice of a type of decision aiding. Depending on the decision
makers' willingness to playa more or less active part in the de-
cision aiding process, the analyst should strive to build a more
or less open model. While low-level, repetitive and non-conflicting
decisions often require closed models, it seems desirable, and some-
times explicitly requested, to provide the user (or the analyst) with
an open model allowing him/her to explore solutions of interest.

4.2. Elaboration of a model of alternatives


The model of alternatives aims to represent the set of alternatives.
This set A can be defined:

• explicitly, by enumerating all alternatives through an exhaustive


list (when A is finite and relatively small)
• implicitly, by stating the properties which characterize the alterna-
tives (when A is infinite or finite but relatively large, e.g., resulting
from the combination of elementary alternatives).

Modelling the set of alternatives is a complex process which requires


creativity. Different strategies for generating alternatives, based on sys-
tematic searching and screening techniques, are reported in Zeleny, 1982,
Norese and Ostanello, 1989, Keeney, 1992, among others.
The status of alternatives to be modelled may be different, necessi-
tating different identification and generation strategies. In this respect,
Norese and Ostanello, 1989 suggest distinguishing alternatives as: those
which are 'given or available', those which are 'existing but not avail-
able' and those which are 'non-existing'. Roy, 1985 makes a distinction
between 'actual' and 'dummy' alternatives, the latter being idealized or
hypothetical entities which might prove useful for reasoning or discussion.
The complexity of alternatives also varies depending on the decision
context. In some cases, alternatives result from the combination of ele-
mentary alternatives (e.g., projects involving a series of sub-projects with
possible dependencies). When choosing the representation framework,
the modeler must then decide between the level of elementary alterna-
tives, in which case an implicit definition will often be used to char-
acterize possible combinations, and the level of final alternatives where
combinations will be directly and explicitly represented. In relation to
this point, Roy, 1985distinguishes a 'comprehensive' conception where
alternatives included in the model are exclusive, and a 'fragmented' con-
ception where the elements represented in the model are fragments which
can be combined to form alternatives.
A typical pitfall, often observed in practice, is to focus a priori on a
limited subset of alternatives. Alternatives which are untypical or ini-
tially seem uninteresting may be rejected too hastily. When alternatives
result from the combination of elementary alternatives, some combina-
tions are sometimes omitted in order to limit combinatorial difficulties.
This may also happen because actors who could propose different solu-
tions are not included in the decision aiding process.
Modelling a set of alternatives requires the definition of some limits
which may be somewhat arbitrary. When using an implicit definition,
these limits are represented by constraints. It may then prove useful to
introduce some flexibility in the definition of these constraints. In partic-
ular, it is sometimes interesting to distinguish between 'hard' constraints
which define the basic structure of the model and 'soft' constraints which
may be revised during the decision aiding process. More fundamentally,
when modelling the set of alternatives A, a major option is to consider
this set as:
• 'stable', i.e. defined without any possibility of further modifica-
tions,
• or 'evolving', i.e. likely to be modified either because of interme-
diate results during the decision aiding process or because of a
changing decision context.
This distinction, introduced by Roy, 1985, has important consequences
for other parts of the model. When A is modelled as evolving, it is more
difficult to consider the relative value of a changing set of alternatives.
This will then favor the selection of a sorting problem formulation which
focuses on the intrinsic value of alternatives. Closed models are also less
appropriate in an evolving context.

4.3. Elaboration of a preference model


The preference model may include two distinct levels:
• First level preference models aim at capturing aspects which re-
flect the worth or value of elements represented in the model of
alternatives. This results in the construction of one or several cri-
teria.

• Second level preference models are used when multiple criteria, de-
fined at the first level, must be aggregated in order to model overall
preferences, taking into account relative importance of criteria.
Optimization and simulation models include within the same entity
a model of alternatives and a first-level preference model. Unlike an
optimization model which is a closed model and involves a unique crite-
rion in its preference (sub)model, a simulation model is an open model
which may involve several criteria. Multicriteria models gather within
the same entity a model of alternatives, a first-level preference model (in-
cluding several criteria) and, possibly, a second-level preference model.
Indeed, while some multicriteria models explicitly include an aggregation
mechanism to represent overall preferences, other multicriteria models
replace this second-level preference model with an interactive process
which aims at exploring the model of alternatives using the first-level
preference model (see also section 5.2).
Observe that optimization models as well as multicriteria models in-
cluding first and second level preference (sub)models are closed models.
In the second case, however, explicitly expressing two levels of preference
modelling shows a lesser degree of closure and, above all, the possibil-
ity of reopening the model. We shall refer to this type of models as
semi-closed models.
When using a closed or semi-closed model, it is important to strengthen
the results through a sensitivity analysis or, more generally, a robustness
analysis (see section 5.1). As shown in section 5.2, interactivity provides
an efficient way to take advantage of an open model.

4.3.1 First level preference models. As a tool for comparing


alternatives from a given viewpoint, a criterion is a preference model.
A basic modelling option at this level is the choice of using a sin-
gle criterion or several criteria to represent overall preferences. When a
unique viewpoint or non-conflicting viewpoints predominate in the decision process, a single criterion model must be used. When conflicting
viewpoints play an important role, the construction of a single criterion
is often questionable. Indeed, a priori aggregating conflicting viewpoints
forces the taking of an early stand on possible trade-offs, whereas weigh-
ing trade-offs and observing their impact is precisely one of the main
purposes of the decision aiding process. Aggregating viewpoints defined
on heterogeneous scales, possibly involving quantitative and qualitative
aspects, entails a large amount of arbitrariness and raises technical dif-
ficulties when mixing cardinal and ordinal scales. The resulting overall
criterion is then expressed in a fictitious unit, such as utility, which is
often difficult to appraise.
As a general guideline, a criterion should integrate viewpoints refer-
ring to the same category of consequences among which no trade-offs
are to be considered.
Techniques for constructing criteria are reported in Roy, 1985 and
Bouyssou, 1990 and, for specific problems, in Grassin, 1986, Keeney,
1988, D'Avignon, 1992, Roy and Slowinski, 1993, Martel et al., 1998
and Azibi and Vanderpooten, 2001.

4.3.2 Second level preference models. These models, which


endeavor to aggregate criteria, usually consist of an analytical formula,
a set of aggregation rules or binary preference relations. Many different
second level preference models have been proposed in the literature. We
refer to general textbooks for a presentation of these models (Fishburn,
1970, Keeney and Raiffa, 1976, Vincke, 1989, Roy and Bouyssou, 1993).
Two basic approaches are prevailing for constructing such a model:
• impose a priori desirable mathematical properties on the preference
model (e.g., transitivity and completeness) which makes it easy to
derive a recommendation from the preference model.
• reject any a priori assumptions on preferences, accept hesitations
and incomparability, which is much more satisfactory from a mod-
elling point of view but may lead to difficulties when trying to
derive a recommendation.
A basic issue is the degree of compensation that can be accepted
between criterion values. From totally compensatory models, where a
bad score on a criterion can be compensated by one or several good
scores on other criteria, to totally non-compensatory models where no
compensation is accepted, there exists a variety of models, including
partially compensatory models which accept compensation only when
differences of scores are small.
Another crucial issue is the way parameters representing the relative importance of criteria are evaluated. Vansnick, 1986 specifies the con-
cept of weights within a non-compensatory framework and gives some
guidelines on how to elicit such information. Roy and Mousseau, 1996
study the concept of importance and propose a framework to analyse
this concept.

5. Some key points in modelling


In this section we briefly comment on some key points to consider
when constructing a model.

5.1. Integrating imprecision, uncertainty and


ambiguity
A large part of the modeler's activity consists of gathering and elab-
orating numerical entities. Because of imperfection affecting these en-
tities, care must be taken when using and comparing them. Sources of
this imperfection are multiple (Bouyssou, 1989; Roy, 1989); they mainly
derive from:
• imprecision due to measurement,
• uncertainty related to any evaluation of future phenomena or sit-
uations,
• irreducible ambiguity when capturing complex phenomena.
A number of general approaches have been proposed to model uncer-
tainty, imprecision and ambiguity, including probability theory, possi-
bility theory (Dubois and Prade, 1985) and rough sets theory (Pawlak,
1982; Greco et al., 1999). For a comparison of these approaches, their ad-
vantages and limits, see Slowinski and Teghem, 1990; Grzymala-Busse,
1991; Kruse et al., 1991.
In the context of preference modelling, some specific concepts have
been developed: discrimination thresholds (Jacquet-Lagreze, 1973; Bouys-
sou and Roy, 1987), fuzzy binary relations (Perny and Roy, 1992; Fodor
and Roubens, 1994), multi-valued logics (Tsoukias and Vincke, 1995;
Tsoukias, 1996).
When using a closed or semi-closed model, it is important to under-
take a robustness analysis in order to distinguish the part of the results
which is firmly established. Considering the domain of reasonable val-
ues for the parameters and data intervening in the model, robustness
analysis aims at deriving robust conclusions, i.e. conclusions which re-
main valid on the whole domain, or at least on some clearly identified
parts of this domain (Roy, 1998). In a similar perspective, Vincke, 1999 formalizes the concepts of robust solutions and methods. In the field of
discrete optimization, Kouvelis and Yu, 1997 consider different scenarios
and define some concepts of robust solutions derived from classical cri-
teria used in decision theory under uncertainty (Wald and Savage). The
resulting min-max problems are studied for a variety of combinatorial
problems.
The general idea of not extracting more information from the numer-
ical entities (data) than they can express is of the utmost importance
when designing a model. Integrating hesitation and incomparability is
a typical concern in preference modelling. In the same spirit, we believe
that decision aiding models should be designed so as not to provide sys-
tematically clear-cut results, but should reflect hesitation by providing,
in some cases, partial results or even no result.

5.2. Interactivity in modelling


Interactivity is a key concept which aims at creating a man-machine
cooperation (see Sims, 1997 for an overview of different levels and types
of interactivity).
This concept has been used extensively in the literature devoted to
Decision Support Systems (Bonczek et al., 1981; Sprague Jr and Carlson,
1982; Levine and Pomerol, 1989). Following Levine and Pomerol, 1989,
interactivity is defined as a way of providing a user with full or partial
control of an exploration within a state space depending on the specific
problem.
Interactivity is used in many areas of operations research either to pro-
vide assistance in the modelling process or to guide the solution process
(Pollack, 1976; Fischer, 1985).
It is often difficult to take account of some aspects when designing a
model. This may happen when these aspects:
• are unknown or badly known (value of some parameters),
• cannot be made explicit initially (constraints or criteria whose rel-
evance appears during the decision aiding process, e.g., because of
previous results of the model),
• are likely to evolve (distinction between 'hard' constraints and
'soft' constraints that may be revised by the user considering, e.g.,
previous results of the model),
• are difficult to formalize (a classical difficulty is to formalize quali-
tative criteria, which can often be evaluated on explicit alternatives
only, while the set of alternatives is modelled implicitly),
• are difficult to make explicit (constraints or criteria that some actors
do not wish to state explicitly).

In the previous cases, interactivity provides a flexible way of testing
some assumptions of the model, completing the missing assumptions and
more generally revising the model.
The solution process can also be guided efficiently using interactivity
(see, e.g., Fischer, 1985). This is achieved iteratively, in cooperation with
a user who, considering partial results, may guide the process (e.g., by
setting, more or less temporarily, some components of the solution). Ob-
serve that the spirit underlying such a cooperative strategy differs widely
depending on whether one aims at approximating one optimal solution
(within a technically difficult closed model) or whether one wishes to
take the user's preferences into account (within an open multicriteria
model). While the idea of mathematical convergence remains central in
the first case, in the second case the objective is more to support learning
of preferences (Roy, 1987; Vanderpooten, 1992).
Interactivity has often been used in multicriteria analysis (see, e.g.,
Gardiner and Vanderpooten, 1997). In this context, interactivity aims at
organizing the exploration within the model of alternatives. It appears
as a substitute for a second-level preference model that would be too dif-
ficult to express or too rigid given possible evolutions or partial lack of
preferences. Most interactive multicriteria procedures organize interac-
tion in the criterion space only. When the model of alternatives is defined
implicitly, interaction should also be directed towards the structure of
solutions (Perny and Vanderpooten, 1998). More generally, supporting
interaction in the decision space gives the opportunity to play with and
change the model of alternatives (e.g., by modifying, adding or removing
constraints).

6. Conclusions
Building a model, as we have discussed in this article, refers essen-
tially to a constructivist attitude. Indeed, the modelling process includes
a series of modelling choices which cannot be fully rationalized and may
even appear sometimes as an act of faith. We believe, however, that
adopting a certain perspective (which accepts ambiguity, favors flexibil-
ity, ... ) in relation to some specificities of the decision context may prove
helpful to guide the modelling process and to consolidate some technical
choices.
In this paper we mainly focused on aspects related to the technical
validity of a model. However, a model also plays a key role in commu-
nication. Even if a model is technically valid, it must also be acceptable
to the actors. This refers to what is called 'model legitimisation' by
Landry et al., 1996. These authors give some general guidelines to favor
the legitimacy of a model, such as working in close cooperation with
the strategic stakeholders, striking a balance between the level of model
sophistication and the competence level of the stakeholders,... We also
believe that specific modelling choices may influence the model legiti-
macy depending on the decision situation. For instance, when several
decision makers with different value systems are involved in the deci-
sion process, an acceptable representation must include the viewpoints
of each actor. A multicriteria model, where each actor sees his/her view-
point represented through at least one criterion, is already a good basis
for negotiation and discussion. Another example is in the not so un-
common situation where decision makers are willing to be supported by
a model but do not accept that the model dictates a prescription. It
is then extremely important to design an open model which leaves the
possibility to explore various solutions.

Acknowledgments
The author would like to thank John Buchanan and Lorraine Gardiner
for their helpful comments on an earlier version of this paper.

References
Azibi, R. and Vanderpooten, D. (2001). Élaboration de critères agrégeant des consé-
quences dispersées : deux exemples concrets. In Colorni, A., Paruccini, M., and
Roy, B., editors, AMCDA - Aide MultiCritère à la Décision - Multi Criteria Decision
Aid, pages 13-30. EUR Report 19808.
Bana e Costa, C. (1996). Les problématiques de l'aide à la décision : vers l'enrichissement
de la trilogie choix-tri-rangement. RAIRO - RO, 30(2):191-216.
Bell, D., Raiffa, H., and Tversky, A. (1988). Descriptive, normative and prescriptive
interactions in decision making. In Bell, D., Raiffa, H., and Tversky, A., editors,
Decision making: descriptive, normative and prescriptive interactions, pages 9-30.
Cambridge University Press.
Bonczek, R., Holsapple, C., and Whinston, A. (1981). Foundations of Decision Support
Systems. Academic Press, Orlando.
Bouyssou, D. (1989). Modelling inaccurate determination, uncertainty, imprecision
using multiple criteria. In Lockett, A. and Islei, G., editors, Improving Decision
Making in Organisations, LNEMS 335, pages 78-87. Springer-Verlag, Berlin.
Bouyssou, D. (1990). Building criteria: A prerequisite for MCDA. In Bana e Costa, C.,
editor, Readings in Multiple Criteria Decision Aid, pages 58-80. Springer-Verlag,
Berlin.
Bouyssou, D., Perny, P., Pirlot, M., Tsoukias, A., and Vincke, P. (1993). A Manifesto
for the new MCDA era. Journal of Multi Criteria Decision Analysis, 2:125-127.
Bouyssou, D. and Roy, B. (1987). La notion de seuils de discrimination en analyse
multicritère. INFOR, 25:302-313.
Chalmers, A. (1982). What is this thing called Science? An assessment of the nature
and status of science and its method. University of Queensland Press, St Lucia.
Traduction française, Éditions La Découverte, Paris, 1987.
D'Avignon, G. (1992). Démarche d'aide à la préparation d'un plan directeur : Le cas
des palais de justice. Documents du LAMSADE no. 73, Universite Paris-Dauphine,
France.
Dubois, D. and Prade, H. (1985). Théorie des possibilités - Applications à la représentation
des connaissances en informatique. Masson, Paris.
Fischer, M. (1985). Interactive optimization. Annals of Operations Research, 5:541-
556.
Fishburn, P. (1970). Utility theory for decision making. Wiley, New-York.
Fodor, J. and Roubens, M., editors (1994). Fuzzy preference modelling and multicri-
teria decision support. Kluwer, Dordrecht.
Gardiner, L. and Vanderpooten, D. (1997). Interactive multiple criteria procedures:
Some reflections. In Climaco, J., editor, Multicriteria Analysis, pages 290-301.
Proc. of the XIth International Conference on MCDM, Coimbra, Portugal, Springer
Verlag, Berlin.
Grassin, N. (1986). Constructing population criteria for the comparison of different
options for a high voltage line route. European Journal of Operational Research,
26:42-57.
Greco, S., Matarazzo, B., and Slowinski, R. (1999). The use of rough sets and fuzzy
sets in MCDM. In Advances in MCDM models, Algorithms, Theory, and Applica-
tions, pages 14.1-14.59. Kluwer Academic Publishers, Boston.
Grzymala-Busse, J. (1991). Managing uncertainty in expert systems. Kluwer Academic
Publishers, Boston.
Jacquet-Lagreze, E. (1973). Le problème de l'agrégation des préférences : une classe
de procédures à seuils. Mathématiques et Sciences Humaines, 43:29-37.
Keeney, R. (1988). Structuring objectives for problems of public interest. Operations
Research, 36:396-405.
Keeney, R. (1992). Value-focused thinking: a path to creative decision making. Harvard
University Press.
Keeney, R. and Raiffa, H. (1976). Decisions with multiple objectives - preferences and
value trade-offs. Wiley and Sons.
Kouvelis, P. and Yu, G. (1997). Robust discrete optimization and its applications.
Kluwer.
Kruse, R., Schwecke, E., and Heinsohn, J., editors (1991). Uncertainty and vagueness
in knowledge based systems. Springer-Verlag, Berlin.
Landry, M., Banville, C., and Oral, M. (1996). Model legitimisation in operational
research. European Journal of Operational Research, 92(3):443-457.
Levine, P. and Pomerol, J. (1989). Systèmes interactifs d'aide à la décision et systèmes
experts. Hermès, Paris.
Martel, J.-M., Guitouni, A., Abi-Zeid, I., Bélanger, M., Chraygane, Z., and Zaras,
K. (1998). Construction d'une famille de critères dans le cadre du projet CESA.
Documents du LAMSADE no. 106, Universite Paris-Dauphine, France.
Norese, M. and Ostanello, A. (1989). Identification and development of alternatives:
introduction to the recognition of process typologies. In Lockett, A. and Islei, G.,
editors, Improving Decision Making in Organisations, LNEMS 335, pages 112-123.
Springer-Verlag, Berlin.
Pawlak, Z. (1982). Rough sets. Int. J. Computer and Information Sci., 11:341-356.
Perny, P. and Roy, B. (1992). The use of fuzzy outranking relations in preference
modelling. Fuzzy Sets and Systems, 49:33-53.
Perny, P. and Vanderpooten, D. (1998). An interactive multiobjective procedure for
selecting medium-term countermeasures after nuclear accidents. Journal of Multi
Criteria Decision Analysis, 7(1):48-60.
Pollack, M. (1976). Interactive models in Operations Research - an introduction and
some future research directions. Computers and Operations Research, 3(4):305-312.
Roy, B. (1976). From optimization to multicriteria decision aid: Three main oper-
ational attitudes. In Thiriez, H. and Zionts, S., editors, Multiple Criteria Deci-
sion Making, Proceedings Jouy-en-Josas France 1975, pages 1-32. LNEMS 130,
Springer-Verlag, Berlin.
Roy, B. (1981). The optimisation problem formulation: criticism and overstepping.
Journal of the Operational Research Society, 32(6):427-436.
Roy, B. (1985). Méthodologie Multicritère d'Aide à la Décision : Méthodes et Cas.
Economica, Paris. English translation: Multicriteria Methodology for Decision Aiding,
Kluwer Academic Publishers, 1996.
Roy, B. (1987). Meaning and validity of interactive procedures as tools for decision
making. European Journal of Operational Research, 31(3):297-303.
Roy, B. (1989). Main sources of inaccurate determination, uncertainty and impreci-
sion. Mathematical and Computer Modelling, 12(10/11):1245-1254.
Roy, B. (1993). Decision science or decision-aid science? European Journal of Opera-
tional Research, 66(2):184-203.
Roy, B. (1998). A missing link in OR-DA: Robustness analysis. Foundations of Com-
puting and Decision Sciences, 23(3):141-160.
Roy, B. (1999). Decision-aiding today: what should we expect? In Gal, T., Stewart, T.,
and Hanne, T., editors, Multicriteria Decision Making, Advances in MCDM models,
Algorithms, Theory and Applications, pages 1.1-1.35. Kluwer Academic Publishers,
Boston.
Roy, B. and Bouyssou, D. (1993). Aide Multicritère à la Décision : Méthodes et Cas.
Economica, Paris.
Roy, B. and Mousseau, V. (1996). A theoretical framework for analysing the notion
of relative importance of criteria. Journal of Multi Criteria Decision Analysis,
5(2):145-159.
Roy, B. and Slowinski, R. (1993). Criterion of distance between technical programming
and socio-economic priority. RAIRO - RO, 27:45-60.
Roy, B. and Vanderpooten, D. (1996). The European school of MCDA: Emergence,
basic features and current works. Journal of Multi Criteria Decision Analysis,
5(1):22-37.
Sims, R. (1997). Interactivity: a forgotten art? Computers in Human Behavior, 13(2):
157-180.
Slowinski, R. and Teghem, J., editors (1990). Stochastic versus Fuzzy approaches to
Multiobjective Mathematical Programming under uncertainty. Kluwer, Dordrecht.
Sprague Jr, R. and Carlson, E. (1982). Building effective Decision Support Systems.
Academic Press, Orlando.
Tsoukias, A. (1996). A first-order, four valued, weakly paraconsistent logic. Cahier
du LAMSADE no. 139, Universite de Paris Dauphine, France.
Tsoukias, A. and Vincke, P. (1995). A new axiomatic foundation of partial compara-
bility. Theory and Decision, 39:79-114.
Vanderpooten, D. (1992). Three basic conceptions underlying multiple criteria in-
teractive procedures. In Goicoechea, A., Duckstein, L., and Zionts, S., editors,
Multiple Criteria Decision Making, pages 441-448. Proc. of the IXth International
Conference on MCDM, Fairfax, USA, Springer-Verlag, Berlin.
Vansnick, J. (1986). On the problem of weights in multiple criteria decision mak-
ing (the noncompensatory approach). European Journal of Operational Research,
24:288-294.
Vincke, P. (1989). L'aide multicritère à la décision. Ellipses, Paris. English translation:
Multicriteria decision-aid, Wiley, 1992.
Vincke, P. (1999). Robust and neutral methods for aggregating preferences into an
outranking relation. European Journal of Operational Research, 112(2):405-412.
Zeleny, M. (1982). Multiple Criteria Decision Making. McGraw Hill.
ON THE USE OF MULTICRITERIA
CLASSIFICATION METHODS:
A SIMULATION STUDY

Michael Doumpos
Technical University of Crete, Greece
dmichael@ergasya.tuc.gr

Constantin Zopounidis
Technical University of Crete, Greece
kostas@ergasya.tuc.gr

Abstract: The classification of a set of alternatives into predefined homogeneous groups
is a problem with major practical interest in many fields. Over the past two
decades several non-parametric approaches have been developed to address
the classification problem, originating from several scientific fields. Among
these approaches, multicriteria decision aid (MCDA) has several attractive
features, involving its decision support orientation. This paper is focused on
the preference disaggregation approach of MCDA (PDA). The objective of
this study is to explore whether the attractive features of PDA also lead to
higher efficiency in terms of classification accuracy, as opposed to traditional
statistical classification procedures. For this purpose an extensive Monte Carlo
simulation is conducted. The methods considered in this simulation include a
well-known classification method based on the PDA paradigm, namely the
UTADIS method (UTilites Additives DIScriminantes), and three statistical
classification procedures, namely the linear discriminant analysis, the
quadratic discriminant analysis and the logit analysis. The results indicate that
the UTADIS method outperforms the considered parametric techniques in the
majority of the data conditions that are used in the simulation.

Key words: Classification; Multicriteria decision aid; Monte Carlo simulation;
Multivariate statistical analysis

1. Introduction
The classification problem involves the assignment of a set of
alternatives (objects, alternatives) described over some attributes or criteria
into predefined homogeneous classes. Such problems are often encountered
in many fields, including finance, marketing, agricultural management,
human resources management, environmental management, medicine,
pattern recognition, etc. This major practical interest of the classification
problem has motivated researchers in developing an arsenal of methods in
order to develop quantitative models for classification purposes. The linear
discriminant analysis (LDA) was the first method developed to address the
classification problem from a multidimensional perspective. LDA has been
used for decades as the main classification technique and it is still being used
at least as a reference point for comparing the performance of new
techniques that are developed. Other widely used parametric classification
techniques developed to overcome LDA's restrictive assumptions
(multivariate normality, equality of dispersion matrices between groups)
include quadratic discriminant analysis (QDA), logit and probit analysis and
the linear probability model (for two-group classification problems).
During the last two decades several alternative non-parametric
classification techniques have been developed, including mathematical
programming techniques (Freed and Glover, 1981), multicriteria decision aid
methods (Roy and Moscarola, 1977; Doumpos et al., 2000), neural networks
(Patuwo et al., 1993), machine learning approaches (Quinlan, 1986), and
rough sets (Pawlak, 1982; Pawlak and Slowinski, 1994). Among these
techniques, multicriteria decision aid (MCDA) has several distinguishing
and attractive features, involving, mainly, its decision support orientation.
Within the context of the MCDA theory, the preference disaggregation
approach (PDA) provides a sound methodological framework for developing
decision-making models both for ranking and classification (sorting
purposes). PDA (Jacquet-Lagreze and Siskos, 2001; Pardalos et al., 1995)
refers to the analysis (disaggregation) of the global preferences (judgement
policy) of the decision maker in order to identify the criteria aggregation
model that underlies the preference result (ranking, classification). PDA uses
common utility decomposition forms to represent the decision maker's
preferences. The major distinguishing characteristic of PDA as opposed to
other MCDA approaches (multiattribute utility theory, outranking relations)
is that regression-based techniques are used for model development (indirect
estimation procedure) instead of direct interrogation techniques that are
employed in other MCDA approaches. While several case studies have
demonstrated the efficiency of methods based on PDA, in supporting
efficiently the decision making process, their evaluation regarding their
classification performance relative to well-established techniques needs
further investigation. This is the objective of this study; to explore whether
the attractive features of PDA also lead to higher efficiency in terms of
classification accuracy, as opposed to traditional statistical classification
procedures. For this purpose an extensive Monte Carlo simulation is
conducted that enables the investigation of this issue under several different
scenarios regarding the data used. The methods considered in this simulation
include a well-known classification method based on the PDA paradigm,
namely the UTADIS method (UTilites Additives DIScriminantes; Jacquet-
Lagreze, 1995; Zopounidis and Doumpos, 1999), and three statistical
classification procedures, namely the linear discriminant analysis, the
quadratic discriminant analysis and the logit analysis.
The rest of the paper is organized as follows. Section 2 provides a brief
outline of the basic concepts of the UTADIS method. Section 3 provides
details on the design of the Monte Carlo simulation and the factors
considered in the conducted experiments. Section 4 discusses the obtained
results for the UTADIS method and compares them with the results of LDA,
QDA and logit analysis.

2. The UTADIS method


The UTADIS method is an MCDA classification technique originating
from PDA. The method leads to the development of an additive utility
function that is used for classification purposes. The parameters of the utility
decomposition model are estimated through the analysis of the decision
maker's overall preference (predetermined classification) on some reference
alternatives (reference set of alternatives A). The development of the
classification model is performed so as to respect the pre-specified
classification, as much as possible. Once this is achieved, the classification
model can be used for extrapolation purposes involving the classification of
a new alternative. This is a common model development procedure that is
widely used in statistics and econometrics (e.g., in discriminant, logit and
probit analysis), as well as in other MCDA preference disaggregation
approaches too (Jacquet-Lagreze and Siskos, 1982; Mousseau and
Slowinski, 1998).
The model development process through the UTADIS method begins
with a reference set A consisting of n alternatives A = {a_1, a_2, ..., a_n}. Each
alternative is described over a consistent family of m evaluation criteria g_1,
g_2, ..., g_m. The evaluation criteria involve all the characteristics (qualitative
and/or quantitative) of the alternatives that affect their overall evaluation.
The alternatives under consideration are classified by the decision maker
into q ordered classes C_1, C_2, ..., C_q (C_k is preferred to C_{k+1}, k = 1, 2, ...,
q-1). The additive utility model that is developed through the UTADIS
method has the following form:
U(a) = \sum_{i=1}^{m} u_i[g_i(a)]

where U(a) is the global utility of an alternative a ∈ A, and u_i[g_i(a)] is the
marginal utility of an alternative a ∈ A on the evaluation criterion g_i.
To classify the alternatives into their original classes, it is necessary to
estimate utility thresholds u_1, u_2, ..., u_{q-1} (threshold u_k distinguishes the
classes C_k and C_{k+1}, ∀ k ≤ q-1). Comparing the global utilities of an
alternative a with the utility thresholds, the classification of this alternative is
achieved in the following way:

U(a) \geq u_1 \Rightarrow a \in C_1
u_2 \leq U(a) < u_1 \Rightarrow a \in C_2
\ldots                                                            (1)
U(a) < u_{q-1} \Rightarrow a \in C_q
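
For illustration, here is a minimal sketch of classification rule (1), assuming hypothetical utility thresholds given in decreasing order u_1 > u_2 > ... > u_{q-1}:

```python
# A small sketch of classification rule (1), with hypothetical utility
# thresholds given in decreasing order [u1, u2, ..., u_{q-1}].
def assign_class(global_utility, thresholds):
    """Return k such that the alternative is assigned to class Ck (1 = best)."""
    for k, u_k in enumerate(thresholds, start=1):
        if global_utility >= u_k:
            return k
    return len(thresholds) + 1    # U(a) < u_{q-1}  =>  worst class Cq

# e.g. with u1 = 0.7 and u2 = 0.4, a global utility of 0.55 falls in C2
print(assign_class(0.55, [0.7, 0.4]))
```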
The estimation of the global utility model (additive utility function) and
the utility thresholds is accomplished through linear programming
techniques, and more specifically, through the solution of the following
linear program:

Minimise F = \sum_{a \in C_1} \sigma^+(a) + \ldots + \sum_{a \in C_k} [\sigma^+(a) + \sigma^-(a)] + \ldots + \sum_{a \in C_q} \sigma^-(a)    (2)

s.t.

\sum_{i=1}^{m} u_i[g_i(a)] - u_1 + \sigma^+(a) \geq 0, \quad \forall a \in C_1    (3)

\sum_{i=1}^{m} u_i[g_i(a)] - u_{k-1} - \sigma^-(a) \leq -\delta
\sum_{i=1}^{m} u_i[g_i(a)] - u_k + \sigma^+(a) \geq 0, \quad \forall a \in C_k, \ k = 2, \ldots, q-1    (4)

\sum_{i=1}^{m} u_i[g_i(a)] - u_{q-1} - \sigma^-(a) \leq -\delta, \quad \forall a \in C_q    (5)

\sum_{i=1}^{m} \sum_{j=1}^{\alpha_i} w_{ij} = 1    (6)

u_{k-1} - u_k \geq s, \quad k = 2, 3, \ldots, q-1    (7)

w_{ij} \geq 0, \quad \sigma^+(a) \geq 0, \quad \sigma^-(a) \geq 0    (8)


where:

- α_i is the number of subintervals [g_i^j, g_i^{j+1}] into which the range of values
of criterion g_i is divided,
- w_ij = u_i(g_i^{j+1}) - u_i(g_i^j) is the difference between the marginal utilities of
two successive values g_i^j and g_i^{j+1} of criterion i (w_ij ≥ 0),
- δ is a threshold used to ensure that U(a) < u_{k-1}, ∀ a ∈ C_k, 2 ≤ k ≤ q-1 (δ > 0),
- s is a threshold used to ensure that u_{k-1} > u_k (s > δ > 0), and
- σ⁺(a) and σ⁻(a) are the classification errors (over-estimation and under-
estimation errors respectively). These errors indicate differences between
the classification of the alternatives performed by the developed additive
utility model and the classification specified by the decision maker. These
errors can be due to three reasons: (1) inability of the additive utility
model to fully represent the decision maker's preferences, (2) lack of data
or inappropriate data, (3) misjudgement by the decision maker during the
specification of the classification of the alternatives.
The objective function (2) of the above linear program minimises the
sum of all classification errors σ⁺ and σ⁻ for all alternatives in the reference
set. This is achieved subject to the constraints (3)-(8). Constraints (3)-(5)
result from the classification rules (1) and they are used to define the
classification errors. In particular, constraint (3) implies that the global
utility of an alternative a ∈ C_1 should be greater than or equal to the utility
threshold u_1, which is the lower bound of class C_1. If this is not possible, then
an amount of utility equal to σ⁺(a) should be added to the global utility of
this alternative, indicating that the alternative is classified to a lower class
than the one it actually belongs to. Constraint set (4) is used for alternatives
which are classified by the decision maker in an intermediate class C_k. Since
u_{k-1} is the lower bound of the class C_{k-1} and u_k is the lower bound of class C_k,
the correct classification of an alternative a which belongs in class C_k can be
achieved only if the global utility of the alternative is strictly lower than the
utility threshold u_{k-1} [by definition, according to the classification rule (1), if
U(a) = u_{k-1} then a ∈ C_{k-1}] and greater than or equal to the utility threshold u_k.
In order to ensure the strict inequality between U(a) and u_{k-1} a user-provided small
positive constant δ is used. If either of the above two conditions is not
satisfied then the corresponding amount of utility σ⁺(a) or σ⁻(a) should be
added (subtracted) to the global utility of the alternative. Similarly,
constraint (5) is used for alternatives that belong to the worst class C_q. The
global utility of these alternatives should be strictly lower than the utility
threshold u_{q-1}; otherwise an amount of utility equal to σ⁻(a) should be
subtracted from the global utility of the alternatives, indicating that these
alternatives are classified by the model to a higher (better) class than the one
they actually belong to. Constraint (6) is used as a normalization constraint, so
that the global utilities are normalized between 0 and 1. Finally, constraint
(7) is used to ensure that u_1 > u_2 > ... > u_k > ... > u_{q-1} (a positive real number
s > δ > 0 is used to ensure the strict inequality between the utility thresholds).
After the solution F* of the above linear program is obtained, a post
optimality stage is carried out to examine the existence of other optimal or
near optimal solutions, which could provide a more consistent representation
of the decision maker's preferences. These correspond to error values lower
than F* + k(F*), where k(F*) is a small proportion of F*. In this way a range
is determined for both the marginal utilities and the utility thresholds, that is
representative of their stability.
Once the appropriate additive utility model has been developed through
the aforementioned procedure, it can be easily used for the evaluation of any
new alternative (extrapolation). This is achieved by introducing the
evaluation of the new alternative on the considered criteria in the developed
additive utility model to estimate its global utility (the developed marginal
utility functions are used to calculate the marginal utility of the alternative
on each evaluation criterion). Then the global utility of the alternative is
compared to the utility thresholds to decide upon its classification.
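
As a rough illustration of the model development stage, the following sketch sets up a drastically simplified version of the linear program (2)-(8) for two classes, assuming the PuLP library is available and reducing each marginal utility to a single linear piece (one weight per criterion on values normalised to [0, 1]); the reference set, criterion values and the constant δ are hypothetical. The full method additionally uses piecewise-linear marginal utilities, several classes and the post-optimality stage.

```python
# A minimal two-class sketch of the error-minimising LP (2)-(8), assuming
# the PuLP library is available.  Hypothetical reference set.
import pulp

# reference set: normalised criterion values and the class given by the
# decision maker (1 = best class C1, 2 = worst class C2)
reference_set = {
    "a1": ([0.9, 0.8, 0.7], 1),
    "a2": ([0.8, 0.6, 0.9], 1),
    "a3": ([0.3, 0.4, 0.2], 2),
    "a4": ([0.5, 0.2, 0.3], 2),
}
n_crit, delta = 3, 0.01

prob = pulp.LpProblem("UTADIS_sketch", pulp.LpMinimize)
w = [pulp.LpVariable(f"w_{i}", lowBound=0) for i in range(n_crit)]
u1 = pulp.LpVariable("u1", lowBound=0, upBound=1)           # class threshold
err = {a: pulp.LpVariable(f"err_{a}", lowBound=0) for a in reference_set}

prob += pulp.lpSum(err.values())             # objective (2): total error

for a, (g, cls) in reference_set.items():
    U = pulp.lpSum(w[i] * g[i] for i in range(n_crit))       # global utility
    if cls == 1:
        prob += U - u1 + err[a] >= 0         # type (3): U(a) + sigma+ >= u1
    else:
        prob += U - u1 - err[a] <= -delta    # type (5): U(a) - sigma- < u1

prob += pulp.lpSum(w) == 1                   # normalisation, cf. (6)

prob.solve()
print("weights:", [round(v.value(), 3) for v in w], "threshold u1:", u1.value())
```

A new alternative would then be assigned to C1 or C2 by comparing its global utility with the estimated threshold u1, as in rule (1).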

3. Experimental design
3.1 Methods
In order to perform a thorough examination of the classification
performance of the UTADIS method as opposed to traditional classification
procedures, an extensive experimental study is conducted using several
different data conditions.
Besides the UTADIS method, three other statistical and econometric
procedures are considered: linear discriminant analysis (LDA), quadratic
discriminant analysis (QDA), and logit analysis (LA). These methods are
widely used in many scientific fields for the development of classification
models. Furthermore, they are also used as a benchmark in many
comparative studies investigating the classification performance of non-
parametric techniques such as mathematical programming, neural networks,
machine learning algorithms, etc. These are the two main reasons for
considering these methods in this experimental study, thus providing the
means for investigating the classification performance of the proposed
MCDA methodology compared to the most commonly used existing
techniques.
Both LDA and QDA lead to the development of a set of discriminant
functions (q-l discriminant functions, where q is the number of classes).
These functions are linear in the case of LDA and quadratic in the case of
QDA. On the other hand, in LA the discriminant function has the form of a
logistic function that is used to assess the probability that an alternative
belongs to a specific group. In all these functions (classification models)
the criteria are assumed to be measured in a quantitative nominal scale
(attributes). This imposes problems when qualitative criteria should be
considered or the quantitative criteria are measured using ordinal or ratio
scales. In the UTADIS method the preference modelling approach imposed
through the use of the additive utility function overcomes these problems,
enabling the decision maker to aggregate qualitative and/or quantitative
criteria, assuming that they are all measured in an ordinal scale. This implies
that the classes are also defined in an ordinal way, whereas in LDA, QDA
and LA the classes are defined nominally.
In developing these functions the parameters that need to be estimated
are the coefficients of the criteria (q-l coefficients for each criterion
corresponding to the q-l functions in the case of LDA and LA and
2q+m!/(m-2)! coefficients for each function in QDA, where m is the number
of criteria). Thus, it becomes apparent that the classification models
developed using LDA, QDA and LA have a considerably smaller number of
degrees of freedom compared to the additive utility function developed
through UTADIS. This implies that the classification models of LDA, QDA
and LA are expected to have smaller fitting ability to the data used for model
development (reference set) compared to the models developed through the
UTADIS methods. However, an increased number of degrees of freedom
and higher fitting ability cannot be associated to higher generalizing ability.
In fact it is well-known that an increased number of degrees of freedom
often leads to the over-fitting phenomenon, which can be negatively
associated with the generalizing performance of the classification model. In that
regard the comparison of the generalizing classification performance of the
more complicated models of UTADIS compared to the simpler models of
LDA, QDA and LA becomes meaningful.
The number of the estimated parameters of the classification models is
also related to the computational effort required for its development. In LDA
and QDA simple algebraic computations are only required for model
development (detailed description of the model development procedure can
be found in the book of Altman et al., 1981). This is an advantage of these
methods since actually any classification model can be developed in limited
time irrespective of the number of data used for model development (size of
the reference set). In the case of LA more sophisticated non-linear
optimisation techniques are required for the estimation of the parameters of
the classification model. However, even in this case the model development
process is commonly completed within a limited time for most data sets. On
the other hand, in UTADIS the development of the classification model
requires increased computational effort, due to the use of linear
programming techniques.
Despite the increased computational effort required for model
development in the UTADIS method, the use of linear programming
techniques provides increased flexibility for model development. In
particular, the decision maker is able to employ different measures of the
quality and the classification accuracy of the developed model, while he/she
can also impose specific constraints on the form of the model (form of the
marginal utility functions, weights of the criteria, etc.) depending upon
his/her decision making policy and judgment (Dias et al., 2001). On the
other hand in LDA, QDA and LA the model development process is based
on specific statistical assumptions. In particular, in LDA it is assumed that
the data are multivariate normal with equal group dispersion matrices. In
QDA it is assumed that the data are multivariate normal with unequal group
dispersion matrices. Under these specific assumptions both LDA and QDA
lead to the development of the optimal classification model (this can be
proved using the Bayes rule; cf. Patuwo et al., 1993). LA is less restrictive in
terms of these assumptions, however it still assumes that the probability that
an alternative belongs into a specific group is modelled through cumulative
logistic probability function. Such statistical assumptions are difficult to
meet in real world data, thus imposing restrictions on the practical
implementation of statistical and econometric procedures. Furthermore, the
traditional statistical regression framework used for model development in
LDA, QDA and LA prohibits the incorporation of the decision maker's
preferences in the estimation of the model's parameters. On the other hand,
the additive utility function modelling approach used in UTADIS is based on
the preferential independence assumption (Keeney and Raiffa, 1993), which
is not related to the statistical properties of the considered data, but rather to
the specific preferences of the decision maker.
The above remarks are directly related to the major difference between
the traditional statistical philosophy that underlies the use of
statistical/econometric classification techniques as opposed to the decision
support orientation of the MCDA approach. All statistical/econometric
classification techniques such as LDA, QDA and LA aim at the development
of an appropriate statistical description of an unknown population of
alternatives using a given sample (reference set). On the other hand, using
the preference disaggregation framework employed in the UT ADIS method,
the aim is to analyse a given sample of decision instances (reference set) that
incorporate all the preferential information that characterizes the decision
maker's preferences and judgment. This enables the elicitation of the
necessary preferential information in an indirect way and the development of
classification models that are in accordance with the decision maker's
preferences. Such an approach supports the decision maker in understanding
the peculiarities of the considered alternatives, identifying and correcting the
possible inconsistencies in his/her judgments, thus improving the decision
making process. Of course, the use of the classification error as the
optimisation criterion in the linear programming formulation of the UTADIS
method is not necessarily in accordance with this objective. However, as
mentioned above, the use of linear programming enables the analyst to
incorporate specific constraints in the model development process in order to
elicit the decision maker's preferences as accurately as possible.
All the above functional, theoretical and practical (implementation)
differences between the methods and the associated advantages and
disadvantages of the methods are investigated in this experimental study in
terms of their impact on the classification performance (accuracy) of the
methods.

3.2 Factors
The comparison of the methods outlined in the previous subsection is
performed considering six factors regarding the properties of the data that
are used during model development and testing. A complete description of
the factors considered in this experimental design is presented in Table 1.
The first of the factors involving the properties of the data investigated in
the conducted experimental design involves their distributional form (F2).
While many studies conducting similar experiments have been concentrated
on univariate distributions to consider multivariate non-normality, in this
study multivariate distributions are considered. This specification enables
the investigation of additional factors in the experiment, such as the
correlation of the criteria and the homogeneity of the group dispersion
matrices. Actually, the use of a univariate distribution implies that the
criteria are independent, a case that is hardly the situation encountered in
real-world problems. The first two of the multivariate distributions that are
considered (normal and uniform) are symmetric, while the exponential (this
is actually a multivariate distribution that resembles the exponential
distribution in terms of its skewness and kurtosis) and log-normal
distributions are asymmetric, thus leading to a significant violation of
multivariate normality. The generation of the multivariate non-normal data is
based on the methodology presented by Vale and Maurelli (1983).

Table 1. Factors investigated in the experimental design

Factor                                           Levels
F1  Classification procedures                    Linear discriminant analysis (LDA)
                                                 Quadratic discriminant analysis (QDA)
                                                 Logit analysis (LA)
                                                 UTADIS
F2  Statistical distribution of the data         Multivariate normal
                                                 Multivariate uniform
                                                 Multivariate exponential
                                                 Multivariate log-normal
F3  Number of groups                             Two
                                                 Three
F4  Training sample size                         36 alternatives, 5 criteria
                                                 72 alternatives, 5 criteria
                                                 108 alternatives, 5 criteria
F5  Correlation coefficient                      Low correlation: r ∈ [0, 0.1]
                                                 Higher correlation: r ∈ [0.2, 0.5]
F6  Homogeneity of the group dispersion          Equal
    matrices                                     Unequal
F7  Group overlap                                Low overlap
                                                 High overlap

Factor F3 defines the number of groups into which the classification of
the objects is made. In this experimental design the two-group and the three-
group classification problems are considered. This specification enables the
derivation of useful conclusions on the performance of the methods
investigated, in a wide range of situations that are often met in practice
(many real-world classification problems involve three groups).
Factor F4 is used to define the size of the training sample, and in
particular the number of alternatives that it includes (henceforth this number
is denoted as m). The factor has three levels corresponding to 36, 72 and 108
alternatives, distributed equally to the groups defined by factor F3. In all
three cases the alternatives are described along five criteria. Generally, small
training samples contain limited information about the classification problem
being examined, but the corresponding complexity of the problem is also
limited. On the other hand, larger samples provide richer information, but
they also lead to increased complexity of the problem. Thus, the examination
of the three levels of the factor enables the investigation of the performance
of the classification procedures under all these cases.
The specified correlation coefficients for every pair of criteria define the
off-diagonal elements of the dispersion matrices of the groups. The elements
in the diagonal of the dispersion matrices, representing the variances of the
criteria, are specified by the sixth factor, which is considered in two levels. In
the first level, the variances of the criteria are equal for all groups, whereas
in the second level the variances differ. Denoting the variance of criterion g_i
for group y as \sigma_{iy}^2, the realization of these two situations regarding the
homogeneity of the group dispersion matrices is performed as follows:
- For the multivariate normal, uniform and exponential distributions:

  Level 1: \sigma_{i1}^2 = \sigma_{i2}^2 = \sigma_{i3}^2 = 1, \quad \forall i = 1, 2, \ldots, 5,

  Level 2: \sigma_{i1}^2 = 1, \ \sigma_{i2}^2 = 4, \ \sigma_{i3}^2 = 16, \quad \forall i = 1, 2, \ldots, 5.

- For the multivariate log-normal distribution, the variances are specified so
  as to assure that the kurtosis of the data ranges within reasonable levels^1,
  as follows:

  a) In the case of two groups:

  Level 1: \sigma_{i1}^2 = \sigma_{i2}^2 = \begin{cases} 12, & \text{if } m = 36 \\ 14, & \text{if } m = 72 \\ 16, & \text{if } m = 108 \end{cases} \quad \forall i = 1, 2, \ldots, 5,

  Level 2: \sigma_{i1}^2 = \begin{cases} 12, & \text{if } m = 36 \\ 14, & \text{if } m = 72 \\ 16, & \text{if } m = 108 \end{cases} \quad \sigma_{i2}^2 = 1.5\,\sigma_{i1}^2, \quad \forall i = 1, 2, \ldots, 5.

  b) In the case of three groups:

  Level 1: \sigma_{i1}^2 = \sigma_{i2}^2 = \sigma_{i3}^2 = \begin{cases} 4, & \text{if } m = 36 \\ 7, & \text{if } m = 72 \\ 10, & \text{if } m = 108 \end{cases} \quad \forall i = 1, 2, \ldots, 5,

  Level 2: \sigma_{i1}^2 = \begin{cases} 2, & \text{if } m = 36 \\ 4, & \text{if } m = 72 \\ 6, & \text{if } m = 108 \end{cases} \quad \sigma_{i2}^2 = 1.5\,\sigma_{i1}^2, \ \sigma_{i3}^2 = 1.5\,\sigma_{i2}^2, \quad \forall i = 1, 2, \ldots, 5.

1 In the log-normal distribution the skewness and kurtosis are defined by the mean and the
variance of the criteria for each group. The procedures for generating multivariate non-
normal data can replicate satisfactorily the prespecified values of the first three moments
(mean, standard deviation and skewness) of a statistical distribution. However, the error is
higher for the fourth moment (kurtosis). Therefore, in order to reduce this error and
consequently to have better control of the generated data, both the mean and the variance of
the criteria for each group in the case of the multivariate log-normal distribution are
specified so that the coefficient of kurtosis is lower than 40.
The final factor defines the degree of group overlap. The higher the
overlapping is between each pair of groups, the higher is the complexity of
the classification problem due to the difficulty in discriminating between the
alternatives of each group. The degree of group overlap in this experimental
design is considered using the Hotelling T² statistic. This is a multivariate
measure of difference between the means of two groups, assuming that the
criteria are multivariate normal and that the group dispersion matrices are
equal. Studies conducted on the first of these assumptions (multivariate
normality) have shown that actually the Hotelling T² is quite robust to
departures from multivariate normality even for small samples (Mardia,
1975). Therefore, using the Hotelling T² in the non-normal multivariate
distributions considered in this experimental design does not pose a
significant problem. To overcome the second assumption regarding the
homogeneity of the group dispersion matrices, the modified version of the
Hotelling T² defined by Anderson (1958) is employed in the case where the
dispersion matrices are not equal. The use of these measures of group
overlap in the conducted experimental design is performed as follows:
Initially, the means of all five criteria for the first group are fixed to a
specific value (1 for the case of the multivariate normal, uniform and
exponential distributions, and 8 in the case of the log-normal distribution).
Then, the means of the criteria for the second group are specified so that the
Hotelling T² (or its modified version) for the differences in the means of
groups 1 and 2 is significant at the 1% and the 10% significance levels (low
and high degree of group overlap). Similarly, the means of the third group
are specified so that the Hotelling T² (or its modified version) for the
differences in the means of groups 2 and 3 is significant at the 1% and the
10% significance levels.
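
For concreteness, here is a minimal sketch of the standard pooled-covariance Hotelling T² statistic for the difference between two group mean vectors (the modified version for unequal dispersion matrices, due to Anderson, 1958, is not shown); the simulated data below are purely illustrative.

```python
# A minimal sketch of the pooled-covariance Hotelling T^2 statistic for two
# groups of alternatives evaluated on p criteria (illustrative simulated data).
import numpy as np

def hotelling_T2(X1, X2):
    n1, n2 = X1.shape[0], X2.shape[0]
    d = X1.mean(axis=0) - X2.mean(axis=0)
    S_pooled = ((n1 - 1) * np.cov(X1, rowvar=False) +
                (n2 - 1) * np.cov(X2, rowvar=False)) / (n1 + n2 - 2)
    return (n1 * n2) / (n1 + n2) * d @ np.linalg.solve(S_pooled, d)

rng = np.random.default_rng(0)
group1 = rng.normal(1.0, 1.0, size=(18, 5))   # 18 alternatives, 5 criteria
group2 = rng.normal(1.8, 1.0, size=(18, 5))
print(hotelling_T2(group1, group2))
```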
To ensure the consistency in the ordering of the classes (groups) from the
best (group 1, C_1) to the worst one (group 3, C_3), all data are generated so
that the following condition is satisfied: g_i(a) > g_i(b), ∀ a ∈ C_k, b ∈ C_{k+1},
i = 1, 2, ..., 5, k = 1, 2. This ensures that there is no alternative of group C_k that is
dominated by an alternative of group C_{k+1}.
For each combination of the factors F2-F7 (192 combinations) a training
sample and a validation sample are generated, having all the properties that
these factors specify. The size of the training sample is defined by the factor
F4, while the size of the validation sample (holdout sample) is fixed at 216 in
all cases. For each factor combination 20 replications are performed.
Therefore, during this experiment the number of samples considered is 7,680
(192x20=3,840 training samples matched to 3,840 validation samples).
Overall, the conducted experiment involves a 4x4x2x3x2x2x2 full-level
factorial design consisting of 768 treatments (factor combinations).
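
The factorial structure can be sketched as follows (illustrative enumeration only, not the authors' implementation; data generation and model fitting are omitted):

```python
# A sketch of the factorial structure of the simulation (illustrative names).
from itertools import product

distributions = ["normal", "uniform", "exponential", "log-normal"]   # F2
n_groups      = [2, 3]                                               # F3
sample_sizes  = [36, 72, 108]                                        # F4
correlations  = ["low", "higher"]                                    # F5
dispersions   = ["equal", "unequal"]                                 # F6
overlaps      = ["low", "high"]                                      # F7
methods       = ["LDA", "QDA", "LOGIT", "UTADIS"]                    # F1

data_conditions = list(product(distributions, n_groups, sample_sizes,
                               correlations, dispersions, overlaps))
print(len(data_conditions))                  # 192 combinations of F2-F7
print(len(data_conditions) * len(methods))   # 768 treatments (full factorial)

for condition in data_conditions:
    for replication in range(20):
        # generate a training sample and a 216-alternative validation sample
        # with the properties in `condition`, fit each method and record its
        # validation error rate
        pass
```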

4. Results

The analysis of the results obtained in this experimental design is
focused only on the classification errors for the validation samples, on the basis
of the transformation:

2 \arcsin \sqrt{\text{error rate}}    (9)
This transformation has been used by several researchers to stabilize the
variance of the misclassification rates (Bajgier and Hill, 1982;
Joachimsthaler and Stam, 1988). The ANOVA results presented in Table 2
indicate that the seven main effects (the considered factors) and three two-
way interaction effects explain more than 79% of the total variance (Hays ω²
statistic). None of the remaining interaction effects explains more than 1% of
the total variance, and consequently they are not reported.
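
As a small numerical illustration of transformation (9), applied here to the overall error rates reported in Table 3 (the results are close to, but not exactly, the tabulated means, which average the transform over individual validation samples):

```python
# Transformation (9) applied to the overall error rates of Table 3.
import numpy as np

error_rates = np.array([0.3220, 0.2699, 0.3169, 0.1960])   # LDA, QDA, LOGIT, UTADIS
print((2 * np.arcsin(np.sqrt(error_rates))).round(4))
```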

Table 2. Major explanatory effects regarding the classification performance of the methods
(seven-way ANOVA results)

Effects    df    Sum of squares    Mean squares    F           ω²
F1          3        264.774          88.258       8033.38    20.50%
F6          1        193.305         193.305      17594.94    14.97%
F1×F6       3        132.429          44.143       4017.96    10.25%
F1×F2       9        122.833          13.648       1242.27     9.50%
F3          1         89.017          89.017       8102.47     6.89%
F2          3         72.768          24.256       2207.81     5.63%
F4          2         60.140          30.070       2737.02     4.65%
F7          1         43.688          43.688       3976.55     3.38%
F1×F4       6         24.444           4.074        370.82     1.89%
F5          1         21.274          21.274       1936.43     1.65%

The interaction effects are of major interest in analysing the results of the
experimental design with regard to the relative performance of the
considered methods. All three interaction effects that are found to explain a
high proportion of the total variance in the obtained results, involve the
interaction of the factor F1 (classification procedures) with other factors, in
particular the homogeneity of the group dispersion matrices (F6), the
distributional form of the data (F2), and the training sample size (F4). Table 3
summarizes the results of all methods throughout all experiments, while
Tables 4-6 provide further details on the comparison of the methods in terms
of the aforementioned two and three-way interaction effects that are found
significant through the analysis of variance. Each of these tables reports the
average transformed error rate, the untransformed error rate (in parentheses)
and the grouping obtained through the Tukey's test on the average
transformed error rates [cf. equation (9)].

Table 3. Tukey's test for significant differences between methods along all experiments
(factor: F1)

Method     Mean               Tukey's grouping
LDA        1.2000 (32.20%)    C
QDA        1.0671 (26.99%)    B
LOGIT      1.1891 (31.69%)    C
UTADIS     0.8738 (19.60%)    A

Table 4. Tukey's test for significant differences between methods (factor: F6)

           Homogeneity of dispersion matrices
           Equal                                 Unequal
Method     Mean              Tukey's grouping    Mean              Tukey's grouping
LDA        1.2391 (33.98%)   C                   1.1610 (30.42%)   C
QDA        1.3283 (38.17%)   D                   0.8059 (15.80%)   B
LOGIT      1.2183 (33.01%)   B                   1.1599 (30.37%)   C
UTADIS     0.9932 (24.20%)   A                   0.7545 (15.01%)   A

Table 5. Tukey's test for significant differences between methods (factor: F2)

           Distribution
           Normal                     Uniform                    Exponential                Log-normal
Method     Mean             Grouping  Mean             Grouping  Mean             Grouping  Mean             Grouping
LDA        1.1900 (31.69%)  B         1.2312 (33.57%)  C         1.1642 (30.66%)  C         1.2147 (32.86%)  C
QDA        1.0917 (27.79%)  A         1.0634 (27.02%)  B         1.0392 (26.97%)  B         1.0740 (27.16%)  B
LOGIT      1.1827 (31.36%)  B         1.2200 (33.03%)  C         1.1517 (30.11%)  C         1.2020 (32.25%)  C
UTADIS     1.1061 (28.10%)  A         0.9851 (23.43%)  A         0.5126 (7.36%)   A         0.8916 (19.52%)  A

Table 6. Tukey's test for significant differences between methods (factor: F4)

           Training sample size
           36                         72                         108
Method     Mean             Grouping  Mean             Grouping  Mean             Grouping
LDA        1.1092 (26.04%)  B         1.2342 (33.68%)  C         1.3017 (36.87%)  C
QDA        1.0585 (26.26%)  B         1.0686 (27.11%)  B         1.0741 (27.58%)  B
LOGIT      1.0555 (25.69%)  B         1.2225 (33.11%)  C         1.2894 (36.27%)  C
UTADIS     0.8197 (17.33%)  A         0.8651 (19.31%)  A         0.9367 (22.16%)  A

The results indicate that, overall, the UTADIS method outperforms all
statistical procedures, followed by QDA, while the performances of LDA
and LA are similar. The analysis, regarding the performance of the methods
when the three two-way significant interactions are considered (methods by
dispersion matrices, distribution and size; cf. Tables 4-6), further pronounces
the superiority of the UTADIS method. The results indicate that in all cases
the UTADIS method provides consistently lower error rates than the
considered statistical classification techniques. The differences that are
evident between the performances of the methods are found significant
through the Tukey's test at the 5% significance level. The only exception to
these remarks is the case of multivariate normal data, where QDA provides
slightly lower error rate than UTADIS (27.79% for QDA vs. 28.10% for
UTADIS). However, even in this case the difference between the two
methods is not found to be statistically significant at the 5% level.
These results indicate that the theoretical advantages (cf. section 3.1) of the
proposed MCDA methodology over the statistical techniques are associated
with an improved classification performance. Furthermore, the higher fitting
ability that results from the increased number of degrees of freedom in the
additive utility models of the UTADIS method and the increased
computational effort required for model development are accompanied by
higher generalizing classification performance, which is crucial in most real
world cases.

5. Conclusions

Classification problems are often encountered in many research and
practical fields, including finance, marketing, energy analysis and policy
making, human resources management, etc. The aim of this study was to
explore the performance of a MCDA classification approach, namely the
UTADIS method. Over the past three decades, MCDA has emerged as a
significant methodological approach within the broad field of operations
research to address complex decision-making problems, where an evaluation
of a set of alternatives is required.
A thorough comparison was performed with traditional statistical
classification approaches, on the basis of an extensive experimental design
involving several factors regarding the properties of the data involved. The
results indicate that the preference disaggregation analysis could be
considered as a promising classification approach compared to well-
established existing procedures. In the overwhelming majority of the cases,
the UTADIS method performed significantly better than all the multivariate
statistical techniques that were considered.
Further analysis regarding additional data conditions that are commonly
encountered in real-world problems, such as the existence of qualitative data
and outliers, could provide a global view of the true performance of the proposed
MCDA approach in a wider range of complex data conditions. Furthermore,
it would be interesting to consider in the comparison other classification
procedures, such as neural networks and machine learning approaches, while
the consideration of other MCDA classification procedures such as the
ELECTRE TRI method (Yu, 1992) and the rough sets approach (Pawlak,
1982; Slowinski and Zopounidis, 1995) will provide useful insight on the
similarities and dissimilarities among the different MCDA approaches (e.g.,
preference disaggregation vs. outranking relations).
Of course, this kind of analysis is not restricted to the specific MCDA
method used in this study. Recently, new techniques based on preference
disaggregation analysis, have been proposed to infer the parameters of other
classification methods based on utility function models similar to the
UTADIS method (e.g., the M.H.DIS method; Zopounidis and Doumpos,
2000) or on different criteria aggregation forms such as multiplicative utility
functions or outranking relations (e.g., the ELECTRE TRI method;
Mousseau and Slowinski, 1998; Dias et al., 2000; Mousseau et al., 2001).
The use of such alternative criteria aggregation models will enable the
extension of the preference modelling context considered through the present
form of the UTADIS method that is based on the additive utility function
and the preferential independence assumption.

References
Altman, E.I., Avery, R., Eisenbeis, R. and Sinkey, J. (1981), Application of Classification
Techniques in Business, Banking and Finance, Contemporary Studies in Economic and
Financial Analysis, Vol. 3, JAI Press, Greenwich.
Anderson, T.W. (1958), An Introduction to Multivariate Statistical Analysis, Wiley, New
York.
Bajgier, S.M. and Hill, A.V. (1982), "An experimental comparison of statistical and linear
programming approaches to the discriminant problem", Decision Sciences, 13,604-618.
Dias, L., Mousseau, V., Figueira, J. and Climaco, J. (2000), "An aggregation/disaggregation
approach to obtain robust conclusions with ELECTRE TRI", Cahier du LAMSADE, No 174,
Universite de Paris-Dauphine.
Doumpos, M., Zopounidis, C. and Pardalos, P.M. (2000), "Multicriteria sorting methodology:
Application to financial decision problems", Parallel Algorithms and Applications, 15/1-2,
113-129.
Freed, N. and Glover, F. (1981), "Simple but powerful goal programming models for
discriminant problems", European Journal of Operational Research, 7, 44-60.
Jacquet-Lagreze, E. (1995), "An application of the UTA discriminant model for the
evaluation of R&D projects", in: P.M. Pardalos, Y. Siskos, C. Zopounidis (eds.),
Advances in Multicriteria Analysis, Kluwer Academic Publishers, Dordrecht, 203-211.
Jacquet-Lagreze, E. and Siskos, Y. (1982), "Assessing a set of additive utility functions for
multicriteria decision making: The UTA method", European Journal of Operational
Research, 10, 151-164.
Jacquet-Lagreze, E. and Siskos, J. (2001), "Preference disaggregation: Twenty years of
MCDA experience", European Journal of Operational Research, 130/2,233-245.
Joachimsthaler, E.A. and Stam, A. (1988), "Four approaches to the classification problem in
discriminant analysis: An experimental study", Decision Sciences, 19, 322-333.
Keeney, R.L. and Raiffa, H. (1993), Decisions with Multiple Objectives: Preferences and
Value Trade-offs, Cambridge University Press, Cambridge.
Mardia, K.V. (1975), "Assessment of multinormality and the robustness of Hotelling's T2
test", Applied Statistics, 24, 163-171.
Mousseau, V. and Slowinski, R. (1998), "Inferring an ELECTRE-TRI model from assignment
examples", Journal of Global Optimization, 12/2,157-174.
Mousseau, V., Figueira, 1. and Naux, J.-Ph. (2001), "Using assignment examples to infer
weights for ELECTRE TRI method: Some experimental results", European Journal of
Operational Research, 130/2,263-275.
Pardalos, P.M., Siskos, Y. and Zopounidis, C. (1995), Advances in Multicriteria Analysis,
Kluwer Academic Publishers, Dordrecht.
Patuwo, E., Hu, M.Y. and Hung, M.S. (1993), "Two-group classification using neural
networks", Decision Sciences, 24, 825-845.
Pawlak, Z. (1982), "Rough sets", International Journal of Information and Computer
Sciences, 11,341-356.
Pawlak, Z. and Slowinski, R (1994), "Rough set approach to multi-attribute decision
analysis", European Journal of Operational Research, 72, 443-459.
Quinlan, J.R. (1986), "Induction of decision trees", Machine Learning, 1, 81-106.
Roy, B. and Moscarola, J. (1977), "Procédure automatique d'examen de dossiers fondée sur
une segmentation trichotomique en présence de critères multiples", RAIRO Recherche
Opérationnelle, 11/2, 145-173.
Slowinski, R. and Zopounidis, C. (1995), "Application of the rough set approach to evaluation
of bankruptcy risk", International Journal of Intelligent Systems in Accounting, Finance
and Management, 4, 27-41.
Vale, D.C. and Maurelli, V.A. (1983), "Simulating multivariate nonnormal distributions",
Psychometrika, 48/3, 465-471.
Yu, W. (1992), "ELECTRE TRI: Aspects méthodologiques et manuel d'utilisation",
Document du LAMSADE No 74, Universite de Paris-Dauphine.
Zopounidis, C. and Doumpos, M. (1999), "A multicriteria decision aid methodology for
sorting decision problems: The case of financial distress", Computational Economics,
14/3, 197-218.
Zopounidis, C. and Doumpos, M. (2000), "Building additive utilities for multi-group
hierarchical discrimination: The M.H.DIS method", Optimization Methods and Software,
14/3,219-240.
ORDINAL MULTIATTRIBUTE SORTING
AND ORDERING IN THE PRESENCE
OF INTERACTING POINTS OF VIEW

Marc Roubens
University of Liege, Institute of Mathematics, Belgium
M.Roubens@ulg.ac.be

Abstract In this paper, we use the Choquet integral as a general tool for dealing
with ordinal multiattribute sorting and ordering problems in the pres-
ence of interacting points of view. The technique that is used proceeds
in two steps : a pre-scoring phase determines for each point of view
and for each alternative a net score (number of times a given alterna-
tive beats all the other alternatives minus the number of times that this
alternative is beaten by the others) and is followed by an aggregation
phase that produces a global net score associated to each alternative.
The assessment of a capacity linked to the Choquet integral is ob-
tained by the solution of a constraint satisfaction problem deriving from
a reference set of alternatives (prototypes) that have been previously
ordered or sorted. We give examples of application comparing this ap-
proach with a rule based methodology.

Keywords: Multiattribute decision making; Ordinal data; Interacting points of


view; Choquet integral

1. Introduction
Let X = TIi=l Xi be a product space, where Xi is the ordered set
of possible evaluations of alternatives from the set A of cardinality m
for point of view i, belonging to the set N of cardinality n.
The performance scale is considered to be a totally ordered set Xi :
{gi --< ... --< g~J, i.e. a ni -point scale.
A profile that corresponds to alternative x E A is a vector x =
(Xl, ... ,Xn )
E X, where Xi = gi(x).(XiZ-d represents a profile that corresponds to z
except that its i-th component is equal to Xi.
230 AIDING DECISIONS WITH MULTIPLE CRITERIA

A preference relation related to i (t) can be defined on A such that


i
for every pair of alternatives x and y,

x t y iff Xi t Yi·
i

It is a total preorder which can be decomposed in its asymmetric part


(~) and its symmetric part ('":").
l l
To the total preorder (~) corresponds a valuation denoted Ri such
z
that

Ri (x, y) = 1 if x t y
i
0, otherwise.

From these valuations we determine a partial net score Si as follows :

Si(X) = L [~(x, y) - ~(y, x)]


yEA

Si (x) represents the number of times that x is better than any other
alternative minus the number of times that any other alternative is better
than x for point of view i.
The definition of the partial net score clearly indicates that this utility
function is measured on an interval scale. Positive linear transformations
are meaningful with respect to such a scale.
In order to obtain normalized measures, we consider the net scores
(this transformation is legitimate) :

sty ( ) = Si(X) + (m - 1)
z x 2(m _ 1)

to obtain

sf (x) 1 if x ~
l
y, for all y 1= x EA
= 0 if Y ~ x, for all y 1= x E A.
l

The net scores identify the corresponding components :

Xi ~ Yi (x ~ y) iff Sf (x) > Sf (y).


l

x dominates y (xDy) if x t y for each i E N. Relation D is a partial


i
preorder, being the intersection of total preorders (t).
i
Ordinal Multiattribute Sorting and Ordering 231

A ranking O:A) is a total preorder on the set A which does not con-
tradict the principle of coherence with respect to dominance (CD R) :
xDy ::::} x ~A y.
There exists a numerical representation of the ranking, i.e. a mapping
F : A -7 IR such that

F(x) ~ F(y) iff x ~A y.


Due to coherence property F(x) > F(y) implies not (yDx).
The net scores identify the Xi and we rewrite the previous result in
the following way :
given a ranking, there exists a scoring function f : IRn -7 IR such that

f(sf' (x), ... , S~ (x)) ~ f(sf' (y), ... , S~ (y)) iff x ~A y.

In some particular cases f can be expressed in terms of a Choquet


integral, [1]
n
Cv(SN (x)) = L <5i v(A(i))SD) (x)
i==l

where v represents a capacity on N, which is a set function v : 2N -7 IR


such that

v(0) 0
v(N) = 1

B C C::::} v(B) :::; v(C) (monotonicity)


{(I), ... ,(i), ... , (n)) represents a permutation 1r of the elements of the
finite set N: {I, ... , i, ... , n} such that

S~)(x) :::; ... :::; Sfn)(x)

and A(i) is the coalition of points of view {(i), ... ,(n)}, <5i v(A(i)} is the
marginal contribution in capacity of (i) to those points of view which
are ranked before (i) by 1r :

The step that consists of determining the partial scores Sf corre-


sponds to the pre-scoring phase.
In the second step we aggregate the information related to each point
of view to obtain a global scoring function of Choquet type.
232 AIDING DECISIONS WITH MULTIPLE CRITERIA

Consider now a sorting that assigns the alternatives to predetermined


ordered classes. (see also [6])
Let C E = {CEr , r = 1, ... , t} be a set of disjoint and ordered classes
of A. We denote
t
CE~ = U CEs·
s=r
A sorting is a partition of the alternatives belonging to A in t increas-
ingly ordered classes which does not contradict the principle of coherence
with respect to dominance (CDS) : xDy and y E CEr ::::} x E Clf.
Among results presented by Greco and al. [6] in the field of sorting
problems, we quote the next result.

The following propositions are equivalent:


1 foreachiEN,r,sE{l, ... ,t} and
for each Xi, Yi E Xi and X-i, Y-i E IljEN\i Xj

{XiX-i E CE r and YiY-i E CEs} ::::} {YiX-i E CE~ or XiY-i E CEf}

2 there exists

• functions hi : IRn --+ IR, increasing in each argument, called


criteria;
• function f : IRn --+ IR, increasing in each argument, called
sorting function;
• t - 1 ordered thresholds Zr, r = 2, ... ,t, satisfying

such that for each x E A

J[hdxd,.·., hn(xn)] ;::: Zr {:} x E CE~.

As the net scores can serve as criteria, we can rewrite the last propo-
sition replacing

by
J[sf (x), ... , S~ (x)] ~ Zr {:} x E CE~
where Zr = minxEclr f[Sf (x), ... , S~ (x)].

In this paper we restrict the sorting function to be a Choquet integral


loosing the universal characterization of Greco and al.
Ordinal Multiattribute Sorting and Ordering 233

2. The Choquet integral as an extension of the


weighted sum
First note that for additive capacities (v(B U C) = v(B} + v(C} if
B, C E N and B n C = 0} the Choquet integral coincides with the usual
discrete Lebesque integral and the set function v is simply determined by
the importances of each point of view: v(l}, ... , v(n}. In this particular
case
n
Cv(SN (x)) = Wv(SN (x)) = L v(i}S[i (x)
i=l
which is the natural extension of the Borda score as defined in the theory
of voting if alternatives play the role of candidates and points of view
represent voters.
If points of view cannot be considered as being independent, impor-
tances of coalitions BeN, v(B}, have to be taken into account.
Some coalitions of points of view might present positive interaction or
synergy. Although the importance of some points of view members of a
coalition B might be low, the importance of a pair, a triple, ... , might
be substantially larger and v(B} > EiEB v(i}.
In other situations, points of view might exhibit negative interaction
or redundancy. The union of some points of view do not have much
impact on the decision and for such coalitions B, v(B} < EiEB v(i}. In
this perspective, the use of a Choquet integral is needed.
The Choquet integral presents standard properties for aggregation
(see [2, 7]) : it is continuous, non decreasing, located between min and
max.
We will now indicate an axiomatic characterization of the class of
all Choquet integrals with n arguments that fully justify our approach.
This result is due to Marichal ([7, 8]). Let eB denote the characteristic
vector of B in {O, l}n, i.e. the vector of {O, l}n whose i-th component is
1 if and only if i E B.
The operators Mv : IR,n -+ IR, (v being a capacity on N) are
• linear with respect to the capacities, that is, there exist 2n functions
fT : IR,n -+ R (T c N) such that

Mv = L v(T}fT,
TeN

• non decreasing in each argument,


• stable for the admissible positive linear transformations, that is
234 AIDING DECISIONS WITH MULTIPLE CRITERIA

where r > 0, s E~,

• properly weighted by v, that is,

if and only if Mv = C v for all capacity defined on N.


This important characterization clearly justifies the way the partial
scores introduced in Section 1 have been aggregated.
The linearity assumes that we restict the aggregation to a linear com-
bination of the capacities.
The second axiom says that increasing a partial score cannot decrease
the global score.
The stability with respect to any positive linear transformation is
meaningful in our context: considering rSf +s instead of sf in the ag-
gregation procedure should not affect the conclusions obtained with the
initial scoring or sorting function. The last axiom gives an appropriate
definition of the importance of a coalition of points of view BeN :
v (B) corresponds to the aggregation of the elements of an hypothetical
profile x such that x >:- y for any yEA \ {x} and any i E Nand y >:- x,
z z
for any yEA \ {x} if i belongs to N \ B.
The major advantage linked to the use of the Choquet integral derives
from the large number of parameters (2n - 2) associated with a capacity
but this flexibility can be also considered as a serious drawback when
assessing real values to the importances of all possible coalitions. We
will come back to the important question in Section 3.
Let v be a capacity on N. The Mobius transform of v is a set function
m : 2N -t ~ defined by

m(B) = L (-l)IB\Clv(C), VB c N.
GeB

This transformation is invertible and thus constitutes an equivalent


form of a capacity (for other equivalent forms, see [4]) and v can be
recovered from musing

v(B) = L m(C)
GeB

This transformation can be used to redefine the Choquet integral with-


out reordering the net score :

Cv(SN (x)) = L m(B) 1\ sf (x)


BeN iEB
Ordinal Multiattribute Sorting and Ordering 235

A capacity v is k-additive [2] if its Mobius transform m corresponds to


m(B) = 0 for B such that IBI > k and there exists at least one subset
B of N of exactly k elements such that m(B) =I- O.
Thus, k-additive capacities can be represented by a limited number
of coefficients, at most L:f=l (7) coefficients.
For a k-additive capacity,
Cv(SN (x)) = L m(B) 1\ sf (x).
BeN iEB
IBI~k

In order to assure boundary and monotonicity conditions imposed on


v, the Mobius transform of a k-additive capacity must satisfy:
m(0) = 0, L m(B) = 1
BeN
IBI~k

L m(T) ~ 0, VB C N, Vi E B
T:iETeB,
ITI~k

If we confine the capacity to the 2-additive capacity :

v(B) = L m(i) + L m(i,j), VB C N,


iEB {i,j}eB

Cv(SN (x)) = L m(i)Sf (x) + L m(i,j)(Sf (x) 1\ Sf (x))


iEN {i,j}eN

under constraints related to m (see [3])

L m(i) + L m(i,j) =1
i {i,j}eN

m(0) = °
m(i) ~ 0, Vi E N

m(i) +L m(i,j) ~ 0, Vi E N, VB C N \ i
jEB

3. Assessment of capacities in sorting problems


Suppose that we have defined a sorting on a reference set of alter-
natives PeA. The capacity related to the sorting function

Cv[St' (x), ... , S;: (x)]


236 AIDING DECISIONS WITH MULTIPLE CRITERIA

is based on the resolution of a linear constraint satisfaction problem.


Let us first reconsider the ordered classification in terms of a digraph
G(A, r) where A is the set of nodes and where the application r : A -+ A
corresponds to the preference relation R (see also [5] and [6]) :

xRy iff xDy or (x E Cir and y E Cis, r> s).


Each subclass Cir can be seen as a partial subgraph G( Cir, r r) where
xrry iff xDYi x, y E Cir.
For each G( Cir, r r) we determine a subclass of non dominating nodes
Ndr = {x E Cir, rr(x) = 0} and a subclass of non dominated nodes
ND r = {x E Cf r , r;:-l(x) = 0} with
Zr = xENd
min Cv(SN (x))
r

The capacity is constrained by the following inequalities :

The capacity is practically constrained by :

Cv(SN(X)) > Cv(SN(y)), for all x E Ndr , Y E NDr-l (1)


and the total number of constraints of the previous type is equal to
t
L INdrl·INDr-ll·
r=2
Indeed, all nodes x E Cir \ (ND r UNdr ) will satisfy Cv(SN(x)) ~ Zr
as there exists a finite path (Xl, ... , x, ... ,Xi), Xl E N Dr, Xi E N dr
which corresponds to XIDx2, X2Dx3, ... , Xi-lDxi and by monotonicity
ofCv :

Cv(SN (Xl)) > Cv(SN (X2)) ~ ... ~ Cv(SN (X)) ~ ... ~ Cv(SN (Xi))
> min Cv(SN (x)) = Zr
xENdr

In order to determine the capacity underlying the Choquet integral,


we use the principle of parsimony.
We first try to use a weighted sum Wv in which case (n - 1) param-
eters have to be determined and all points of view are considered to be
independent.
In case of failure, (there is no solution satisfying the sorting constraints
and the constraints: I:iEN v(i) = 1, v(i) ~ 0) we successively introduce
k-additive measures in the Choquet integral, k going for 2 to n, until we
Ordinal Multiattribute Sorting and Ordering 237

obtain a non-empty set of solution. For each step k, we solve the linear
programme
maxc: (2)
under constraints linked to a k-additive capacity and sorting constraints
of type (1) where the strict inequalities are replaced by
Cv(SN (x)) ~ Cv(SN (y)) + c:, for all x E Nd r , y E ND r- 1
with c: ~ o.
The objective fonction is introduced in order to maximize the separa-
tion between the classes of the sorting.
The stopping rule corresponds to the first k such that c: > o.

4. An illustrative example of ranking


Let us consider a Decision Maker (DM) that is confronted to the
ranking of several cars rated according to Price (p) and Maximum Speed
(s) on the following ordinal 5-point scales:
for (p): every expensive (0), expensive (1), reasonable (2),
low (3), very low (4)
for (s): very slow (0), slow (1), medium (2), high (3),
very high (4)
The DM indicates that he prefers a car of reasonable price and medium
maximum speed to low priced car with slow maximum speed which is
itself preferred to an expensive car with high maximum speed. In terms
of profiles, we have :
((2), (2)) >- ((3), (1)) >- ((1), (3))
For each profile, let us determine Si and Sf considering first that
any profile is potentially plausible (we have however doubts that a very
expensive car with very slow maximum speed practically exists). In this
framework, m = 25 and
x Sp(x) Ss(x) S: (x) Sf (x)

((2), (2)) o o 12/24 12/24


((3), (1)) 10 -10 17/24 7/24
((1), (3)) -10 10 7/24 17/24
The ranking proposed by the DM cannot be captured by a weighted
sum W v . Indeed, we have
24Wv ((2), (2)) = 12v(p) + 12(1 - v(p)) = 12
24Wv ((3), (1)) = 17v(p) + 7(1 - v(P)) = 7 + 1Ov(p)
24Wv ((1), (3)) = 7v(p) + 17(1 - v(p)) = 17 - 1Ov(p)
238 AIDING DECISIONS WITH MULTIPLE CRITERIA

and it is impossible to satisfy

12 > 7 + lOv(p) > 17 - lOv(p)


v(p) < .5 and v(p) > .5.
If a Choquet integral is introduced, one gets:
24Cv ((2), (2)) 12
24Cv ((3), (1)) = 7[1 - v{p)] + 17v(p) = 7 + lOv{p)
24Cv {(1), (3)) = 7[1 - v{s)] + 17v(s) = 7 + lOv(s)
Any solution such that the capacity satisfies

.5> v(p) > v{s)


is convenient and we observe that

v(p, s) = 1 > v(p) + v(s)


which implies that Price and Maximum speed are redundant points of
view.
We observe that the procedure does not satisfy one of the axioms
proposed by Arrow : the independence of irrelevant alternatives. If
we consider that A is defined by a subset of all possible combinations
belonging to the product space X of cardinality ITi=l ni, net scores will
be affected by this reduction and the constraints induced by the ranking
on Choquet integrals will change.
This reduction might occur if "cognitive monsters" like very slow and
very expensive cars are deleted from the set A.
Moreover, Choquet integrals being additive for comonotonic profiles
(see Schmeidler [9]) some specific a priori acceptable proposals of the
DM cannot be treated with the use of Choquet integrals.
We recall that two profiles SN (x) and SN (y) are comonotonic if

{Sf (x) - Sf (x))(sf (y) - Sf (y)) 2:: 0, for all i,j EN

In other words, the ordering of SN (x) coincides with the ordering of


SN(y).
From the well-known commonotonic additivity of a Choquet integral
If SN (x), SN (y), u are comonotonic vectors then
Cv(SN (x)) 2:: Cv(SN (y)) implies
Cv(SN (x) + u) 2:: Cv(SN (y) + u)
Ordinal Multiattribute Sorting and Ordering 239

Suppose that the DM indicates the following preferences: a very low


priced car with medium speed is preferred to a low priced car with high
speed and also a low priced car with medium speed is strictly preferred
to a slow very low priced car.
In terms of Choquet integrals, we should have :

Cv((4), (2)) ~ Cv((3), (3)) ~ Cv((3), (2)) > Cv((4), (1))


However:

Cv((4), (2)) = Cv (~!, ~!) and Cv((3), (3)) = ~: (idempotency).

22 12) (17 17) ( 5)


( 24 '24 ' 24 ' 24 and 0, - 24 are comonotonic vectors
and

( 22 12) >
17 17). .
Cv ( 24 ' 24 Implies
Cv 24' 24

( 22
Cv 24' 24
7) > Cv ( 24'
17 12)
24 = Cv ((3), (2)) or

Cv ((4), (1)) > Cv ((3), (2)) which contradicts the DM's preferences.
5. An illustrative example of sorting
Let us consider a typical sorting problem presented in Greco and al.
[5]. Suppose that a school director wants to assign students to different
classes of merits on the basis of their scores in Mathematics (m), Physics
(p) and Literature (1). The ordinal scales of the evaluation in the three
courses as well as the global evaluation scale have been composed of
three grades : "bad" (B), "medium"(M), "good" (G).
Table 1 reproduces the evaluation of twenty seven students with re-
spect to three criteria and a sorting decision made by the director to-
gether with the net scores and the values of the Choquet integrals
(m = 27). In this case P = A.
Figure 1 presents the subgraphs G(Cfr, rr), r = 3,2,1 that corre-
sponds to the classes of "good" students (r = 3), "medium" students
(r = 2) and "bad" students (r = 1), as a Hasse diagram. The non-
dominating sets and non-dominated sets are :
NDa = ({GGG)}, Nda = ({GMM),{MGM)},
ND2 = ({MMG), (GGB)}, Nd2 = ({GMB), (MGB), (MMM)},
NDI = ({BGG), (GBG), (MMB)}, Nd 1 = ({BBB)}
240 AIDING DECISIONS WITH MULTIPLE CRITERIA

We now turn to a solution of the sorting that is given by the director.


A weighted sum cannot be used as sorting function as it can be easily
seen by the following contradictory constraints :
Table 1 Evaluation of 27 students, related net scores and Choquet
integral

Student Decision 26SmN 26SpN 268;' 26Cv

81 (BBB) B 4 4 4 4
82 (MBB) B 13 4 4 4 + 9v(m)
83 (GBB) B 22 4 4 4 + 18v(m)
84 (BMB) B 4 13 4 4 + 9v(p)
85 (MMB) B 13 13 4 4 + 9v(m,p)
86 (GMB) M 22 13 4 4 + 9v(m,p) + 9v(m)
87 (BGB) B 4 22 4 4 + 18v(p)
88 (MGB) M 13 22 4 4 + 9v(m,p) + 9v(P)
89 (GGB) M 22 22 4 4 + 18v(m,p)
810 (BBM) B 4 4 13 4 + 9v(£)
811 (MBM) B 13 4 13 4 + 9v(m, £)
812 (GBM) B 22 4 13 4 + 9v(£, m) + 9v(m)
813 (BMM) B 4 13 13 4 + 9v(p,£)
814 (MMM) M 13 13 13 13
815 (GMM) G 22 13 13 13 + 9v(m)
816 (BGM) B 4 22 13 4 + 9v(p, £) + 9v(p)
817 (MGM) G 13 22 13 13 + 9v(p)
818 (GGM) G 22 22 13 13 + 9v(m,p)
819 (BBG) B 4 4 22 4 + 18v(£)
820 (MBG) B 13 4 22 4 + 9v(m,£) + 9v(£)
821 (GBG) B 22 4 22 4 + 18v(m, £)
822 (BMG) B 4 13 22 4 + 9v(p, £) + 9v(£)
823 (MMG) M 13 13 22 13 + 9v(£)
824 (GMG) G 22 13 22 13 + 9v(m,£)
825 (BGG) B 4 22 22 4 + 18v(p, £)
826 (MGG) G 13 22 22 13 + 9v(p, £)
827 (GGG) G 22 22 22 22
Ordinal Multiattribute Sorting and Ordering 241

Figure 1. Hasse diagram related to 27 students and values of Choquet integral

Wv(MMM) > Wv(BGG) and Wv(MMM) > Wv(GBG). Indeed


(MMM) belongs to class 2 (M) and (BGG) together with (GBG) be-
long to class 1 (B).

26Wv(MMM) > 26Wv (BGG) => 13> 4 + 18v(P,£) =


4 + 18(v(p) + v(£))
=> v(P) + v(£) < ~
26Wv(MMM) > 26Wv (GBG) => 13> 4 + 18v(m,£) = 22 -18v(p)
=> v(p) > ~, a contradiction.

If one consider a Choquet integral, many non additive capacities sat-


isfy the following constraints :

v(0) = 0, v(m,p, £) = 1
1 ~ v(m,p) ~ v(m) ~ 0, 1 ~ v(m,p) ~ v(p) ~ 0,
1 ~ v(m,£) ~ v(m) ~ 0, 1 ~ v(m,£) ~ v(£) ~ 0,
1 ~ v(p,£) ~ v(p), 1 ~ v(p,£) ~ v(£)
242 AIDING DECISIONS WITH MULTIPLE CRITERIA

13 + 9v(p) > 13 + 9v(£) (Cv(MGM) > Cv(MMG))


> 4 + 18v(m,p) (Cv(MGM) > Cv(GGB))
13 + 9v(m) > 13 + 9v(£) (Cv(GMM) > Cv(MMG))
> 4 + 18v(m,p) (Cv(GMM) > Cv(GGB))
4 + 9[v(m,p) + v(p)] > 4 + 18v(p, £) (Cv(MGB) > Cv(BGG))
> 4 + 18v(m, £) (Cv(MGB) > Cv(GBG))
> 4 + 9v(m,p) (Cv(MGB) > Cv(MMB))
4 + 9[v(m,p) + v(m)] > 4 + 18v(p, £) (Cv(GMB) > Cv(BGG))
> 4 + 18v(m, £) (Cv(GMB) > Cv(GBG))
> 4 + 9v(m,p) (Cv(GMB) > Cv(MMB))
13 > 4 + 18v(p, £) (Cv(MMM) > Cv(BGG))
> 4 + 18v(m, £) (Cv(MMM) > Cv(GBG))
> 4 + 9v(m,p) (Cv(MMM) > Cv(MMB))
We consider the feasible capacity (there is no 2-additive capacity that
satisfies the strict constraints) that maximizes the objective function (2)

v(m,p, £) = 1, v(m,p) = .5, v(m, £) = v(p, £) = .25,


v(m) = v(p) = .25, v(£) = 0
with c = 0.0865.
The particular values of the Choquet integrals appear in Figure 1
between braquets, and we conclude with the presentation of the sorting
rule:
If Cv(x) 2:: .587, then student (x) is "good"
If Cv(x) 2:: .413, then student (x) is at least "medium"
otherwise student (x) is "bad".
As a matter of comparison we give the set of "at least" decision rules
that corresponds to our example as given by Greco and al. [6].
1 If "Mathematics" is good and "Physics" and "Literature" are at
least medium, then student is "good".
2 If "Physics" is good and "Mathematics" and "Literature" are at
least medium, then student is "good".
3 If "Mathematics", "Physics" and "Literature" are at least medium,
then student is "at least medium" ..
4 If "Mathematics" is good and "Physics" is at least medium, then
student is "at least medium".
5 If "Physics" is good and "Mathematics" is at least medium, then
student is "at least medium".
Ordinal Multiattribute Sorting and Ordering 243

6 All uncovered students are bad.


These rules represent completely the decision policy of the director
and use 13 elementary conditions, i.e. 16% of all descriptors from the
condition part of Table 1.

6. Learning with prototypes


It is rather unusual to obtain from the DM a sorting related to
every possible combination of all points of each ordinal scale in order
to obtain a training set of m = I1i ni profiles. Practically, we will get
a reference subset PeA of profiles called "prototypes" that will be
classified by the DM and we proceed to a supervised learning procedure.
From these specific alternatives we will learn the values of a feasible
capacity using the principle of parsimony.
We now introduce the following notation:

Cf~ := Cfr n P, Nd~ := Ndr n P, and ND~ := ND r n P.


Let us come back to our example and suppose that the director only
provides information about six students :
(MGG) and (GM M) are members of Ct; ("good" students)
(MMM) and (MGB) are members of Ct~ ("medium" students)
(BGG) and (GBG) are members of Ct; ("bad" students)

Figure 2 presents the Hasse diagram related to the prototypes


ND; = Nd3= {(MGG), (GMM)}
ND~ = Nd~ = {(MMM), (MGB)}
NDI = Nd~ = {(BGG), (GBG)}
Figure 3 represents the subclasses of students that are automatically
classified in a non ambiguous way by the prototypes (due to dominance
relation). Six students remain unclassified (dominance relation between
them and the classified students is also indicated) :
(GM B) and (M BG) belong to any of the three classes
(GGB), (MGM) and (M MG) belong either to Cl3(G), either to Cl2(M)
(MMB) belongs to Cl 2(M) or Cll(B).

The set of "at least" decision rules corresponds to


1 If "Mathematics" is at least medium and "Physics" and "Litera-
ture" are good, then student is "good".
2 If "Mathematics" is good and "Physics" is at least medium, then
student is "good".
244 AIDING DECISIONS WITH MULTIPLE CRITERIA

E---~CI'3

Figure 2. Hasse diagram related to G( CR r , r r) for six prototypic students

Figure 3. Hasse diagram for 27 students

3 If "Mathematics" and "Physics" are at least medium, then student


is "at least medium".

4 All uncovered students are bad.

From these rules, we learn that :


(GMB) and (GGB) are classified as (G) by rule (2), (MMB), (MMG)
and (MGM) are classified as (M) by rule (3) and (MBG) is classified
as (B) by default.
Let us now use the Choquet integral as an aggregating function. From
constraints :
REFERENCES 245

Wv(MMM) > Wv(BGG) and Wv(MMM) > Wv(GBG) we know that


no weighted sum can be used.
We now try to obtain a 2-additive capacity that satisfies constraints :
Cv(MGG) > Cv(MGB) , Cv(MGG) > Cv(MMM),
Cv(GMM) > Cv(MMM) , Cv(GMM) > Cv(MGB),
Cv(MMM) > Cv(BGG) , Cv(MMM) > Cv(GBG),
Cv(MGB) > Cv(BGG) , Cv(MGB) > Cv(GBG),
and we get f = 0.1154 and
v(m,p,£) = 1, v(m,p) = .8333, v(m,£) = v(P,£) = .3333,
v(P) = .1666, v(m) = .3333, v(£) = 0

that corresponds to the following Mobius tranform :

m(m,p,£) = 0, m(m,p) = .3333, m(m,£) = 0, m(p,£) = .1666,


m(m) = .3333, m(p) = .1666, m(£) = 0

together with the sorting rules :

If CV (x) ~ .615 then student (x) is "good",


If Cv(x) ~ .5 then student (x) is "at least medium",
Choquet integrals of (MGG), (GMM), (MMM), (MGB), being respec-
tively equal to .615, .615, .5, .5.
The unclassified students receive the following scores :
Cv(GGB) = {4 + 18v(m,p)}/26 = .731
Cv(MGM) = {13 + 9v(p)}/26 = .558
Cv(MMG) = {13 + 9v(£)}/26 = .5
Cv(GMB) = {4 + 9[v(m,p) + v(m)]} /26 = .558
Cv(MMB) = {4 + 9v(m,p)}/26 = .442
Cv(MBG) = {4 + 9[v(m, £) + v (£)]} /26 = .269
(GGB) and (MGM) enter the class of "good students", (MMG) and
(GMB) are considered to be "medium" and (MMB) together with
(M BG) are "bad" students.
The results related to this small example do not represent any valida-
tion of both learning procedures but indicate the links between the rule
based and discriminant function approaches.

Acknowledgment
The author is grateful to an anonymous referee whose remarks im-
proved the contents of the part related to the rule based methodology.
246 AIDING DECISIONS WITH MULTIPLE CRITERIA

References
[IJ Choquet, G., (1953), Theory of capacities, Annales de l'Institut Fourier, 5, 131-
295.
[2J Grabisch, M. (1997), k-order additive discrete fuzzy measures and their represen-
tation, Fuzzy Sets and Systems 92, 167-189.
[3J Grabisch, M., Roubens, M. (2000), Application of Choquet Integral in Multicrite-
ria Decision Making, in Grabisch, M., Murofushi, T. and Sugeno, M. (Eds) Fuzzy
Measures and Integrals, Theory and Applications, Physica Verlag, Heidelberg.
[4J Grabisch, M., Marichal, J.-L., Roubens, M. (2000), Equivalent representations of
set functions, Mathematics of Operations Research 25/2, 157-178.
[5J Greco, S., Matarazzo, B., Slowinski, R. (1998), A rough set approach to multicri-
teria and multiattribute classification, in Polkowski, L. and Skowron, A. (Eds),
Rough sets and current trends in computing, vol. 1424 of Lecture Notes in Arti-
ficial Intelligence, 60-67, Springer Verlag, Berlin.
[6J Greco, S., Matarazzo, B., Slowinski, R. (2000), Conjoint measurement and rough
set approach for multicriteria sorting problems in presence of ordinal criteria, in:
Colorni, A., Parruccini, M., Roy, B. (Eds), Selected papers from 49th and 50th
Meeting of the EURO working group on MCDA, EUR-Report, Ispra-Paris, in
print.
[7J Marichal, J.-L. (2000), Behaviorial Analysis of Aggregation in Multicriteria De-
cision Aid, in Fodor, J., De Baets, B., and Perny, P. (Eds), Preferences and
Decisions under Incomplete Knowledge, Physica Verlag, Heidelberg.
[8J Marichal, J.-L. (2000), An axiomatic approach of the discrete Choquet integral
as a tool to aggregate interacting criteria, IEEE Trans. Fuzzy Syst., to appear.
[9J Schmeidler, D. (1986), Integral representation without additivity, Roc. Amer.
Math. Soc. 97, 255-261.
IV

PREFERENCE MODELING
MULTIATTRIBUTE INTERVAL ORDERS

Peter C. Fishburn
AT&T Shannon Laboratory
fish@research.att.com

Abstract This paper describes and analyzes a simple additive-utility threshold


representation for preferences on multiattribute alternatives in which
the marginal preference relation on each attribute is an interval or-
der. The representation is related to multiattribute models discussed
by Doignon (1984), Doignon, Monjardet, Roubens and Vincke (1986),
Suppes, Krantz, Luce and Tversky (1989), and Piriot and Vincke (1997),
among others, but has features that appear to be new. The paper's pur-
pose is not so much to advocate yet another multiattribute model as it
is to allow an exposition of issues in multiattribute/multicriteria deci-
sion theory that have influenced the field during the past thirty to forty
years.

Keywords: MCMDj Preference modelingj Interval orders

1. Introduction
In this introduction we present basic terminology and assumptions
along with two versions of our focal representation. Section 2 describes
implications of the representation and comments on some of its features
and specializations. Section 3 takes a closer look at the model's indepen-
dence or cancellation conditions and notes a scheme of such conditions
that is necessary and sufficient for the representation. The paper con-
cludes with a brief discussion.
We assume throughout that >-, interpreted as strict preference, is an
asymmetric binary relation on a nonempty finite set X = Xl X X 2 x··· X
X n . We denote the symmetric complement of>- by so X Y if neither
f"V, f"V

x >- y nor y >- x. It is customary to interpret as an indifference relation


f"V

even though the reasons for x y might vary from pair to pair. For
f"V

example, x y in one case because x and yare nearly indistinguishable,


f"V

whereas x y in another case because the two alternatives are, in some


f"V

sense, incomparable.
250 AIDING DECISIONS WITH MULTIPLE CRITERIA

It will be assumed that the attribute sets Xl through Xn are mu-


tually disjoint. Then UXi is the set of all levels of all attributes, and
every member of UXi is associated with a single attribute. Given x =
(XI, X2,"" xn) and Y = (YI, Y2,'" ,Yn) in X, let
I(x,y) = {i: Xi =f Yi} .
An important feature of our basic representation is that a preference
comparison between x and Y depends solely on the attributes at which
they differ.
Let I, g, and l be real-valued functions on UXi. We write I::; 9
when I(a) ::; g(a) for all a E UXi , and write l ;::: 0 when l(a) ;::: 0 for
all a E UXi. Given 1 ::; g, it is useful to view [J(a), g(a)] as a real
interval for a. When l(a) = g(a) - I(a), 1 is a length function. Our basic
representation is
MODEL 1. There are I,g : UXi -7 IR with 1 ::; 9 such that, for all
x,yEX,
x >- Y {:} L [J(Xi) - g(Yi)] > O.
iEI(x,y)
According to Model 1, x is strictly preferred to Y precisely when the
sum over I(x, y) of the left endpoints of Xi'S intervals exceeds the sum
over I(x, y) of the right endpoints of Yi'S intervals. An equivalent version
is obtained when 9 is replaced by 1 = 9 - I.
MODEL 1*. There are 1,1 : UXi -7 IR with l ;::: 0 such that, for all
x,yEX,
n
X >- Y {:} L[J(Xi) - I(Yi)] > L l(Yi).
i=l iEI(x,y)
When 1 is viewed as a threshold function, Model 1* says that thresh-
olds are additive over attributes where x and Y differ. I have written
~~=I[I(Xi) - I(Yi)] instead of the equivalent ~iEI(x,y)[J(Xi) - I(Yi)] to
make it clear that I(x, y) affects only the threshold feature of the model.
As a final point of introduction, we recall definitions and numerical
representations of several binary relations. Let P be an asymmetric
binary relation on a nonempty finite set A, and let I be the symmetric
complement of P. The relation P, which is viewed as a strict or strong
preference relation, is cyclic if there are aI, a2, ... , am E A with m ;::: 2 for
which alPa2P'" PamPal. Representations for cyclic P are described
by Tversky (1969), Suppes et al. (1989), Fishburn (1991), and Pirlot
and Vincke (1997), but will not be considered further here. With respect
to all a, b, c, d E A, five increasingly restricted P relations are defined by
Multiattribute Interval Orders 251

the following properties:


acyclic: P is not cyclicj
partial order: (aPb, bPc) => aPc [transitivitylj
interval order: (aPb, cPd) => aPd or cPbj
semiorder: (aPb, cPd) => aPd or cPbj aPbPc => aPd or dPej
weak order: aPb => aPe or ePb.
Relation I is reflexive (ala) and symmetric (alb => bla) for all five, but
it is transitive (alb and blc => alc) if and only if P is a weak order.
Utility representations for four of the five have the following forms with
u : A --+ R, 81/(a, b) ~ 0 for all (a, b) E A x A with 81/(a, b) = 81/(b, a),
8' : A --+ R with 8' ~ 0, and constant 8 ~ 0:
acyclic: aPb ¢:} u(a) > u(b) + 81/(a,b)
interval order: aPb ¢:} u(a) > u(b) + 8'(b)
semiorder: aPb ¢:} u(a) > u(b) + 8
weak order: aPb ¢:} u(a) > u(b).
The acyclic representation is due to Abbas and Vincke (1993), and the
constant-threshold semiorder representation is due to Scott and Suppes
(1958). Acyclic relations and partial orders also have the one-way rep-
resentation aPb => u(a) > u(b). We note also that interval orders and
semiorders have the following representations with u, v : A --+ Rand
u ::; v:
interval order: aPb ¢:} u(a) > v(b)j
semiorder: aPb ¢:} u(a) > v(b), aPbPc => u(a) > v(d) or u(d) > v(e).
The latter representation is decidedly less elegant than the Scott-Suppes
representation. However, it is suitablp- for Modell when the marginal
preference relations for all attributes are semiorders, whereas the Scott-
Suppes form may be inapplicable for Model 1* under the same marginal
preference conditions. Proofs for all representations in this paragraph
apart from the Abbas-Vincke model for acyclic P are included in Fish-
burn (1985).

2. Implications
This section identifies implications of Modell or Modell * (Lemmas 1
through 4), then notes special cases of the model. The inapplicability of
the Scott-Suppes representation for Modell * when attribute preference
relations are semiorders is verified by the proof of Theorem 1. The
section concludes with critical comments on the model. Prior to that,
Model 1 or Model 1* is assumed to hold with f, g, and I = 9 - f as
defined therein, and with n ~ 2.
252 AIDING DECISIONS WITH MULTIPLE CRITERIA

We begin by defining a strict preference relation >-i on attribute set


Xi by
Xi h Yi if X >- Y whenever Xj = Yj for all j i- i.
Also let ""i be the symmetric complement of h on Xi' Let Ii, gi, and
li be the restrictions of f, g, and 1 on Xi. Then
Xi h Yi {:} h(Xi) > gi(Yi) {:} h(Xi) > h(Yi) + li(Yi).
Lemma 1 >-i on Xi is an interval order for i = 1, ... ,n.
Modell is unaffected by similar affine transformations of the (h, gi).
In particular, if Co > 0 and if Ci E IR for i = 1, ... ,n, then the UI, gD
satisfy Modell in place of the (h, gi) when
ff(xi) = COh(Xi) + Ci, gHXi) = cogi(xd + Ci
for all Xi E Xi, i = 1, ... ,n. The full uniqueness status of f and 9 is more
complex than this because of the finite nature of the representation.
We observe next that uniformity of ""i or of >-i across attributes has
the anticipated effect on their holistic counterparts.
Lemma 2 If Xi ""i Yi for all i, then x"" y. If Xi h Yi for all i E I(x, y),
then X >- y.

Proof. If Xi ""i Yi for all i, then f(xi) ::; g(Yd for all i, so not
(x >- y). Similarly, not(y >- x), so x "" y. If Xi h Yi for all i E I(x,y),
then f(xd > g(Yi) for all i E I(x, Y), so L:I(x,y)[J(Xi) - g(Yi) > 0 and
x >- y . •
Even when n is large, x >- Y is not assured when Xi ""i Yi and Xj'r---j-Yj
for all j i- i. For example, if Xl ""1 YI and Xj >-j Yj for j = 2, ... , n, we
have x"" Y iflf(xd-g(Ydl > L:j~2[J(Xj)-g(Yj)]. This may be unlikely,
but is plausible when one attribute is substantially more important than
the others.
We have already remarked on the importance of I(x, Y), whereby
attributes for which Xi = Yi have no bearing on X >- y. If I(x, y) were
omitted from Modell, with L:~=I replacing L:iEI(x,y) , then >- on X
would be an interval order. We note shortly that this need not be true
for ModelL First, an elementary observation about I.
Lemma 3 I(x,z) ~ I(x,y) UI(y,z) for all X,y,Z E X.

Proof. If Xi i- Zi then either Yi i- Zi, whence i E I(y, z), or Yi = Zi,


in which case i E I(x, y). •
Lemma 4 >- on X is a partial order but is not necessarily an interval
order.
Multiattribute Interval Orders 253
Proof. If x )- Y and Y )- Z then, by Model 1* and Lemma 3,
n n n
L[J(Xi) - !(Zi)] L[!(Xi) - !(Yi)] + L[!(Yi) - !(Zi)]
i=l i=l i=l

iEI(x,y) iEI(y,z)

> L l(Zi).
iEI(x,z)

Hence )- is transitive, so it is a partial order.


To show that )- need not be an interval order, it suffices to take
n = 2. Let x = (Xl, X2), Y = (Xl, Y2), Z = (Zl' Z2) and W = (WI, Z2) with
I{XI, Zl, wIll = I{X2, Y2, zdl = 3. Then

X )- Y {:} !(X2) - !(Y2) > l(Y2)


Z )- W {:} !(zt} - !(WI) > l(WI)
not(x )- w) {:} !(xt} + !(X2) - !(WI) - !(Z2) :::; l(wt} + l(Z2)
not(z )- y) {:} !(zt} + !(Z2) - !(xt} - !(Y2) :::; l(xt} + l(Y2) ,

and it is easily seen that all four inequalities can hold simultaneously. •
Our first special case of Model 1 is the basic additive utility repre-
sentation in which! = 9 or 1 == 0 with )- and every )-i a weak order.
Axioms for this case are included in Scott (1964), Krantz, Luce, Suppes
and Tversky (1971), and Fishburn (1970).
A second specialization arises when every )-i is a semiorder. The
threshold structure for this case is more uniform than for the interval
orders case, but a proof like that for the second part of Lemma 4 shows
that )- need not be a semi order or an interval order. In addition, the
Scott-Suppes semiorder representation for each h may be inapplicable
for Modell.

Theorem 1 If Modell * holds and every )-i is a semiorder, then there


may be no ! that satisfies Model 1* when 1 is constant on each Xi.

and
bl )-2 b3 )-2 b4 )-2 b5 , b2 )-2 b4, bl rv2 b2 ""2 b3 .
Then )-1 and )-2 are semiorders. Suppose Modell * holds with 1 == (St ~ 0
on Xl and 1 == 02 ~ 0 on X 2. Suppose further that (a2,bd )- (al,b3)
254 AIDING DECISIONS WITH MULTIPLE CRITERIA

and (a3, b5) >- (a5, b4), which are consistent with the original form of the
model. We have

f(ad - f(a2) > (h by a1 >-1 a2


f(bd - f(b 3) > 82 by b1 >-2 b3
f(b3) + 282 > f(bd by b1 "'2 b2 "'2 b3 .
Addition of the first three inequalities and two times the fourth inequal-
ity implies 82 > 81. We also have

f(a3) - f(a5) > 81 by a3 >-1 a5


f(b4) - f(b 5) > 82 by b4 >-2 b5
f(a5) + 281 > f(a3)
and a similar addition implies 81 > 82, This contradicts 82 > 81 and
verifies the theorem. •
Theorem 1 raises questions about conditions for >- on X that are
necessary and sufficient for each of two semi order versions of Modell.
The first version is Modell when every h is a semiorder. The second
is the more restrictive representation
n
X >- Y ¢:} L)f(Xi) - f(Yi)] > L 8i ,
i=1 iEI(x,y)

with l constant at 8i ~ 0 on Xi. Although I shall not do so here, both


questions can be answered by applications of the methodology in the
next section.
There are also fully or partly lexicographic specializations of Model
1. The simplest example is the fully lexicographic case where each >-i
on Xi is a weak order and

x >- Y if not (Xi "'i Yi) for some i, and Xi h Yi for


the smallest i for which not(xi "'i Yi).

Modell * holds here when l == 0 and, for each i < n for which f on Xi
is not constant,
n
min{J(xi) - f(Yi) : Xi h Yi} > L max{J(Xj) - f(Yj} : Xj >-j Yj}·
j=i+l
Multiattribute Interval Orders 255

Axioms for this case that do not presume a lexicographic hierarchy


directly are described in Fishburn (1975). Less extreme lexicographic
versions with thresholds allow limited tradeoffs among attributes. An
example is the multiattribute semiorder representation with differential
attribute scaling where

and
n
X >- Y {::} I)!{Xi) - !(Yi)] > II{x, y)l·
i=l

Attributes with small indices will tend to dominate preference compar-


isons, but those with large indices can be determinative when the !
differences for the small indices are small.
We conclude this section with critical comments on Model 1 and re-
lated models. Many of our concerns are discussed at greater length in
Suppes et al. (1989, Chapter 16) and Pirlot and Vincke (1997, Chapter
6).
A basic issue is how one approaches multiattribute and multicriteria
problems. Our approach is highly structured with a single preference
relation on a product set. In practice, one often formulates a menu
of holistic options without a complete product structure, and identifies
salient criteria for comparing options. A constructive procedure is used
for holistic comparisons, often with two or more levels of preference.
This approach has been developed by Roy (1968, 1991) and others in
the ELECTRE, PROMETHEE, and related procedures. A good ex-
ample is Hokkanen and Salminen (1997). One interesting aspect is the
axiomatization of double threshold orders in Vincke (1988) and Tsoukias
and Vincke (1998), where a 'weak preference' relation is sandwiched be-
tween strong preference and indifference to reflect bare but discernible
preferences. A natural extension of the levels-of-discernment paradigm
assigns probabilities to preferences or choices in theories of probabilistic,
random, or stochastic choice (Suppes et al., 1989, Chapter 17; Fishburn,
1998).
A potential shortcoming of Modell and similar multiattribute prefer-
ence representations is assumptions of independence or additivity across
attributes. Many problems have valuewise interdependencies among at-
tributes that invalidate independence. This can be alleviated by group-
ing related attributes or criteria into super-criteria, but there is always
the danger that salient interactions will go unrecognized.
When independence across attributes is defensible, another concern is
the particular algebraic aggregation structure of a representation. Model
256 AIDING DECISIONS WITH MULTIPLE CRITERIA

1 uses a rather simple structure. Other models involve more complex


aggregation with monotonic transformations, step functions, and other
features designed to accommodate perceived deficiencies of straightfor-
ward additive forms.
The way that thresholds are treated is another matter of importance.
Model 1* excludes thresholds for attributes with identical levels, but
questions can be raised about its additive accumulation of thresholds
for the other attributes. One criticism of additive accumulation is that
it can produce widespread indifference and dilute >-. This can be offset
with relatively short intervals, but the possibility remains that there may
be better ways to accumulate thresholds across attributes.

3. Independence
We conclude our technical analysis of Model 1 by noting a scheme
of independence/cancellation conditions that is necessary and sufficient
for the model. We precede it with a simpler scheme that is necessary
but not sufficient. Both are based on a cancellation or balance relation
E between finite lists of members of X. Given m E {I, 2, ... } and
1 ... ,yffi EX ,wewne
x 1, ... ,xffi ,y, 't

(Xl, ... , Xffi)E(yl, ... , yffi) if yI, yr, ... , yrn if a permutation
of Xi1, Xi2, ... , Xiffi£or z'-1
- , ... , n.

In other words, (xl, ... ,Xffi)E(yl, . .. ,yffi) ifthere is an identity bijection


between the attribute components in the two lists.
Suppose Modell * holds. If (xl, ... ,x ffi )E(yl, ... ,yffi) and x j >- yj
for j = 1, ... , m, then
ffi n ffi
0= L L[j(x{) - f(y{)] > L L l(y{) ~ 0
j=l i=l j=l iEI(xi ,yi)

and we obtain the contradiction 0 > O. Hence the following scheme is


necessary for Modell.
SI. If m ~ 1, (xl, ... , xffi)E(yl, . .. ,yffi) and x j >- yj for all j < m,
then not (Xffi >- yffi).
When m = 2, this implies among other things that >- is asymmetric.
Its insufficiency for Modell, or for any additive model in which >- is a
partial order, is shown by the fact that it does not imply transitivity.
We have (x,y,z)E(y,z,x), so the m = 3 part of SI says that if x >- y
and y >- z then not(z >- x), but this allows x '" z as well as x >- z. In
fact, as shown by Theorem 4.1A in Fishburn (1970), SI is necessary and
Multiattribute Interval Orders 257

sufficient for the acyclic additive representation


n
X ~ Y ~ ~)f(xi) - f(Yi)] > 0 .
i=l

A stronger scheme than 81 is clearly needed to imply Modell. We


will use 82.
82. If m ~ 1, (xl, ... , xm)E(yl, .. . , ym), 1 ~ k ~ m, and for every
a E UXi ,
I{j ~ k : a = yf and i E I (x j , vi)} I ~
I{k < j ~ m :a = x{ and i E I (x j , yj)} I ,
then it is false that
(1) x j ~ yj for j = 1, ... , k
(II) x j '" yj for j = k + 1, ... , m .
When k = m throughout 82, it reduces to 81. The added strength of
82 lies in its k < m parts. To see how this relates to Model 1*, suppose
the model holds and 82 is false. Then there are k and m that satisfy the
hypotheses of 82 and also satisfy (1) and (II):
k n k

(1) ~ L L[f(x{) - f(vi)] > L L l(vi)


j=li=l j=l iEI(xi ,yi)
m n m
(II) ~ L L[J(yf) - f(x{)] ~ L L l(x{) ,
j=k+l i=l j=k+l iEI(xi ,yi)

where in (II) we have used yj '" xj. Because (xl, ... , xm)E(yl, . .. ,ym),
the left sides of the preceding inequalities are equal, and therefore
m k
L L l(xi) > L L l(vi)·
j=k+l iEI(xi ,yi) j=l iEI(xi ,yi)
However, the new ~ hypothesis of 82 implies
k m
L L l(vi) ~ L l(x{)
j=l iEI(xi ,yi) j=k+l iEI(xi ,yi)

because 1 ~ 0, and we have a contradiction. It follows that 82 is neces-


sary for Modell. We prove shortly that it is also sufficient.
258 AIDING DECISIONS WITH MULTIPLE CRITERIA

Theorem 2 Modell holds if and only if 82 holds.


Before proving sufficiency, we note how 82 implies that >- is transitive.
Suppose x >- y and y >- z. By 81, or k = m = 3 in S2, we have x >- z or
z '" x. Now take k = 2 and m = 3 in 82 with
(x 1,x 2,x3) = (x,y,z)
(y1,y2,y3) (y,z,x).
If i E I(z,x), so that Zi is an xl on the right of 2: in 82's hypotheses,
then either Yi =/:. Zi or Xi =/:. Yi = Zi, so that Zi is yl or y; on the left of 2:
with i E I(x j , yj). Then all hypotheses hold, so we conclude that Z '" x
is false. Hence x >- Z and therefore >- is transitive.

Sufficiency proof of Theorem 2. Assume that 82 holds. We show


that the representation of Model 1 follows from solution theory for a
finite system of linear inequalities as described, for example, in Lemmas
5.2 and 5.3 in Fishburn (1972).
Let N = I U Xi I, and identify the members of UXi as C1, C2, ... , CN·
Let Sj = f(cj) and tj = g(Cj) in a potential solution for Modell, and let
(s, t) = (81, ... ,8 N, t1, ... , tN)' Also let (a, (3) = (a1' ... ,aN, {31, ... ,(3 N)
be a vector in {O, 1, -1 }2N, and let ((a, (3), (8, t)) denote the inner prod-
uct Eaj8j + E{3jtj of (a, (3) and (8, t). For every (x, y) E X X X define
(a, (3)xy as follows:
aj = 1 if Cj is an Xi with i E I(x, y), aj = 0 otherwise;
(3j = -1 if Cj is a Yi with i E I(x,y), {3j = 0 otherwise.
Also define (a,{3)j byaj = 1, {3j = -1, and (ap,{3p) = (0,0) for all
p =/:. j. Then Model 1 holds if and only if the following system of linear
inequalities has an (8, t) solution:
(i) ({a, (3)xy, (8, t)) °
> for every x >- y;
(ii) ({a, (3)xy, (8, t)) ~ 0 for every x'" y;
(iii) ({a, (3)j, (8, t)) ~ °for j = 1, ... , N,
where (iii) is tantamount to f ~ g.
By linear solution theory, this system has no (s, t) solution if and only
if there are nonnegative integers, one for each x >- y in (i), at least one
of which is positive, and nonpositive integers, one for each x '" y in (ii)
and for each j in (iii), such that, for every j E {I, ... ,N},
(iv) the sum of all aj in (i)-(iii) multiplied by their corresponding in-
tegers equals 0;
Multiattribute Interval Orders 259

(v) the sum of all {3j in (i)-(iii) multiplied by the same corresponding
integers equals O.
This gives 2N equations, two for each Cj.
Proceeding under the supposition that there is no (8, t) solution, let
k ~ 1 be the sum of the integers for (i), let m - k ~ 0 be the sum of
the absolute values of the integers for (ii), and let rj ~ 0 be the absolute
value of the integer for j in (iii). Then, with multiples of the x >- y
and x '" y pairs according to the corresponding integers, we have lists
xl >- yl, ... , xk >- yk and x k+ l '" yk+1, ... ,xm '" ym such that, for every
Cj E Xi and every i E {I, ... ,n}, (iv) and (v) yield
I{l :S 'Y :S k: xl = Cj and i E I(x'Y, y'Y)}1
-I{k < 'Y :S m : xl = Cj and i E I(x'Y, y'Y)}1 - rj =0

1{1 :S 'Y:S k: YI = Cj and i E I(x'Y,y'Y)}1


-I{k < 'Y :S m : YI = Cj and i E I(x'Y, y'Y)}1 - rj = O.
As a notational convenience, interchange x'Y and y'Y in each x'Y '" y'Y with
'Y > k in these equations so that, for all Cj E Xi and all i E {I, ... , n},
the two equations for Cj yield
I{l :S 'Y :S m : xl = Cj and i E I(x'Y, y'Y)}1
= 1{1:S 'Y :S m : YI = Cj and i E I(x'Y, y'Y)}1
with
I{l :S 'Y :S k : YI = Cj and i E I(x'Y, y'Y)}1
I{k < 'Y:S m: xl = Cj and i E I(x'Y,y'Y)}1 = rj ~ O.
If i ¢ I(x'Y, y'Y), then Cj equals both xl and YI
or equals neither xl nor
YI. We can therefore omit the I condition from the first of the preceding
two equations to obtain
I{l :S 'Y :S m: xl = Cj}1 = I{l :S 'Y:S m: YI = Cj}1
with
1{1:S 'Y:S k: YI = Cj, i E I(x'Y,y'Y)}1
> I{k < 'Y:S m: xl = Cj, i E I(x'Y,y'Y)}I·
These hold for all Cj E Xi and all i E {I, ... , n}. The equations imply
(xl, ... ,xm)E(yl, ... ,ym), and the inequalities imply the ~ hypotheses
of 82. Moreover, (I) and (II) of 82 hold by construction. This contradicts
our assumption that 82 holds, and we conclude that (i)-(iii) have an (8, t)
solution. Hence 82=} ModelL.
260 AIDING DECISIONS WITH MULTIPLE CRITERIA

4. Discussion
Several approaches to multiattribute/multicriteria decision making
have risen to prominence during the past generation. The present paper
focuses on the approach in which a primitive strict preference relation
on a product set is represented numerically by an algebraic structure
based on the attributes of the product sets. Its particular emphasis is
an additive representation with attribute-preference interval orders and
strict-preference thresholds that accumulate additively over attributes
on which alteratives differ.
I have mentioned aspects of the representation that are liable to refu-
tation, including independence across attributes and its algebraic struc-
ture. A related concern is model validation, testing S2 or some of its
simpler implications as in Lemmas 1, 2 and 4. If the model is grossly
inadequate, this can usually be discovered quickly. A more interesting
situation occurs when no simple refutation is uncovered. If small-m cases
of S2 hold, then the representation is probably adequate for all practical
purposes. It is true for finite X that S2 holds in general if it holds up
to some value m* of m for given values of the lXii, but m* will usually
be large enough to discourage extensive testing of S2 for nearby values
ofm.

References
Abbas, M. and Ph. Vincke: Preference structures and threshold models, Journal of
Multi-Criteria Decision Analysis 2 (1993), 171-178.
Doignon, J.-P.: Threshold representations of preference relations, manu-script (1984).
Doignon, J.-P., B. Monjardet, M. Roubens and Ph. Vincke: Biorder families, valued
relations, and preference modelling, Journal of Mathematical Psychology 30 (1986),
435-480.
Fishburn, P. C.: Utility Theory for Decision Making. New York: Wiley, 1970.
Fishburn, P. C.: Mathematics of Decision Theory. Paris: Mouton, 1972.
Fishburn, P. C.: Axioms for lexicographic preferences, Review of Economic Studies
42 (1975), 415-419.
Fishburn, P. C.: Interval Orders and Interval Graphs: A Study of Partially Ordered
Sets. New York: Wiley, 1985.
Fishburn, P. C.: Nontransitive preferences in decision theory, Journal of Risk and
Uncertainty 4 (1991), 113-134.
Fishburn, P. C.: Stochastic utility, in Handbook of Utility Theory (ed. S. Barbera, P.
J. Hammond and C. Seidl), 273-319. Dordrecht: Kluwer, 1997.
Hokkanen, J. and P. Salminen: ELECTRE III and IV decision aids in an environmen-
tal problem, Journal of Multi-criteria Decision Analysis 6 (1997), 215-226.
Krantz, D. H., R. D. Luce, P. Suppes and A. Tversky: Foundations of Measurement:
Volume 1. New York: Academic Press, 1971.
Piriot, M. and Ph. Vincke: Semiorders: Properties, Representations, Applications.
Dordrecht: Kluwer, 1997.
REFERENCES 261
Roy, B.: Classement et choix en presence de points de vue mUltiples (Ia methode
Electre), Revue Franr;aise d'Informatique et de Recherche Operationnelle 8 (1968),
57-75.
Roy, B.: The outranking approach and the foundations of ELECTRE methods, Theory
and Decision 31 (1991), 49-73.
Scott, D.: Measurement structures and linear inequalities, Journal of Mathematical
Psychology 1 (1964), 233-247.
Scott, D. and P. Suppes: Foundational aspects of theories of measurement, Journal
of Symbolic Logic 23 (1958), 113-128.
Suppes, P., D. H. Krantz, R. D. Luce and A. Tversky: Foundations of Measurement:
Volume 2. New York: Academic Press, 1989.
Tsoukias, A. and Ph. Vincke: Double threshold orders: a new axiomatization, Journal
of Multi-Criteria Decision Analysis 7 (1998), 285-301.
Tversky, A.: Intransitivity of preferences, Psychological Review 76 (1969), 31-48.
Vincke, Ph.: {P, Q, I}-preference structures, in Non-Conventional Preference Rela-
tions in Decision Making (ed. J. Kacprzyk and M. Roubens), 72-81. Berlin: Springer,
1988.
PREFERENCE REPRESENTATION
BY MEANS OF CONJOINT MEASUREMENT
AND DECISION RULE MODEL

Salvatore Greco
Facolta di Economia, Universita di Catania, Italy
salgreco@mbox.unict.it

Benedetto Matarazzo
Facolta di Economia, Universita di Catania, Italy
matarazz@mbox.unict.it

Roman Slowinski
Institute o/Computing Science, Poznan University o/Technology, Poland
slowinsk@sol.put.poznan.pl

Abstract: We investigate the equivalence of preference representation by numerical


functions and by "if .. , then ... " decision rules in multicriteria choice and
ranking problems. The numerical function is a general non-additive and non-
transitive model of conjoint measurement. The decision rules concern pairs of
actions and conclude either presence or absence of a comprehensive
preference relation; conditions for the presence are expressed in "at least"
terms, and for the absence in "at most" terms, on particular criteria Moreover,
we consider representation of hesitation in preference modeling. Within this
context, two approaches are considered: dominance-based rough set
approach-handling inconsistencies in expression of preferences through
examples, and four-valued logic-modeling the presence of positive and
negative reasons for preference. Equivalent representation by numerical
functions and by decision rules is proposed and a specific axiomatic
foundation is given for preference structure based on the presence of positive
and negative reasons. Finally, the following well known multicriteria
aggregation procedures are represented in terms of the decision rule model:
lexicographic aggregation, majority aggregation, ELECTRE I and TACTIC.

Key words: Preference modeling; Conjoint measurement; Decision rules; Ordinal criteria;
Inconsistency; Rough sets; Axiomatization
264 AIDING DECISIONS WITH MULTIPLE CRITERIA

1. Introduction
In economics, social choice theory and multicriteria decision making, a
lot of attention has been paid to conjoint measurement models representing
preferences by means of numerical functions. Researchers investigated a
wide variety of models going from simplest classical additive and transitive
models to most sophisticated non-additive and non-transitive models.
Recently, an alternative approach to representation of preferences has been
considered: preference models in terms of"ij. .. , then ... " decision rules.
The decision rules concern pairs (x, y) of actions and conclude either
presence or absence of a comprehensive preference relation between x and y;
conditions for the presence are expressed in "at least" comparison terms, and
for the absence in "at most" comparison terms, on particular criteria. For
example, in multicriteria choice and ranking problems, the two kinds of rules
are like: "if x is at least better than y on criterion i and x is at least weakly
preferred to y on criterionj, than x is at least as good as y", or "if x is at most
worse than y (i.e. worse or much worse) on criterion i and x is at most
indifferent (i.e. indifferent or worse) on criterion j, then x is not as good as
y".
Traditionally, preferences have been modeled using a value function u(-)
such that action x is at least as good as action y, denoted by xSy, iff
u(x);:::u(y). This implies that the relation S is complete (for each couple of
actions x,y, xSy or ySx) and transitive (for each triple of actions x,y,z, xSy
and ySz imply xSz). In a multicriteria context, each action x is generally
seen as a vector X=[Xt,X2,""Xo] of features of x with respect to criteria 1, ... ,no
It is often assumed that the value function is additive (see, e.g., Keeney and
Raiffa 1976, Krantz et al. 1978, and Wakker 1989, for an axiomatic
characterization), i.e.
o
u(x)= L uj(x j), (1)
j=l

where Uj is a marginal utility of action x with respect to criterion i (i=1, ... ,n).
The additive and transitive model represented by the additive value
function is inappropriate in many situations, because in reality:
• the indifference (the symmetric part of S) may not be transitive,
• S may not be complete, i.e. some pairs of actions may be incomparable,
• the compensation between evaluations of conflicting criteria and the
interaction between concordant criteria are far more complex than the
capacity of representation by the additive value function.
Conjoint Measurement and Decision Rule Model 265

To overcome these limitations, a variety of extensions of the additive


model has been proposed (e.g. Tversky 1969, Fishburn 1991). The most
general model has been proposed recently by Bouyssou and Pirlot (1997)
providing an axiomatic basis to many multicriteria decision methods
considered in the literature (see, e.g., Roy and Bouyssou 1993, Vincke
1992). Precisely, they consider a non-transitive and non-additive model
represented by a function G:Rn~R, non-decreasing in each coordinate, and
by functions \}Ij:R2~R, i=I, ... ,n, non-decreasing in the first argument and
non-increasing in the second argument, such that for all pairs of actions x,y
(2)
In this paper, we intend to show that the preferences represented by this
and similar models can also be represented by means of a set of "if. .. ,then ... "
decision rules.
To illustrate this equivalence, let us suppose that a Decision Maker (DM)
compares some actions evaluated by two criteria, i and j, with the aim of
choosing the best one. For each criterion, the DM considers three possible
evaluations: "bad", "medium" and "good", related as follows:
"medium" is better than "bad" and "bad" is worse than "medium",
"good" is better than "medium" and "medium" is worse than "good",
"good" is much better than "bad" and "bad" is much worse than "good",
there is indifference in case of identical evaluation, i.e. "good" is
indifferent to "good", and so on.
When comparing pairs of actions, say x and y, the DM uses the following
four decision rules:
a) if x is much better than y on criterion i, then x is at least as good as y,
b) if x is better than y on criterion i and not much worse on criterion j, then x is at least as good as y,
c) if x is indifferent to y on criteria i and j, then x is at least as good as y,
d) otherwise, x is not at least as good as y.
Decision rules a)-d) constitute a preference model of the DM. These
preferences can also be represented in terms of model (2), as follows:
u_i(bad) = u_j(bad) = 0, u_i(medium) = u_j(medium) = 1, u_i(good) = u_j(good) = 2,

Ψ_i[u_i(x_i), u_i(y_i)] =
    3    if u_i(x_i) − u_i(y_i) = 2
    1    if u_i(x_i) − u_i(y_i) = 1
    0    if u_i(x_i) − u_i(y_i) = 0
   −1    if u_i(x_i) − u_i(y_i) = −1
   −2    if u_i(x_i) − u_i(y_i) = −2

Ψ_j[u_j(x_j), u_j(y_j)] =
    0.75 if u_j(x_j) − u_j(y_j) = 2
    0.5  if u_j(x_j) − u_j(y_j) = 1
    0    if u_j(x_j) − u_j(y_j) = 0
   −1    if u_j(x_j) − u_j(y_j) = −1
   −2    if u_j(x_j) − u_j(y_j) = −2

and

G{Ψ_i[u_i(x_i), u_i(y_i)], Ψ_j[u_j(x_j), u_j(y_j)]} = Ψ_i[u_i(x_i), u_i(y_i)] + Ψ_j[u_j(x_j), u_j(y_j)].
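To make the equivalence tangible, the following short Python sketch (ours, not part of the original text; all names are illustrative) encodes the three grades by their utilities, reads the conditions of rules a)-c) in the cumulative "at least" sense in which rule conditions are expressed, and checks that the rules and the numerical model classify every pair of actions identically.

```python
# Minimal sketch (ours, not the authors' code): the two-criterion didactic example.
# Grades are encoded by their utilities: bad = 0, medium = 1, good = 2.

GRADES = {"bad": 0, "medium": 1, "good": 2}

def outranks_by_rules(x, y):
    """Decision rules a)-d); x and y are (grade on i, grade on j) tuples."""
    di = GRADES[x[0]] - GRADES[y[0]]   # strength of preference of x over y on criterion i
    dj = GRADES[x[1]] - GRADES[y[1]]   # strength of preference of x over y on criterion j
    if di >= 2:                        # a) x is much better than y on i
        return True
    if di >= 1 and dj >= -1:           # b) x is at least better on i and not much worse on j
        return True
    if di >= 0 and dj >= 0:            # c) x is at least indifferent to y on i and on j
        return True
    return False                       # d) otherwise, not xSy

PSI_I = {2: 3, 1: 1, 0: 0, -1: -1, -2: -2}        # Psi_i as defined above
PSI_J = {2: 0.75, 1: 0.5, 0: 0, -1: -1, -2: -2}   # Psi_j as defined above

def outranks_by_model(x, y):
    """Model (2) with G = Psi_i + Psi_j and threshold 0."""
    di = GRADES[x[0]] - GRADES[y[0]]
    dj = GRADES[x[1]] - GRADES[y[1]]
    return PSI_I[di] + PSI_J[dj] >= 0

if __name__ == "__main__":
    actions = [(gi, gj) for gi in GRADES for gj in GRADES]
    assert all(outranks_by_rules(x, y) == outranks_by_model(x, y)
               for x in actions for y in actions)
    print("decision rules and numerical model induce the same outranking relation")
```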
The above equivalence is not by chance. The equivalence of preference
models represented by numerical function (2) and by a set of decision rules
is a fundamental result of this paper. The advantage of model (2) over
decision rules is that numerical representation of preferences simplifies the
calculation. The advantage of decision rules over numerical function models
relies on their intelligibility and capacity of explanation. Preference
representation in terms of decision rules seems closer to human reasoning in
everyday decision making because, as remarked by Slovic (1975), "people
make decisions by searching for rules which provide good justification of
their choices". As a result of the established equivalence of the two classes of
preference models, all the multicriteria decision aiding methods based on
model (2) can be presented in terms of decision rules.
Another important aspect related to preference representation is the hesitation of DMs while expressing their preferences. Every preference model is built upon some preferential information acquired from the DM. It is rather rare that DMs have no doubts when expressing this information, due to the complexity of multicriteria comparisons and the unstable character of the DM's preferences. Hesitation may lead to inconsistencies that should be
handled by the modeling procedure and by the model itself. Model (2),
although very general, does not handle the inconsistencies.
Hesitations are manifested in different ways. In this paper we consider
two cases:
1) The case of inconsistent preferences: there are actions, say x,y,W,Z, such
that on all considered criteria the preference of x over y is at least as
strong as the preference of w over z but, unexpectedly, the DM considers
w comprehensively at least as good as z, and x comprehensively not at
least as good as y. The reason for such an inconsistency could be a
missing criterion, however, construction of this criterion may be either
very difficult, or impossible. Hesitations manifested in this way can be
represented by approximate decision rules: "if [conditions on particular
criteria], then there are no sufficient reasons to conclude that x is, or is
not, at least as good as y". For example: "if action x is worse than action
y on criterion i and x is better than y on criterion j, then there are no
sufficient reasons to conclude that x is, or is not, at least as good as y".
In the paper we show that also a numerical function model, similar to
model (2), can handle these inconsistent preferences as follows:
G[Ψ_i[u_i(x_i), u_i(y_i)], i=1,...,n] ≥ t_2 iff x is at least as good as y without doubt;

G[Ψ_i[u_i(x_i), u_i(y_i)], i=1,...,n] ≤ t_1 iff x is not at least as good as y without doubt;

t_1 < G[Ψ_i[u_i(x_i), u_i(y_i)], i=1,...,n] < t_2 iff there are no sufficient reasons to conclude that x is, or is not, at least as good as y;

where t_1, t_2 ∈ R and t_1 < t_2.

In this model, the value of the numerical function G[Ψ_i[u_i(x_i), u_i(y_i)], i=1,...,n] can be interpreted as the strength of the arguments in favor of the statement "x is at least as good as y". Consequently, the value of t_1 is the upper bound of the strength of arguments in favor of the conclusion "x is not at least as good as y", while t_2 is the lower bound for the conclusion "x is at least as good as y". The values between t_1 and t_2 represent an
intermediate strength of arguments corresponding to the situation of
hesitation.
2) The case of positive and negative reasons for preferences: in this case
the DM considers arguments for the statement: "x is at least as good as
y", and for the opposite statement: "x is not at least as good as y"
(Tsoukias and Vincke 1995, 1997). In this context, the hesitation occurs
when for the statement "x is at least as good as y":
a) there are both reasons in favor and reasons against (contradiction),
b) there are neither reasons in favor nor reasons against (ignorance).
Decision rules can express naturally the reasons in favor and the reasons
against. As an example consider the following three decision rules:

i) "if x is better than y on criterion i, then x is at least as good as y",
ii) "if x is worse than y on criterion j, then x is not at least as good as y",
iii) "if x is indifferent to y on criterion i and j, then x is at least as good
as y".
Then, if some action w is better than action z on criterion i and worse
on criterion j, rule i) is a reason in favor of the statement "w is at least
as good as z", while rule ii) is a reason against (contradiction). If,
however, some action u is worse than action v on criterion i and better
on criterion j, none of the above three rules matches this case and,
therefore, there are neither reasons in favor nor reasons against the
statement "u is at least as good as v" (ignorance). In the paper we show
that also a numerical function model, similar to model (2), can handle
the positive and negative reasons for preferences as follows:
G[Ψ_i[u_i(x_i), u_i(y_i)], i=1,...,n] ≥ 0 iff x is at least as good as y,
G^c[Ψ_i[u_i(x_i), u_i(y_i)], i=1,...,n] < 0 iff x is not at least as good as y,

where G[Ψ_i[u_i(x_i), u_i(y_i)], i=1,...,n] and G^c[Ψ_i[u_i(x_i), u_i(y_i)], i=1,...,n] are functions non-decreasing in each argument.
In this model, function G[Ψ_i[u_i(x_i), u_i(y_i)], i=1,...,n] represents the reasons in favor of the statement "x is at least as good as y", while function G^c[Ψ_i[u_i(x_i), u_i(y_i)], i=1,...,n] represents the reasons against. Four cases are
possible with respect to each pair of actions (x,y):
α) G[Ψ_i[u_i(x_i), u_i(y_i)], i=1,...,n] ≥ 0 and G^c[Ψ_i[u_i(x_i), u_i(y_i)], i=1,...,n] ≥ 0: in this case there are reasons in favor and there are no reasons against, thus x is at least as good as y, without doubt;
β) G[Ψ_i[u_i(x_i), u_i(y_i)], i=1,...,n] < 0 and G^c[Ψ_i[u_i(x_i), u_i(y_i)], i=1,...,n] < 0: in this case there are reasons against and there are no reasons in favor, thus x is not at least as good as y, without doubt;
γ) G[Ψ_i[u_i(x_i), u_i(y_i)], i=1,...,n] ≥ 0 and G^c[Ψ_i[u_i(x_i), u_i(y_i)], i=1,...,n] < 0: in this case there are both reasons in favor and reasons against, thus there is contradictory information about the statement "x is at least as good as y";
δ) G[Ψ_i[u_i(x_i), u_i(y_i)], i=1,...,n] < 0 and G^c[Ψ_i[u_i(x_i), u_i(y_i)], i=1,...,n] ≥ 0: in this case there are neither reasons in favor nor reasons against, thus there is no information about the statement "x is at least as good as y".
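The two schemes just described can be summarized by the following illustrative Python sketch (ours; the function names and the string labels are hypothetical). It classifies a pair (x,y) either with the single function G and two thresholds of case 1), or with the pair of functions G and G^c of case 2).

```python
# Illustrative sketch (ours; names are hypothetical): the status of the statement
# "x is at least as good as y" from the values of the numerical functions.

def threshold_status(g_value, t1, t2):
    """Case 1): a single function G with two thresholds t1 < t2."""
    if g_value >= t2:
        return "true"          # x is at least as good as y without doubt
    if g_value <= t1:
        return "false"         # x is not at least as good as y without doubt
    return "hesitation"        # t1 < G < t2: no sufficient reasons either way

def four_valued_status(g_value, gc_value):
    """Case 2): G >= 0 means reasons in favor; Gc < 0 means reasons against."""
    if g_value >= 0 and gc_value >= 0:
        return "true"          # alpha) reasons in favor, none against
    if g_value < 0 and gc_value < 0:
        return "false"         # beta) reasons against, none in favor
    if g_value >= 0 and gc_value < 0:
        return "contradictory" # gamma) reasons both in favor and against
    return "unknown"           # delta) neither in favor nor against
```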
Even if model (2) is our basic reference in the investigation of the equivalence between the decision rule model and the numerical function model, we
will consider the following, slightly more general model:
G[Ψ_i[u_i(x_i), u_i(y_i)], i=1,...,k, u_i(x_i), u_i(y_i), i=k+1,...,n] ≥ 0 iff xSy,     (2')

where function G: R^{k+2(n−k)} → R is non-decreasing in the first k arguments, non-decreasing in each (k+s)-th argument with s odd and non-increasing in each (k+s)-th argument with s even, where s=1,...,2(n−k).
The difference between model (2) and model (2') concerns the treatment of the strength of preference with respect to a single criterion. The strength of preference of action x over action y on criterion i is measured by function Ψ_i[u_i(x_i), u_i(y_i)]. Model (2) has been conceived under the assumption that it is possible to define the strength of preference on each criterion i=1,...,n. In model (2') it is assumed, however, that the strength of preference can be measured by means of functions Ψ_i[u_i(x_i), u_i(y_i)] on the first k criteria only. This is not possible on the remaining n−k criteria, so the values of u_i(x_i) and u_i(y_i) have to be handled directly by the model, without passing through functions Ψ_i[u_i(x_i), u_i(y_i)].
To give an example of a preference structure incompatible with model (2)
let us consider the following situation where a DM compares eight actions
{A,B,C,D,E,F,G,H} on two criteria, i and j, with the aim of choosing the
best one. For each criterion, the DM considers four possible evaluations:
"bad", "medium", "good" and "very good", related analogously as in the
previous example. The eight actions have the following evaluations:
action A: "very good" on i, "bad" on j,
action B: "good" on i, "medium" on j,
action C: "medium" on i, "bad" on j,
action D: "bad" on i, "medium" on j,
action E: "very good" on i, "good" on j,
action F: "good" on i, "very good" on j,
action G: "medium" on i, "good" on j,
action H: "bad" on i, "very good" on j.
The DM expresses the following comparisons:
i) action A is at least as good as action B,
ii) action C is not at least as good as action D,

iii) action E is not at least as good as action F,


iv) action G is at least as good as H.
Let us remark that actions A, B and C, D have (in pairs) the same evaluation on criterion j. Therefore, the fact that A is at least as good as B while C is not at least as good as D depends on the evaluation of these actions on criterion i only. More precisely, comparisons i) and ii) say that, when comparing actions being "bad" and "medium" on criterion j, the preference of a "very good" over a "good" action on criterion i is stronger than the preference of a "medium" over a "bad" action on this criterion. In terms of model (2) this means that
α) Ψ_i[u_i("very good"), u_i("good")] > Ψ_i[u_i("medium"), u_i("bad")].
Actions E, F and G, H also have (in pairs) the same evaluation on criterion j. Therefore, the fact that G is at least as good as H while E is not at least as good as F depends again on the evaluation of these actions on criterion i only. More precisely, comparisons iii) and iv) say that, when comparing actions being "good" and "very good" on criterion j, the preference of a "medium" over a "bad" action on criterion i is stronger than the preference of a "very good" over a "good" action on this criterion. In terms of model (2) this means that
β) Ψ_i[u_i("very good"), u_i("good")] < Ψ_i[u_i("medium"), u_i("bad")].
Of course, α) and β) are inconsistent. In consequence, it is impossible to represent the DM's preferences using model (2). In this case it is necessary to consider the more general model (2'), because it does not require the definition of the function Ψ_i[u_i(x_i), u_i(y_i)].
The paper is organized as follows. In section 2, we prove the equivalence
of preference representation by the functional model (2') and by a decision
rule model. This equivalence is illustrated by a didactic example. In section
3, we investigate the two cases of hesitation in preferences mentioned above.
A numerical function representation and an equivalent decision rule
representation are presented for the two cases. Moreover, an axiomatic basis
is presented for the preference structure based on positive and negative
reasons. In each case, the representation by numerical function and by
decision rules is illustrated by a didactic example. In section 4, some well-known multicriteria aggregation procedures are represented in terms of the decision rule model. Section 5 gathers the conclusions.

2. Decision rules and an outranking function


Let X = X_1 × X_2 × ... × X_n be a finite or countably infinite product space, where X_i is a set of evaluations of actions from set A with respect to criterion g_i identified by its index i=1,...,n. Let (x_i, z_{-i}), x_i ∈ X_i, z_{-i} ∈ X_{-i} = ∏_{j=1, j≠i}^{n} X_j, denote an element of X equal to z except for its i-th coordinate, which is equal to x_i.
A comprehensive outranking relation S is defined on X such that xSy
means "x is at least as good as y". The only minimal requirement imposed on
S is its reflexivity. Given a comprehensive outranking relation S on X, two marginal outranking relations S_i^+ and S_i^− can be defined with respect to each g_i, i=1,...,n, as follows:

x_i S_i^+ y_i iff for all a_{-i} ∈ X_{-i} and z ∈ X, [(y_i, a_{-i}) S z ⇒ (x_i, a_{-i}) S z],

x_i S_i^− y_i iff for all a_{-i} ∈ X_{-i} and z ∈ X, [z S (x_i, a_{-i}) ⇒ z S (y_i, a_{-i})].

The relation x_i S_i^+ y_i reads: "x_i is at least as good as y_i". The relation x_i S_i^− y_i can be read analogously. Let us remark that, due to the implication in their definitions, the binary relations S_i^+ and S_i^− are transitive.
The comparison of the strength of preference of x_i over y_i with the strength of preference of w_i over z_i is expressed by the binary relation S_i^* defined on X_i × X_i in the following way:

(x_i, y_i) S_i^* (w_i, z_i) iff for all v_{-i}, t_{-i} ∈ X_{-i}, (w_i, v_{-i}) S (z_i, t_{-i}) implies (x_i, v_{-i}) S (y_i, t_{-i}).

The relation (x_i, y_i) S_i^* (w_i, z_i) reads: "the strength of preference of x_i over y_i is at least as great as the strength of preference of w_i over z_i". Due to the implication in its definition, the binary relation S_i^* is transitive.
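On a finite product space, the relations S_i^+, S_i^− and S_i^* can be computed directly from their definitions by brute-force checking of the implications. The following Python sketch (ours; the Pareto-style relation S used here is only a hypothetical example, and indices are 0-based) illustrates this.

```python
# Illustrative sketch (ours): the marginal relations S_i^+, S_i^- and the
# strength-of-preference relation S_i^* induced by a comprehensive relation S
# on a small finite product space X = X_1 x X_2 (criteria indexed 0 and 1 here).

from itertools import product

X1 = ["bad", "medium", "good"]
X2 = ["bad", "medium", "good"]
X = list(product(X1, X2))

RANK = {"bad": 0, "medium": 1, "good": 2}

def S(x, y):
    """Hypothetical comprehensive outranking: Pareto-style comparison of the grades."""
    return RANK[x[0]] >= RANK[y[0]] and RANK[x[1]] >= RANK[y[1]]

def with_coord(a, i, value):
    """The element (value, a_{-i}): a with its i-th coordinate replaced by value."""
    a = list(a)
    a[i] = value
    return tuple(a)

def S_plus(i, xi, yi):
    """x_i S_i^+ y_i: (y_i, a_{-i}) S z implies (x_i, a_{-i}) S z for all a, z."""
    return all(not S(with_coord(a, i, yi), z) or S(with_coord(a, i, xi), z)
               for a in X for z in X)

def S_minus(i, xi, yi):
    """x_i S_i^- y_i: z S (x_i, a_{-i}) implies z S (y_i, a_{-i}) for all a, z."""
    return all(not S(z, with_coord(a, i, xi)) or S(z, with_coord(a, i, yi))
               for a in X for z in X)

def S_star(i, xi, yi, wi, zi):
    """(x_i,y_i) S_i^* (w_i,z_i): (w_i,v_{-i}) S (z_i,t_{-i}) implies (x_i,v_{-i}) S (y_i,t_{-i})."""
    return all(not S(with_coord(v, i, wi), with_coord(t, i, zi))
               or S(with_coord(v, i, xi), with_coord(t, i, yi))
               for v in X for t in X)

if __name__ == "__main__":
    print(S_plus(0, "good", "medium"))                  # True for this S
    print(S_star(0, "good", "bad", "good", "medium"))   # True for this S
```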
Let us remark that the binary relation S_i^* is coherent with the marginal outranking relation S_i^+, i.e.
x_i S_i^+ y_i implies [(y_i, w_i) S_i^* (z_i, u_i) ⇒ (x_i, w_i) S_i^* (z_i, u_i)].     (3)
This means that if x_i is at least as good as y_i and if y_i is preferred to w_i at least as much as z_i is preferred to u_i, then, after substituting y_i by x_i, x_i is again preferred to w_i at least as much as z_i is preferred to u_i.
To prove (3), let us suppose that

i) x_i S_i^+ y_i, and

ii) (y_i, w_i) S_i^* (z_i, u_i).

This means that
a) [(y_i, a_{-i}) S b ⇒ (x_i, a_{-i}) S b], for each a_{-i} ∈ X_{-i} and b ∈ X,
b) (z_i, v_{-i}) S (u_i, t_{-i}) implies (y_i, v_{-i}) S (w_i, t_{-i}), for each v_{-i}, t_{-i} ∈ X_{-i}.
If b = (w_i, t_{-i}) and a_{-i} = v_{-i}, then from a) we obtain
c) [(y_i, v_{-i}) S (w_i, t_{-i}) ⇒ (x_i, v_{-i}) S (w_i, t_{-i})].
From b) and c) we obtain [(z_i, v_{-i}) S (u_i, t_{-i}) ⇒ (x_i, v_{-i}) S (w_i, t_{-i})], i.e. (x_i, w_i) S_i^* (z_i, u_i), which concludes the proof.
Analogously, the binary relation S_i^* is coherent with the marginal outranking relation S_i^−, i.e.

x_i S_i^− y_i implies [(w_i, x_i) S_i^* (z_i, u_i) ⇒ (w_i, y_i) S_i^* (z_i, u_i)].     (4)

Let us remark that the coherence of S_i^* with S_i^+ and S_i^− implies the coherence of S_i^* with S_i = S_i^+ ∩ S_i^−.

If S_i^*, i=1,...,n, is a complete preorder, each of its equivalence classes constitutes a graded preference relation P_i^{h_i} on X_i, where h_i represents a specific degree of preference belonging to a set H_i of possible degrees of preference corresponding to each i=1,...,n. More precisely, H_i = {1, 2, ..., r_i}, where r_i represents the greatest degree of preference with respect to g_i, i=1,...,n.

For each x_i, y_i, w_i, z_i ∈ X_i, i=1,...,n, and h_i, k_i ∈ H_i, h_i ≥ k_i, x_i P_i^{h_i} y_i and w_i P_i^{k_i} z_i mean that x_i is preferred to y_i at least as much as w_i is preferred to z_i, i.e. (x_i, y_i) S_i^* (w_i, z_i). For example, if H_i = {1, 2, ..., 5}, then for each x_i, y_i ∈ X_i we could have:

• x_i P_i^1 y_i, meaning that x_i is much worse than y_i,
• x_i P_i^2 y_i, meaning that x_i is worse than y_i,
• x_i P_i^3 y_i, meaning that x_i is indifferent to y_i,
• x_i P_i^4 y_i, meaning that x_i is better than y_i,
• x_i P_i^5 y_i, meaning that x_i is much better than y_i.


The above definitions allow us to express any type of multiple relational
preference structure (on this subject see, e.g., Roberts 1971, Cozzens and
Roberts 1982, Roubens and Vincke 1985, Doignon et al. 1986, Doignon
1987, Moreno and Tsoukias 1996).
Given a set of preference degrees H_i, we can define a set of upward cumulated preferences P_i^{≥h_i} and a set of downward cumulated preferences P_i^{≤h_i}, h_i ∈ H_i, as follows:

- x_i P_i^{≥h_i} y_i, which means that "x_i is preferred to y_i in degree at least h_i", if there exists k_i ∈ H_i such that k_i ≥ h_i and x_i P_i^{k_i} y_i,

- x_i P_i^{≤h_i} y_i, which means that "x_i is preferred to y_i in degree at most h_i", if there exists k_i ∈ H_i such that k_i ≤ h_i and x_i P_i^{k_i} y_i.

Continuing the above example, we can have:

• x_i P_i^{≤2} y_i, i.e. x_i is at most worse than y_i,
• x_i P_i^{≥3} y_i, i.e. x_i is at least indifferent to y_i,
• x_i P_i^{≥4} y_i, i.e. x_i is at least better than y_i,
• x_i P_i^{≥5} y_i, i.e. x_i is (at least) much better than y_i,
• x_i P_i^{≤1} y_i, i.e. x_i is (at most) much worse than y_i,
• x_i P_i^{≥2} y_i, i.e. x_i is at least worse than y_i,
• x_i P_i^{≤3} y_i, i.e. x_i is at most indifferent to y_i,
• x_i P_i^{≤4} y_i, i.e. x_i is at most better than y_i.
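A minimal Python sketch of the cumulation mechanism follows (ours; the integer encoding of the evaluations and the function degree are assumptions made only for illustration).

```python
# Minimal sketch (ours): upward and downward cumulation of a graded preference
# relation with degrees H_i = {1, ..., 5}.

def degree(xi, yi):
    """Degree h_i such that x_i P_i^{h_i} y_i, with 1 = much worse, ..., 5 = much better.
    Here evaluations are assumed to be integers and the degree a clipped difference."""
    return max(1, min(5, 3 + (xi - yi)))

def upward(xi, yi, h):
    """x_i P_i^{>=h} y_i: x_i is preferred to y_i in degree at least h."""
    return degree(xi, yi) >= h

def downward(xi, yi, h):
    """x_i P_i^{<=h} y_i: x_i is preferred to y_i in degree at most h."""
    return degree(xi, yi) <= h

assert upward(4, 2, 5)     # "much better" satisfies the condition "at least degree 5"
assert downward(2, 3, 2)   # "worse" satisfies the condition "at most degree 2"
```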

The coherence of S_i^* with S_i^+ and S_i^−, considered from the viewpoint of the graded preference relations, gives the following properties:

x_i S_i^+ y_i and y_i P_i^{h_i} u_i imply x_i P_i^{≥h_i} u_i,

x_i S_i^− y_i and u_i P_i^{h_i} x_i imply u_i P_i^{≥h_i} y_i.

Characterization of the non-additive and non-transitive model of conjoint measurement (2) is based on a set of cancellation properties (Bouyssou and Pirlot 1997), specified below.
For all i=1,...,n, and for all x_i, y_i, z_i, w_i ∈ X_i, a_{-i}, b_{-i}, c_{-i}, d_{-i} ∈ X_{-i}, the following properties hold:

C1(i) [(x_i, a_{-i}) S (y_i, b_{-i}) and (w_i, c_{-i}) S (z_i, d_{-i})] ⇒ [(x_i, c_{-i}) S (y_i, d_{-i}) or (w_i, a_{-i}) S (z_i, b_{-i})];

C2(i) [(x_i, a_{-i}) S (y_i, b_{-i}) and (w_i, c_{-i}) S (z_i, d_{-i})] ⇒ [(w_i, a_{-i}) S (y_i, b_{-i}) or (x_i, c_{-i}) S (z_i, d_{-i})];

C3(i) [(x_i, a_{-i}) S (y_i, b_{-i}) and (w_i, c_{-i}) S (z_i, d_{-i})] ⇒ [(x_i, a_{-i}) S (z_i, b_{-i}) or (w_i, c_{-i}) S (y_i, d_{-i})];

C4(i) [(x_i, a_{-i}) S (w_i, b_{-i}) and (z_i, c_{-i}) S (x_i, d_{-i})] ⇒ [(y_i, a_{-i}) S (w_i, b_{-i}) or (z_i, c_{-i}) S (y_i, d_{-i})].

Condition C1(i) ensures that, on the basis of S, the relation S_i^* (i=1,...,n) is strongly complete on X_i × X_i: in fact, S_i^* would be incomplete if there were x_i, y_i, z_i, w_i ∈ X_i, a_{-i}, b_{-i}, c_{-i}, d_{-i} ∈ X_{-i} such that

i) (x_i, a_{-i}) S (y_i, b_{-i}) and not (w_i, a_{-i}) S (z_i, b_{-i}): by definition of S_i^* this implies not (w_i, z_i) S_i^* (x_i, y_i);

ii) (w_i, c_{-i}) S (z_i, d_{-i}) and not (x_i, c_{-i}) S (y_i, d_{-i}): by definition of S_i^* this implies not (x_i, y_i) S_i^* (w_i, z_i).

However, by C1(i) we cannot have i) and ii) at the same time. Therefore, if condition C1(i) holds, the binary relation S_i^* is a complete preorder on X_i × X_i because it is transitive (due to the implication in its definition) and strongly complete (by C1(i)).

Condition C2(i) says that the marginal outranking relation S_i^+ (i=1,...,n) is strongly complete on X_i: in fact, S_i^+ would be incomplete if there were x_i, y_i, z_i, w_i ∈ X_i, a_{-i}, b_{-i}, c_{-i}, d_{-i} ∈ X_{-i} such that

i) (x_i, a_{-i}) S (y_i, b_{-i}) and not (w_i, a_{-i}) S (y_i, b_{-i}): by definition of S_i^+ this implies not w_i S_i^+ x_i;

ii) (w_i, c_{-i}) S (z_i, d_{-i}) and not (x_i, c_{-i}) S (z_i, d_{-i}): by definition of S_i^+ this implies not x_i S_i^+ w_i.

However, by C2(i) we cannot have i) and ii) at the same time. Therefore, if condition C2(i) holds, the binary relation S_i^+ is a complete preorder on X_i because it is transitive (due to the implication in its definition) and strongly complete (by C2(i)).
Analogously, condition C3(i) says that the marginal outranking relation S_i^− (i=1,...,n) is a complete preorder on X_i.
Condition C4(i), together with C2(i) and C3(i), ensures that the orderings obtained from S_i^+ and S_i^− are compatible, i.e. there is no x_i, y_i ∈ X_i such that x_i S_i^+ y_i and not y_i S_i^+ x_i (i.e. x_i is preferred to y_i with respect to S_i^+), and not x_i S_i^− y_i and y_i S_i^− x_i (i.e. y_i is preferred to x_i with respect to S_i^−), for i=1,...,n. In fact, S_i^+ and S_i^− would not be compatible if there were x_i, y_i, z_i, w_i ∈ X_i, a_{-i}, b_{-i}, c_{-i}, d_{-i} ∈ X_{-i} such that

i) (x_i, a_{-i}) S (w_i, b_{-i}) and not (y_i, a_{-i}) S (w_i, b_{-i}): by definition of S_i^+ this implies not y_i S_i^+ x_i, and by completeness of S_i^+ (due to C2(i)), also x_i S_i^+ y_i;

ii) (z_i, c_{-i}) S (x_i, d_{-i}) and not (z_i, c_{-i}) S (y_i, d_{-i}): by definition of S_i^− this implies not x_i S_i^− y_i, and by completeness of S_i^− (due to C3(i)), also y_i S_i^− x_i.

However, by C4(i) we cannot have i) and ii) at the same time.
The information given by the outranking relations S_i^+ and S_i^− can be synthesized by the binary relation S_i = S_i^+ ∩ S_i^−. Analogously to S_i^+ and S_i^−, the relation x_i S_i y_i reads: "x_i is at least as good as y_i". Transitivity of S_i^+ and S_i^− ensures transitivity of S_i. Compatibility of the orderings of S_i^+ and S_i^− ensures completeness of S_i. Since transitivity of S_i^+ and S_i^− is satisfied by
definition and compatibility of the orderings of S_i^+ and S_i^− holds iff C2(i), C3(i) and C4(i) are satisfied, then S_i is a complete preorder, i.e. transitive and strongly complete, iff C2(i), C3(i) and C4(i) are satisfied.
S satisfies C1 iff it satisfies C1(i) for each i=1,...,n. Analogously, S satisfies C2, C3 and C4 iff it satisfies C2(i), C3(i) and C4(i) for each i=1,...,n, respectively.
Let us pass now to decision rules. Within the rough set approach, given
the preferential information in the form of exemplary decisions on some
reference actions, one can construct a representation of DM's preferences in
terms of a set of "if ... , then ... " decision rules, called decision rule model.
Greco, Matarazzo and Slowinski (1999, 2001) have shown some interesting
relationships between their decision rule model and the non-additive and
non-transitive model of conjoint measurement.
In the following, we consider decision rules having the following syntax:

1) D≥-decision rule, called "at least decision rule", being a statement of the type:

if x_{i1} P_{i1}^{≥h_{i1}} y_{i1} and ... x_{id} P_{id}^{≥h_{id}} y_{id} and x_{id+1} S_{id+1} r_{id+1} and ... x_{ie} S_{ie} r_{ie} and s_{ie+1} S_{ie+1} y_{ie+1} and ... s_{if} S_{if} y_{if}, then xSy,

where
- strength of preference can be defined for criteria g_{i1},...,g_{id}, while not for criteria g_{id+1},...,g_{if},
- (h_{i1},...,h_{id}) ∈ H_{i1} × ... × H_{id}, (r_{id+1},...,r_{ie}) ∈ X_{id+1} × ... × X_{ie}, (s_{ie+1},...,s_{if}) ∈ X_{ie+1} × ... × X_{if}, and {id+1,...,ie} and {ie+1,...,if} are not necessarily disjoint.

An example of a D≥-decision rule: "if x is (at least) better than y on criterion i and x is at least medium on criterion j and y is at most medium on criterion k, then x is at least as good as y";
2) D≤-decision rule, called "at most decision rule", being a statement of the type:

if x_{i1} P_{i1}^{≤h_{i1}} y_{i1} and ... x_{id} P_{id}^{≤h_{id}} y_{id} and r_{id+1} S_{id+1} x_{id+1} and ... r_{ie} S_{ie} x_{ie} and y_{ie+1} S_{ie+1} s_{ie+1} and ... y_{if} S_{if} s_{if}, then not xSy,

where
- strength of preference can be defined for criteria g_{i1},...,g_{id}, while not for criteria g_{id+1},...,g_{if},
- (h_{i1},...,h_{id}) ∈ H_{i1} × ... × H_{id}, (r_{id+1},...,r_{ie}) ∈ X_{id+1} × ... × X_{ie}, (s_{ie+1},...,s_{if}) ∈ X_{ie+1} × ... × X_{if}, and {id+1,...,ie} and {ie+1,...,if} are not necessarily disjoint.

An example of a D≤-decision rule: "if x is (at most) worse than y on criterion i and x is at most medium on criterion j and y is at least medium on criterion k, then x is not at least as good as y";
3) D≥≤-decision rule, called "at least-at most decision rule", being a statement of the type:

if x_{i1} P_{i1}^{≤h_{i1}} y_{i1} and ... x_{id} P_{id}^{≤h_{id}} y_{id} and x_{id+1} P_{id+1}^{≥h_{id+1}} y_{id+1} and ... x_{ie} P_{ie}^{≥h_{ie}} y_{ie} and x_{ie+1} S_{ie+1} r_{ie+1} and ... x_{if} S_{if} r_{if} and r_{if+1} S_{if+1} x_{if+1} and ... r_{ig} S_{ig} x_{ig} and s_{ig+1} S_{ig+1} y_{ig+1} and ... s_{io} S_{io} y_{io} and y_{io+1} S_{io+1} s_{io+1} and ... y_{ip} S_{ip} s_{ip}, then xSy or not xSy,

where
- strength of preference can be defined for criteria g_{i1},...,g_{ie}, while not for criteria g_{ie+1},...,g_{ip},
- (h_{i1},...,h_{ie}) ∈ H_{i1} × ... × H_{ie}, (r_{ie+1},...,r_{if}) ∈ X_{ie+1} × ... × X_{if}, (r_{if+1},...,r_{ig}) ∈ X_{if+1} × ... × X_{ig}, (s_{ig+1},...,s_{io}) ∈ X_{ig+1} × ... × X_{io}, (s_{io+1},...,s_{ip}) ∈ X_{io+1} × ... × X_{ip}; the sets {i1,...,id} and {id+1,...,ie}, {ie+1,...,if} and {if+1,...,ig}, {ig+1,...,io} and {io+1,...,ip} are not necessarily disjoint, respectively.

An example of a D≥≤-decision rule: "if x is (at most) worse than y on criterion i and at least better on criterion j, then there are no sufficient reasons to conclude that x is, or is not, at least as good as y".

We say that a D≥-decision rule covers a pair (w,z) ∈ X×X iff (w,z) satisfies both the condition part and the decision part of the rule. More formally, the D≥-decision rule defined above covers a pair (w,z) ∈ X×X iff w_{i1} P_{i1}^{≥h_{i1}} z_{i1} and ... w_{id} P_{id}^{≥h_{id}} z_{id} and w_{id+1} S_{id+1} r_{id+1} and ... and w_{ie} S_{ie} r_{ie} and s_{ie+1} S_{ie+1} z_{ie+1} and ... and s_{if} S_{if} z_{if}, and wSz.
Set R≥ of D≥-decision rules is complete iff each pair (x,y) ∈ X×X such that xSy is covered by at least one D≥-decision rule. A pair (w,z) ∈ X×X contradicts a D≥-decision rule iff (w,z) satisfies its condition part and does not satisfy its decision part. More formally, the D≥-decision rule defined above is contradicted by a pair (w,z) ∈ X×X iff w_{i1} P_{i1}^{≥h_{i1}} z_{i1} and ... w_{id} P_{id}^{≥h_{id}} z_{id} and w_{id+1} S_{id+1} r_{id+1} and ... and w_{ie} S_{ie} r_{ie} and s_{ie+1} S_{ie+1} z_{ie+1} and ... and s_{if} S_{if} z_{if}, while not wSz. For example, a pair of actions (w,z) such that
- w is better than z on criterion i,
- w is medium on criterion j,
- z is medium on criterion k,
- w is not at least as good as z,
contradicts the decision rule "if x is (at least) better than y on criterion i and x is at least medium on criterion j and y is at most medium on criterion k, then x is at least as good as y".
Set R≥ of D≥-decision rules is non-contradictory iff there is no rule contradicted by a pair (w,z) ∈ X×X.
We say that set R≥ of D≥-decision rules represents the outranking relation S on X iff it is complete and non-contradictory.
Let us remark that decision rules describe the DM's preferences in terms of "condition profiles" relative to pairs of actions. As the condition profiles concern, in general, subsets of criteria, they are called partial profiles. They involve two specific marginal relations: a marginal outranking relation S_i, for all criteria, and a marginal relation S_i^*, for comparison of "preference strength" on the criteria permitting definition of the strength of preference. These marginal relations are used to define a specific dominance relation with respect to pairs of actions, such that pair (x,y) dominates pair (w,z) if action x is preferred to y at least as much as action w is preferred to z for each considered criterion. Thus, the decision rule model R≥ involves partial profiles for pairs of actions and a specific dominance relation.
Analogous definitions hold for the set R≤ of D≤-decision rules.
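The notions of coverage, contradiction, completeness and non-contradiction translate directly into code. The following Python sketch (ours; all names are hypothetical) represents a D≥-decision rule as a list of elementary conditions on a pair (x,y) and checks whether a rule set represents a given relation S on a finite set of pairs.

```python
# Illustrative sketch (ours): D>=-decision rules as condition lists, with the
# coverage, contradiction, completeness and non-contradiction tests of the text.

from dataclasses import dataclass, field
from typing import Callable, List, Tuple

Action = Tuple                                  # an action is a vector of evaluations
Condition = Callable[[Action, Action], bool]    # elementary condition on a pair (x, y)

@dataclass
class DGeqRule:
    conditions: List[Condition] = field(default_factory=list)

    def matches(self, x, y):
        """Condition part of the rule holds for the pair (x, y)."""
        return all(cond(x, y) for cond in self.conditions)

    def covers(self, x, y, S):
        """The pair satisfies both the condition part and the decision part (xSy)."""
        return self.matches(x, y) and S(x, y)

    def contradicted_by(self, x, y, S):
        """The pair satisfies the condition part but not the decision part."""
        return self.matches(x, y) and not S(x, y)

def complete(rules, pairs, S):
    """Every pair with xSy is covered by at least one rule."""
    return all(any(r.covers(x, y, S) for r in rules) for (x, y) in pairs if S(x, y))

def non_contradictory(rules, pairs, S):
    """No rule is contradicted by any pair."""
    return not any(r.contradicted_by(x, y, S) for r in rules for (x, y) in pairs)

def represents(rules, pairs, S):
    return complete(rules, pairs, S) and non_contradictory(rules, pairs, S)
```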

Theorem 2.1 (Greco, Matarazzo and Slowinski 2000b) The following four propositions are equivalent:
1) the binary relation S on X satisfies properties C2, C3, C4 and C1(i) for i=1,...,k, 0 ≤ k ≤ n;
2) there exist
• a function G: R^{k+2(n−k)} → R, non-decreasing in the first k arguments, non-decreasing in each (k+s)-th argument with s odd and non-increasing in each (k+s)-th argument with s even, where s=1,...,2(n−k),
• functions Ψ_i: R^2 → R, for all i=1,...,k, non-decreasing in the first argument and non-increasing in the second argument,
• functions u_i: X_i → R, for all i=1,...,n,
such that for all x,y ∈ X
G[Ψ_i[u_i(x_i), u_i(y_i)], i=1,...,k, u_i(x_i), u_i(y_i), i=k+1,...,n] ≥ 0 iff xSy;
3) there exist
• for all i=1,...,n, a marginal outranking relation S_i being a complete preorder on X_i,
• for all i=1,...,k, a binary relation S_i^* being a complete preorder on X_i × X_i,
• one set of D≥-decision rules representing S;
4) there exist
• for all i=1,...,n, a marginal outranking relation S_i being a complete preorder,
• for all i=1,...,k, a binary relation S_i^* being a complete preorder,
• one set of D≤-decision rules representing S.
Proof. First, we prove that 1) implies 2). Since C2, C3 and C4 hold, S_i is a complete preorder for each i=1,...,n. Therefore, there exists a function u_i: X_i → R such that x_i S_i y_i iff u_i(x_i) ≥ u_i(y_i). Since C1(i) holds for criteria 1,...,k, the binary relation S_i^*, for i=1,...,k, is a complete preorder. Therefore, there exists a function Φ_i: X_i × X_i → R such that (x_i, y_i) S_i^* (w_i, z_i) iff Φ_i(x_i, y_i) ≥ Φ_i(w_i, z_i). Taking into account the coherence of S_i^* with S_i, we can conclude that there exists a function Ψ_i: R^2 → R, non-decreasing in the first argument and non-increasing in the second argument, such that Φ_i(x_i, y_i) = Ψ_i[u_i(x_i), u_i(y_i)]. On the basis of the binary relations S_i^*, i=1,...,k, and S_i, i=k+1,...,n, a dominance relation D can be defined on X×X as follows: for each (x,y),(w,z) ∈ X×X:

(x,y)D(w,z) iff (x_i, y_i) S_i^* (w_i, z_i) for each i=1,...,k, and x_i S_i w_i and z_i S_i y_i for each i=k+1,...,n.
The dominance relation D can also be defined on X×X in terms of the functions u_i and Ψ_i, as follows: for each (x,y),(w,z) ∈ X×X

(x,y)D(w,z) iff Ψ_i[u_i(x_i), u_i(y_i)] ≥ Ψ_i[u_i(w_i), u_i(z_i)] for each i=1,...,k, and u_i(x_i) ≥ u_i(w_i) and u_i(z_i) ≥ u_i(y_i) for each i=k+1,...,n.
The binary relation (x,y)D(w,z) reads: "action x is preferred to y at least as much as action w is preferred to z on each criterion".
The definition of the binary relations S_i^*, i=1,...,k, and S_i, i=k+1,...,n, implies the following Coherence Condition with respect to Dominance (CCD): there is no x,y,w,z ∈ X such that (x,y)D(w,z) and wSz and not xSy.
Let us now consider the following binary relation T on X×X, defined as follows: (x,y) T (w,z) if at least one of the two following conditions is satisfied
1) (x,y)D(w,z),
2) xSy and not wSz.
Let us observe that, due to the reflexivity of D, the binary relation T is also reflexive. Furthermore, T is transitive, i.e. for each (x,y),(w,z),(s,t) ∈ X×X, (x,y)T(w,z) and (w,z)T(s,t) imply (x,y)T(s,t). The proof of the transitivity of T is the following. For each (x,y),(w,z),(s,t) ∈ X×X such that (x,y)T(w,z) and (w,z)T(s,t), we can have the following cases:
i) (x,y)D(w,z) and (w,z)D(s,t): then, from property 1) and the transitivity of D, we have (x,y)T(s,t),
ii) (x,y)D(w,z), and wSz and not sSt: then from CCD we have xSy, thus by 2) we have (x,y)T(s,t),
iii) xSy and not wSz, and (w,z)D(s,t): then from CCD we have not sSt, thus (x,y)T(s,t).
Since T is reflexive and transitive, it is a partial preorder (Roubens and Vincke 1985). Therefore there is a function h: X×X → R such that for each (x,y),(w,z) ∈ X×X
a) (x,y)T(w,z) and not (w,z)T(x,y) implies h(x,y) > h(w,z),
b) (x,y)T(w,z) and (w,z)T(x,y) implies h(x,y) = h(w,z).
Let us observe that we can have (x,y)T(w,z) and (w,z)T(x,y) only if one of the following situations holds:
i) xSy and not wSz, and (w,z)D(x,y),
ii) (x,y)D(w,z), and wSz and not xSy,
iii) (x,y)D(w,z) and (w,z)D(x,y), i.e. Ψ_i[u_i(x_i), u_i(y_i)] = Ψ_i[u_i(w_i), u_i(z_i)], i=1,...,k, u_i(x_i) = u_i(w_i) and u_i(z_i) = u_i(y_i) for each i=k+1,...,n.
Since, by CCD, i) and ii) are not possible, the only possibility is iii). Thus, (x,y)T(w,z) and (w,z)T(x,y) if and only if Ψ_i[u_i(x_i), u_i(y_i)] = Ψ_i[u_i(w_i), u_i(z_i)], i=1,...,k, u_i(x_i) = u_i(w_i) and u_i(z_i) = u_i(y_i) for each i=k+1,...,n.
On the basis of b), this means that there exists a function V: R^{k+2(n−k)} → R such that

V[Ψ_i[u_i(x_i), u_i(y_i)], i=1,...,k, u_i(x_i), u_i(y_i), i=k+1,...,n] = h(x,y).

One can see, moreover, that (x,y)D(w,z) and not (w,z)D(x,y) implies (x,y)T(w,z) and not (w,z)T(x,y), because otherwise we should have wSz and not xSy, which contradicts CCD. Let us remark that (x,y)D(w,z) and not (w,z)D(x,y) means Ψ_i[u_i(x_i), u_i(y_i)] ≥ Ψ_i[u_i(w_i), u_i(z_i)], for i=1,...,k, u_i(x_i) ≥ u_i(w_i) and u_i(z_i) ≥ u_i(y_i), for i=k+1,...,n, with at least one strict inequality. Thus, on the basis of a), V[Ψ_i[u_i(x_i), u_i(y_i)], i=1,...,k, u_i(x_i), u_i(y_i), i=k+1,...,n] is increasing in the first k arguments, increasing in each (k+s)-th argument with s odd and decreasing in each (k+s)-th argument with s even, where s=1,...,2(n−k).
If xSy and not wSz then, according to point 2) of the definition of T, (x,y)T(w,z) and not (w,z)T(x,y), because otherwise, as already shown, we should have Ψ_i[u_i(x_i), u_i(y_i)] = Ψ_i[u_i(w_i), u_i(z_i)], for i=1,...,k, u_i(x_i) = u_i(w_i) and u_i(z_i) = u_i(y_i), for i=k+1,...,n, which contradicts CCD. This gives to function V a discriminating capacity in the sense that for each x,y,w,z ∈ X, if xSy and not wSz then h(x,y) > h(w,z), and thus

V[Ψ_i[u_i(x_i), u_i(y_i)], i=1,...,k, u_i(x_i), u_i(y_i), i=k+1,...,n] >
V[Ψ_i[u_i(w_i), u_i(z_i)], i=1,...,k, u_i(w_i), u_i(z_i), i=k+1,...,n],

or, in other terms, for each (w,z) ∈ X×X such that not wSz

V[Ψ_i[u_i(w_i), u_i(z_i)], i=1,...,k, u_i(w_i), u_i(z_i), i=k+1,...,n] <
min_{sSt} V[Ψ_i[u_i(s_i), u_i(t_i)], i=1,...,k, u_i(s_i), u_i(t_i), i=k+1,...,n].

Given the function V[Ψ_i[u_i(x_i), u_i(y_i)], i=1,...,k, u_i(x_i), u_i(y_i), i=k+1,...,n], we can define the function G: R^{k+2(n−k)} → R as follows:

G[Ψ_i[u_i(w_i), u_i(z_i)], i=1,...,k, u_i(w_i), u_i(z_i), i=k+1,...,n] =
V[Ψ_i[u_i(w_i), u_i(z_i)], i=1,...,k, u_i(w_i), u_i(z_i), i=k+1,...,n] − Q,

where Q = min_{sSt} V[Ψ_i[u_i(s_i), u_i(t_i)], i=1,...,k, u_i(s_i), u_i(t_i), i=k+1,...,n].

Similarly to function V, function G is increasing in the first k arguments, increasing in each (k+s)-th argument with s odd and decreasing in each (k+s)-th argument with s even, where s=1,...,2(n−k). Thus, function G satisfies the monotonicity properties specified in proposition 2). Moreover, by definition and due to the discriminating capacity of function V, function G satisfies the following property: for all (x,y) ∈ X×X such that xSy we have
G[Ψ_i[u_i(x_i), u_i(y_i)], i=1,...,k, u_i(x_i), u_i(y_i), i=k+1,...,n] ≥ 0

and for all (w,z) ∈ X×X such that not wSz we have

G[Ψ_i[u_i(w_i), u_i(z_i)], i=1,...,k, u_i(w_i), u_i(z_i), i=k+1,...,n] < 0.
Thus, we proved that 1) ⇒ 2).
Now let us prove that 1) ⇒ 3). As in the proof of 1) ⇒ 2), C1(i), i=1,...,k, ensures that the binary relation S_i^* is a complete preorder on X_i × X_i for i=1,...,k, and C2, C3 and C4 ensure that S_i is a complete preorder on X_i for each i=1,...,n. Since S_i^*, i=1,...,k, is a complete preorder, its equivalence classes define a set of graded preference relations. For each pair (w,z) ∈ X×X such that wSz, the following D≥-decision rule can be built:

if x_1 P_1^{≥h_1} y_1 and ... x_k P_k^{≥h_k} y_k and x_{k+1} S_{k+1} r_{k+1} and ... x_n S_n r_n and s_{k+1} S_{k+1} y_{k+1} and ... s_n S_n y_n, then xSy.

The set of D≥-decision rules corresponding to all pairs (w,z) ∈ X×X such that wSz is obviously complete. Moreover, it is non-contradictory because otherwise CCD would not hold. Therefore, this set of D≥-decision rules represents the outranking relation S. This completes the proof of 1) ⇒ 3).
The proof of 1) ⇒ 4) is analogous to the proof of 1) ⇒ 3). For each pair (w,z) ∈ X×X such that not wSz, the following D≤-decision rule can be built:

if x_1 P_1^{≤h_1} y_1 and ... x_k P_k^{≤h_k} y_k and r_{k+1} S_{k+1} x_{k+1} and ... r_n S_n x_n and y_{k+1} S_{k+1} s_{k+1} and ... y_n S_n s_n, then not xSy.

The set of D≤-decision rules corresponding to all pairs (w,z) ∈ X×X such that not wSz is obviously complete. Moreover, it is non-contradictory because of CCD. Therefore, this set of D≤-decision rules represents the outranking relation S.
The proofs of 2) ⇒ 1), 3) ⇒ 1) and 4) ⇒ 1) are simple and left to the reader. □

Let us remark that the sets of D≥-decision rules and D≤-decision rules considered above in the proofs of 1) ⇒ 3) and 1) ⇒ 4), respectively, are not unique. There is a decision rule for each pair (w,z) ∈ X×X, so these sets are maximal in the sense that they contain all rules that can be defined with complete profiles (all criteria are considered). In practice, much more synthetic representations can be considered, involving fewer rules and partial profiles. The authors proved that the minimal (i.e. the most synthetic) representation of the outranking relation S by decision rules is unique (Greco, Matarazzo and Slowinski 2000b).
Two remarks are necessary with respect to Theorem 2.1:
1. As already pointed out in the introduction, proposition 2) of Theorem 2.1 is a bit more general than the corresponding proposition formulated by Bouyssou and Pirlot (1997). Theorem 2.1 assumes that condition C1(i) is satisfied for some criteria (i=1,...,k) and not satisfied for the others (i=k+1,...,n), while Bouyssou and Pirlot consider that C1(i) is satisfied for all criteria (i=1,...,n). This means that Theorem 2.1 permits to distinguish between:
- criteria on which it is possible to measure preference strength by function Ψ_i (these are the criteria for which property C1(i) is satisfied),
- criteria on which it is not possible to measure preference strength by function Ψ_i (these are the criteria for which property C1(i) is not satisfied).
Consequently, the functional representation of S proposed in Theorem 2.1 is

G[Ψ_i[u_i(x_i), u_i(y_i)], i=1,...,k, u_i(x_i), u_i(y_i), i=k+1,...,n] ≥ 0 iff xSy,

while the functional representation of S proposed by Bouyssou and Pirlot (1997), i.e. model (2), is

G[Ψ_i[u_i(x_i), u_i(y_i)], i=1,...,n] ≥ 0 iff xSy.
2. In Theorem 2.1 it is supposed that condition C4 is satisfied, i.e. C4(i) holds for all i=1,...,n. If this were not the case, then we would have the following consequences for the representation of S:
2a) As to the functional representation, if C4 is not satisfied, then there exist two functions u_i: X_i → R and v_i: X_i → R, i=1,...,n, such that

G[Ψ_i[u_i(x_i), v_i(y_i)], i=1,...,k, u_i(x_i), v_i(y_i), i=k+1,...,n] ≥ 0 iff xSy.

This means that, in order to represent the outranking between two actions x and y, we need a function u_i(x_i) defining the "value" of x_i and a function v_i(y_i) defining the "value" of y_i.
2b) As to the decision rule representation, if C4 is not satisfied, then S_i^+ and S_i^− can be non-compatible, so the decision rules must be
defined in terms of S_i^+ and S_i^− rather than in terms of S_i only. Precisely, given a pair (x,y) ∈ X×X having evaluations (x_i, y_i) with respect to criterion g_i, i=1,...,n,

• when comparing x_i ∈ X_i to a reference level r_i ∈ X_i, one has to use relation S_i^+ in the elementary condition of the rule, i.e. "if x_i S_i^+ r_i" in D≥-decision rules and "if r_i S_i^+ x_i" in D≤-decision rules,

• when comparing y_i ∈ X_i to a reference level s_i ∈ X_i, one has to use relation S_i^− in the elementary condition of the rule, i.e. "if s_i S_i^− y_i" in D≥-decision rules and "if y_i S_i^− s_i" in D≤-decision rules.

Thus, if C4 is not satisfied, a D≥-decision rule has the following syntax:

if x_{i1} P_{i1}^{≥h_{i1}} y_{i1} and ... x_{id} P_{id}^{≥h_{id}} y_{id} and x_{id+1} S_{id+1}^+ r_{id+1} and ... x_{ie} S_{ie}^+ r_{ie} and s_{ie+1} S_{ie+1}^− y_{ie+1} and ... s_{if} S_{if}^− y_{if}, then xSy.

The syntax of the D≤-decision rules must be modified analogously.

2.1 A didactic example


Let us consider a typical didactic decision problem about buying a car.
Suppose that the cars considered by a DM are evaluated on the criteria of
price and speed. For comparison of two cars on the criterion of price, the
DM accepts to use a scale of the strength of preference composed of the
following grades: "better", "indifferent", "worse". As to the criterion of
speed, the DM wants to express his/her preferences directly, in terms of
speed of the cars being compared. The scale of evaluation of single cars on
the criterion of speed is composed of three following grades: "high",
"medium", "low".
Table 2.1 presents all possible profiles of the pairs of cars with respect to
the two considered criteria. Let us observe that the outranking relation
specified on these pairs in Table 2.1 satisfies proposition 1) of Theorem 2.1.
In fact, it can be seen in the table that each time car x is preferred to car y at
least as much as car w is preferred to car z on both criteria, the situation "not
xSy and wSz" does not happen. Therefore, it is possible to build a function
representing the outranking relation and satisfying the conditions presented
in proposition 2) of Theorem 2.1. The values of this function are shown in
the last column of Table 2.1 (in the head of the column, G(x,y) = G[Ψ_1(u_1(x_1), u_1(y_1)), u_2(x_2), u_2(y_2)]). The outranking relation S is represented by the function G as follows:

G[Ψ_1(u_1(x_1), u_1(y_1)), u_2(x_2), u_2(y_2)] ≥ 0 ⇔ xSy,
G[Ψ_1(u_1(x_1), u_1(y_1)), u_2(x_2), u_2(y_2)] < 0 ⇔ not xSy,

where
• Ψ_1(u_1(x_1), u_1(y_1)) represents the strength of preference with respect to the price of cars x and y, equal to x_1 and y_1, respectively (u_1(x_1) and u_1(y_1) denote the utility of x_1 and y_1, respectively),
• u_2(x_2) and u_2(y_2) represent the utility of x_2 and y_2, equal to the speed of cars x and y, respectively.
Finally, according to proposition 3) of Theorem 2.1, the outranking relation from Table 2.1 can be represented by the following set of D≥-decision rules:
#1) "if x is at least indifferent to y on the price and the speed of x is at least medium and the speed of y is at most medium, then x is at least as good as y";
#2) "if x is (at least) better than y on the price and the speed of x is at least medium, then x is at least as good as y";
#3) "if x is at least indifferent to y on the price and the speed of x is (at least) high, then x is at least as good as y";
#4) for all uncovered pairs of cars (x,y), x is not at least as good as y.
Let us observe that, according to proposition 4) of Theorem 2.1, the outranking relation from Table 2.1 can also be represented by a set of D≤-decision rules:
#5) "if x is (at least) worse than y on the price, then x is not at least as good as y",
#6) "if the speed of x is (at most) low, then x is not at least as good as y",
#7) "if x is at most indifferent to y on the price and the speed of x is at most medium and the speed of y is (at least) high, then x is not at least as good as y",
#8) for all uncovered pairs of cars (x,y), x is at least as good as y.
The identity numbers of rules matching the corresponding pair of cars are
indicated in the column "Outranking" of Table 2.1.

Table 2.1 Outranking on all 27 cases of possible evaluations of pairs of cars

Pairs of cars | Price                  | Speed of x | Speed of y | Outranking     | G(x,y)
P1  | x is worse than y       | low    | high   | not S (#5,6,7) | -19
P2  | x is indifferent to y   | low    | high   | not S (#6,7)   | -10
P3  | x is better than y      | low    | high   | not S (#6)     | -6
P4  | x is worse than y       | medium | high   | not S (#5,7)   | -10
P5  | x is indifferent to y   | medium | high   | not S (#7)     | -4
P6  | x is better than y      | medium | high   | S (#2)         | 0
P7  | x is worse than y       | high   | high   | not S (#5)     | -6
P8  | x is indifferent to y   | high   | high   | S (#3)         | 0
P9  | x is better than y      | high   | high   | S (#2,3)       | 1
P10 | x is worse than y       | low    | medium | not S (#5,6)   | -13
P11 | x is indifferent to y   | low    | medium | not S (#6)     | -7
P12 | x is better than y      | low    | medium | not S (#6)     | -5
P13 | x is worse than y       | medium | medium | not S (#5)     | -7
P14 | x is indifferent to y   | medium | medium | S (#1)         | 0
P15 | x is better than y      | medium | medium | S (#1,2)       | 4
P16 | x is worse than y       | high   | medium | not S (#5)     | -5
P17 | x is indifferent to y   | high   | medium | S (#1,3)       | 4
P18 | x is better than y      | high   | medium | S (#1,2,3)     | 6
P19 | x is worse than y       | low    | low    | not S (#5,6)   | -8
P20 | x is indifferent to y   | low    | low    | not S (#6)     | -5
P21 | x is better than y      | low    | low    | not S (#6)     | -5
P22 | x is worse than y       | medium | low    | not S (#5)     | -5
P23 | x is indifferent to y   | medium | low    | S (#1)         | 1
P24 | x is better than y      | medium | low    | S (#1,2)       | 6
P25 | x is worse than y       | high   | low    | not S (#5)     | -4
P26 | x is indifferent to y   | high   | low    | S (#1,3)       | 6
P27 | x is better than y      | high   | low    | S (#1,2,3)     | 7
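The agreement between the two rule sets can be checked mechanically. The following Python sketch (ours; the integer encodings of price comparisons and speeds are assumptions made only for illustration) implements rules #1-#4 and #5-#8 and verifies that they induce the same outranking relation on all 27 condition profiles of Table 2.1.

```python
# Illustrative sketch (ours): the two rule sets of the car example, checked for
# agreement on all 27 condition profiles of Table 2.1.

from itertools import product

PRICE = {"worse": -1, "indifferent": 0, "better": 1}
SPEED = {"low": 0, "medium": 1, "high": 2}

def outranks_by_geq_rules(p, sx, sy):
    """Rules #1-#3 (D>=), with #4 as the default 'not S'."""
    if p >= PRICE["indifferent"] and sx >= SPEED["medium"] and sy <= SPEED["medium"]:  # #1
        return True
    if p >= PRICE["better"] and sx >= SPEED["medium"]:                                 # #2
        return True
    if p >= PRICE["indifferent"] and sx >= SPEED["high"]:                              # #3
        return True
    return False                                                                       # #4

def outranks_by_leq_rules(p, sx, sy):
    """Rules #5-#7 (D<=), with #8 as the default 'S'."""
    if p <= PRICE["worse"]:                                                            # #5
        return False
    if sx <= SPEED["low"]:                                                             # #6
        return False
    if p <= PRICE["indifferent"] and sx <= SPEED["medium"] and sy >= SPEED["high"]:    # #7
        return False
    return True                                                                        # #8

if __name__ == "__main__":
    for p, sx, sy in product(PRICE.values(), SPEED.values(), SPEED.values()):
        assert outranks_by_geq_rules(p, sx, sy) == outranks_by_leq_rules(p, sx, sy)
    print("D>= and D<= rule sets induce the same outranking on all 27 profiles")
```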

3. Preference inconsistencies and conjoint measurement
In this section we present a preference model that is able to represent the
preference inconsistencies considered in the rough approximation of an
outranking relation (Greco, Matarazzo and Slowinski 1999, 2000a, 2001).
Let us suppose that with respect to the first k (0 ≤ k ≤ n) criteria it is possible to define a preference strength, while this is not possible with respect to the other n−k criteria. Thus, for each criterion i=1,...,k there exists a set of graded preference relations P_i^{h_i}, h_i ∈ H_i, while for each criterion i=k+1,...,n there exists a marginal outranking relation S_i such that x_i S_i y_i means "x_i is at least

as good as yi". Si is strongly complete and transitive, therefore, it is a total


preorder.
Given (x,y),(w,z) ∈ X×X, (x,y) is said to dominate (w,z), denoted (x,y)D(w,z), if x is preferred to y at least as much as w is preferred to z on each criterion i=1,...,n. Precisely, for each i=1,...,k, "x_i is preferred to y_i at least as much as w_i is preferred to z_i" means "x_i is preferred to y_i by at least the same degree by which w_i is preferred to z_i", i.e. x_i P_i^{h_i} y_i implies w_i P_i^{≤h_i} z_i, where h_i ∈ H_i. For each i=k+1,...,n, "x_i is preferred to y_i at least as much as w_i is preferred to z_i" means x_i S_i w_i and z_i S_i y_i. The binary relation D is reflexive and transitive; therefore, it is a partial preorder.
We consider the following Consistency Condition with respect to Dominance (CCD): for x,y,w,z ∈ X, if x is preferred to y at least as much as w is preferred to z on each g_i, i=1,...,n, and w is at least as good as z, then x should also be at least as good as y, i.e. (x,y)D(w,z) and wSz should imply xSy. Condition CCD is rather unquestionable; however, it may not be satisfied in some real-world situations due to inconsistencies in the preferences. Pairs of actions (x,y) ∈ X×X satisfying CCD with respect to all pairs (w,z) ∈ X×X are called consistent. Within dominance-based rough set theory (Greco, Matarazzo and Slowinski 1996, 1999, Slowinski et al. 2000):
- the set of all consistent pairs (x,y) ∈ X×X such that xSy constitutes the lower approximation of S, denoted S̲,
- the set of all consistent pairs (x,y) ∈ X×X such that not xSy constitutes the lower approximation of not S, denoted NS,
- the set of all inconsistent pairs (x,y) ∈ X×X constitutes the boundary region, denoted B.
Intuitively,
- (x,y) ∈ S̲ can be interpreted as: "x is at least as good as y without any hesitation",
- (x,y) ∈ NS can be interpreted as: "x is not at least as good as y without any hesitation",
- (x,y) ∈ B can be interpreted as: "x is at least as good as y with some hesitation".
By definition, we obviously have: S̲ ∪ NS ∪ B = X×X, and S̲ ∩ NS = ∅, S̲ ∩ B = ∅, NS ∩ B = ∅.
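On a finite set of pairs, the lower approximations and the boundary can be computed directly from the dominance relation D. The following Python sketch (ours; `in_S` and `dominates` are assumed to be supplied by the user and are only illustrative names) applies the consistency test described above to each pair.

```python
# Minimal sketch (ours): lower approximations and boundary of an outranking relation
# under the dominance-based rough set approach described above.
# `pairs` is the list of pairs (x, y) under consideration, `in_S(p)` tells whether the
# DM stated "x is at least as good as y" for the pair p, and `dominates(p, q)`
# implements the relation D on pairs of actions.

def rough_approximations(pairs, in_S, dominates):
    lower_S, lower_not_S, boundary = [], [], []
    for p in pairs:
        if in_S(p):
            # consistent "S" pair: every pair dominating p is also an "S" pair
            consistent = all(in_S(q) for q in pairs if dominates(q, p))
        else:
            # consistent "not S" pair: every pair dominated by p is also a "not S" pair
            consistent = all(not in_S(q) for q in pairs if dominates(p, q))
        if not consistent:
            boundary.append(p)
        elif in_S(p):
            lower_S.append(p)
        else:
            lower_not_S.append(p)
    return lower_S, lower_not_S, boundary
```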

Theorem 3.1 For any reflexive relation S on X, as well as for any set of graded preference relations P_i^{h_i}, i=1,...,k, 0 ≤ k ≤ n, and for any outranking relation S_i being a complete preorder on X_i, i=k+1,...,n, there exist
• functions u_i: X_i → R, i=1,...,n, such that for each x_i, y_i ∈ X_i, u_i(x_i) ≥ u_i(y_i) implies x_i S_i y_i,
• functions Φ_i: X_i × X_i → R, i=1,...,k, such that for each x_i, y_i, w_i, z_i ∈ X_i, Φ_i(x_i, y_i) ≥ Φ_i(w_i, z_i) implies x_i P_i^{r_i} y_i and w_i P_i^{s_i} z_i with r_i ≥ s_i,
• a function G: R^{k+2(n−k)} → R, non-decreasing in the first k arguments, non-decreasing in each (k+s)-th argument with s odd (s=1,3,...,2(n−k)−1) and non-increasing in each (k+s)-th argument with s even (s=2,4,...,2(n−k)),
• two thresholds t_1, t_2 ∈ R, t_1 < t_2,
such that
G[Φ_i(x_i,y_i), i=1,...,k, u_i(x_i), u_i(y_i), i=k+1,...,n] ≥ t_2 iff (x,y) ∈ S̲,
G[Φ_i(x_i,y_i), i=1,...,k, u_i(x_i), u_i(y_i), i=k+1,...,n] ≤ t_1 iff (x,y) ∈ NS,
t_1 < G[Φ_i(x_i,y_i), i=1,...,k, u_i(x_i), u_i(y_i), i=k+1,...,n] < t_2 iff (x,y) ∈ B.

Proof. Since S_i is a complete preorder on X_i, i=k+1,...,n, there exist functions u_i: X_i → R such that, for each x_i, y_i ∈ X_i, u_i(x_i) ≥ u_i(y_i) implies x_i S_i y_i. Moreover, for each x_i, y_i ∈ X_i, i=1,...,k, let us set Φ_i(x_i, y_i) = h_i if x_i P_i^{h_i} y_i. Therefore, for each x,y,w,z ∈ X, (x,y)D(w,z) can also be expressed by Φ_i(x_i, y_i) ≥ Φ_i(w_i, z_i), i=1,...,k, and u_i(x_i) ≥ u_i(w_i) and u_i(z_i) ≥ u_i(y_i), i=k+1,...,n.

Now, we prove the following Coherence Conditions with respect to Dominance for rough Approximations (CCDA):
I) there is no (x,y) ∈ S̲ and (w,z) ∈ B ∪ NS such that (w,z)D(x,y),
II) there is no (x,y) ∈ B and (w,z) ∈ NS such that (w,z)D(x,y).
Let us consider condition I). If (x,y) ∈ S̲ and (w,z) ∈ B ∪ NS, three cases are possible:
i) (x,y) ∈ S̲, (w,z) ∈ B and not wSz: then, if (w,z)D(x,y), (x,y) could not belong to S̲ because, by definition, it would not be consistent;
ii) (x,y) ∈ S̲, (w,z) ∈ B and wSz: then, there should exist (u,v) such that not uSv and (u,v)D(w,z); therefore, if (w,z)D(x,y), by transitivity of D, (u,v)D(x,y) and (x,y) could not belong to S̲ because, by definition, it would not be consistent;
iii) (x,y) ∈ S̲, (w,z) ∈ NS: then, (x,y) ∈ S̲ implies xSy and (w,z) ∈ NS implies not wSz; therefore, if (w,z)D(x,y), (x,y) could not belong to S̲ because, by definition, it would not be consistent.
Condition II) can be proved analogously.
Let us now consider the following binary relation T on X×X, defined as follows: (x,y)T(w,z) if at least one of the three following conditions is satisfied
1) (x,y)D(w,z),
2) (x,y) ∈ S̲ and (w,z) ∈ B ∪ NS,
3) (x,y) ∈ B and (w,z) ∈ NS.
Let us observe that, due to the reflexivity of D, the binary relation T is also reflexive. Furthermore, T is transitive, i.e. for all (x,y),(w,z),(s,t) ∈ X×X, (x,y)T(w,z) and (w,z)T(s,t) imply (x,y)T(s,t). This can be proved by checking all the possible cases. For example, let us consider the following case: (x,y)D(w,z), (w,z) ∈ S̲ and (s,t) ∈ B ∪ NS. (x,y)D(w,z) and (w,z) ∈ S̲ imply (x,y) ∉ B ∪ NS, and thus (x,y) ∈ S̲. As this corresponds to condition 2) in the definition of T, we have (x,y)T(s,t).
Since T is reflexive and transitive, it is a partial preorder (Roubens and Vincke 1985). Thus, there is a function h: X×X → R such that for all (x,y),(w,z) ∈ X×X
a) (x,y)T(w,z) and not (w,z)T(x,y) implies h(x,y) > h(w,z),
b) (x,y)T(w,z) and (w,z)T(x,y) implies h(x,y) = h(w,z).
Considering all the possible cases, one can also prove that (x,y)T(w,z) and (w,z)T(x,y) if and only if (x,y)D(w,z) and (w,z)D(x,y), i.e. Φ_i(x_i,y_i) = Φ_i(w_i,z_i), for i=1,...,k, u_i(x_i) = u_i(w_i) and u_i(z_i) = u_i(y_i) for i=k+1,...,n. On the basis of b), this means that there exists G: R^{k+2(n−k)} → R such that

G[Φ_i(x_i,y_i), i=1,...,k, u_i(x_i), u_i(y_i), i=k+1,...,n] = h(x,y).


Furthermore, let us remark that (x,y)D(w,z) and not (w,z)D(x,y) implies (x,y)T(w,z) and not (w,z)T(x,y), because otherwise the above coherence conditions with respect to dominance for rough approximations would not be satisfied. Let us also remark that (x,y)D(w,z) and not (w,z)D(x,y) means Φ_i(x_i,y_i) ≥ Φ_i(w_i,z_i) for i=1,...,k, u_i(x_i) ≥ u_i(w_i) and u_i(z_i) ≥ u_i(y_i) for i=k+1,...,n, with at least one strict inequality. Thus, on the basis of a), G[Φ_i(x_i,y_i), i=1,...,k, u_i(x_i), u_i(y_i), i=k+1,...,n] is increasing in the first k arguments, increasing in each (k+s)-th argument with s odd and decreasing in each (k+s)-th argument with s even, where s=1,...,2(n−k). Therefore, function G satisfies the monotonicity condition required by the theorem.
If (x,y) ∈ S̲ and (w,z) ∈ NS ∪ B, then, due to condition 2) in the definition of T, (x,y)T(w,z) and not (w,z)T(x,y), because otherwise, as already shown, we should have Φ_i(x_i,y_i) = Φ_i(w_i,z_i), for i=1,...,k, u_i(x_i) = u_i(w_i) and u_i(z_i) = u_i(y_i), for i=k+1,...,n, which contradicts CCDA. This implies that for each x,y,w,z ∈ X, if (x,y) ∈ S̲ and (w,z) ∈ NS ∪ B, then h(x,y) > h(w,z) and thus

G[Φ_i(x_i,y_i), i=1,...,k, u_i(x_i), u_i(y_i), i=k+1,...,n] >
G[Φ_i(w_i,z_i), i=1,...,k, u_i(w_i), u_i(z_i), i=k+1,...,n],

or, in other terms, for each (w,z) ∈ X×X such that (w,z) ∈ NS ∪ B

G[Φ_i(w_i,z_i), i=1,...,k, u_i(w_i), u_i(z_i), i=k+1,...,n] <
min_{(r,t) ∈ S̲} G[Φ_i(r_i,t_i), i=1,...,k, u_i(r_i), u_i(t_i), i=k+1,...,n].

One can prove analogously that for each (w,z) ∈ X×X such that (w,z) ∈ S̲ ∪ B

G[Φ_i(w_i,z_i), i=1,...,k, u_i(w_i), u_i(z_i), i=k+1,...,n] >
max_{(r,t) ∈ NS} G[Φ_i(r_i,t_i), i=1,...,k, u_i(r_i), u_i(t_i), i=k+1,...,n].

Therefore, if we set

t_1 = max_{(r,t) ∈ NS} G[Φ_i(r_i,t_i), i=1,...,k, u_i(r_i), u_i(t_i), i=k+1,...,n] and
t_2 = min_{(r,t) ∈ S̲} G[Φ_i(r_i,t_i), i=1,...,k, u_i(r_i), u_i(t_i), i=k+1,...,n],

function G also satisfies the following properties:

G[Φ_i(x_i,y_i), i=1,...,k, u_i(x_i), u_i(y_i), i=k+1,...,n] ≥ t_2 iff (x,y) ∈ S̲,
G[Φ_i(x_i,y_i), i=1,...,k, u_i(x_i), u_i(y_i), i=k+1,...,n] ≤ t_1 iff (x,y) ∈ NS,
t_1 < G[Φ_i(x_i,y_i), i=1,...,k, u_i(x_i), u_i(y_i), i=k+1,...,n] < t_2 iff (x,y) ∈ B. □

The next result ensures that the outranking relation S considered in Theorem 3.1 can also be represented by a set R = {R≥, R≤, R≥≤} of decision rules, composed of a set R≥ of D≥-decision rules, a set R≤ of D≤-decision rules and a set R≥≤ of D≥≤-decision rules. Set R = {R≥, R≤, R≥≤} of decision rules is complete iff
• each pair (x,y) ∈ X×X such that (x,y) ∈ S̲ is covered by at least one D≥-decision rule belonging to R≥,
• each pair (x,y) ∈ X×X such that (x,y) ∈ NS is covered by at least one D≤-decision rule belonging to R≤,
• each pair (x,y) ∈ X×X such that (x,y) ∈ B is covered by at least one D≥≤-decision rule belonging to R≥≤.
Set R = {R≥, R≤, R≥≤} of decision rules is non-contradictory iff
• each D≥-decision rule belonging to R≥ covers only pairs (x,y) ∈ X×X such that (x,y) ∈ S̲,
• each D≤-decision rule belonging to R≤ covers only pairs (x,y) ∈ X×X such that (x,y) ∈ NS,
• each D≥≤-decision rule belonging to R≥≤ covers only pairs (x,y) ∈ X×X such that (x,y) ∈ B.
We say that set R = {R≥, R≤, R≥≤} of decision rules represents the outranking relation S on X iff it is complete and non-contradictory.

Theorem 3.2 For any reflexive relation S on X, for any set of graded preference relations P_i^{h_i}, i=1,...,k, and for any outranking relation S_i being a complete preorder on X_i, i=k+1,...,n, there exists a set R = {R≥, R≤, R≥≤} of decision rules representing the outranking relation S on X.
Proof. For each pair (w,z) ∈ S̲ one can build the following D≥-decision rule:

if x_1 P_1^{≥h_1} y_1 and ... x_k P_k^{≥h_k} y_k and x_{k+1} S_{k+1} r_{k+1} and ... x_n S_n r_n and s_{k+1} S_{k+1} y_{k+1} and ... s_n S_n y_n, then xSy,

where (w_1,z_1) ∈ P_1^{h_1}, ..., (w_k,z_k) ∈ P_k^{h_k}, w_{k+1} = r_{k+1}, ..., w_n = r_n, z_{k+1} = s_{k+1}, ..., z_n = s_n.

For each pair (w,z) ∈ NS one can build the following D≤-decision rule:

if x_1 P_1^{≤h_1} y_1 and ... x_k P_k^{≤h_k} y_k and r_{k+1} S_{k+1} x_{k+1} and ... r_n S_n x_n and y_{k+1} S_{k+1} s_{k+1} and ... y_n S_n s_n, then not xSy.

Analogously, for each inconsistent pair (w,z) ∈ B, a D≥≤-decision rule can be built.
The obtained set R = {R≥, R≤, R≥≤} of decision rules is obviously complete. Moreover, it is non-contradictory because of CCDA. Therefore, the set R = {R≥, R≤, R≥≤} of decision rules represents the outranking relation S. □

Let us remark that the set R = {R≥, R≤, R≥≤} of decision rules considered in the proof of Theorem 3.2 is not unique. In practice, much more synthetic representations can be considered, involving fewer rules and partial profiles. The authors proved that the minimal (i.e. the most synthetic) representation of the rough approximations S̲, NS and B by decision rules is unique (Greco, Matarazzo and Slowinski 2000b).

3.1 An example of inconsistent outranking


Let us consider another version of the example presented in section 2.1.
As before, the DM has compared pairs of cars according to criteria of price
(for which strength of preference can be measured) and speed (for which
strength of preference cannot be measured). However now, in the pairwise
comparisons of the cars there are some inconsistencies in the sense of
dominance. All the 27 possible cases of comparison are presented in Table
3.1. As can be seen in the table, several pairs of cars are inconsistent in the
sense of dominance. For example, the pair (x,y) in P7 is dominated by the
pair (w,z) in P8. In fact, w is preferred to z at least as much as x is preferred
to y on the two considered criteria:
- with respect to the price, w is indifferent to z while x is worse than y
- with respect to the speed, w and z have the same high speed as x and y.
However, x is at least as good as y while w is not at least as good as z.
Two other pairs of cars are also inconsistent in the sense of dominance: (P7, P25) and (P16, P25).
The above inconsistencies represent situations of hesitation in comparing
pairs of cars.
In the last column of Table 3.1 there are values of the function G[Φ_1(x_1,y_1), u_2(x_2), u_2(y_2)] considered in Theorem 3.1. In the head of this column, G(x,y) = G[Φ_1(x_1,y_1), u_2(x_2), u_2(y_2)], where
- Φ_1(x_1,y_1) represents the strength of preference with respect to the price of cars x and y, equal to x_1 and y_1, respectively,
- u_2(x_2) and u_2(y_2) represent the utility of x_2 and y_2, equal to the speed of cars x and y, respectively.
The thresholds t_1, t_2 are set to the values t_1 = −4, t_2 = 4, and the approximations of the outranking relations S and S^c are represented as follows:
G[Φ_1(x_1,y_1), u_2(x_2), u_2(y_2)] ≥ 4 ⇔ (x,y) ∈ S̲ (i.e. "x is at least as good as y without hesitation"),
G[Φ_1(x_1,y_1), u_2(x_2), u_2(y_2)] ≤ −4 ⇔ (x,y) ∈ NS (i.e. "x is not at least as good as y without hesitation"),
−4 < G[Φ_1(x_1,y_1), u_2(x_2), u_2(y_2)] < 4 ⇔ (x,y) ∈ B (i.e. "x is at least as good as y, or x is not at least as good as y, with some hesitation between the two possibilities").
The outranking relation from Table 3.1 can be represented by the
following set ofD~-, D~- and D~~-decision rules:

#1) "if x is (at least) better than y on the price, then x is at least as good as y",
#2) "if x is at least indifferent to y (i.e. indifferent or better) on the price
and the speed of y is at most medium, then x is at least as good as y ",
#3) "if x is (at least) worse than y on the price and the speed ofx is at most
medium, then x is not at least as good as y",
#4) "if x is at most indifferent to Y (i.e. indifferent or worse) on the price
and the speed of x is at most medium and the speed of y is (at least)
high, then x is not at least as good as y",
#5) "if x is (at least) worse than y on the price and the speed of x is (at
least) high, then x is at least as good as y or x is not at least as good as y",
#6) "if x is at most indifferent to y (i.e.
indifferent or worse) on the price
and the speed of x and Y is (at least) high, then x is at least as good as
y or x is not at least as good as y".
The identity numbers of rules matching the corresponding pair of cars are
indicated in the column "Outranking" of Table 3.1.

Table 3.1 Inconsistent outranking on all 27 cases of possible evaluations of pairs of cars

Pair | Price                  | Speed of x | Speed of y | Outranking      | Lower appx. / boundary | G(x,y)
P1   | x is worse than y      | low        | high       | not xSy (#3,4)  | NS | -11
P2   | x is indifferent to y  | low        | high       | not xSy (#4)    | NS | -5
P3   | x is better than y     | low        | high       | xSy (#1)        | S  | 7
P4   | x is worse than y      | medium     | high       | not xSy (#3,4)  | NS | -7
P5   | x is indifferent to y  | medium     | high       | not xSy (#4)    | NS | -4
P6   | x is better than y     | medium     | high       | xSy (#1)        | S  | 10
P7   | x is worse than y      | high       | high       | xSy (#5,6)      | B  | -3
P8   | x is indifferent to y  | high       | high       | not xSy (#6)    | B  | 0
P9   | x is better than y     | high       | high       | xSy (#1)        | S  | 13
P10  | x is worse than y      | low        | medium     | not xSy (#3)    | NS | -7
P11  | x is indifferent to y  | low        | medium     | xSy (#2)        | S  | 4
P12  | x is better than y     | low        | medium     | xSy (#1,2)      | S  | 10
P13  | x is worse than y      | medium     | medium     | not xSy (#3)    | NS | -5
P14  | x is indifferent to y  | medium     | medium     | xSy (#2)        | S  | 8
P15  | x is better than y     | medium     | medium     | xSy (#1,2)      | S  | 12
P16  | x is worse than y      | high       | medium     | xSy (#5)        | B  | -1
P17  | x is indifferent to y  | high       | medium     | xSy (#2)        | S  | 12
P18  | x is better than y     | high       | medium     | xSy (#1,2)      | S  | 14
P19  | x is worse than y      | low        | low        | not xSy (#3)    | NS | -5
P20  | x is indifferent to y  | low        | low        | xSy (#2)        | S  | 10
P21  | x is better than y     | low        | low        | xSy (#1,2)      | S  | 13
P22  | x is worse than y      | medium     | low        | not xSy (#3)    | NS | -4
P23  | x is indifferent to y  | medium     | low        | xSy (#2)        | S  | 12
P24  | x is better than y     | medium     | low        | xSy (#1,2)      | S  | 14
P25  | x is worse than y      | high       | low        | not xSy (#5)    | B  | 0
P26  | x is indifferent to y  | high       | low        | xSy (#2)        | S  | 14
P27  | x is better than y     | high       | low        | xSy (#1,2)      | S  | 15
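To make the thresholding above concrete, the following short Python sketch (our own illustration, not part of the chapter) assigns a pair of cars to the lower approximation S, to NS, or to the boundary B from its value G(x,y), using the thresholds t_1 = -4 and t_2 = 4; the function name and the sample values read from Table 3.1 are purely illustrative.

T1, T2 = -4, 4

def assign_pair(g_value):
    """Return 'S', 'NS' or 'B' for a pair of cars according to G(x,y), t1 and t2."""
    if g_value >= T2:
        return "S"    # x is at least as good as y without hesitation
    if g_value <= T1:
        return "NS"   # x is not at least as good as y without hesitation
    return "B"        # hesitation between the two possibilities

# a few values of G(x,y) read from Table 3.1
for pair, g in [("P1", -11), ("P3", 7), ("P7", -3), ("P8", 0), ("P25", 0)]:
    print(pair, assign_pair(g))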

3.2 Representation of a four-valued outranking


The result that we are going to present also concerns representation of
inconsistent preferences. It differs, however, from Theorems 3.1 and 3.2 by
the following two features:
1. In Theorems 3.1 and 3.2 we considered a single outranking relation S;
in this section we are considering two relations: an outranking
relation S and a negative outranking relation S^c. They have the following
interpretation:

• xSy means that there are some arguments in favor of the statement "x
is at least as good as y", i.e. there are some arguments in favor of the
outranking of x over y,
• xS^c y means that there are some arguments against the statement "x is
at least as good as y", i.e. there are some arguments against the
outranking of x over y.
While the only minimal requirement imposed on S is its reflexivity, the
only minimal requirement imposed on S^c is its irreflexivity, i.e. for each
x∈X, not xS^c x.

As the arguments for the outranking and for the negative outranking may
come from different sources, the relations S and S^c are not
complementary. Thus, in general, not S ≠ S^c. In consequence, the four
following situations are possible:
• uSv and not uS^c v, that is true outranking (denotation uS^T v),
• uS^c v and not uSv, that is false outranking (denotation uS^F v),
• uSv and uS^c v, that is contradictory outranking (denotation uS^K v),
• not uSv and not uS^c v, that is unknown outranking (denotation uS^U v).
The above four situations together constitute the so-called four-valued
outranking (see Tsoukias and Vincke 1995, 1997). It has been
introduced to underline the presence and the absence of positive and
negative reasons for the outranking. Moreover, it makes it possible to distinguish
contradictory situations from unknown ones.

2. In Theorems 3.1 and 3.2 it is assumed that there exists a set of graded
preferences P_i^h with respect to each criterion i for which it is possible to
define a preference strength, and one marginal outranking relation S_i with
respect to each criterion for which it is not possible to define a
preference strength. In this section, the graded preferences P_i^h and/or
the marginal outranking relations S_i, for i=1,...,n, do not pre-exist but
must be constructed starting from S and S^c. To permit an appropriate
construction, S and S^c have to satisfy some specific conditions, in a
sense analogous to conditions C1(i) - C4(i) above.
Considering the comprehensive negative outranking binary relation S^c, two
marginal outranking relations S_i^{*c} and S_i^{**c} can be defined with respect to
each g_i, i=1,...,n, as follows:

x_i S_i^{*c} y_i iff for all a_{-i}∈X_{-i} and z∈X, [(x_i, a_{-i}) S^c z ⟹ (y_i, a_{-i}) S^c z],

x_i S_i^{**c} y_i iff for all a_{-i}∈X_{-i} and z∈X, [z S^c (y_i, a_{-i}) ⟹ z S^c (x_i, a_{-i})].

Let us remark that x_i S_i^{*c} y_i and x_i S_i^{**c} y_i must be read: "x_i is at least as
good as y_i on the basis of S^c".
Due to the implication in their definitions, the binary relations S_i^{*c} and S_i^{**c}
are transitive.
The comparison of the strength of preference of x_i over y_i with the
strength of preference of w_i over z_i can be defined on the basis of the
comprehensive negative outranking binary relation S^c and expressed by the
binary relation S_i^{#c} defined on X_i×X_i in the following way:

(x_i, y_i) S_i^{#c} (w_i, z_i) iff for all v_{-i}, t_{-i}∈X_{-i},
(x_i, v_{-i}) S^c (y_i, t_{-i}) implies (w_i, v_{-i}) S^c (z_i, t_{-i}).

The relation (x_i, y_i) S_i^{#c} (w_i, z_i) reads: "on the basis of the negative
outranking S^c, the strength of preference of x_i over y_i is at least as great as
the strength of preference of w_i over z_i".
Due to the implication in its definition, the binary relation S_i^{#c} is transitive.
While S_i^{#} is coherent with S_i^{*} and S_i^{**}, S_i^{#c} is coherent with S_i^{*c} and S_i^{**c}, i.e.

x_i S_i^{*c} y_i implies [(y_i, w_i) S_i^{#c} (z_i, u_i) ⟹ (x_i, w_i) S_i^{#c} (z_i, u_i)],

x_i S_i^{**c} y_i implies [(w_i, x_i) S_i^{#c} (z_i, u_i) ⟹ (w_i, y_i) S_i^{#c} (z_i, u_i)].
Representation of an outranking relation S and a negative outranking
relation S^c proposed below is based on the following set of cancellation
properties:
1a) for all i=1,...,k, for all x_i,y_i,z_i,w_i∈X_i, a_{-i},b_{-i},c_{-i},d_{-i}∈X_{-i} we have
C1(i) [(x_i, a_{-i})S(y_i, b_{-i}) and (w_i, c_{-i})S(z_i, d_{-i})] ⟹
[(w_i, a_{-i})S(z_i, b_{-i}) or (x_i, c_{-i})S(y_i, d_{-i})];
C1'(i) [(x_i, a_{-i})S^c(y_i, b_{-i}) and (w_i, c_{-i})S^c(z_i, d_{-i})] ⟹
[(w_i, a_{-i})S^c(z_i, b_{-i}) or (x_i, c_{-i})S^c(y_i, d_{-i})];
C1"(i) [(x_i, a_{-i})S(y_i, b_{-i}) and (x_i, c_{-i})S^c(y_i, d_{-i})] ⟹
[(w_i, a_{-i})S(z_i, b_{-i}) or (w_i, c_{-i})S^c(z_i, d_{-i})];

C2(i) [(x_i, a_{-i})S(y_i, b_{-i}) and (w_i, c_{-i})S(z_i, d_{-i})] ⟹
[(w_i, a_{-i})S(y_i, b_{-i}) or (x_i, c_{-i})S(z_i, d_{-i})];
C2'(i) [(x_i, a_{-i})S^c(y_i, b_{-i}) and (w_i, c_{-i})S^c(z_i, d_{-i})] ⟹
[(w_i, a_{-i})S^c(y_i, b_{-i}) or (x_i, c_{-i})S^c(z_i, d_{-i})];
C2"(i) [(x_i, a_{-i})S(y_i, b_{-i}) and (x_i, c_{-i})S^c(z_i, d_{-i})] ⟹
[(w_i, a_{-i})S(y_i, b_{-i}) or (w_i, c_{-i})S^c(z_i, d_{-i})];
C3(i) [(x_i, a_{-i})S(y_i, b_{-i}) and (w_i, c_{-i})S(z_i, d_{-i})] ⟹
[(x_i, a_{-i})S(z_i, b_{-i}) or (w_i, c_{-i})S(y_i, d_{-i})];
C3'(i) [(x_i, a_{-i})S^c(y_i, b_{-i}) and (w_i, c_{-i})S^c(z_i, d_{-i})] ⟹
[(x_i, a_{-i})S^c(z_i, b_{-i}) or (w_i, c_{-i})S^c(y_i, d_{-i})];
C3"(i) [(x_i, a_{-i})S(y_i, b_{-i}) and (w_i, c_{-i})S^c(y_i, d_{-i})] ⟹
[(x_i, a_{-i})S(z_i, b_{-i}) or (w_i, c_{-i})S^c(z_i, d_{-i})];
C4(i) [(x_i, a_{-i})S(y_i, b_{-i}) and (w_i, c_{-i})S(x_i, d_{-i})] ⟹
[(z_i, a_{-i})S(y_i, b_{-i}) or (w_i, c_{-i})S(z_i, d_{-i})];
C4'(i) [(x_i, a_{-i})S^c(y_i, b_{-i}) and (w_i, c_{-i})S^c(x_i, d_{-i})] ⟹
[(z_i, a_{-i})S^c(y_i, b_{-i}) or (w_i, c_{-i})S^c(z_i, d_{-i})];
C5(i) [(x_i, a_{-i})S(y_i, b_{-i}) and (w_i, c_{-i})S^c(z_i, d_{-i})] ⟹
[(z_i, a_{-i})S(y_i, b_{-i}) or (w_i, c_{-i})S^c(x_i, d_{-i})];
C5'(i) [(x_i, a_{-i})S(y_i, b_{-i}) and (w_i, c_{-i})S^c(z_i, d_{-i})] ⟹
[(x_i, a_{-i})S(w_i, b_{-i}) or (y_i, c_{-i})S^c(z_i, d_{-i})].
Conditions C1'(i), C2'(i), C3'(i) and C4'(i) with respect to S^c have
an interpretation analogous to conditions C1(i), C2(i), C3(i) and C4(i) with
respect to S, i.e.

1') condition C1'(i) ensures that, on the basis of S^c, the relation S_i^{#c}
(i=1,...,k) is a complete preorder on X_i×X_i and it is fully meaningful to
speak about the strength of preference; otherwise the strength of
preference is not meaningful for criterion i,

2') condition C2'(i) says that the marginal outranking relation S_i^{*c} (i=1,...,n) is a
complete preorder on X_i,
3') analogously, condition C3'(i) says that the marginal outranking relation S_i^{**c}
(i=1,...,n) is a complete preorder on X_i,

4') condition C4'(i) ensures, for i=1,...,n, that the orderings obtained from
S_i^{*c} and S_i^{**c} are compatible, i.e. there is no x_i,y_i∈X_i such that x_i S_i^{*c} y_i
and not y_i S_i^{*c} x_i (i.e. x_i is preferred to y_i on the basis of S_i^{*c}), and not
x_i S_i^{**c} y_i and y_i S_i^{**c} x_i (i.e. y_i is preferred to x_i on the basis of S_i^{**c}).

To show that properties based on C1'(i), C2'(i), C3'(i) and C4'(i) with
respect to S^c can be proved analogously to the corresponding properties based
on C1(i), C2(i), C3(i) and C4(i) with respect to S, let us consider, as an
example, condition C1'(i). It ensures that, on the basis of S^c, the relation S_i^{#c}
(i=1,...,k) is strongly complete: in fact, S_i^{#c} would not be strongly complete
if there were x_i,y_i,z_i,w_i∈X_i, a_{-i},b_{-i},c_{-i},d_{-i}∈X_{-i} such that

i) (x_i,a_{-i}) S^c (y_i,b_{-i}) and not (w_i,a_{-i}) S^c (z_i,b_{-i}): by definition of S_i^{#c} this
implies not (x_i,y_i) S_i^{#c} (w_i,z_i);

ii) (w_i,c_{-i}) S^c (z_i,d_{-i}) and not (x_i,c_{-i}) S^c (y_i,d_{-i}): by definition of S_i^{#c} this
implies not (w_i,z_i) S_i^{#c} (x_i,y_i).

However, by C1'(i) we cannot have i) and ii) at the same time.
Therefore, if condition C1'(i) holds, the binary relation S_i^{#c} is a complete
preorder, because it is transitive (as pointed out before, by definition) and
strongly complete (by C1'(i)).

Conditions C1"(i), C2"(i) and C3"(i) ensure coherence between the relations
S_i^{#}, S_i^{*}, S_i^{**} on one side and the relations S_i^{#c}, S_i^{*c}, S_i^{**c} on the other side.
More precisely,

1") C1"(i) says that S_i^{#} and S_i^{#c} are compatible, i.e. there is no x_i,y_i,w_i,z_i∈X_i
such that (x_i,y_i) S_i^{#} (w_i,z_i) and not (w_i,z_i) S_i^{#} (x_i,y_i) (i.e. x_i is preferred to y_i
more than w_i is preferred to z_i on the basis of S_i^{#}), and (w_i,z_i) S_i^{#c} (x_i,y_i)
and not (x_i,y_i) S_i^{#c} (w_i,z_i) (i.e. w_i is preferred to z_i more than x_i is preferred
to y_i on the basis of S_i^{#c}),

2") C2"(i) says that S_i^{*} and S_i^{*c} are compatible, i.e. there is no x_i,y_i∈X_i such
that x_i S_i^{*} y_i and not y_i S_i^{*} x_i (i.e. x_i is preferred to y_i on the basis of S_i^{*}),
and y_i S_i^{*c} x_i and not x_i S_i^{*c} y_i (i.e. y_i is preferred to x_i on the basis of S_i^{*c}),

3")C3"(i) says that S~ and S~c are compatible, i.e. there is no Xj,YiEXi such
that Xi S~ Yi and not Yi S~ Xi (i.e. Xi is preferred to Yi on the basis of S~),
and YiS~cXi and not XiS~cYi (i.e. Yi is preferred to Xi on the basis of
S~C).

Finally, condition C5(i) ensures coherence between relations sf and


S~c , and condition C5'(i) ensures compatibility between relations S~ and
sfk . More precisely,

5) C5(i) says that sf and S~c are compatible, i.e. there is no Xj,YiEXi such
that Xi sf Yi and not Yi sf Xi (i.e. Xi is preferred to Yi on the basis of sf),
and Yi S~c Xi and not Xi S~c Yi (i.e. Yi is preferred to Xi on the basis of
S~C)
1 ,

5') C5'(i) says that S~ and sfk are compatible, i.e. there is no Xj,YiEXi such
that Xi S~ Yi and not Yi S~ Xi (i.e. Xi is preferred to Yi on the basis of S~),
and Yi sfk Xi and not Xi sfk Yi (i.e. Yi is preferred to Xi on the basis of sfk ).

If the marginal outranking relations S_i^{*}, S_i^{**}, S_i^{*c} and S_i^{**c} are complete
preorders (i.e. if conditions C2(i), C3(i), C2'(i), C3'(i) hold) and are all
pairwise compatible (i.e. if conditions C2"(i), C3"(i), C4(i), C4'(i), C5(i),
C5'(i) hold), then the binary relation S_i = S_i^{*} ∩ S_i^{**} ∩ S_i^{*c} ∩ S_i^{**c} is a complete
preorder. In fact, S_i is transitive because it is obtained as the intersection of
transitive binary relations. Moreover, S_i is strongly complete because the
relations S_i^{*}, S_i^{**}, S_i^{*c}, S_i^{**c} are strongly complete and pairwise compatible.
Remark that S_i would not be strongly complete in one of the two following
cases:

i) if there was K∈{S_i^{*}, S_i^{**}, S_i^{*c}, S_i^{**c}} and x_i,y_i∈X_i such that not x_i K y_i and
not y_i K x_i, but this is impossible because S_i^{*}, S_i^{**}, S_i^{*c} and S_i^{**c} are all
strongly complete;

ii) if there were K_1, K_2∈{S_i^{*}, S_i^{**}, S_i^{*c}, S_i^{**c}} and x_i,y_i∈X_i such that not x_i K_1 y_i
and not y_i K_2 x_i, but by the strong completeness of S_i^{*}, S_i^{**}, S_i^{*c}, S_i^{**c} and,
in consequence, by the strong completeness of K_1 and K_2, not x_i K_1 y_i
implies y_i K_1 x_i and not y_i K_2 x_i implies x_i K_2 y_i; this gives y_i K_1 x_i and not
x_i K_1 y_i, and x_i K_2 y_i and not y_i K_2 x_i, which is impossible because of the
pairwise compatibility of S_i^{*}, S_i^{**}, S_i^{*c} and S_i^{**c}.

Analogously, if the binary relations S_i^{#} and S_i^{#c} are complete preorders
(i.e. C1(i) and C1'(i) hold) and are mutually compatible (i.e. C1"(i) holds),
then the binary relation S_i^{#} ∩ S_i^{#c} is a complete preorder. Therefore, a set of
graded preference relations P_i^{h_i} on X_i can be defined corresponding to the
equivalence classes of the binary relation S_i^{#} ∩ S_i^{#c}, being a complete
preorder. Let us remark that since S_i^{#} is coherent with S_i^{*} and S_i^{**}, and S_i^{#c}
is coherent with S_i^{*c} and S_i^{**c}, then S_i^{#} ∩ S_i^{#c} is coherent with S_i.

The next result concerns the representation of an outranking relation S and a
negative outranking relation S^c defined on X by a set of decision rules
R = {R≥, R≤} composed of a set R≥ of D≥-decision rules and a set R≤ of
D≤-decision rules. In particular, the set R = {R≥, R≤} of decision rules is complete iff

• each pair (x,y)∈X×X, such that xSy, is covered by at least one
D≥-decision rule belonging to R≥,

• each pair (x,y)∈X×X, such that xS^c y, is covered by at least one
D≤-decision rule belonging to R≤.
The set R = {R≥, R≤} of decision rules is non-contradictory iff

• each D≥-decision rule belonging to R≥ covers only pairs (x,y)∈X×X
such that xSy,
• each D≤-decision rule belonging to R≤ covers only pairs (x,y)∈X×X
such that xS^c y.
We say that the set R = {R≥, R≤} of decision rules represents the outranking
relation S and the negative outranking relation S^c on X iff it is complete and
non-contradictory.

Theorem 3.3 (Greco, Matarazzo and Slowinski 2000b) Let S be a reflexive
relation on X and let S^c be an irreflexive relation on X. The following three
propositions are equivalent:
1) conditions 1a) and 1b) above are satisfied;
2) there exist
• functions u_i:X_i→R, for i=1,...,n,
• functions Ψ_i:R×R→R, for i=1,...,k, 0 ≤ k ≤ n, non-decreasing in the
first argument and non-increasing in the second argument,
• two functions G:R^{k+2(n-k)}→R and G^c:R^{k+2(n-k)}→R, non-decreasing in
the first k arguments, non-decreasing in each (k+s)-th argument with
s odd (s=1,3,...,2(n-k)-1) and non-increasing in each (k+s)-th
argument with s even (s=2,4,...,2(n-k)), s=1,...,2(n-k),
such that
G[Ψ_i(u_i(x_i), u_i(y_i)), i=1,...,k, u_i(x_i), u_i(y_i), i=k+1,...,n] ≥ 0 iff xSy,
G^c[Ψ_i(u_i(x_i), u_i(y_i)), i=1,...,k, u_i(x_i), u_i(y_i), i=k+1,...,n] < 0 iff xS^c y;

3) there exist

• a marginal outranking relation S_i = S_i^{*} ∩ S_i^{**} ∩ S_i^{*c} ∩ S_i^{**c} for each
i=1,...,n, being a complete preorder on X_i,

• two binary relations S_i^{#} and S_i^{#c} on X_i×X_i for each i=1,...,k, being
complete preorders, which are mutually compatible, such that also
S_i^{#} ∩ S_i^{#c} is a complete preorder and from its equivalence classes a set
of graded preference relations P_i^{h_i} on X_i can be defined,
• a set R = {R≥, R≤} of decision rules representing the outranking
relation S and the negative outranking relation S^c.
Proof. First we prove that 1) ⟹ 2). Since S_i is a complete preorder for each
i=1,...,n, there exists a function u_i:X_i→R such that, for each x_i,y_i∈X_i,
x_i S_i y_i if and only if u_i(x_i) ≥ u_i(y_i). Moreover, since also the binary relation
S_i^{#} ∩ S_i^{#c}, for i=1,...,k, is a complete preorder coherent with the binary relation
S_i, there exists a function Ψ_i:R^2→R, non-decreasing in the first
argument and non-increasing in the second argument, such that
(x_i,y_i) S_i^{#} ∩ S_i^{#c} (w_i,z_i) if and only if Ψ_i(u_i(x_i),u_i(y_i)) ≥ Ψ_i(u_i(w_i),u_i(z_i)). On the
basis of the binary relations S_i^{#} ∩ S_i^{#c}, i=1,...,k, and S_i, i=k+1,...,n, a dominance
relation D can be defined on X×X as follows: for each (x,y),(w,z)∈X×X

(x,y)D(w,z) iff (x_i,y_i) S_i^{#} ∩ S_i^{#c} (w_i,z_i) for each i=1,...,k, and x_i S_i w_i and
z_i S_i y_i for each i=k+1,...,n.

The definition of the binary relations S_i^{#} ∩ S_i^{#c}, i=1,...,k, and S_i,
i=k+1,...,n, implies the following Coherence Condition with respect to
Dominance for the outranking relation S (CCDS):
there is no x,y,w,z∈X such that (x,y)D(w,z), and wSz and not xSy.
Analogously, there exists the following Coherence Condition with
respect to Dominance for the negative outranking relation S^c (CCDSc):
there is no x,y,w,z∈X such that (x,y)D(w,z), and not wS^c z and xS^c y.
As far as the representation of the outranking relation S is concerned, on the
basis of the dominance relation D and CCDS, a function G:R^{k+2(n-k)}→R,
increasing in the first k arguments, increasing in each (k+s)-th argument with
s odd (s=1,3,...,2(n-k)-1) and decreasing in each (k+s)-th argument with s
even (s=2,4,...,2(n-k)), where s=1,...,2(n-k), can be built in the way described in
Theorem 2.1, such that
G[Ψ_i(u_i(x_i),u_i(y_i)), i=1,...,k, u_i(x_i), u_i(y_i), i=k+1,...,n] ≥ 0 iff xSy.
Let us concentrate, therefore, on the representation of the negative outranking
relation S^c. To this end, let us consider the following binary relation T on
X×X: (x,y)T(w,z) if at least one of the two following
conditions is satisfied:
1) (x,y)D(w,z),
2) not xS^c y and wS^c z.
On the basis of the dominance relation D and CCDSc, one can prove that
the binary relation T is reflexive and transitive, i.e. it is a partial preorder.
Therefore, there is a function h^c:X×X→R such that, for each
(x,y),(w,z)∈X×X, (x,y)T(w,z) implies h^c(x,y) ≥ h^c(w,z).
Taking into account the functions Ψ_i, i=1,...,k, and u_i, i=k+1,...,n, and
CCDSc, one can construct a function V^c:R^{k+2(n-k)}→R, increasing in the first k
arguments, increasing in each (k+s)-th argument with s odd and decreasing
in each (k+s)-th argument with s even, where s=1,...,2(n-k), such that
V^c[Ψ_i[u_i(x_i), u_i(y_i)], i=1,...,k, u_i(x_i), u_i(y_i), i=k+1,...,n] = h^c(x,y). On the basis of
CCDSc, the following property of discriminating capacity of the functions h^c
and V^c holds: for each x,y,w,z∈X, if not xS^c y and wS^c z, then h^c(x,y) > h^c(w,z)
and thus
V^c[Ψ_i[u_i(x_i), u_i(y_i)], i=1,...,k, u_i(x_i), u_i(y_i), i=k+1,...,n] >
V^c[Ψ_i[u_i(w_i), u_i(z_i)], i=1,...,k, u_i(w_i), u_i(z_i), i=k+1,...,n],
or, in other terms, for each (w,z)∈X×X such that wS^c z,
V^c[Ψ_i[u_i(w_i), u_i(z_i)], i=1,...,k, u_i(w_i), u_i(z_i), i=k+1,...,n] <
min_{not sS^c t} V^c[Ψ_i[u_i(s_i), u_i(t_i)], i=1,...,k, u_i(s_i), u_i(t_i), i=k+1,...,n].

Given the function V^c[Ψ_i[u_i(x_i), u_i(y_i)], i=1,...,k, u_i(x_i), u_i(y_i), i=k+1,...,n], one
can define the function G^c:R^{k+2(n-k)}→R as follows:
G^c[Ψ_i[u_i(w_i), u_i(z_i)], i=1,...,k, u_i(w_i), u_i(z_i), i=k+1,...,n] =
V^c[Ψ_i[u_i(w_i), u_i(z_i)], i=1,...,k, u_i(w_i), u_i(z_i), i=k+1,...,n] − Q,
where Q = min_{not sS^c t} V^c[Ψ_i[u_i(s_i), u_i(t_i)], i=1,...,k, u_i(s_i), u_i(t_i), i=k+1,...,n].
Similarly to the function V^c, the function G^c is increasing in the first k
arguments, increasing in each (k+s)-th argument with s odd and decreasing
in each (k+s)-th argument with s even, where s=1,...,2(n-k). Consequently,
the function G^c satisfies the monotonicity properties specified in proposition 2).
Moreover, by definition and due to the discriminating capacity of the function
V^c, the function G^c satisfies the following property: for all (x,y)∈X×X such that
not xS^c y we have
G^c[Ψ_i[u_i(x_i), u_i(y_i)], i=1,...,k, u_i(x_i), u_i(y_i), i=k+1,...,n] ≥ 0,
and for all (w,z)∈X×X such that wS^c z we have
G^c[Ψ_i[u_i(w_i), u_i(z_i)], i=1,...,k, u_i(w_i), u_i(z_i), i=k+1,...,n] < 0.
Thus, we proved that 1) ⟹ 2).
Now let us prove that 1) ⟹ 3). As explained before, in view of the
cancellation properties assumed in proposition 1), S_i is a complete preorder
on X_i, while S_i^{#} ∩ S_i^{#c} is a complete preorder on X_i×X_i. Since S_i^{#} ∩ S_i^{#c}, for
i=1,...,k, is a complete preorder, its equivalence classes define a set of
graded preference relations. For each pair (w,z)∈X×X, such that wSz, the
following D≥-decision rule can be built:

if x_1 P_1^{≥h_1} y_1 and ... x_k P_k^{≥h_k} y_k and x_{k+1} S_{k+1} r_{k+1} and ... x_n S_n r_n and s_{k+1} S_{k+1} y_{k+1}
and ... s_n S_n y_n, then xSy,
where (w_1,z_1)∈ P_1^{h_1}, ..., (w_k,z_k)∈ P_k^{h_k}, w_{k+1}=r_{k+1}, ..., w_n=r_n, z_{k+1}=s_{k+1}, ..., z_n=s_n.

For each pair (w,z)∈X×X, such that wS^c z, the following D≤-decision rule
can be built:

if x_1 P_1^{≤h_1} y_1 and ... x_k P_k^{≤h_k} y_k and r_{k+1} S_{k+1} x_{k+1} and ... r_n S_n x_n and y_{k+1} S_{k+1} s_{k+1}
and ... y_n S_n s_n, then xS^c y,

where (w_1,z_1)∈ P_1^{h_1}, ..., (w_k,z_k)∈ P_k^{h_k}, w_{k+1}=r_{k+1}, ..., w_n=r_n, z_{k+1}=s_{k+1}, ..., z_n=s_n.
The set R = {R≥, R≤} of decision rules corresponding to all pairs
(w,z)∈X×X is obviously complete. Moreover, it is non-contradictory,
because otherwise CCDS for the D≥-decision rules and CCDSc for the D≤-decision
rules would not hold. Therefore, this set R = {R≥, R≤} of decision rules
represents the outranking relation S and the negative outranking relation S^c.
This completes the proof of 1) ⟹ 3).
The proof of 2) ⟹ 1) and 3) ⟹ 1) is simple and left to the reader. □

Let us remark that the set R = {R≥, R≤} of decision rules considered in the
proof of Theorem 3.3 is not unique. There is a decision rule for each pair
(w,z)∈X×X, so these sets are maximal in the sense that they contain all rules
that can be defined with complete profiles (all criteria are considered).
In practice, much more synthetic representations can be considered,
involving fewer rules and partial profiles. The authors proved that the minimal
(i.e. the most synthetic) representation of the outranking and negative outranking
relations S and S^c by decision rules is unique (Greco, Matarazzo and
Slowinski 2000b).
Let us remark that in Theorem 3.3 conditions C2"(i), C3"(i), C4(i), C4'(i),
C5(i), C5'(i) are supposed to be satisfied for all i=1,...,n. If this was not the
case, then we would have the following consequences for the representation
of S and S^c:
1) As to the functional representation, one should consider four functions
u_i:X_i→R, v_i:X_i→R, u_i^c:X_i→R and v_i^c:X_i→R for i=1,...,n, and functions
Ψ_i:R×R→R and Ψ_i^c:R×R→R for i=1,...,k, such that

G[Ψ_i(u_i(x_i), v_i(y_i)), i=1,...,k, u_i(x_i), v_i(y_i), i=k+1,...,n] ≥ 0 iff xSy,

G^c[Ψ_i^c(u_i^c(x_i), v_i^c(y_i)), i=1,...,k, u_i^c(x_i), v_i^c(y_i), i=k+1,...,n] < 0 iff xS^c y.
This means that in order to represent the outranking relation S and the
negative outranking relation S^c for two actions x and y, we need the function
u_i(x_i) defining the "value" of x_i and the function v_i(y_i) defining the "value" of
y_i on the basis of S, as well as the function u_i^c(x_i) defining the "value" of x_i
and the function v_i^c(y_i) defining the "value" of y_i on the basis of S^c.
2) As to the decision rule representation, if C2"(i), C3"(i), C4(i), C4'(i),
C5(i), C5'(i) were not satisfied, then S_i^{*}, S_i^{**}, S_i^{*c} and S_i^{**c} would not be
mutually compatible and thus the decision rules should be defined in
terms of S_i^{*}, S_i^{**}, S_i^{*c} and S_i^{**c} rather than in terms of
S_i = S_i^{*} ∩ S_i^{**} ∩ S_i^{*c} ∩ S_i^{**c}.

Thus, if C2"(i), C3"(i), C4(i), C4'(i), C5(i), C5'(i) are not satisfied, a
D≥-decision rule has the following syntax:

if x_{i1} P_{i1}^{≥h_{i1}} y_{i1} and ... x_{id} P_{id}^{≥h_{id}} y_{id} and x_{id+1} S_{id+1}^{*} r_{id+1} and ... x_{ie} S_{ie}^{*} r_{ie} and
s_{ie+1} S_{ie+1}^{**} y_{ie+1} and ... s_{if} S_{if}^{**} y_{if}, then xSy.

Analogously, a D≤-decision rule has then the following syntax:

if x_{i1} P_{i1}^{≤h_{i1}} y_{i1} and ... x_{id} P_{id}^{≤h_{id}} y_{id} and r_{id+1} S_{id+1}^{*c} x_{id+1} and ... r_{ie} S_{ie}^{*c} x_{ie} and
y_{ie+1} S_{ie+1}^{**c} s_{ie+1} and ... y_{if} S_{if}^{**c} s_{if}, then xS^c y.

3.3 An example of four-valued outranking


Let us consider yet another version of the example presented in section
2.1. As before, the DM wants to compare pairs of cars according to criteria
of (1) price (for which the strength of preference can be measured) and (2)
speed (for which the strength of preference cannot be measured).
Table 3.2 presents all possible profiles of the pairs of cars with respect to
the two considered criteria. Let us observe that the positive and the negative
outranking relations S and S^c specified on these pairs in Table 3.2 satisfy
proposition 1) of Theorem 3.3. In fact, it can be seen in the table that each
time car x is preferred to car y at least as much as car w is preferred to car z
on both criteria, the situation "not xSy and wSz" or "xS^c y and not wS^c z"
does not happen. Therefore, it is possible to build functions G and G^c
representing the positive and the negative outranking relations and satisfying
the conditions present in proposition 2) of Theorem 3.3. The values of these
functions are shown in the two last columns of Table 3.2 (in the head of
these columns, G(x,y) = G[Ψ_1(u_1(x_1),u_1(y_1)), u_2(x_2), u_2(y_2)] and G^c(x,y) =
G^c[Ψ_1(u_1(x_1),u_1(y_1)), u_2(x_2), u_2(y_2)]). Therefore, the outranking relations S and
S^c are represented as follows:

G[Ψ_1(u_1(x_1),u_1(y_1)), u_2(x_2), u_2(y_2)] ≥ 0 ⇔ xSy,

G^c[Ψ_1(u_1(x_1),u_1(y_1)), u_2(x_2), u_2(y_2)] < 0 ⇔ xS^c y,
where
- Ψ_1(u_1(x_1),u_1(y_1)) represents the strength of preference with respect to
price of cars x and y, equal to x_1 and y_1, respectively (u_1(x_1) and u_1(y_1)
denote the utility of x_1 and y_1, respectively),
- u_2(x_2) and u_2(y_2) represent the utility of x_2 and y_2, equal to the speed of
cars x and y, respectively.
According to proposition 3) of Theorem 3.3, the positive and the negative
outranking relations S and S^c can also be represented by the following set of
decision rules (in Table 3.2, the identity numbers of the rules matching the
corresponding pair are given in the columns of S and S^c):
#1) "if the speed of x is at least medium and the speed of y is at most
medium, then x is at least as good as y (i.e. xSy)",
#2) "if x is at least indifferent to y (i.e. indifferent or better) on the price
and the speed of y is (at most) low, then x is at least as good as y (i.e.
xSy)",
#3) "if x is (at least) worse than y on the price, then x is not at least as
good as y (i.e. xS^c y)",
#4) "if x is at most indifferent to y (i.e. indifferent or worse) on the price
and the speed of x is at most medium and the speed of y is (at least)
high, then x is not at least as good as y (i.e. xS^c y)".

The four cases of the four-valued outranking are also represented in
Table 3.2:
a) pairs of actions (x,y) covered by a rule whose consequent is "xSy" and
not covered by any rule whose consequent is "xS^c y" are in relation of
true outranking xS^T y; these are the pairs: P14, P15, P17, P18, P20, P21,
P23, P24, P26, P27;
b) pairs of actions (x,y) covered by a rule whose consequent is "xS^c y" and
not covered by any rule whose consequent is "xSy" are in relation of
false outranking xS^F y; these are the pairs: P1, P2, P4, P5, P7, P10, P19;
c) pairs of actions (x,y) covered by a rule whose consequent is "xSy" and
by a rule whose consequent is "xS^c y" are in relation of contradictory
outranking xS^K y; these are the pairs: P13, P16, P22, P25;
d) pairs of actions (x,y) not covered by any rule whose consequent is "xSy"
and not covered by any rule whose consequent is "xS^c y" are in relation
of unknown outranking xS^U y; these are the pairs: P3, P6, P8, P9, P11,
P12.
Table 3.2 Four-valued outranking on all 27 cases of possible evaluations of pairs of cars

Pair | Price                  | Speed of x | Speed of y | S        | S^c        | 4-valued outranking | G(x,y) | G^c(x,y)
P1   | x is worse than y      | low        | high       |          | S^c (#3,4) | S^F | -15 | -15
P2   | x is indifferent to y  | low        | high       |          | S^c (#4)   | S^F | -10 | -6
P3   | x is better than y     | low        | high       |          |            | S^U | -6  | 3
P4   | x is worse than y      | medium     | high       |          | S^c (#3,4) | S^F | -8  | -11
P5   | x is indifferent to y  | medium     | high       |          | S^c (#4)   | S^F | -6  | -5
P6   | x is better than y     | medium     | high       |          |            | S^U | -4  | 6
P7   | x is worse than y      | high       | high       |          | S^c (#3)   | S^F | -5  | -7
P8   | x is indifferent to y  | high       | high       |          |            | S^U | -4  | 6
P9   | x is better than y     | high       | high       |          |            | S^U | -3  | 9
P10  | x is worse than y      | low        | medium     |          | S^c (#3)   | S^F | -6  | -10
P11  | x is indifferent to y  | low        | medium     |          |            | S^U | -4  | 0
P12  | x is better than y     | low        | medium     |          |            | S^U | -3  | 6
P13  | x is worse than y      | medium     | medium     | S (#1)   | S^c (#3)   | S^K | 0   | -8
P14  | x is indifferent to y  | medium     | medium     | S (#1)   |            | S^T | 4   | 4
P15  | x is better than y     | medium     | medium     | S (#1)   |            | S^T | 8   | 8
P16  | x is worse than y      | high       | medium     | S (#1)   | S^c (#3)   | S^K | 6   | -6
P17  | x is indifferent to y  | high       | medium     | S (#1)   |            | S^T | 8   | 8
P18  | x is better than y     | high       | medium     | S (#1)   |            | S^T | 10  | 10
P19  | x is worse than y      | low        | low        |          | S^c (#3)   | S^F | -3  | -7
P20  | x is indifferent to y  | low        | low        | S (#2)   |            | S^T | 6   | 6
P21  | x is better than y     | low        | low        | S (#2)   |            | S^T | 9   | 9
P22  | x is worse than y      | medium     | low        | S (#1)   | S^c (#3)   | S^K | 6   | -6
P23  | x is indifferent to y  | medium     | low        | S (#1,2) |            | S^T | 8   | 8
P24  | x is better than y     | medium     | low        | S (#1,2) |            | S^T | 10  | 10
P25  | x is worse than y      | high       | low        | S (#1)   | S^c (#3)   | S^K | 9   | -5
P26  | x is indifferent to y  | high       | low        | S (#1,2) |            | S^T | 10  | 10
P27  | x is better than y     | high       | low        | S (#1,2) |            | S^T | 11  | 11
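The rule-based reading of Table 3.2 can be checked with a small Python sketch (ours, not the authors'): rules #1-#4 are encoded as predicates on a pair's profile and the four-valued outranking is derived from which rules fire; the numeric encodings of price comparisons and speed levels are assumptions made only for this illustration.

SPEED = {"low": 0, "medium": 1, "high": 2}
PRICE = {"worse": -1, "indifferent": 0, "better": 1}

def s_rules(price, sx, sy):
    """Rules concluding xSy (#1 and #2)."""
    r1 = SPEED[sx] >= 1 and SPEED[sy] <= 1      # speed of x at least medium, speed of y at most medium
    r2 = PRICE[price] >= 0 and SPEED[sy] == 0   # at least indifferent on price, speed of y low
    return r1 or r2

def sc_rules(price, sx, sy):
    """Rules concluding x S^c y (#3 and #4)."""
    r3 = PRICE[price] <= -1                                        # worse on price
    r4 = PRICE[price] <= 0 and SPEED[sx] <= 1 and SPEED[sy] == 2   # at most indifferent, x at most medium, y high
    return r3 or r4

def four_valued(price, sx, sy):
    s, sc = s_rules(price, sx, sy), sc_rules(price, sx, sy)
    if s and not sc:
        return "S^T"   # true outranking
    if sc and not s:
        return "S^F"   # false outranking
    if s and sc:
        return "S^K"   # contradictory outranking
    return "S^U"       # unknown outranking

print(four_valued("worse", "medium", "medium"))    # P13 -> S^K
print(four_valued("indifferent", "high", "high"))  # P8  -> S^U
print(four_valued("better", "low", "low"))         # P21 -> S^T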

4. Comparison with some aggregation procedures


In this section we recall some well known multicriteria aggregation
procedures used in decision aiding and we show how they can be
represented in terms of the decision rule model. Let us mention that the
decision rule model is explicitly used in the dominance-based rough set

approach to multicriteria choice and ranking problems (Greco, Matarazzo


and Slowinski 1999, 2000a, 2001). Earlier, these aggregation procedures
were represented in terms of the non-additive and non-transitive
conjoint measurement model by Bouyssou, Pirlot and Vincke (1997),
although to represent these aggregation procedures it is not necessary to
drop the additivity property: an additive and non-transitive model of conjoint
measurement is indeed sufficient.

LEXICOGRAPHIC AGGREGATION (Fishburn 1974, 1975)


xSy if and only if the criteria are ordered according to their decreasing
importance, such that for each i,j∈{1,...,n}, i<j if and only if g_i is more
important than g_j, and one of the following conditions holds:
• |u_i(x_i) − u_i(y_i)| ≤ q_i for each i=1,...,n, where q_i, i=1,...,n, denotes a non-
negative indifference threshold,
• there exists i∈{1,...,n} such that u_i(x_i) − u_i(y_i) > q_i and for each j∈{1,...,n}
such that j<i, |u_j(x_j) − u_j(y_j)| ≤ q_j.
On a particular criterion g_i, we can state preference x_i P_i^{1} y_i iff u_i(x_i) −
u_i(y_i) > q_i, indifference x_i P_i^{0} y_i iff |u_i(x_i) − u_i(y_i)| ≤ q_i, and inverse preference
x_i P_i^{-1} y_i iff u_i(x_i) − u_i(y_i) < −q_i.
The set of decision rules describing the lexicographic aggregation
procedure has the following form:

if x_1 P_1^{≥1} y_1, then xSy,

if x_1 P_1^{≥0} y_1 and x_2 P_2^{≥1} y_2, then xSy,
if x_1 P_1^{≥0} y_1 and x_2 P_2^{≥0} y_2 and x_3 P_3^{≥1} y_3, then xSy,
...
if x_1 P_1^{≥0} y_1 and x_2 P_2^{≥0} y_2 and ... and x_{n-2} P_{n-2}^{≥0} y_{n-2} and x_{n-1} P_{n-1}^{≥1} y_{n-1}, then xSy,
if x_1 P_1^{≥0} y_1 and x_2 P_2^{≥0} y_2 and ... and x_{n-1} P_{n-1}^{≥0} y_{n-1} and x_n P_n^{≥0} y_n, then xSy.
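The lexicographic test recalled above can be sketched in a few lines of Python; this is our own illustration (function and argument names are not from the chapter), assuming utility vectors listed by decreasing importance of the criteria and indifference thresholds q.

def lexicographic_outranks(ux, uy, q):
    """Return True iff x S y, for utility vectors ux, uy and thresholds q."""
    for uxi, uyi, qi in zip(ux, uy, q):
        d = uxi - uyi
        if d > qi:       # x strictly preferred on the most important
            return True  # non-indifferent criterion: xSy
        if d < -qi:      # y strictly preferred there: not xSy
            return False
    return True          # indifference on every criterion: xSy

print(lexicographic_outranks([12, 0, 0], [9, 8, 9], q=[1, 1, 1]))  # True: preference on g_1
print(lexicographic_outranks([10, 2, 0], [9, 8, 9], q=[1, 1, 1]))  # False: indifference on g_1, inverse preference on g_2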

MAJORITY AGGREGATION (Rochat 1980, Roy and Bouyssou 1993)


xSy if and only if the following condition holds:

Σ_{i: x_i S_i y_i} w_i / Σ_{j=1}^{n} w_j ≥ s,

where w_i denotes a non-negative weight associated with criterion g_i, s is a
majority threshold (1/2 ≤ s ≤ 1) and the outranking S_i is defined as follows:
x_i S_i y_i iff u_i(x_i) − u_i(y_i) ≥ −q_i.
On a particular criterion g_i, apart from outranking S_i = P_i^{≥0}, we can state
inverse preference x_i P_i^{-1} y_i iff u_i(x_i) − u_i(y_i) < −q_i.
The set of decision rules describing the majority aggregation procedure
has the following form:
if x_{i1} P_{i1}^{≥0} y_{i1} and x_{i2} P_{i2}^{≥0} y_{i2} and ... and x_{ip} P_{ip}^{≥0} y_{ip}, then xSy,

such that {i1,i2,...,ip} ⊆ {1,2,...,n} and (w_{i1} + w_{i2} + ... + w_{ip}) / Σ_{j=1}^{n} w_j ≥ s.
Let us observe that, in comparison with the lexicographic aggregation,
not all the previous decision rules are necessary. Indeed, we can obtain a
representation of the binary relation S on X by majority aggregation
considering only those decision rules that do not involve any i = i1,i2,...,ip for
which (w_{i1} + w_{i2} + ... + w_{ip} − w_i) / Σ_{j=1}^{n} w_j ≥ s.
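A hedged Python sketch of the majority test above, under our own naming: w are the criterion weights, s the majority threshold, and q the indifference thresholds defining x_i S_i y_i iff u_i(x_i) − u_i(y_i) ≥ −q_i.

def majority_outranks(ux, uy, w, q, s):
    # sum of the weights of the criteria on which x_i S_i y_i holds
    concordant = sum(wi for uxi, uyi, wi, qi in zip(ux, uy, w, q) if uxi - uyi >= -qi)
    return concordant / sum(w) >= s

print(majority_outranks([10, 5, 3], [9, 8, 1], w=[0.5, 0.3, 0.2], q=[1, 1, 1], s=0.6))  # True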

ELECTRE I (Roy, 1968)


xSy if and only if the two following conditions hold:
• Σ_{i: x_i S_i y_i} w_i / Σ_{j=1}^{n} w_j ≥ s,
• u_i(x_i) − u_i(y_i) ≥ −Q_i, i=1,...,n,

where w_i denotes a non-negative weight associated with criterion g_i,
i=1,...,n, s is a concordance threshold (1/2 ≤ s ≤ 1), Q_i is a veto threshold such
that Q_i > q_i ≥ 0, and the outranking S_i is defined as above.
On a particular criterion g_i, apart from outranking S_i = P_i^{≥0}, we can state
inverse 'weak' preference x_i P_i^{-1} y_i iff −Q_i ≤ u_i(x_i) − u_i(y_i) < −q_i, and veto
preference x_i P_i^{-2} y_i iff u_i(x_i) − u_i(y_i) < −Q_i.

The set of decision rules describing the aggregation procedure of


ELECTRE I has the following form:

if x_{i1} P_{i1}^{≥0} y_{i1} and x_{i2} P_{i2}^{≥0} y_{i2} and ... x_{ip} P_{ip}^{≥0} y_{ip} and x_{k1} P_{k1}^{≥-1} y_{k1} and
x_{k2} P_{k2}^{≥-1} y_{k2} and ... x_{kq} P_{kq}^{≥-1} y_{kq}, then xSy,

such that {i1,i2,...,ip} ∪ {k1,k2,...,kq} = {1,2,...,n} and (w_{i1} + w_{i2} + ... + w_{ip}) /
Σ_{j=1}^{n} w_j ≥ s.
Also in this case not all the previous decision rules are necessary, i.e. we
can obtain a representation of the binary relation S on X by ELECTRE I
considering only those decision rules that involve sets {i1,i2,...,ip} including
no i = i1,i2,...,ip for which (w_{i1} + w_{i2} + ... + w_{ip} − w_i) / Σ_{j=1}^{n} w_j ≥ s.
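The ELECTRE I test recalled above combines the concordance condition of the previous paragraph with a veto condition; the following Python sketch is only an illustration under our own naming (Q are the veto thresholds).

def electre1_outranks(ux, uy, w, q, Q, s):
    # veto: no criterion may oppose x beyond its veto threshold
    if any(uxi - uyi < -Qi for uxi, uyi, Qi in zip(ux, uy, Q)):
        return False
    concordant = sum(wi for uxi, uyi, wi, qi in zip(ux, uy, w, q) if uxi - uyi >= -qi)
    return concordant / sum(w) >= s

print(electre1_outranks([10, 5, 3], [9, 8, 1], w=[0.5, 0.3, 0.2],
                        q=[1, 1, 1], Q=[4, 4, 4], s=0.6))  # True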

TACTIC (Vansnick 1986)


xSy if and only if the following two conditions hold:
• Σ_{i: x_i P_i y_i} w_i ≥ s · Σ_{i: y_i P_i x_i} w_i,
• u_i(x_i) − u_i(y_i) ≥ −Q_i, i=1,...,n,

where preference P_i is the asymmetric part of the outranking S_i which is defined
as above, i.e. x_i P_i y_i iff u_i(x_i) − u_i(y_i) > q_i. Furthermore, in this case the
coefficient s ≥ 1. The other formalism has the same meaning as in
ELECTRE I.
On a particular criterion g_i, apart from preference P_i = P_i^{1}, we can state
indifference x_i P_i^{0} y_i iff |u_i(x_i) − u_i(y_i)| ≤ q_i, as well as inverse 'weak' preference
P_i^{-1} and veto preference P_i^{-2}, defined in the same way as in ELECTRE I.

The set of decision rules describing the aggregation procedure of
TACTIC has the following form:

if x_{i1} P_{i1}^{≥1} y_{i1} and ... x_{ip} P_{ip}^{≥1} y_{ip} and x_{j1} P_{j1}^{≥0} y_{j1} and ... x_{jg} P_{jg}^{≥0} y_{jg}
and x_{k1} P_{k1}^{≥-1} y_{k1} and ... x_{kq} P_{kq}^{≥-1} y_{kq}, then xSy,

such that {i1,i2,...,ip} ∪ {j1,...,jg} ∪ {k1,k2,...,kq} = {1,2,...,n} and
w_{i1} + w_{i2} + ... + w_{ip} ≥ s · (w_{k1} + w_{k2} + ... + w_{kq}).
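A hedged Python sketch of the TACTIC test, assuming the concordance condition in the form "sum of the weights of the criteria where x is strictly preferred ≥ s times the sum of the weights of the criteria where y is strictly preferred" (s ≥ 1), together with the veto condition; the names and this exact reading of the comparison are ours, not the chapter's.

def tactic_outranks(ux, uy, w, q, Q, s):
    if any(uxi - uyi < -Qi for uxi, uyi, Qi in zip(ux, uy, Q)):
        return False   # veto
    favour = sum(wi for uxi, uyi, wi, qi in zip(ux, uy, w, q) if uxi - uyi > qi)
    against = sum(wi for uxi, uyi, wi, qi in zip(ux, uy, w, q) if uyi - uxi > qi)
    return favour >= s * against

print(tactic_outranks([10, 5, 3], [9, 8, 1], w=[0.5, 0.3, 0.2],
                      q=[1, 1, 1], Q=[4, 4, 4], s=1.5))  # False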

5. Conclusions
This paper shows the equivalence of preference representation by
numerical functions and by "if. .. , then ... " decision rules in multicriteria
choice and ranking problems. The decision rule model representing a
comprehensive preference relation involves partial profiles defined for
subsets of criteria plus a dominance relation on these profiles and pairs of
actions. Moreover, we considered representation of hesitation in preference
modeling. Within this context, two approaches were considered: dominance-
based rough set approach, handling inconsistencies in expression of
preferences through examples, and four-valued logic, modeling the presence
of positive and negative reasons for preference. Equivalent representation by
numerical functions and by decision rules was proposed and a specific
axiomatic foundation was given for preference structure based on the
presence of positive and negative reasons. Finally, some well known
multicriteria aggregation procedures were represented in terms of the
decision rule model; these are: lexicographic aggregation, majority
aggregation, ELECTRE I and TACTIC. In these cases the decision rules
decompose the synthetic aggregation formula into several possible
"scenarios", involving partial profiles and a dominance relation, that lead to
comprehensive outranking. Such a decomposition may be very instructive
for the decision maker.
We want to conclude this paper by stressing one advantage of decision
rule models: the decision rule representation of preferences does not need the
intermediation of numbers assigned by a synthetic utility function. This
recalls a quotation from Bachelard opening a chapter devoted to
preference structures in the book of Roy (1985): "Si l'ordre apparaît
quelque part dans la qualité, pourquoi chercherions-nous à passer par
l'intermédiaire du nombre ?" ("If order appears somewhere in quality, why
should we seek to pass through the intermediary of number?"). Indeed, the
decision rules do not convert ordinal information into numerical information
but keep its ordinal character thanks to the proposed syntax. We think that this
could be decisive for the adoption of the decision rule model in real-world problems.

Acknowledgements
The authors are grateful to Patrice Perny for his remarks on the first draft of
this paper. The research of S. Greco and B. Matarazzo has been supported by
the Italian Ministry of University and Scientific Research (MURST). R.
Slowinski wishes to acknowledge financial support of the KBN research
grant no. 8T11F 006 19 from the State Committee for Scientific Research,
and of the subsidy no. 1112001 from the Foundation for Polish Science.

References
Bouyssou, D., Pirlot, M.: A general framework for the aggregation of semiorders. Technical
Report, ESSEC, Cergy-Pontoise, 1996.
Bouyssou, D., Pirlot, M., Vincke, Ph.: "A General Model of Preference Aggregation", in
M.H. Karwan, J. Spronk, J. Wallenius (eds.), A Volume in Honour of Stanley Zionts,
Springer-Verlag, Berlin, 1997, 120-134.
Cozzens, M., Roberts, F.: "Multiple semiorders and multiple indifference graphs", SIAM
Journal of Algebraic Discrete Methods 3 (1982) 566-583.
Doignon, J.P.: "Threshold representation of multiple semiorders", SIAM Journal of Algebraic
Discrete Methods 8 (1987) 77-84.
Doignon, J.P., Monjardet, B., Roubens, M., Vincke, Ph.: "Biorder families, valued relations
and preference modelling", Journal of Mathematical Psychology 30 (1986) 435-480.
Fishburn, P.C.: "Lexicographic orders, utilities and decision rules: A survey", Management
Science 20 (1974) 1442-1471.
Fishburn, P.C.: "Axioms for lexicographic preferences", Review of Economic Studies 42
(1975) 415-419.
Fishburn, P.C.: "Nontransitive additive conjoint measurement". Journal of Mathematical
Psychology 35 (1991) 1-40.
Greco, S., Matarazzo, B., Slowinski, R.: "The use of rough sets and fuzzy sets in MCDM", in
T. Gal, T. Stewart and T. Hanne (eds.) Advances in Multiple Criteria Decision Making,
chapter 14, Kluwer Academic Publishers, Boston, 1999, 14.1-14.59.
Greco, S., Matarazzo, B., Slowinski, R.: "Extension of the rough set approach to multicriteria
decision support", INFOR 38 (2000a) 161-196.
Greco, S., Matarazzo, B., Slowinski, R.: Conjoint measurement using non-additive and non-
transitive preference model able to represent preference inconsistencies. Report RA
10/2000, Institute of Computing Science, Poznan University of Technology, Poznan,
2000b.
Greco, S., Matarazzo, 8., Slowinski, R.: "Rough sets theory for multicriteria decision
analysis", European J. of Operational Research 129 (2001) 1-47.
Keeney, R.L., Raiffa, H.: Decisions with Multiple Objectives: Preferences and Value
Tradeoffs. Wiley, New York, 1976.
Krantz, D.H., Luce, R.D., Suppes, P., Tversky, A.: Foundations of Measurement I.
Academic Press, New York, 1978.
Moreno, J.A., Tsoukias, A.: "On nested interval orders and semiorders", submitted to Annals
of Operations Research, 1996.
Roberts, F.S.: "Homogeneous families of semiorders and the theory of probabilistic
consistency", J. Math. Psychology 8 (1971) 248-263.
Rochat, J.C.: Mathématiques pour la gestion de l'environnement, Birkhäuser, Bâle, 1980.
Roubens, M., Vincke, Ph.: Preference Modelling, Springer-Verlag, Berlin, 1985.
Roy, B.: "Classement et choix en presence de points de vue multiples (Ia methode Electre)",
Revue Franraise d'lnformatique et de Recherche Operationnelle 8 (1968) 57-75.
a
Roy, 8.: Methodologie Multicritere d 'Aide la Decision. Economica, Paris 1985.
Roy, B., Bouyssou, D.: Aide Multicritere a la Decision: Methodes et Cas. Economica, Paris
1993.
Slowinski, R., Stefanowski, J., Greco, S., Matarazzo, B.: "Rough sets based processing of
inconsistent information in decision analysis", Control and Cybernetics 29 (2000) 379-
404.

Tsoukias, A., Vincke, Ph.: "A new axiomatic foundation of the partial comparability theory",
Theory and Decision 39 (1995) 79-114.
Tsoukias, A., Vincke, Ph.: "Extended preference structures in MCDA". In: l Climaco (ed.):
Multicriteria Analysis, Springer-Verlag, Berlin 1997, pp. 37-50.
Tversky, A.: "Intransitivity of preferences". Psychological Review 76 (1969) 31-48.
Vansnick, J.C.: "On the problem of weights in multiple criteria decision making", European
Journal of Operational Research 24 (1986) 288-294.
Wakker, P.P.: Additive Representations of Preferences: A New Foundation of Decision Analysis.
Kluwer Academic Publishers, Dordrecht, 1989.
TOWARDS A POSSIBILISTIC LOGIC
HANDLING OF PREFERENCES

Salem Benferhat
IRIT-CNRS, Universite Paul Sabatier, France
benferhat@irit.fr

Didier Dubois
IRIT-CNRS, Universite Paul Sabatier, France
dubois@irit.fr

Henri Prade
IRIT-CNRS, Universite Paul Sabatier, France
prade@irit.fr

Abstract: A classical way of encoding preferences in decision theory is by means of


utility or value functions. However agents are not always able to deliver such
functions directly. In this paper, we relate three different ways of specifying
preferences, namely by means of a set of particular types of constraints on the
utility function, by means of an ordered set of prioritized goals expressed by
logical propositions, and by means of an ordered set of subsets of possible
choices reaching the same level of satisfaction. These different expression
modes can be handled in a weighted logical setting, here the one of
possibilistic logic. The aggregation of preferences pertaining to different
criteria can then be handled by fusing sets of prioritized goals. A logical
representation is not only expressive, but also enables preferences to be
reasoned about and revised in a more transparent way.

Key words: Preference; Goal; Priority; Combination; Possibility theory; Possibilistic logic

1. Introduction
Decision Analysis and Artificial Intelligence have been developed almost
separately in the last half-century. Decision Analysis, with its different

branches, Multicriteria Decision Analysis, Decision under Uncertainty,
Group Decision-Making, is concerned with aggregation schemes and has
often privileged quantitative approaches, while Artificial Intelligence deals
with reasoning and has an important logically-oriented tradition (Minker,
2000). Logic and decision have different concerns indeed; the first one
focuses on consistency and inference and is oriented towards symbolic
processing, while the other deals with trade-offs (and possibly with
uncertainty), and is more numerically inclined.
Central to decision is the modeling of agents' preferences. Approaches to
multicriteria decision can be roughly divided into two families respectively
based on : i) the construction of a global multi-attribute utility function used
for ranking the candidates: it is obtained by the aggregation of the
evaluations corresponding to the different criteria (Keeney and Raiffa,
1976); ii) the use of binary relations for comparing choices according to each
criterion, which are then combined in order to obtain a global ranking of the
candidates. This second approach, which has been especially developed by
Roy and his school (Roy and Bouyssou, 1993), is more faithful to the often
non-numerical expression of user's preferences, than multi-attribute utility
function methods.
Artificial Intelligence methods can contribute to a more implicit
specification of value functions, for instance in terms of constraints. This
will be in agreement with a more granular expression of preferences, which
is often the case in practice. This general line of research has recently been
illustrated in various ways by AI researchers (e.g., Boutilier, 1994; Tan and
Pearl, 1994; Lang 1996; Boutilier et al. 1999), who usually concentrate on
more qualitative evaluations which only require ordinal scales (rather than
numerical ones). The expected benefit of the logical handling of decision
problems is not only to allow for a less abstract, and thus more human-like
expression of knowledge and preferences, but also to facilitate explanation
capabilities for the candidates selected among allowed choices by decision
support systems.
Non-classical logics, recently developed in Artificial Intelligence, are
often using ordering structures, and can thus handle some forms of
qualitative preferences. Among weighted logics, possibilistic logic based on
the conjoint use of classical logic and qualitative possibility theory (Dubois
and Prade, 1998; Zadeh, 1978) offers a framework at the meeting point of
the logical and decision traditions. Besides, possibilistic logic has been
already shown to be convenient for handling nonmonotonic reasoning
(Benferhat et al., 1997a). Its framework has also been used for modelling
preferences in terms of a set of prioritized goals (Lang, 1991a) and more
recently for decision under uncertainty (Dubois et al., 1998).

This paper provides a discussion of the potentials of possibilistic logic


in decision analysis, and more specifically in the representation of
preferences. Indeed, a possibilistic logic base can not only be seen as a set of
more or less certain pieces of information (which was the original
understanding when possibilistic logic was introduced and then applied to
nonmonotonic reasoning). Such a base can also be viewed as a layered set of
propositions expressing goals having different levels of priority (Lang,
1991a; Schiex, 1992). The latter view can be connected with the fuzzy set
representation of constraints or objective functions proposed by Bellman and
Zadeh (1970) a long time ago. Indeed, a utility function can be seen as a
membership function of a fuzzy set (the one expressing the more or less
acceptable candidates), which gives birth to a weighted set of goals (through
the level cuts of the fuzzy set).
Section 2 gives some background necessary for the reading of this
paper. Section 3 explains how to represent utility functions over a set of
possible candidates, either in terms of a set of prioritized goals, or in terms of
subsets of possible candidates reaching the same level of satisfaction. We
also discuss in this section the symbolic aggregation of utility functions
pertaining to different criteria. Section 4 studies how constraint-based
specification of preferences can lead to a representation in the previous
framework.

2. Background on possibilistic logic


In the following, ℒ denotes a finite propositional language. Formulae of ℒ
are denoted by lower case Roman letters a, b, c, p, q, r ... We denote by Ω the
set of interpretations. An interpretation (also called a world) for ℒ is an
assignment of a truth value in {T, F} to each formula of ℒ in accordance with
the classical rules of propositional calculus. An interpretation u is a model of
a formula p, and we write u ⊨ p iff u(p) = T. Let ⊤ represent a formula
satisfied by each interpretation (tautology), and ⊥ denote any inconsistent
formula. As usual, a formula p is said to be consistent if and only if it has at
least one model, and is said to be inconsistent otherwise.
We give here some elementary definitions of possibility theory for
uncertainty handling (Zadeh, 1978), (Dubois and Prade, 1988). The basic
object of possibility theory is the notion of a possibility distribution, which is
a mapping from the set of classical interpretations Ω to the interval [0,1].
More generally, the interval [0,1] can be replaced by any bounded linearly
ordered scale, finite or not. A possibility distribution corresponds to a
ranking on Ω, such that the most plausible worlds get the highest value. The
possibility distribution π represents the available knowledge about where the
real world is. By convention, π(u) = 1 means that it is totally possible for u to
be the real world, π(u) > 0 means that u is only somewhat possible, while
π(u) = 0 means that u is certainly not the real world. The inequality π(u) >
π(u′) means that the situation u is a priori more plausible than u′. The
possibility distribution π is said to be normal if there exists at least one
interpretation u which is totally possible, namely π(u) = 1. However, in
general there may exist several distinct interpretations which are totally
possible. This normalisation condition reflects the consistency of the
available knowledge represented by this possibility distribution. When
π(u) < 1 for all u, then π is said to be sub-normalized. A possibility
distribution π induces two mappings grading the possibility and the certainty
of a formula p respectively:
- the possibility degree Π(p) = max{π(u) | u ⊨ p}, which evaluates to what
extent p is consistent with the available knowledge expressed by π, i.e., to
what extent there exists a model of p which has a high level of possibility.
Note that we have: Π(p ∨ q) = max(Π(p), Π(q));
- the necessity (or certainty) degree N(p) = 1 − Π(¬p), which evaluates to
what extent p is entailed by the available knowledge. We have: N(p ∧ q) =
min(N(p), N(q)). If [0,1] is replaced by another linearly ordered scale, 1 − (·)
will be changed into the order-reversing map of the scale.
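These two set functions are easy to compute once a possibility distribution is given explicitly. The following Python fragment is a minimal sketch under our own encoding of interpretations as tuples; nothing here is prescribed by the paper.

def possibility(pi, models):
    """Pi(p) = max of pi(u) over the models u of p (0.0 if p has no model)."""
    return max((pi[u] for u in models), default=0.0)

def necessity(pi, models):
    """N(p) = 1 - Pi(not p) = 1 - max of pi(u) over the counter-models of p."""
    counter = [u for u in pi if u not in models]
    return 1.0 - possibility(pi, counter)

# interpretations over two atoms a, b with a ranking pi on them
pi = {("a", "b"): 1.0, ("a", "-b"): 0.7, ("-a", "b"): 0.4, ("-a", "-b"): 0.0}
models_of_a = [("a", "b"), ("a", "-b")]
print(possibility(pi, models_of_a))  # 1.0
print(necessity(pi, models_of_a))    # 1 - 0.4 = 0.6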

2.1 Possibilistic knowledge bases and their semantics


A possibilistic knowledge base is a set of possibilistic logic formulas of
the form (p, α), where p is a classical propositional logic formula and α an
element of the semi-open real interval (0,1] in a numerical setting, or of a
finite linearly ordered scale in a symbolic setting. It estimates to what
extent it is certain that p is true considering the available, possibly
incomplete information about the world.
Given a possibilistic base K, we can generate a possibility distribution
from K by associating to each classical interpretation a degree in [0,1]
expressing the level of compatibility with the available information. When a
possibilistic base is made of one formula {(p, α)}, then each interpretation u
which satisfies p gets the degree π(u) = 1 since it is completely consistent
with p, and each interpretation u which falsifies p gets a possibility degree
π(u) all the higher as the degree of certainty α is low. The simplest way to
realize this constraint is to assign to π(u) the degree 1 − α (on an ordered
scale, we use a reversing map of the scale). In particular, if α = 1 (i.e., p is
completely certain), then π(u) = 0 (i.e., u is impossible) if it falsifies p. Then,
the possibility distribution associated to {(p, α)} is:

∀u∈Ω, π_{(p, α)}(u) = 1 if u ⊨ p
                     = 1 − α otherwise.    (1)
When K = {(p_i, α_i), i = 1,...,n} is a general possibilistic base, then all the
interpretations satisfying all the beliefs p_i in K will have the highest
possibility degree, namely 1, and the other interpretations are ranked with
respect to the highest formula that they falsify, namely we get, ∀u∈Ω:
π_K(u) = 1 if u ⊨ p_1 ∧ ... ∧ p_n
       = 1 − max{α_i : (p_i, α_i) ∈ K and u ⊭ p_i} otherwise.
Thus, π_K can be viewed as the result of the combination of the π_{(p_i, α_i)}'s
using the min operator, i.e.:

π_K(u) = min{π_{(p_i, α_i)}(u) : (p_i, α_i) ∈ K}.    (2)

If π_K is subnormalized, then K is inconsistent to the degree:

Inc(K) = 1 − max_u π_K(u).

Lastly, we say that q is (semantically) entailed by a consistent K with a
maximal degree α, denoted by K ⊨ (q, α), if N_K(q) = α, where N_K is the
necessity measure induced from π_K. When K is inconsistent, the further
condition N_K(q) > N_K(¬q) is added. Conversely, from K, we can build a
knowledge base K′ = {(p, N_K(p)) with N_K(p) > 0} which is semantically
equivalent to K (subsumed formulas, which are inferred from formulas with
higher degree, can be deleted from K′).
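Formulas (1)-(2) and the inconsistency degree can be sketched by naive enumeration of interpretations; in the Python fragment below (our own simplification), worlds are dictionaries of atoms and formulas are Python predicates, which is an implementation choice and not part of the paper.

from itertools import product

def worlds(atoms):
    return [dict(zip(atoms, vals)) for vals in product([True, False], repeat=len(atoms))]

def pi_K(K, atoms):
    """K is a list of (formula, alpha) pairs, each formula being a predicate on a world."""
    def pi(u):
        violated = [alpha for (p, alpha) in K if not p(u)]
        return 1.0 if not violated else 1.0 - max(violated)   # formulas (1)-(2)
    return {tuple(sorted(u.items())): pi(u) for u in worlds(atoms)}

def inconsistency(K, atoms):
    return 1.0 - max(pi_K(K, atoms).values())

# Example: K = {(p, 1), (not p or q, 0.8), (r, 0.3)}
K = [(lambda u: u["p"], 1.0),
     (lambda u: (not u["p"]) or u["q"], 0.8),
     (lambda u: u["r"], 0.3)]
print(inconsistency(K, ["p", "q", "r"]))  # 0.0, K is consistent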

2.2 Possibilistic resolution principle


In this section we suppose that weighted formulas are put under the form
of weighted clauses; this can be done without loss of expressivity, due to the
compositionality of necessity measures with respect to conjunction (a
formula (p, α) with p ≡ ∧_i p_i is equivalent to the set of formulas (p_i, α)). The
following resolution rule (Dubois and Prade, 1987)
(p ∨ q, α); (¬p ∨ r, β) ⊢ (q ∨ r, min(α, β))
is valid in possibilistic logic. For more details on this possibilistic resolution
rule, see (Dubois, Lang and Prade, 1994).

In order to compute the maximal certainty degree which can be attached
to a formula according to the constraints expressed by a knowledge base K,
just put K in clausal form, and add to K the clause(s) obtained by refuting
the proposition to evaluate, with a necessity degree equal to 1. Then it can be
shown that any lower bound obtained on the empty clause ⊥, by resolution,
is a lower bound of the necessity of the proposition to evaluate. See (Dubois
et al., 1994) for an ordered search method which guarantees that we obtain
the greatest derivable lower bound on ⊥. It can be shown (e.g., Dubois et al.,
1994) that this greatest derivable lower bound on ⊥ is nothing but the
inconsistency degree Inc(K ∪ {(¬r, 1)}), where r is the proposition to
establish. Denote by ⊢ the syntactic inference in possibilistic logic, based on
refutation and resolution. Then the equivalence K ⊢ (p, α) ⇔ K ⊨ (p, α)
holds, i.e., ⊢ is sound and complete for refutation with respect to the
semantics recalled in the previous sub-section (Dubois et al., 1994). In case
of partial inconsistency of K, a refutation carried out in a situation where
Inc(K ∪ {(¬r, 1)}) = α > Inc(K) yields the non-trivial conclusion (r, α),
only using formulas whose degree of certainty is strictly greater than the
level of inconsistency of the base (since it is at least equal to α).
The complexity of possibilistic inference is slightly higher than that of classical
logic refutation by resolution, and it has been implemented in the form of an
A*-like algorithm (Dubois et al., 1994). It has been shown (Lang, 1991b)
that possibilistic entailment can be achieved with only log(n)
satisfiability tests, where n is the number of uncertainty levels appearing in K.
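Reusing the naive helpers of the previous sketch, refutation-based entailment amounts to computing the inconsistency degree of K augmented with the refuted formula at level 1; the snippet below is again only an illustration of the idea, not of the A*-like procedure referred to above.

def entailment_degree(K, r, atoms):
    """N_K(r), computed as Inc(K union {(not r, 1)})."""
    refuted = K + [(lambda u: not r(u), 1.0)]
    return inconsistency(refuted, atoms)

K = [(lambda u: u["p"], 1.0),
     (lambda u: (not u["p"]) or u["q"], 0.8)]
print(entailment_degree(K, lambda u: u["q"], ["p", "q"]))  # 0.8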

3. Logical handling of prioritized goals


Let U be a finite set of possible candidates. A utility function, associated
with some criterion C, is a mapping from U to some valuation scale. In many
practical situations, a finite valuation scale is enough, first because the set of
candidates is finite, and moreover humans are often only able to discriminate
among candidates through a rather small number of valuations.
Besides, a fuzzy set defined on a finite scale can be equivalently seen as
a finite family of nested level cuts, corresponding here to crisp constraints or
objectives. The equivalent representation of C as a set of prioritized goals is a
direct consequence of the semantics associated to a possibilistic logic base,
which is now briefly restated.

3.1 Logical representation of criteria


Let us consider the case of a unique fuzzy criterion C. The utility
function is then defined by its membership function Ilc ranging on a finite
Towards a Possibilistic Logic Handling of Preferences 321

scale L = {aO = 0 < 0.1 < ... < an = I}. C is equivalently represented by the
set of crisp sets CCl"' called aj-cut of ~c' and defined by:
1

Ca. = {u : ~C<u) ~ aj},


1

i.e., the set of possible choices or candidates having a degree of satisfaction


at least equal to aj. The greater aj, the smaller Ca" It can be checked that the
1
constraints N(Cai ) ~ 1 - aj-I hold for i=I, ... ,n, where N is the necessity
measure defined from 7f' = ~c. Thus C can be also interpreted in terms of
priority: the goal of picking a candidate in Cai has priority l-a.j_I, and the
larger the a-cut, the more important the priority; in particular it is
imperative that the chosen u has a non-zero degree of satisfaction, so Cal
has priority 1 (N(C~ ) = 1). Note that if
.... 1
C~.
....1
= Ca.1+ 1 then the constraint
N(Ca . 1) ~ 1 - aj is redundant, w.r.t. N(Ca .) ~ 1 - aj_I > 1 - aj and can be
~ 1

ignored.
This gives birth to a possibilistic knowledge base of the form K = {(ca"
1

1- aj-I), i = 1, ... , n} where ca' denotes the proposition whose set of models
1
is Ca" In the following, the notations ca. and Ca. are abridged into Cj and
1 1 1
Cj. Preferences are thus expressed in terms of sets of crisp (nested) goals
having different levels of priority. Clearly, 11K computed by (1) is equal to
~c. 7f'K(u) gets the highest value when u satisfies all the goals, and gets the
lowest value when u violates the highest priority goal. More generally, 7f'K(u)
is all the smaller as u violates g0als with higher priority. Decisions violating
goals with priority 1 have a level of acceptability equal to O.
This representation plays a basic role in the manipulations that we
perform on preferences in this approach.
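
The correspondence just described is easy to program. The sketch below (helper names and the toy scale are ours) builds the prioritized goals (C_{α_i}, 1 - α_{i-1}) from a membership function given on a finite scale, dropping the redundant cuts, and evaluates π_K by formula (1); on the example it recovers μ_C, as stated above.

```python
def prioritized_goals(mu, scale):
    """Prioritized-goal reading of a membership function mu (dict: candidate ->
    level taken in `scale`, an increasing list a0 = 0 < ... < an = 1).
    Returns triples (cut level a_i, goal set C_{a_i}, priority 1 - a_{i-1}),
    skipping the redundant cuts."""
    goals, previous = [], None
    for i in range(1, len(scale)):
        cut = frozenset(u for u, v in mu.items() if v >= scale[i])
        if cut != previous:                               # identical cut: lower-priority goal is redundant
            goals.append((scale[i], cut, 1.0 - scale[i - 1]))
            previous = cut
    return goals

def pi_K(goals, u):
    """Formula (1): pi_K(u) = min over goals of max(membership of u, 1 - priority)."""
    return min([1.0 if u in goal else 1.0 - prio for _, goal, prio in goals] or [1.0])

# Example on the scale {0, 0.5, 1}:
mu_C = {'u1': 1.0, 'u2': 0.5, 'u3': 0.0}
K = prioritized_goals(mu_C, [0.0, 0.5, 1.0])
# K contains the 0.5-cut {'u1','u2'} with priority 1.0 and the 1.0-cut {'u1'}
# with priority 0.5; pi_K(K, u) coincides with mu_C[u] for every candidate u.
```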

3.2 Conjunctive and disjunctive forms


Conversely, a set of crisp goals (not necessarily nested) with different
levels of priority can always be represented in terms of a fuzzy set
membership function, applying (1)-(2), as we are going to see in different
examples.

Example 1: Hierarchical requirements.


In operations research, as well as in the database setting (e.g., Lacroix
and Lavency, 1987), requirements of the following form have been
considered:
"C1 should be satisfied, and among the solutions to C1 (if any) the ones
satisfying C2 are preferred, and among those satisfying both C1 and C2, those
satisfying C3 are preferred, and so on".
C1, C2, C3, ... are here supposed to be classical constraints (i.e., μ_Ci = 0 or 1).
Thus, one wishes to express that C1 should hold (with importance or priority
ρ1 = 1), and that if C1 holds, C2 should hold with priority ρ2, and if C1 and
C2 hold, C3 should hold with priority ρ3 (with ρ3 < ρ2 < ρ1). This can be
readily expressed by the possibilistic propositional logic base
K = {(c1, 1); (¬c1 ∨ c2, ρ2); (¬c1 ∨ ¬c2 ∨ c3, ρ3)}.
The semantics of K obtained by applying (1)-(2) can be put under the
form
π_K(u) = min(μ_C1(u), max(μ_C2(u), 1 - min(μ_C1(u), ρ2)),
max(μ_C3(u), 1 - min(μ_C1(u), μ_C2(u), ρ3))). (3)

It is a weighted min-based aggregation which reflects the idea that we are
completely satisfied (π_K(u) = 1) if C1, C2 and C3 are completely satisfied.
We are less satisfied (π_K(u) = 1 - ρ3) if C1 and C2 only are satisfied, and we
are even less satisfied (π_K(u) = 1 - ρ2) if only C1 is satisfied.
A semantically equivalent form for K (Dubois et al., 1994) can be
obtained by applying the possibilistic logic resolution rule, (¬a ∨ b, α), (a ∨
c, β) ⊢ (b ∨ c, min(α, β)), recalled in Section 2.2. Namely K = {(c1, 1); (c2,
ρ2); (c3, ρ3)}. Indeed (3) can also be put under the form:

π_K(u) = min(μ_C1(u), max(μ_C2(u), 1 - ρ2), max(μ_C3(u), 1 - ρ3)). (4)

Thus the priorities can directly reflect a hierarchy in possibilistic logic.


Expressions such as (4) or more generally (1) provide conjunctive
normal forms (i.e., it is a min of max). They can be turned into
disjunctive normal forms (max of min) and then provide a description
of the different classes of candidates ranked according to their level of
preference, as seen in the example below (all the candidates in a class
reach the same level of satisfaction).

Example 2:
Let us consider the following three criteria-based evaluation:
- if u satisfies A and B, u is completely satisfactory, and
- if A is not satisfied, solutions should at least satisfy C.
Such an evaluation function can be encountered in multiple criteria
problems for handling "special" cases (here situations where A is not
satisfied) which coexist with normal cases (here situations where both A and
B can be satisfied). It can be directly represented by the disjunctive form:
μ_D(u) = max(min(μ_A(u), μ_B(u)), min(μ_C(u), 1 - μ_A(u), 1 - ρ)) with ρ < 1.

The reading of this expression is easy. Either the candidate satisfies both
A and B, or, if it falsifies A, it satisfies C, which is less satisfactory.
This expression of μ_D, obtained as the weighted union of the different
classes of more or less acceptable solutions, can be transformed into an
equivalent conjunctive form like (4) or more generally (1); it can be checked
that this conjunctive form corresponds to the base K = {(a ∨ c, 1); (¬a ∨ b, 1);
(a, ρ); (b, ρ)}, where A, B, C are the sets of models of a, b and c
respectively; this provides a logical, equivalent description of the evaluation
process in terms of prioritized requirements to be satisfied by acceptable
solutions. Note that in this knowledge base, the formula (b, ρ) can be
removed since it can be recovered from (¬a ∨ b, 1) and (a, ρ) using the
possibilistic resolution principle described in Section 2.2.

It is worth noticing that the clausal form corresponding to the
possibilistic logic base may sometimes be less natural for expressing the
goals than the disjunctive normal form, as shown by Example 2 above.
Example 1 illustrates the converse situation.
The disjunctive normal form provides a logical description of the
different subsets of solutions each with their level of acceptability. On the
contrary, a possibilistic logic base which can always be put under the form
of a conjunction of possibilistic clauses corresponds to a prioritized set of
goals.

3.3 Basic modifications of a set of goals


Discounting and thresholding are two elementary operations that can be
performed on a preference profile. Indeed the discounting of a preference
profile μ_C, associated with a criterion C, by a level of importance ρ ∈ L
amounts in a qualitative setting to modifying μ_C(u) into max(μ_C(u), 1 - ρ)
for each u. It expresses that even if the candidate u is not at all satisfactory
with respect to the initial criterion C (μ_C(u) = 0), the candidate is no longer
completely rejected with respect to the discounted criterion, and receives a
value which is all the greater as the level of importance ρ is smaller, i.e.,
as the discounting is stronger.
Thresholding a preference profile by θ amounts in a qualitative setting to
modifying μ_C(u) into the expression θ → μ_C(u), which is equal to 1 if μ_C(u) ≥
θ and is equal to μ_C(u) otherwise. In other words, as soon as the candidate u
reaches the satisfaction level θ with respect to μ_C, it is regarded as fully
satisfactory with respect to the thresholded criterion; otherwise the
satisfaction level remains unchanged.
These two operations are easy to perform on the representation of C in
terms of prioritized goals. Indeed,
• the importance weighting operation max(μ_C(u), 1 - ρ) translates into the
suppression of the goals of highest priority (c_i, 1 - α_{i-1}) such that 1 - α_{i-1} >
ρ. When ρ = 1, no modification occurs, while when ρ = 0 all the goals
disappear.
• the thresholding operation defined by θ → μ_C(u) (equal to 1 if μ_C(u) ≥ θ
and equal to μ_C(u) otherwise) translates into the suppression of the
goals of lowest priority (c_i, 1 - α_{i-1}) such that α_i > θ. As in the previous
case, if θ = 1 no modification occurs, while when θ = 0 all the goals
disappear.
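
Syntactically, both operations reduce to filtering the goal base, as the following small sketch illustrates (the triple representation and the names follow the previous sketch and are ours, not the authors' notation).

```python
# Goals are triples (cut level a_i, goal set C_{a_i}, priority 1 - a_{i-1}).
def discount(goals, rho):
    # importance weighting max(mu_C(u), 1 - rho): drop the goals whose
    # priority strictly exceeds rho (rho = 1: no change; rho = 0: all goals go)
    return [(a, g, p) for a, g, p in goals if p <= rho]

def threshold(goals, theta):
    # thresholding theta -> mu_C(u): drop the goals of lowest priority,
    # i.e. those whose cut level a_i strictly exceeds theta
    return [(a, g, p) for a, g, p in goals if a <= theta]

# Goal base of a profile on the scale {0, 0.2, 0.4, 0.6, 0.8, 1} with
# mu(u1) = 1, mu(u2) = 0.8, mu(u3) = 0.4, mu(u4) = 0 (redundant cuts dropped):
K = [(0.2, frozenset({'u1', 'u2', 'u3'}), 1.0),
     (0.6, frozenset({'u1', 'u2'}), 0.6),
     (1.0, frozenset({'u1'}), 0.2)]
print(discount(K, 0.6))    # keeps only the goals of priority <= 0.6
print(threshold(K, 0.8))   # keeps only the cuts at levels <= 0.8
```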

3.4 Logical aggregation of fuzzy preference profiles


More generally, the conjunctive aggregation of fuzzy (discounted or
thresholded) preference profiles can be interpreted in terms of conjunctions
of crisp goals having different levels of priority, thus providing an
expression of combined preferences in a possibilistic logic form.
The pointwise aggregation of two fuzzy preference profiles C and C′
defined by means of the min operation, computed as min(μ_C(·), μ_C′(·)), can
be easily interpreted in the prioritized goals framework. It corresponds to the
union of the two sets of possibilistic formulas {(c_i, 1 - α_{i-1})} and {(c′_j,
1 - α′_{j-1})}. This is a particular case of the syntactic fusion of possibilistic
pieces of information (Benferhat et al., 1997b). It clearly agrees with (1).
Aggregation operations other than min can also be accommodated in a
symbolic manner. Indeed reinforcement and compensation operators, such as
the product and the average respectively, can also be interpreted in terms of
operations on prioritized goals. This applies as well to non-conjunctive
aggregations. Let (a, α) and (b, β) be two crisp constraints with priorities α
and β, and let * be a non-decreasing aggregation operator such that 1 * 1 = 1. The
aggregation of (a, α) and (b, β) is expressed pointwisely at the semantical
level by:
max(μ_A(u), 1 - α) * max(μ_B(u), 1 - β),
which can be easily interpreted in terms of prioritized goals. As can be
checked, this aggregation, symbolically denoted by (a, α) * (b, β), is
equivalent to the min-based conjunction of the following prioritized goals:
(a, 1 - ((1 - α) * 1)), (b, 1 - (1 * (1 - β))), (a ∨ b, 1 - ((1 - α) * (1 - β))).
Note that the combination amounts to adding the goal a ∨ b with a level of
priority higher than the ones of a and b. Indeed, since * is a non-decreasing
operation, 1 - ((1 - α) * (1 - β)) is greater than or equal to 1 - ((1 - α) * 1) and
to 1 - (1 * (1 - β)). If * = min, this third weighted clause is redundant with
respect to the two others.
This result can be generalized to fuzzy preference profiles A and B. It can
be shown that A * B is equivalent to the conjunction of the following sets of
prioritized goals:
{(a_i ∨ b_k, 1 - ((1 - α_{i-1}) * (1 - β_{k-1}))) for all (i, k)},
{(a_i, 1 - ((1 - α_{i-1}) * 1)) for all i} and {(b_k, 1 - (1 * (1 - β_{k-1}))) for all k}.

It should be emphasized that the translation of aggregation * into a
possibilistic propositional logic base is done at the expense of the
introduction of new levels in the scale. Indeed * is generally not closed on the
finite scale {α_0 = 0 < α_1 < ... < α_n = 1}. Moreover, note that neither the
symmetry nor the associativity of * are required.
For instance, the goal base K in Example 2 can be retrieved by
combination of its syntactic components {(a, 1), (b, 1)} and {(c, 1), (¬a, 1),
(⊥, ρ)} for * = max, which thus would enable us to manipulate sets of
choices in a logical way. A similar remark applies for the goal base of
Example 1, for {(c1, 1)}, {(c2, ρ2)}, {(c3, ρ3)} and * = min.

Example 3: (Moura-Pires and Prade, 1998)


Consider three preference profiles A, B and C, where A and B are fuzzy
and C is non-fuzzy but discounted by ρ. A is supposed to be thresholded by
θ. Moreover (C, ρ) and B are supposed to be aggregated by a compensatory
operation, here the arithmetic mean (s * t = (s + t)/2). This can be formally
written as:

(θ → A) ∧ ((C, ρ) * B),

where ∧ stands for the min aggregation. In the example we use the
satisfaction scale {α_0, α_1, α_2, α_3, α_4, α_5} = {0, 0.2, 0.4, 0.6, 0.8, 1} for A
and B, and we take θ = 0.8, ρ = 0.6. The problem can then be translated
into the form of a stratified possibilistic base. Namely, let A_j = A_{α_{j+1}},
e.g., A_1 = A_{α_2} (here the 0.4-cut).
The fuzzy criterion A is encoded by
A = {(a_0, 1), (a_1, 0.8), (a_2, 0.6), (a_3, 0.4), (a_4, 0.2)}.
Then (θ → A) corresponds to the following knowledge base:
K_{θ→A} = {(a_0, 1), (a_1, 0.8), (a_2, 0.6), (a_3, 0.4)}.
Now the base associated with (C, 0.6) * B is:
K_{(C, 0.6) * B} = {(c, 0.3), (b_0, 0.5), (b_1, 0.4), (b_2, 0.3), (b_3, 0.2),
(b_4, 0.1), (b_0 ∨ c, 0.8), (b_1 ∨ c, 0.7), (b_2 ∨ c, 0.6),
(b_3 ∨ c, 0.5), (b_4 ∨ c, 0.4)}.
Thus, the level cuts of the fuzzy set (θ → A) ∧ ((C, ρ) * B) lead to the
possibilistic base:
{(a_0, 1), (a_1, 0.8), (b_0 ∨ c, 0.8), (b_1 ∨ c, 0.7), (a_2, 0.6), (b_2 ∨ c, 0.6),
(b_0, 0.5), (b_3 ∨ c, 0.5), (a_3, 0.4), (b_1, 0.4), (b_4 ∨ c, 0.4), (b_2, 0.3),
(c, 0.3), (b_3, 0.2), (b_4, 0.1)}.
Viewing (θ → A) ∧ ((C, ρ) * B) as a fuzzy constraint satisfaction
problem, this can be exploited for relaxing it into crisp problems
corresponding to the different level cuts of the above possibilistic logic base;
see (Moura-Pires and Prade, 1998). See also (Moura-Pires et al., 1998).
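
As a check, the sketch below reproduces the base K_{(C, 0.6) * B} of this example from the general combination rule stated above (it covers the crisp case (a, α) * (b, β) as well); merging its output with K_{θ→A} gives the stratified base just listed. The function names are ours and the weights are rounded for readability.

```python
def combine(KA, KB, op):
    """Combine two prioritized goal bases (lists of (formula, priority) pairs,
    formulas given as strings) with a non-decreasing operator op such that
    op(1, 1) = 1; goals of priority 0 are dropped."""
    out = []
    out += [(a, 1 - op(1 - wa, 1.0)) for a, wa in KA]
    out += [(b, 1 - op(1.0, 1 - wb)) for b, wb in KB]
    out += [(f"{a} v {b}", 1 - op(1 - wa, 1 - wb)) for a, wa in KA for b, wb in KB]
    return [(f, round(w, 3)) for f, w in out if w > 0]

mean = lambda s, t: (s + t) / 2           # the compensatory operator of Example 3

K_C = [('c', 0.6)]                        # C discounted by rho = 0.6
K_B = [('b0', 1.0), ('b1', 0.8), ('b2', 0.6), ('b3', 0.4), ('b4', 0.2)]
print(combine(K_C, K_B, mean))
# -> [('c', 0.3), ('b0', 0.5), ('b1', 0.4), ('b2', 0.3), ('b3', 0.2), ('b4', 0.1),
#     ('c v b0', 0.8), ('c v b1', 0.7), ('c v b2', 0.6), ('c v b3', 0.5), ('c v b4', 0.4)]
```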

4. Constraint-based specification of preferences


In Section 3, two ways of expressing preferences in a granular manner
have been discussed, namely as a set of prioritized goals (conjunctive form),
or by the generic description of sets of solutions reaching some prescribed
level of satisfaction (disjunctive form). In this section we consider a third
way of specifying preferences through constraints, which can be related to
the two previous ones. When they are not contradictory, the constraints
provide an implicit, usually incomplete, specification of a preference profile
(here assumed to be represented under the form of a possibility distribution).

4.1 Weak comparative preferences


The possibilistic framework can be useful for the elicitation of a
qualitative preference profile from a set of constraints specifying preferences
in a granular way. For instance, a preference in favor of a binary property q
with respect to ¬q can be expressed by a constraint of the form
Π(q) > Π(¬q), (5)
which is equivalent to saying that there exists at least one decision value in
the set of models of q which is better than any decision value in the set of
models of ¬q. Such a constraint-based specification of preferences has been
considered by Boutilier (1994) without however referring to the possibilistic
framework.
This is a rather weak manner of expressing the preference about q.
Indeed, due to the definition of a possibility measure, (5) expresses that the
most satisfactory candidate satisfying q is preferred to the most satisfactory
candidate, and hence to all candidates, not satisfying q.
Such a constraint can easily be made context-dependent. Indeed, the
requirement "if p is satisfied, q is preferred to ¬q" can be expressed
by the constraint
Π(p ∧ q) > Π(p ∧ ¬q). (6)
Note that the above preference in favor of q true over q false, in the
context where p is true, does not presuppose anything about the preference
w.r.t. q and ¬q when p is false. The latter preference, if it exists, should be
specified by another constraint; if there is such a constraint, it might be
Π(¬p ∧ q) < Π(¬p ∧ ¬q),
or Π(¬p ∧ q) > Π(¬p ∧ ¬q),
or Π(¬p ∧ q) = Π(¬p ∧ ¬q),
depending on the cases. The two constraints Π(¬p ∧ q) > Π(¬p ∧ ¬q) and
(6) entail (5), but the converse does not hold in general. (5) only entails (6) or
Π(¬p ∧ q) > Π(¬p ∧ ¬q), since Π(q) = max(Π(p ∧ q), Π(¬p ∧ q)) and
Π(¬q) = max(Π(p ∧ ¬q), Π(¬p ∧ ¬q)). A constraint stronger than (5) is
necessary for entailing both constraints; see Section 4.2.
More generally, a collection of such requirements gives birth to
possibilistic constraints, whose greatest solution π*, obtained using the
minimal specificity principle¹, can be computed and represents a preference
profile agreeing with the requirements. The minimal specificity principle
expresses that any candidate is all the more satisfactory as it complies with
the constraints. However, there may exist other worth-considering selection
procedures of a particular possibility distribution satisfying the set of
constraints; this is open to discussion.

¹ A possibility distribution π is said to be more specific (Yager, 1992) than
another possibility distribution π′ if and only if for each interpretation u we have
π(u) ≤ π′(u) and there exists at least one interpretation u′ such that π(u′) < π′(u′).
In other words, π is more informative, or more requiring, than π′.
This approach is formally the same as the possibilistic treatment of
default rules. Indeed, a default rule "if p then generally q" is translated into
the constraint Π(p ∧ q) > Π(p ∧ ¬q), which expresses that p and q true is
strictly more plausible than p true and q false. See (Benferhat et al., 1998)
for an overview.
In this section, we recall an algorithm which computes π* from a set of
constraints of the form (5) and (6). To this aim, we view a possibility
distribution π as a well-ordered partition² (E_1, ..., E_m) of Ω such that:

∀ u ∈ E_i, ∀ u′ ∈ E_j, π(u) > π(u′) iff i < j.

By convention, E_1 represents the most normal states of the world. Thus, a
possibility distribution partitions Ω into classes of equally possible
interpretations.
Let
𝒞 = {C_i : Π(p_i ∧ q_i) > Π(p_i ∧ ¬q_i)}
be a set of constraints of the form of (6). We denote by

C_Ω = {(L(C_i), R(C_i)) : C_i ∈ 𝒞}

the associated constraints on Ω, where L(C_i) (resp. R(C_i)) is the set of
worlds satisfying p_i ∧ q_i (resp. p_i ∧ ¬q_i). The pair (L(C_i), R(C_i)) (where L
stands for left and R for right) is interpreted as a constraint saying that at
least one world in L(C_i) is better than any world in R(C_i), and this is exactly
what Π(p_i ∧ q_i) > Π(p_i ∧ ¬q_i) means.
The ordered partition of Ω using the minimum specificity principle and
satisfying C_Ω can be obtained by the following procedure (Benferhat et al.,
1998):

² i.e., Ω = E_1 ∪ ... ∪ E_m, and for i ≠ j we have E_i ∩ E_j = ∅; moreover E_j ≠ ∅ for each j.

a. m = 0.
b. While Ω is not empty, repeat b.1.-b.5.:
   b.1. m ← m + 1,
   b.2. Put in E_m every model which is not in any R(C_i) of C_Ω,
   b.3. If E_m = ∅ then C_Ω is inconsistent and the procedure stops,
   b.4. Remove the elements of E_m from Ω,
   b.5. Remove from C_Ω any pair (L(C_i), R(C_i)) containing elements of E_m.

The partition (E_1, ..., E_m) of Ω obtained by the previous algorithm is
unique. Many numerical counterparts to (E_1, ..., E_m) can be defined, for
instance

π*(u) = (m + 1 - j)/m if u ∈ E_j, j = 1, ..., m. (7)

Example 1 (continued)
In Section 3.2, the set of stratified goals c1, c2, c3 was directly given.
However, such a stratification can be related to the possibility distribution
which can be selected from a set of constraints of the forms (5)-(6). For
instance, an agent expresses that he wants coffee, and that if coffee is not
available he would like tea. This corresponds to the two following
constraints:

Π(c1) > Π(¬c1) and Π(¬c1 ∧ c2) > Π(¬c1 ∧ ¬c2),

where c1 = coffee and c2 = tea. Let Ω = {u1: c1∧c2, u2: c1∧¬c2, u3: ¬c1∧c2,
u4: ¬c1∧¬c2}, where u1: c1∧c2 reads "u1 is the model which makes c1∧c2
true", etc. We have:
C_Ω = {({u1, u2}, {u3, u4}), ({u3}, {u4})}.
Applying the above algorithm leads to the following partition:
E_1 = {u1, u2} > E_2 = {u3} > E_3 = {u4}.
A numerical counterpart can be:
π*(u1) = π*(u2) = 1,
π*(u3) = 2/3,
π*(u4) = 1/3.
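
The following minimal sketch implements steps a-b of the procedure and reproduces the partition and the numerical counterpart (7) on the coffee/tea example; an empty stratum triggers the inconsistency case of step b.3. Names are ours.

```python
def min_specificity_partition(universe, constraints):
    """constraints: pairs (L, R) of sets of worlds, read as 'at least one world
    of L is strictly better than every world of R'.  Returns E_1 > ... > E_m
    or raises if the constraints are inconsistent (step b.3)."""
    omega = set(universe)
    pending = [(set(L), set(R)) for L, R in constraints]
    partition = []
    while omega:
        stratum = {u for u in omega if all(u not in R for _, R in pending)}  # b.2
        if not stratum:
            raise ValueError("inconsistent constraints")                     # b.3
        partition.append(stratum)
        omega -= stratum                                                     # b.4
        pending = [(L, R) for L, R in pending if not ((L | R) & stratum)]    # b.5
    return partition

worlds = ['u1', 'u2', 'u3', 'u4']           # c1&c2, c1&~c2, ~c1&c2, ~c1&~c2
C_omega = [({'u1', 'u2'}, {'u3', 'u4'}),    # Pi(c1) > Pi(~c1)
           ({'u3'}, {'u4'})]                # Pi(~c1 & c2) > Pi(~c1 & ~c2)
E = min_specificity_partition(worlds, C_omega)
m = len(E)
pi_star = {u: (m + 1 - j) / m for j, stratum in enumerate(E, start=1) for u in stratum}
# E is [{u1, u2}, {u3}, {u4}] and pi_star gives 1, 1, 2/3, 1/3, as above.
```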
When a set of constraints expressing weak comparative preferences is
inconsistent (step b.3), it reveals that there does not exist any "ideal
situation" where each preference indeed takes place. To illustrate
this, let us take the following example:
i) "sea" is preferred to "not sea":
Π(s) > Π(¬s)
ii) "mountain" is preferred to "not mountain":
Π(m) > Π(¬m)
iii) "sea and mountain" is not preferred:
Π(¬m ∨ ¬s) > Π(m ∧ s)
It can be checked that these three constraints are inconsistent. Indeed, for
the first rule there exist two possible ideal worlds (expressed in the
language), namely "m ∧ s" and "¬m ∧ s". "m ∧ s" cannot be considered due
to constraint iii). The choice of "¬m ∧ s" contradicts the second constraint.
However, if we weaken both preferences i) and ii) by adding a specific
context to (i) and (ii), by casting the statement "sea is preferred to not sea" in
the context of flat land (¬m), and "mountain to not mountain" in the context
of being far from the sea (¬s), the set of constraints becomes
consistent, as shown below:
i)′ Π(s ∧ ¬m) > Π(¬s ∧ ¬m)
ii)′ Π(m ∧ ¬s) > Π(¬m ∧ ¬s)
iii)′ max(Π(m ∧ ¬s), Π(¬m ∧ s), Π(¬m ∧ ¬s)) > Π(m ∧ s).
An obvious solution is indeed Π(s ∧ ¬m) = Π(¬s ∧ m) = 1, and Π(s ∧ m)
= Π(¬s ∧ ¬m) < 1.
It may also happen that, even by weakening the preferences by
contextualizing them with a situation expressible in the language, there is no
way of making the set of constraints consistent. This indicates that the
language is not expressive enough for specifying the "ideal" situation(s);
then new literals should be added. Thus, the extreme example of the two
constraints
i) Π(p) > Π(¬p)
ii) Π(¬p) > Π(p)
can be made consistent by adding a context x to one of the constraints, e.g.,
i)′ Π(p ∧ x) > Π(¬p ∧ x)
ii)′ Π(¬p) > Π(p)
⇔ max(Π(¬p ∧ x), Π(¬p ∧ ¬x)) > max(Π(p ∧ x), Π(p ∧ ¬x)),
a solution of which is, for instance,
Π(¬p ∧ ¬x) = 1 > Π(p ∧ x) = 0.5, and Π(p ∧ ¬x) = 0 = Π(¬p ∧ x).
These two examples point out that even if weak comparative preference
constraints form a rather permissive manner of expressing preferences,
preferences which are not sufficiently specific can become inconsistent. This
inconsistency reveals a lack of detail in the expression of the preferences.

4.2 Strong comparative preferences


Other types of constraints can be introduced; see Van der Torre and
Weydert (2001) for a detailed study. A stronger counterpart to (5) is:
~(q) > I1(-'q) (8)
where
~(q) ::: min u: u 1= q 1I(u)

is the guaranteed possibility function (Dubois and Prade, 1998). The


inequality (8) expresses that any candidate satisfying q (rather than at least
one in (5» is preferred to any candidate satisfying -'q. This is the strongest
way of expressing the preference of q over -'q. Again, the scope of (8) can be
reduced by making the constraint context-dependent under the form
~(p 1\ q ) > fI(p 1\ -'q).
If 0 ::: {pl\q, Pl\-'q, --Pl\q, --pl\~} (where pl\q also denotes the world
where p and q are true, etc.), ~(q) > I1(-'q) expresses that min( 1I(pl\q),
1I(--pl\q» > max(1I(pl\-'q), 1I(--p1\-'q).
It is stronger than the ceteris paribus principle (Doyle and Wellman,
1991; Boutilier et aI., 1999) which amounts to asserting that q is preferred to
-'q whatever the context, i.e., 1I(pl\q) > 1I(pl\~) and 1I(--pl\q) > 1I(--pI\-'q) if
o ::: {pl\q, pl\-'q, --Pl\q, --pl\-'q} (but the ceteris paribus principle does not
say anything on 1I(pl\q) w.r.t. 1I(--p1\~ for instance) The ceteris paribus
preference of q over ~ more generally entails I1(p 1\ q) > I1(p 1\ -'q) and
I1(-,p 1\ q) > I1(-,p 1\ -,q).

Clearly there are two other basic types of constraints, namely


L\.(q) > L\.(-'q)
and
TI(q) > L\.( -'q).
332 AIDING DECISIONS WITH MULTIPLE CRITERIA

However, they are weaker than constraints of types (8) and (5)
respectively. The latter one, weaker than II(q) > II(-'q), only expresses that
there exists a model of q which is preferred to a model of -'q; in other words,
it states that it is not the case that the (strong) constraint .:l(-'q) ~(q) holds,
which is indeed a weak piece of information. The constraint .:l(q) > .:l(~) is
weaker than .:l(q) > II(-'q) and means that all the models of q are preferred to
at least one model of -'q; in particular the least satisfactory candidates among
the models of q is more satisfactory than at least one candidate which is a
model of~.
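
The four types of constraints are easy to compare numerically. The small sketch below (with an invented distribution π over the four worlds of the discussion above) evaluates Π and Δ on the models of q and ¬q and checks the constraints in decreasing order of strength.

```python
def Pi(pi, worlds):        # possibility: best candidate among the models
    return max(pi[w] for w in worlds)

def Delta(pi, worlds):     # guaranteed possibility: worst candidate among the models
    return min(pi[w] for w in worlds)

# Omega = {pq, p~q, ~pq, ~p~q}; here every q-world is better than every ~q-world
pi = {'pq': 1.0, 'p~q': 0.3, '~pq': 0.8, '~p~q': 0.2}
q, not_q = ['pq', '~pq'], ['p~q', '~p~q']
print(Delta(pi, q) > Pi(pi, not_q))     # (8): the strongest reading -> True
print(Pi(pi, q) > Pi(pi, not_q))        # (5): weaker -> True
print(Delta(pi, q) > Delta(pi, not_q))  # weaker than (8) -> True
print(Pi(pi, q) > Delta(pi, not_q))     # weaker than (5) -> True
```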

4.3 Other methods for assessing preferences


As suggested by the following example, the constraint-based approach
can be useful for completing preference orderings which are implicitly
specified through both examples and general principles.

4.3.1 A motivating example


Let us consider the following situation with three criteria, say the levels
in mathematics (M), in physics (P), and in literature (L), and three candidates
A, B and C rated on the 6-level scale a > b > c > d > e > f:
M P L
A a b f
B f e a
C d c c
where M and P are supposed to have the same importance, greater than the
one of L, while the result of the global aggregation of the three criteria
should be such that the candidate C is preferred to A and A is preferred to
B³. This can be expressed by the following sets of constraints, where π(xyz)
denotes the level of acceptability of a candidate having grade x in M, y in P
and z in L (using an encoding of the grades x, y, z into the 6-level scale a > b
> c > d > e > f):
i) C is preferred to A and A is preferred to B. This is encoded by:
π(dcc) > π(abf) > π(fea)

³ This example has been recently used by Michel Grabisch and Marc Roubens (with a=18, b=16,
c=15, d=14, e=12, f=10) for illustrating the case where no weighted average aggregation
function can agree with both the proposed orderings between the candidates and the
respective importance of the criteria, while a Choquet integral (see, e.g., Grabisch et al.,
1995) can represent the situation.

ii) M and P have the same importance. This is encoded by:
π(xyz) = π(yxz) for all x, y and z
iii) P is more important than L:
π(xyz) > π(xzy) for all x if y > z
iv) M is more important than L:
π(xyz) > π(zyx) for all y if x > z
v) π is increasing w.r.t. x, y and z (the greater the grades, the better the
candidate).
Constraint (i) reflects the example provided by the user, while the others
express general principles which should be applied to any tuple of grades in
M, P and L. Note that constraints (iii) and (iv) are strong ways of expressing
importance, since they are examples of ceteris paribus preferences. Recall
that π just encodes a ranking and π(xyz) is not an absolute value. Moreover,
the set of constraints (iv) (M more important than L) can be deduced from
the sets of constraints (ii: M and P same importance) and (iii: P more
important than L), as expected. Indeed, from (iii) we have:
π(xyz) > π(xzy) for all x if y > z;
using (ii) we have:
π(xyz) = π(yxz) and π(xzy) = π(zxy),
which implies constraint (iv), namely:
π(yxz) > π(zxy) for all x if y > z.
Moreover π(abf) > π(fea) can be deduced from (ii)-(v). Indeed, from (ii) and
(iii) we have:
π(abf) = π(baf) > π(bfa) = π(fba),
and using (v) we have π(fba) ≥ π(fea), and hence π(abf) > π(fea).
Such a family of constraints, as in the example above, defines a family of
π-rankings compatible with the constraints. This family is non-empty if the
constraints are consistent, which is the case in the example. An instance of a
complete preordering between triples of grades, agreeing with the two
inequality constraints (i) in the example, is provided by the leximin ordering
(e.g., Dubois et al., 1996), defined by reordering the elements in the triples of
grades xyz increasingly and by rank-ordering the triples thus obtained
in a lexicographic way (i.e., dcc > abf > fea, since after re-ordering we have
dcc > fba > fea). The other constraints pertaining to the relative importance
of criteria can then be used for breaking ties between triples which are
identical once reordered (e.g., aaf > faa due to (iv)).
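
A minimal sketch of this leximin comparison is given below; the numerical encoding of the grades is ours (the footnote's values a=18, ..., f=10 would serve equally well), and ties such as aaf versus faa are left to the importance constraints, as noted above.

```python
GRADE = {'a': 6, 'b': 5, 'c': 4, 'd': 3, 'e': 2, 'f': 1}   # our encoding of a > ... > f

def leximin_key(triple):
    # re-order the grades increasingly (worst first); Python tuple comparison
    # then realizes the lexicographic ranking of the re-ordered triples
    return tuple(sorted(GRADE[g] for g in triple))

candidates = {'A': 'abf', 'B': 'fea', 'C': 'dcc'}
ranking = sorted(candidates, key=lambda k: leximin_key(candidates[k]), reverse=True)
print(ranking)                                    # ['C', 'A', 'B']: dcc > abf > fea
print(leximin_key('aaf') == leximin_key('faa'))   # True: a tie, to be broken by (iv)
```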
Note that the constraint-directed approach only looks for a ranking
between triples of grades in M, P and L, without trying to derive this
ordering by means of some aggregation function to be determined in a given
family (e.g., Choquet integral (Grabisch, 1996)), as classical approaches do.
Let us emphasize that the interest of such an approach would be to obtain a
ranking of the situations directly from the specification of users' preferences,
without having to identify an aggregation function for the criteria. It also
enables us to check the consistency of the user requirements, and to restate
preferences as a set of stratified goals (which can be then checked, or
modified by the user). The development of such an approach also raises
computational issues which are not addressed here.

4.3.2 Handling exceptions


As suggested by the example above, the proposed approach can be useful
for completing preference orderings which are implicitly specified through
constraints expressed in terms of possibility measures. As advocated above,
such constraints can encompass both examples and general principles.
However, it may happen that a general principle has some justified
exceptions, e.g., π(xyz) > π(zyx) holds if x > z for all y except if y is
maximal (y = y_M). Then we are back to a nonmonotonic reasoning situation.
So we have to weaken (iv) into
iv′) if x > z there exists y such that π(xyz) > π(zyx).

Then, when there are no further constraints, applying the minimal
specificity principle to (iv′) and π(x y_M z) ≤ π(z y_M x) amounts to requiring that
(iv) holds almost everywhere, except for the y which are exceptions to (iv)
(here y = y_M).

5. Concluding remarks
This paper has advocated and outlined the use of possibility theory and
possibilistic logic in decision analysis for the representation and
combination of preferences. It has proposed a discussion of the expression of
qualitative preferences by providing sets of prioritized goals, or sets of
solutions reaching some given level of satisfaction, or by specifying
preferences through constraints pertaining to goals, to examples, or to
importance assessments. This approach shares some motivations and ideas
with the rough set-based model recently proposed by (Greco et al., 1998),
where the indiscernibility relations underlying ordinary rough sets are
changed into dominance relations, for approximating preference relations
and getting sets of decision rules playing the role of a comprehensive
preference model. Links and differences between the two approaches are still
to be clarified.

In connection with the approach proposed in this paper, one may think of
other lines of research. First, the logical framework does not only provide a
convenient representation tool, but also provides a basis for generating
explanations of interest for the user. Another topic of interest for further
research, briefly considered in (Benferhat et al., 1999), is the revision of
preferences expressed as a stratified set of goals by a new input asking for
the incorporation of further preferences; see also (Ryan and Williams, 1997).
Lastly, another worth investigating issue, where a layered logic
framework may be useful, is the analysis of conflict between preferences.
Suppose that different preference profiles, expressing different points of
view, are to be combined symmetrically. Taking these different preference
profiles together very often creates inconsistencies (see, e.g., Felix, 1992).
The problem is then to determine what goals can be relaxed or put at smaller
levels of priority, taking advantage of the stratification of the preferences.
Methods developed for reasoning from stratified inconsistent propositional
logic bases may be very useful for that purpose: these methods are based on
the selection of particular consistent subbases, or on the search for pro and
cons arguments (Benferhat, Dubois, and Prade, 1996), or on the exploitation
of minimally inconsistent subsets (Benferhat and Garcia, 1998).

References
Bellman R., Zadeh L.A. (1970) Decision-making in a fuzzy environment. Management
Science, 17, 141-164.
Benferhat S., Dubois D., Prade H. (1996) Reasoning in inconsistent stratified knowledge
bases. Proc. of the 26th Inter. Symp. on Multiple-Valued Logic (ISMVL'96), Santiago de
Compostela, Spain, 29-31 May, 184-189.
Benferhat S., Dubois D., Prade H. (1997a) Nonmonotonic reasoning, conditional objects and
possibility theory. Artificial Intelligence, 92, 259-276.
Benferhat S., Dubois D., Prade H. (1997b) From semantic to syntactic approaches to
information combination in possibilistic logic. In: Aggregation and Fusion of Imperfect
Information, Studies in Fuzziness and Soft Computing Series, (B. Bouchon-Meunier, ed.),
Physica-Verlag, 141-161.
Benferhat S., Dubois D., Prade H. (1998) Practical handling of exception-tainted rules and
independence information in possibilistic logic. Applied Intelligence, 9, 101-127.
Benferhat S., Dubois D., Prade H. (1999) Towards a possibilistic logic handling of
preferences. Proc. 16th Int. Joint Conf. on Artificial Intelligence (IJCAI-99), Stockholm,
Sweden, 31 July - 6 August 1999, 1370-1375.
Benferhat S., Garcia L. (1998) Dealing with locally-prioritized inconsistent knowledge bases
and its application to default reasoning. In Applications of Uncertainty Formalisms (A.
Hunter and S. Parsons, eds.), LNAI 1455, Springer, Berlin, 323-353.
Boutilier C. (1994) Toward a logic for qualitative decision theory. Proc. of the 4th Int. Conf. on
Principles of Knowledge Representation and Reasoning (KR-94), Bonn, (J. Doyle,
E. Sandewall, P. Torasso, eds.), Morgan Kaufmann, 75-86.
Boutilier C., Brafman R. I., Hoos H.H., Poole D. (1999) Reasoning with conditional ceteris
paribus preference statements. Proc. of the 15th Conf. on Uncertainty in Artificial
Intelligence (UAI99), (K.B. Laskey, H. Prade, eds.), Morgan Kaufmann, 71-80.
Doyle J., Wellman M.P. (1991) Preferential semantics for goals. In Proc. of the 9th National
Conf. on Artificial Intelligence (AAAI-91), Anaheim, 698-703.
Dubois D., Fargier H., Prade H. (1996) Refinements of the maximin approach to decision-
making in a fuzzy environment. Fuzzy Sets and Systems, 81, 103-122.
Dubois D., Farinas L., Herzig A., Prade H. (1997) Qualitative relevance and independence: a
roadmap. In Proceedings of the fifteenth International Joint Conference on Artificial
Intelligence (IJCAI-97), 62-67.
Dubois D., Lang J., Prade H. (1994) Automated reasoning using possibilistic logic: semantics,
belief revision and variable certainty weights. IEEE Trans. on Data and Knowledge
Engineering, 6(1), 64-71.
Dubois D., Le Berre D., Prade H., Sabbadin R. (1998) Logical representation and
computation of optimal decisions in a qualitative setting. Proc. AAAI-98, 588-593.
Dubois D., Prade H. (1987) Necessity measures and the resolution principle, IEEE Trans.
Systems, Man and Cybernetics, Vol. 17, pp. 474-478.
Dubois D., Prade H. (1998) Possibility theory: qualitative and quantitative aspects. In
Handbook of defeasible reasoning and uncertainty management systems. Vol. 1, pp. 169-
226, Kluwer Academic Press.
Felix R. (1992) Towards a goal-oriented application of aggregation operators in fuzzy
decision-making. Proc. of the Int. Conf. on Information Processing and Management of
Uncertainty in Knowledge-Based Systems (IPMU-92), Mallorca, July 6-10, 585-588.
Grabisch M. (1996) The application of fuzzy integrals in multicriteria decision making.
Europ. J. of Operational Research, 89, 445-456.
Grabisch M., Nguyen H.T., Walker E.A. (1995) Fundamentals of Uncertainty Calculi, with
Applications to Fuzzy Inference. Kluwer Academic.
Greco S., Matarazzo B., Slowinski R. (1998) Rough set theory approach to decision analysis.
In Proc. 3rd Europ. Workshop on Fuzzy Decision Analysis and Neural Networks for
Management, Planning and Optimization (EFDAN'98), (R. Felix, ed.), Dortmund,
Germany, June 16-17, 1998, 1-28.
Keeney R., Raiffa H. (1976) Decisions with Multiple Objectives: Preferences and Value
Trade-offs, Wiley, New York.
Lacroix M., Lavency P. (1987) Preferences: Putting more knowledge into queries. Proc. of the
13th Inter. Conf. on Very Large Data Bases, Brighton, UK, 215-225.
Lang J. (1991a) Possibilistic logic as a logical framework for min-max discrete optimisation
problems and prioritized constraints. In Fundamentals of Artificial Intelligence Research
(FAIR'91), (P. Jorrand, J. Kelemen, eds.), L.N.C.S. 535, Springer Verlag, 112-126.
Lang J. (1991b) Logique possibiliste : aspects formels, déduction automatique, et applications.
PhD Thesis, Université P. Sabatier, Toulouse, France, January 1991.
Lang J. (1996) Conditional desires and utilities - an alternative logical framework for
qualitative decision theory. Proc. 12th European Conf. on Artif. Intellig. (ECAI-96),
Budapest, Wiley, U.K., 318-322.
Minker J., ed. (2000) Logic-Based Artificial Intelligence. Kluwer Academic Publishers,
Boston.
Moura-Pires J., Prade H. (1998) Logical analysis of fuzzy constraint satisfaction problems.
Proc. of the 1998 IEEE Int. Conf. on Fuzzy Systems (FUZZ-IEEE'98), Anchorage, Alaska,
May 4-9, 1998, 857-862.
Moura-Pires J., Dubois D., Prade H. (1998) Fuzzy constraint problems with general
aggregation operations under possibilistic logic form. Proc. 6th Europ. Cong. on Intellig.
Techniques & Soft Comput., Aachen, Germany, Sept. 7-10, 1998, pp. 535-539.
Roy B., Bouyssou D. (1993) Aide Multicritère à la Décision : Méthodes et Cas. Ed.
Economica, Paris.
Ryan J., Williams, M.-A. (1997) Modelling changes in preference: an implementation. ISRR-
027-1997, Dept. of Management, Univ. of Newcastle, NSW, Australia.
Schiex T. (1992) Possibilistic constraint satisfaction problems or "How to handle soft
constraints?" In Proc. of the 8th Conf. on Uncertainty in Artificial Intelligence (UAI92),
(D. Dubois, M.P. Wellman, B. D'Ambrosio, P. Smets, eds.), Morgan Kaufmann, 268-275.
Spohn W. (1988) Ordinal conditional functions: a dynamic theory of epistemic states. In:
Causation in Decision, Belief Change and Statistics, Vol. 2, (W.L. Harper and B. Skyrms,
eds.), Reidel, Dordrecht, 105-134.
Tan S.-W., Pearl J. (1994) Qualitative decision theory. In: Proc. of the 12th National Conf. on
Artificial Intelligence (AAAI-94), Seattle, Wa., July 31 - Aug. 4, 1994, 928-933.
Van der Torre L., Weydert E. (2001) Parameters for utilitarian desires in qualitative decision
theory. Applied Intelligence, to appear.
Williams M.-A. (1994) Transmutations of knowledge systems. Proc. KR-94, 619-629.
Yager R.R. (1992) On the specificity of a possibility distribution. Fuzzy Sets and Systems, 50,
279-292.
Zadeh L.A. (1978) Fuzzy sets as a basis for a theory of possibility. Fuzzy Sets and
Systems, 1, 3-28.
EMPIRICAL COMPARISON OF LOTTERY- AND
RATING-BASED PREFERENCE ASSESSMENT

Oscar Franzese
Oak Ridge National Laboratory, Oak Ridge, TN, USA;
iranzese@ornl.gov

Mark R. McCord
The Ohio State University, Columbus, OH, USA
mccord.2@osu.edu

Abstract: We investigate the performance of direct rating, probability equivalent, and


lottery equivalent assessment techniques for a set of 41 individuals in terms of
the ability of the techniques to reproduce indifference between two-criteria
outcomes previously judged to be indifferent. To compare the performance
before and after gaining familiarity with the techniques, we use data obtained
both at the beginning and at the end of the interview sessions. The results
show that the probability equivalent and lottery equivalent techniques
performed no worse, and generally better than the rating technique. These
results refute claims that lottery-based techniques are too complicated and too
unrealistic compared to simpler techniques to be used in MCDA preference
assessment. The results also show that all three techniques performed better
when using data obtained at the end of the session-after the individuals
gained familiarity with the techniques-and that the relatively complex lottery
equivalent technique performed as well as the other techniques when using
data obtained at the end of the session.

Key words: Preference assessment; Rating; Probability equivalent; Lottery equivalent

1. Introduction
In multi-criteria decision aiding (MCDA), different decision makers may
evaluate the criteria of a decision problem differently. Similarly, the same
decision maker may evaluate the criteria differently under different
circumstances. Such flexibility is an essential characteristic that
differentiates MCDA from more traditional evaluation methods. However, to


be useful in aiding decision makers in difficult problems, the methods must
constrain this flexibility through both the evaluation logic of the method and
the parameters that represent the decision maker's preferences within the
framework of the evaluation logic. As such, there must be some assessment
of the preference parameter values that can subsequently be used in the
prescriptive decision aiding effort (McCord and de Neufville, 1983).
This preference assessment is often conducted by directly eliciting
information from the decision maker. Sometimes, the analyst may impose
parameter values as a first approximation. However, if the subsequent
analysis is to be useful, the analyst must impose values that the decision
maker would agree could represent his or her preference structure in the
context of the MCDA method. Similarly, conducting a sensitivity analysis
does not eliminate the need for determining representative preference
parameters; if the sensitivity analysis is to add insight, it must be conducted
with reference to meaningful and representative values of these parameters
(McCord and Leu, 1995; McCord, Franzese, and Sun, 1993).
Assessing preference parameters is perhaps most explicitly recognized
and rigorously developed in the multi-attribute expected utility theory
(MAEUT) approach to MCDA. Indifference statements involving "lotteries"
comprised of simple, but well-specified probability distributions over
possible outcomes are elicited from the decision maker in response to well-
defined questions posed by the analyst. The MAEUT evaluation logic is
used to transform the indifference statements into scaling coefficients and
levels of unidimensional utilities. Two common lottery-based techniques are
the probability equivalent and lottery equivalent techniques (Rogers, Bruen,
and Maystre, 2000; Law, Pathak, and McCord, 1998; McCord and de
Neufville, 1986). In the probability equivalent technique, the decision maker
compares a lottery and an outcome occurring with certainty. In the lottery
equivalent technique, the decision maker compares two lotteries, and there is
no reference to certainty. As such, the lottery equivalent technique avoids
difficulties associated with the certainty effect (McCord and de Neufville,
1985). However, it avoids these difficulties at the price of increased
complexity (comparing two lotteries, rather than a lottery and a certain
outcome), raising the question of whether the lottery equivalent technique is
too complex to reap its advantages.
We have occasionally heard individuals claim that even the simplest
lotteries are too difficult for a decision maker to understand and too artificial
to elicit meaningful responses. Our anecdotal experience is otherwise: We
have observed that when they gain experience in thinking through the
meanings of lotteries, analytical and motivated decision makers "appear" to
grasp the intended meaning. Still, we agree that it could be advantageous to


avoid the increased complexity of lotteries in preference assessment, and we
recognize the attraction of simpler techniques.
There is some debate as to whether the utility functions used in MAEUT
can be used to represent strength of preferences (e.g., French, 1986; von
Winterfeldt and Edwards, 1986; Allais, 1979). If they could, information
needed to construct utility function parameters could conceivably be
assessed by rating an outcome on an interval scale, thereby eliminating the
need for the decision maker to consider lotteries in the preference assessment
phase. Rating on a scale with given endpoints is common in everyday life,
and such common usage makes it attractive to those desiring an "easy"
assessment technique. With an established connection between strength of
preference and utility functions, rating could also help in interpreting and
constraining responses to lottery-based assessment (McCord, Franzese, and
Sun, 1993). Rating is also representative of "direct" assessment techniques in
which preference parameters are directly assigned numbers without explicit
meaning given to the numbers. Therefore, having faith in a rating-type
technique is important both to MAEUT and to other MCDA methods.
Just as concerns have been raised about lottery-based methods, however,
we have felt that direct rating is arbitrary and vague. Specifically, since the
difference between, for example, a 6 and a 7 "on a scale of 0 to 10" is rarely,
if ever, made explicit, it seems impossible for a decision maker to provide a
rating that has meaning consistent with the underlying evaluation logic of the
MCDA methodology. This raises the question of whether the vagueness of
rating technique or the complexity of lottery-based techniques would cause
more difficulties in preference assessment. We also recognize that
preference parameters are not hard-wired in the decision maker's mind.
Rather, they would generally evolve during the necessary process of
"constructing," rather than "collecting," preference information and during
the subsequent MCDA effort (Fischhoff et al., 1999; Payne et al., 1999; Roy,
1996). As such, the preference assessment techniques should be considered
as tools to be used to progressively lead to preference statements, and the
performance of the techniques should be considered after individuals gain
familiarity with them.
With these issues in mind, we devised a controlled experiment to
investigate the relative performance of the probability equivalent, lottery
equivalent, and rating techniques, and to investigate the performance before
and after the individuals gained familiarity with the techniques. We compare
the performance in terms of the ability of the techniques to reproduce
indifference between two-criteria outcomes-the time and cost of obligatory
intercity trips-previously judged to be indifferent. Reproducing the
indifference used in the test of performance requires transitivity, a constraint


on the evaluation logic that we feel would be desirable in any MCDA
method. Since we do not investigate the ability of the techniques to produce
"valid" parameters in the context of the evaluation logic of a specific MCDA
method, our comparisons are based on a condition that can be considered
necessary, but not sufficient for accepting the technique.
In the next section, we describe the conceptual basis of our experiment
and the design of its implementation. In the third section we present the
results that show that the lottery-based techniques outperformed the rating
technique. The improved performance was striking in the case of the
probability equivalent technique, and only slight in the case of the lottery
equivalent technique. Still, there was no evidence that, compared to direct
assessment techniques, lottery-based techniques are too complicated for
preference parameter assessment. The results also showed that the
performance of all the techniques improved as the individuals gained
familiarity with the techniques. In addition, after gaining familiarity with the
techniques, the lottery equivalent technique performed almost as well as the
probability equivalent technique. We discuss the implications of these results
in the final section.

2. Experimental design
2.1 Concepts
Individuals evaluated outcomes represented by the time and cost of
obligatory trips using the rating (R), probability equivalent (PE), and lottery
equivalent (LE) assessment techniques. The trips were to begin at 5 PM and
arrive at the destination at a specified time that ranged between 7 PM the
same evening and 3 AM the next morning. Therefore, the travel time
attribute had lower (T°) and upper (T*) bounds, respectively, of 2 hours (i.e.,
depart at 5 PM, and arrive at 7 PM) and 10 hours (i.e., depart at 5 PM, and
arrive at 3 AM). The trip cost attribute had lower bound C° of $20. We used
the technique presented in McCord and Franzese (1993) to fix the value of
the cost upper bound (C*). Specifically, for each individual i we elicited that
individual's value C_i* such that:

(T* = 10, C° = 20) ~ (T° = 2, C* = C_i*), (1)

where "~" represents indifference between the alternative on its left and that
on its right. In this way, C_i* was the maximum dollar amount that subject i
was willing to pay to obtain a trip taking the minimum travel time
considered (2 hours), rather than a trip costing the minimum amount
considered ($20) and taking the longest travel time considered (10 hours).
The indifference statement presented in Eq. (1) implies the testable
condition:

VE_i^m(10, 20) = VE_i^m(2, C_i*), (2)

where VE_i^m is individual i's "Value Equivalent" provided when using
assessment technique m, m = R, PE, or LE. This condition formed the basis
of our empirical test of the three techniques.
In the Rating technique (m = R), we asked the subjects to provide values
of the two outcomes (T* = 10, C° = 20) and (T° = 2, C* = C_i*) on a scale of 0
to 10, where the worst outcome considered (T* = 10, C* = C_i*) had a value of
0 and the best outcome considered (T° = 2, C° = 20) had a value of 10. We
denote these responses R_i(10, 20) and R_i(2, C_i*), respectively. We then
normalized these responses from the 0-10 to the 0-1 interval. Then, for the
Rating technique:

VE_i^R(10, 20) = R_i(10, 20)/10; (3a)

VE_i^R(2, C_i*) = R_i(2, C_i*)/10. (3b)

For the Probability Equivalent technique (m = PE), the individual
expressed indifference between an outcome that would occur with certainty
and a lottery offering the best outcome considered (T° = 2, C° = 20) with
probability p and the worst outcome considered (T* = 10, C* = C_i*) with
complementary probability (1 - p). The certain outcome was alternately set
to (T* = 10, C° = 20) and (T° = 2, C* = C_i*). Values p_a,i and p_b,i of p were
sought that would cause the decision maker to express indifference in:

(10, 20) ~ [(2, 20), p_a,i; (10, C_i*), (1 - p_a,i)], (4a)

and

(2, C_i*) ~ [(2, 20), p_b,i; (10, C_i*), (1 - p_b,i)], (4b)

where [A, p; A′, (1 - p)] indicates a lottery offering A with probability p and
A′ with probability (1 - p). The indifference probabilities p_a,i and p_b,i can be
shown to be constrained to lie between 0 and 1.0, and no normalization was
required. Then, for the PE technique:

VE_i^PE(10, 20) = p_a,i; (5a)

VE_i^PE(2, C_i*) = p_b,i. (5b)

In the Lottery Equivalent technique (m = LE), the individual expressed
indifference between two lotteries. One lottery was that used in the PE
technique, i.e., the lottery offering the best outcome considered (T° = 2,
C° = 20) with probability denoted q, this time, and the worst outcome
considered (T* = 10, C* = C_i*) with complementary probability (1 - q). The
other lottery was one offering the worst outcome considered (T* = 10,
C* = C_i*) with probability (1 - s) and alternately outcomes (T° = 2, C* = C_i*)
and (T* = 10, C° = 20) with complementary probability s. In our experiment,
s was set at 0.675. (All probabilities were indicated graphically and
demonstrated by randomly generating cards on the computer screen; see
Franzese, 1993.) Values q_a,i and q_b,i of q were sought that would cause the
decision maker to express indifference in:

[(10, 20), 0.675; (10, C_i*), 0.325] ~ [(2, 20), q_a,i; (10, C_i*), (1 - q_a,i)], (6a)

and

[(2, C_i*), 0.675; (10, C_i*), 0.325] ~ [(2, 20), q_b,i; (10, C_i*), (1 - q_b,i)], (6b)

where the notation is as described above. The indifference probabilities q_a,i
and q_b,i can be shown to be constrained to lie between 0 and s = 0.675.
Therefore, we normalized these responses to the 0-1 interval, and for the LE
technique:

VE_i^LE(10, 20) = q_a,i/0.675; (7a)

VE_i^LE(2, C_i*) = q_b,i/0.675. (7b)
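
The normalizations (3), (5) and (7) amount to very little computation; the sketch below makes them explicit for a hypothetical subject (the response values are invented and only illustrate the mapping to the common 0-1 scale).

```python
S = 0.675   # probability assigned to the non-degenerate branch of the LE reference lottery

def ve_rating(rating_0_10):          # Eq. (3): rating on the 0-10 scale, normalized to 0-1
    return rating_0_10 / 10.0

def ve_probability_equivalent(p):    # Eq. (5): indifference probability, already in [0, 1]
    return p

def ve_lottery_equivalent(q, s=S):   # Eq. (7): indifference probability in [0, s], normalized
    return q / s

# Hypothetical responses for the outcome (T* = 10, C0 = 20); the test of Eq. (2)
# compares these values with the ones obtained for (T0 = 2, C_i*).
print(ve_rating(4), ve_probability_equivalent(0.35), ve_lottery_equivalent(0.27))
```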

2.2 Protocol
Ninety-one Civil Engineering graduate and undergraduate students at The
Ohio State University participated in the experiment. The data were
collected as a required exercise during a course module on transportation
demand modeling. The students were told that although there were no right
or wrong answers, the computer software could track parameters that would
shed light on how seriously they considered their responses. Moreover, they
were informed that the data could also be used to gain insights into
preferences for the time and cost of intercity trips-a topic of interest in the
course module. We used the PE and R techniques in eliciting the VE's for 21
subjects. For 25 other subjects we used the LE and R assessment techniques.
For the remaining 45 subjects we performed the assessments using the PE
and LE techniques.
The experiments were conducted using an interactive computer program
(Franzese, 1993). Approximately one week before the actual interview, we
conducted "warm-up" sessions with the subjects. In these preliminary
sessions we explained the experiment, stressed that there were no right or
wrong answers, presented questions similar to the ones that would be used in
the actual interview, and demonstrated the software.
During the actual experiments, the subject was given a description of the
choice scenario. He or she was asked to consider returning from a job
interview in New York, NY, to Columbus, OH, a 600 mile trip, on the
following Friday. The trip would depart New York at 5 PM. The subject was
then told that he or she would have to choose between two alternatives to
make the trip to Columbus. The alternatives were to be considered identical
in every aspect except travel time and cost (price). Arrival time would range
from an earliest time of 7 PM (i.e., 2-hr travel time) to a latest time of 3 AM
(i.e., 10-hr travel time), and cost would range from a minimum of $20 to a
maximum of C_i*. The subject was asked to imagine that he or she would not
be reimbursed for the cost of the trip.
Following this introduction, the data collection phase began. For
questions in the lottery-based methods and when eliciting C_i* of relation (1),
the subject was shown two alternatives and asked to express a preference for
one, or an indifference between the two, by pressing one of three keys on the
computer keyboard. If the subject preferred one of the alternatives, the
software used the bracketing method (McCord and de Neufville, 1985) to
change either cost in relation (1) or probabilities p or q (and the
complementary probability) in the lotteries on the right in relation (4) or (6).
This process was repeated until the subject indicated indifference or until the
difference between two consecutive levels of the changing variable was less
than some pre-specified limit. We used 3% of the response range as this
limit.
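
The following sketch gives one possible reading of such a bracketing loop as a bisection on the changing variable, stopping at reported indifference or when the step falls below the 3% limit; it is a hypothetical stand-in for illustration, not the actual software of Franzese (1993).

```python
def bracket(prefers, lo=0.0, hi=1.0, resolution=0.03):
    """Shrink [lo, hi] around the indifference point until the subject reports
    indifference or the step falls below `resolution` (3% of the response range).
    `prefers(p)` stands for the subject's answer: 'lottery', 'sure' or 'indifferent'."""
    while hi - lo > resolution:
        mid = (lo + hi) / 2.0
        answer = prefers(mid)
        if answer == 'indifferent':
            return mid
        elif answer == 'lottery':      # lottery too attractive: lower its probability
            hi = mid
        else:                          # sure outcome preferred: raise the probability
            lo = mid
    return (lo + hi) / 2.0

# Simulated subject whose true indifference probability is about 0.42:
simulated = lambda p: 'lottery' if p > 0.45 else ('sure' if p < 0.39 else 'indifferent')
print(bracket(simulated))              # converges near 0.42
```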
In the Rating technique we asked the subject to locate on a 0-10 scale the
outcomes (T* = 10, C° = 20) and (T° = 2, C* = C_i*), where (T* = 10,
C* = C_i*) was to be considered to have a value of 0 and (T° = 2, C° = 20)
was to be considered to have a value of 10. When first introduced, we
attempted to make the meaning of the rating more explicit by using an
interpretation based on exchanging outcomes. To elicit the rating response of
(T* = 10, C° = 20), for example, we emphasized that exchanging the
outcome with a rating of 0, i.e., (T* = 10, C* = C_i*), for (T* = 10,
C° = 20) would have saved the subject $(C_i* - 20), since the arrival time was
the same for both alternatives. We then asked the subject if he or she would
be willing to pay more than this amount of $(C_i* - 20) to exchange (T* = 10,
C° = 20) for the outcome with a rating of 10, i.e., (T° = 2, C° = 20). If the
answer was yes, then (T* = 10, C° = 20) should be given a rating less than 5
on the 0-10 scale. If the answer was no, then it should be given a rating
greater than or equal to 5. After providing this interpretation, the subject was
asked to locate the outcomes (T* = 10, C° = 20) and (T° = 2, C* = C_i*) on the
0-10 scale.
The first question that was presented to all subjects was the one
represented in relation (1), in which we assessed the value of C_i*. Then,
evaluations of the outcomes (10, 20) and (2, C_i*) were elicited using two of
the R, PE, and LE techniques, as explained above. Next, other preference
information not reported here was elicited (see Franzese, 1993). Near the end
of the session, we elicited new evaluations of the outcomes (10, 20) and (2,
C_i*) using the same techniques used near the beginning of the session. We
concluded the interview by eliciting the value of C_i* once again, for reasons
described below. At any point in the interview, it was possible to begin the
current question again, return to the previous question, or quit the interview
altogether. A help screen was always available to remind the subject about
the meaning of the different elements on the computer screen, and one of the
authors was always present during the interviews to answer questions.
Figure 1 illustrates one of the screens of the interactive computer
program used to elicit the indifference points of the experiment. In
particular, the figure shows the assessment of p_b,i in relation (4b) for a
subject with a C_i* = $115 previously elicited with relation (1). Since some of
the questions involved uncertain prospects (lotteries with two outcomes with
probabilities of occurrence p = 30% and 1 - p = 70% in the case illustrated in
Figure 1), the computer graphically presented these probabilities using two
sets of squares of different color. The number of squares in each set had
proportions p and 1 - p to the total number (40) of squares. The subject could
then press an assigned key to actuate a random drawing from this set of 40
squares. This "random drawing game" could be repeated as long as desired
and was provided to allow the individual to gain a better feeling for the
probability of occurrence of the outcomes considered.

Figure 1. Illustration of an Assessment Screen from the Interactive Computer Program

3. Results
All 91 subjects completed the interview successfully. The average times
taken to answer questions with the R, PE, and LE techniques were 7.6
minutes, 7.3 minutes, and 10.4 minutes, respectively.
We assessed a subject's Cᵢ* value at the beginning of the session. Equality
of the VEᵢ's in Eq. (2) depends on Cᵢ*. Therefore, at the end of the session we
elicited a second value of C* for subject i, which we denote Ceᵢ*. For further
analysis we considered only those subjects for whom Cᵢ* and Ceᵢ* were
within 10% of each other. Specifically, we calculated the variable RELCᵢ* for
each subject i as:

RELCᵢ* = |Cᵢ* - Ceᵢ*| / Cᵢ*     (8)

and considered only those subjects with RELCᵢ* < 0.10. Of the 91 subjects
participating, 41 had such RELCᵢ*, with 28 providing the same C* values at
the beginning and end of the session (RELCᵢ* = 0). (The values and other
data for all 91 subjects can be found in Franzese, 1993.) Of the 41 subjects
whose responses were retained for further analysis, 13 had provided
preference information using the R and PE techniques, 12 had provided
preference information using the R and LE techniques, and 16 had provided
preference information using the PE and LE techniques.
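As a small illustration of this screening step (the Cᵢ* and Ceᵢ* values below are hypothetical, since the individual responses are reported only in Franzese, 1993), the filter of Eq. (8) might be coded as follows:

```python
# Sketch of the consistency filter of Eq. (8); the C values below are hypothetical.
c_begin = {"s01": 115.0, "s02": 80.0, "s03": 150.0}   # C_i* elicited at the start
c_end   = {"s01": 115.0, "s02": 95.0, "s03": 160.0}   # Ce_i* elicited at the end

def rel_c(c_start: float, c_final: float) -> float:
    """Relative discrepancy between the two elicited indifference costs."""
    return abs(c_start - c_final) / c_start

retained = [s for s in c_begin if rel_c(c_begin[s], c_end[s]) < 0.10]
print(retained)   # subjects kept for further analysis, here ['s01', 's03']
```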
A systematic bias in the elicited value of Cᵢ* would affect the
comparisons of the R, PE, and LE techniques. If the elicitation procedure led
to a Cᵢ* that was less than (greater than) that which the individual would
wish to use to represent indifference, substituting the elicited value in
outcome (2, Cᵢ*) would make this outcome more desirable (less desirable)
than if the unknown value the individual wished to use for indifference were
used. Outcome (2, Cᵢ*) would then be preferred (less preferred) to outcome
(10, 20), and we could no longer equate the VEᵢ's in Eq. (2).
To investigate the possibility that a biased elicitation of Cᵢ* invalidated
the assumed indifference between outcomes (10, 20) and (2, Cᵢ*), we
calculated the difference in the responses RDᵢᵐ for subject i when we used
technique m (m = R, PE, or LE) as:

RDᵢᵐ = VEᵢᵐ(2, Cᵢ*) - VEᵢᵐ(10, 20),     (9)

where VEᵢᵐ(2, Cᵢ*) and VEᵢᵐ(10, 20) are the VE's of subject i obtained from
Eqs. (3), (5), and (7) for the appropriate technique m. We used a paired t-test
of the null hypothesis that the mean difference of the assessed VE's-i.e., the
mean of the RD's-was 0. The alternative hypothesis was that the mean was
different from 0 (a two-sided test). Rejecting the null hypothesis in favor of
the alternative hypothesis would indicate that for some reason the subjects
were evaluating either (2, Cᵢ*) or (10, 20) as more preferred. In Table 1 we
present summary statistics of the response difference RD using the different
assessment techniques and the results of the hypothesis tests. The null
hypothesis of mean RD equal to zero could not be rejected for any of the three
techniques at the 90% confidence level, either when using data obtained at
the beginning or at the end of the session. Of course, not rejecting the null

hypothesis does not mean that the hypothesis of no systematic bias can be
accepted. Similarly, since the investigation was conducted across the set of
individuals, it cannot rule out the possibility that the elicited Cᵢ*'s were "too
large" for some individuals and "too small" for others. Still, the results of the
investigation increased our confidence that our objective, if somewhat
arbitrary, criterion (RELCᵢ* < 0.10) led to a set of data for which Eq. (2)
could be used to compare the three assessment techniques.

Table 1. Summary Statistics of Response Differences RD for the Three Assessment
Techniques Obtained with Data at the Beginning and End of Session

             Rating              PE                  LE
             Beginning   End     Beginning   End     Beginning   End
Mean          0.1080   0.0640     0.0220  -0.0470    -0.0271  -0.0738
SD            0.3290   0.2548     0.2909   0.2526     0.4196   0.2346
N                 25       25         29       29         28       28
T             1.6411   1.2561     0.4076  -1.0029    -0.3412  -1.6636
Alpha/2       0.0562   0.1106     0.3433   0.1622     0.3678   0.0539
Conf. Lev.    0.8862   0.7788     0.3133   0.6755     0.2644   0.8922
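The T and Alpha/2 rows of Table 1 can be recovered from the Mean, SD, and N rows alone; a minimal sketch of that check (using scipy as an assumed tool, not the authors' own software) is:

```python
from math import sqrt
from scipy import stats

def paired_t_from_summary(mean_rd: float, sd_rd: float, n: int):
    """One-sample t test of H0: mean RD = 0, computed from summary statistics only."""
    t = mean_rd / (sd_rd / sqrt(n))
    alpha_half = stats.t.sf(abs(t), df=n - 1)   # one tail of the two-sided test
    return t, alpha_half

t, a2 = paired_t_from_summary(0.1080, 0.3290, 25)   # Rating, beginning of session
print(round(t, 4), round(a2, 4))   # ~1.64 and ~0.057, in line with Table 1's 1.6411 and 0.0562
```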

Our analysis is based on the assumption that the individuals were
reflecting preferences when responding to the assessment questions and not
"randomly pushing buttons." To investigate this issue, we formed the
absolute response difference ARDᵢᵐ for subject i from technique m (m = R,
PE, or LE). We defined this difference as:

ARDᵢᵐ = |VEᵢᵐ(2, Cᵢ*) - VEᵢᵐ(10, 20)|,     (10)

where VEᵢᵐ(2, Cᵢ*) and VEᵢᵐ(10, 20) were determined from Eq. (3), (5), or
(7) for m = R, PE, or LE, respectively, and the vertical bars indicate absolute
value. If the group of subjects was randomly generating responses, any VE
between 0 and 1 would be equally likely to appear, and the distribution of
ARD values would be that of the difference of two random numbers
uniformly distributed between 0 and 1. The mean and variance of such a
distribution are 1/3 and 1/18, respectively (e.g., Larson and Odoni, 1981).
We tested the null hypothesis that the mean of the ARD's taken across
subjects could have been generated by independent random variables with
mean 1/3 and variance 1/18. The alternative hypothesis was that the mean
was smaller than 1/3 (a one-sided test). In Table 2 we present summary
statistics of the ARD data and the results of our tests on the mean of the
ARD's. For data obtained at the beginning of the session, we were able to
reject the null hypothesis of random generation with more than 95%

confidence for the R technique and with very close to 100% confidence for
the PE technique. On the other hand, we could not reject with 90%
confidence that the LE responses were being generated at random at the
beginning of the session. However, for the data obtained near the end of the
session, we could reject the null hypothesis with almost 100% confidence for
all three techniques. That is, after gaining familiarity with the techniques, the
group of subjects could not be considered to have provided responses at
random. Rather, the individuals seemed to be considering preferences for the
times and costs of the alternatives.
Comparing the sample means of the ARD distributions also allowed us to
compare the techniques at the aggregate level. The sample means in Table 2
show that the PE technique performed best on average (lowest mean ARD)
when using data collected either at the beginning or the end of the session.

Table 2. Summary Statistics of Absolute Response Differences ARD for the Three
Assessment Techniques Obtained with Data at the Beginning and End of Session

             Rating              PE                  LE
             Beginning   End     Beginning   End     Beginning   End
Mean (P)      0.3333   0.3333     0.3333   0.3333     0.3333   0.3333
SD (P)        0.0471   0.0471     0.0438   0.0438     0.0445   0.0445
Mean (S)      0.2520   0.1920     0.1787   0.1338     0.2905   0.1596
N                 25       25         29       29         28       28
Z            -1.7253  -2.9981    -3.5330  -4.5582    -0.9605  -3.9003
Conf. Lev.    0.9578   0.9986     0.9998   1.0000     0.8316   1.0000
P: Assumed Population; S: Sample
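Likewise, the Z and Conf. Lev. rows of Table 2 follow from comparing each sample mean with the assumed population mean of 1/3 and standard error sqrt((1/18)/N); a short sketch under a normal approximation is:

```python
from math import sqrt
from scipy.stats import norm

def ard_randomness_test(sample_mean: float, n: int):
    """Test H0: mean ARD = 1/3 (variance 1/18) against the one-sided alternative mean < 1/3."""
    se = sqrt((1.0 / 18.0) / n)            # SD (P) row of Table 2
    z = (sample_mean - 1.0 / 3.0) / se     # Z row of Table 2
    confidence = norm.cdf(-z)              # confidence with which H0 is rejected
    return z, confidence

print(ard_randomness_test(0.2520, 25))     # Rating, beginning: z ~ -1.725, conf ~ 0.958
```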

The LE technique performed slightly worse than the R technique when using
data at the beginning of the session. However, when using data collected at
the end of the session, the LE technique outperformed the R technique, and
its average ARD was almost as low as that of the PE technique. The average
ARD was smaller when using data obtained at the end of the session than
when using data obtained at the beginning of the session for all three
techniques, suggesting that performance for all techniques improved after
the subjects gained familiarity with their use.
Since we assessed preference data from the same subject using two
techniques-either PE and R, LE and R, or PE and LE-we could also
investigate performance at the individual level. We computed the relative
response differences between techniques as:

RRDᵢᵐᵏ = ARDᵢᵐ - ARDᵢᵏ,     (11)

where ARDᵢᵐ and ARDᵢᵏ, respectively, are the absolute response differences
of Eq. (10) for subject i when the assessments were conducted using
techniques m and k (mk = PE,R; LE,R; or LE,PE).
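A minimal sketch of this individual-level comparison (the ARD pairs below are hypothetical; per-subject data appear only in Franzese, 1993) is:

```python
# Sketch of the relative response differences of Eq. (11) and the better/worse/equal
# counts used later in Table 4. The ARD pairs (ARD_m, ARD_k) below are hypothetical.
ard_pairs = [(0.05, 0.12), (0.20, 0.18), (0.07, 0.07), (0.10, 0.31)]

rrd = [m - k for m, k in ard_pairs]
better = sum(1 for d in rrd if d < 0)    # technique m closer to the supposed equality
worse  = sum(1 for d in rrd if d > 0)
equal  = sum(1 for d in rrd if d == 0)
print(rrd, better, worse, equal)
```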
In Table 3 we present summary statistics of the relative response
difference RRD using the different assessment techniques. (The sample sizes
were too small to expect to obtain traditional significance levels in formal
hypothesis tests.) Notice that the calculated mean RRD was always
negative. Therefore, the first technique m in the pair produced smaller
deviations on average than the second technique k in the pair. That is, when
considered at the individual level, the PE and LE techniques produced VE's
that were closer on average to the supposed equality than those produced
with the R technique, and the PE technique produced VE's that were closer
on average than those produced with the LE technique. These better "on
average" performances held for data obtained both at the beginning and at
the end of the session. We see from Table 3, however, that the superior "on
average" performance of the PE technique over the R technique was large
when using either the data obtained at the beginning or at the end of the
session. The superiority of the LE technique over the R technique was slight
when using either the data obtained at the beginning or at the end of the
session. The superiority of the PE technique over the LE technique was large
when using the data obtained at the beginning of the session but not when
using the data obtained at the end of the session.

Table 3. Summary Statistics of Relative Response Differences RRD between Pairs of
Assessment Techniques Obtained with Data at the Beginning and End of Session

             PE-Rating           LE-Rating           PE-LE
             Beginning   End     Beginning   End     Beginning   End
Mean         -0.1592  -0.1420    -0.0209  -0.0048    -0.1647  -0.0001
SD            0.3439   0.1787     0.2606   0.2095     0.3148   0.1173
N                 13       13         12       12         16       16
In Table 4, we present the number of individuals for whom one technique
performed better than another at the beginning and at the end of the
session-i.e., the number of negative, positive, and zero RRD's obtained
when using data obtained at the beginning and at the end of the session.
From this table, we see that the number of individuals for whom the PE
technique outperformed the R technique was almost twice as large as the
number of individuals for whom R outperformed PE at the beginning of the
session and three times as large at the end of the session. The number of
individuals for whom the LE technique outperformed the R technique was
slightly greater than the number of individuals for whom R outperformed LE
at the beginning of the session; however, at the end of the session, each

outperformed the other for essentially the same number of subjects. Finally,
we see that the PE technique outperformed the LE technique more than
twice as often at the beginning of the session, but by the end of the session
each outperformed the other for essentially the same number of subjects.

Table 4. Number of Observations in which One Technique in the Pair Performed
Better/Worse/Same as the Other Technique

                        PE-Rating        LE-Rating        PE-LE
                        Beginn.  End     Beginn.  End     Beginn.  End
1st Technique Better          8    9           7    5          11    8
1st Technique Worse           5    3           4    6           5    7
1st and 2nd Equal             0    1           1    1           0    1

4. Discussion
The data retained for analysis could not be considered to have been
randomly generated for any of the three techniques. Rather, it appears that,
on average, the subjects providing these data considered their preferences for
time and cost when evaluating the outcomes. Rejecting data as being
randomly generated may be a minimal requirement, but we have seen little
evidence demonstrating that preference data elicited for use in MCDA can
pass this test. Passing this test argues against claims that individuals cannot
respond meaningfully to hypothetical questions, in general, and to lottery-
based questions, in particular. In fact, we were surprised to see that the
assessments from the rating technique were also able to pass such a test.
Of course, the subjects could have been anchoring their responses on
numbers that did not necessarily reflect preferences, and the statistics of
Table 2 (ARD) on the differences between the supposedly indifferent
outcomes demonstrate that there is still a lot of "noise" in preference
responses. Moreover, the data from more than half the subjects were
eliminated because they did not provide beginning- and end-of-session
values of C* that differed by less than 10%. (Investigating the performance
of these data would make for an interesting subsequent study.) We do not
suggest, therefore, that preference assessment is a straightforward task. Still,
it is noteworthy that the noise, as measured by the ARD, decreased on
average after the subjects gained familiarity with any of the techniques, i.e.,
when comparing results based on data obtained at the end of the session to
results based on data obtained at the beginning of the session. If one thinks
of "constructing," rather than "collecting" preferences, then a process in
which the decision maker gains familiarity with preference assessment

should be encouraged, and it is reassuring to see that this familiarity led to
better performance.
The sample sizes we were able to obtain were small, but there was no
evidence that assessment using the common task of rating performed better
than assessment using lottery-based techniques. On the contrary, the
probability equivalent (PE) technique markedly outperformed the rating (R)
technique. The average ARD across subjects was smaller for the PE
technique than for the R technique when calculated both from data obtained
at the beginning of the session and from data obtained at the end of the
session. At the individual level, the Relative Response Difference (RRD)
measure and a count of the number of subjects for whom one technique
outperformed the other showed much better performance of the PE
technique than the R technique.
The lottery equivalent (LE) technique should be more cognitively
burdensome than the PE technique, since it involves the comparison of two
lotteries, rather than a lottery and a sure outcome. However, even this more
complex lottery-based technique performed as well as the R technique. The
R technique did outperform the LE technique according to the ARD measure
for data obtained at the beginning of the session. However, after gaining
familiarity with the techniques, the average ARD was lower for the LE
technique. Moreover, the various measures portraying performance at the
individual level (average RRD and the number of individuals for whom a
technique performed better) indicated that the LE technique was no worse,
and perhaps better, than the R technique.
The analysis presented here says nothing about the ability of the three
techniques to construct preference parameters that are valid in the context of
the evaluation logic of any given MCDA method. Still, the results would
tend to refute claims that lottery-based techniques used in MAEUT--even
the relatively complicated LE technique-are more difficult or unrealistic
than the supposedly simpler "direct" assessment techniques embodied by
rating. We do not find this result surprising: Considering whether one is
indifferent between two alternatives or prefers one of the alternatives would
seem to be more transparent than considering whether an alternative should
be rated a 6 or a 7, for example, on an interval scale.
The R technique might be considered "easier" than the lottery-based
techniques. As mentioned above, the average time taken to answer questions
with the R technique (7.6 minutes) was roughly equal to that taken to answer
questions with the PE technique (7.3 minutes). Both were markedly less than
the average time taken to answer questions with the LE technique (10.4
minutes). If the individuals were simply asked to supply a rating without
having to think about the exchange-of-outcome interpretation provided, the

times for the "easier" R questions would likely have been much less. This
result would not be surprising, either, since individuals are much more
frequently asked to rate something "on a scale from X to Y" than to state an
indifference involving lotteries. However, MCDA is not intended to be easy.
On the contrary, the benefits from decision aiding are in great part derived
from using a methodology to arrive at a clearer understanding of the issues
of a decision problem and of the role of individual preferences in resolving
these issues. If this were an easy task, MCDA would likely not exist as a
discipline. Performance, rather than ease, should be the overriding criterion
when judging among different techniques.
We were somewhat surprised to see that the added complexity of the LE
technique did not lead to a marked decrease in performance as compared to
the PE technique. The statistics in Tables 2, 3, and 4 show that the PE
technique did, indeed, perform much better than the LE technique when
using the data collected at the beginning of the session. However, when
considering data collected after the individuals gained familiarity with the
two techniques, there was essentially little difference in their performance.
Again, more tests would be needed before one can say these techniques
perform equally well, even if performance is based only on the types of tests
considered here. Still, our results indicate that it may indeed be possible to
obtain the potential benefits offered by the more complex LE technique
when conducting MAEUT assessments and that there is no evidence that the
lottery-based techniques are any more difficult than direct assessment
techniques for MCDA preference assessment.

Acknowledgments
The authors gratefully acknowledge the valuable suggestions offered by
two anonymous reviewers. The second author acknowledges the value of
frequent discussions with Bernard Roy in helping him arrive at a better
understanding of MCDA and the context of the type of work reported here.
He also acknowledges the value of his several discussions with Martin
Rogers and his collaboration with Cathal Brugha and Michael Bruen in
helping form the context of this work. The views presented here are those of
the authors, however, and should not be taken to represent those of these
other researchers. This work was partially funded by National Science
Foundation grant #MSS-8657342.

References
Allais, M. "The So-Called Allais Paradox and Rational Decisions Under Uncertainty,"
Expected Utility Hypotheses and the Allais Paradox. M. Allais and O. Hagen (eds.), D.
Reidel, Dordrecht, NL, 1979, pp. 437-682.
Franzese, O. Errors and Impacts of Preference Assessments in a Multiattribute Utility
Framework. Ph.D. dissertation, The Ohio State University, Columbus, OH, USA, 1993.
Fischhoff, B., N. Welch, and S. Frederick, "Construal Processes in Preference Assessment,"
Risk and Uncertainty, 19: 1-3, pp. 139-64, 1999.
French, S., Decision Theory: An Introduction to the Mathematics of Rationality. Ellis
Horwood, Chichester, UK, 1986.
Larson, R. and A. Odoni, Urban Operations Research. Prentice-Hall, 1981.
Law, A., D. Pathak, and M. McCord, "Health Status Utility Assessment by Standard Gamble:
A Comparison of the Probability Equivalence and the Lottery Equivalence Approaches,"
Pharmaceutical Research, 15(1), 1998, pp. 105-109.
McCord, M. R. and O. Franzese, "Empirical Evidence of Two-Attribute Utility on
Probability," Theory and DeciSion, 35, pp. 337-51, 1993.
McCord, M. R., O. Franzese, and X. D. Sun, "Multicriteria Analysis of Aeromedical Fleet
Expansion," Journal of Applied Mathematics and Computing, 54(2 & 3), pp. 101-29,
1993.
McCord, M. R. and A.Y.C. Leu, "Sensitivity of Optimal Hazmat Routes to Limited
Preference Specification," Information Systems and Operational Research, 33(2), pp. 68-
83, 1995.
McCord, M. R. and R. de Neufville, "Lottery Equivalents: Reduction of the Certainty Effect
Problem in Utility Assessment," Management Science, 32(1), pp. 56-60, 1986.
McCord, M. R. and R. de Neufville, "Assessment Response Surface: Investigating Utility
Dependence on Probability," Theory and Decision, 18, pp. 263-285, 1985.
McCord, M. R. and R. de Neufville, "Empirical Demonstration that Expected Utility Decision
Analysis is not Operational," Foundations of Utility and Risk Theory with Applications,
pp. 181-199, B. P. Stigum and F. Wenstop (eds.), D. Reidel, Dordrecht, NL, 1983.
Payne, J. W., J. R. Bettman, and D. A. Schkade, "Measuring Constructed Preferences:
Towards a Building Code," Risk and Uncertainty, 19:1-3, pp. 243-70, 1999.
Rogers, M., M. Bruen, and L.-Y. Maystre, ELECTRE and Decision Support: Methods and
Applications in Engineering and Infrastructure Investment. Kluwer Academic Publishers,
Boston USA, 2000, 208 pp.
Roy, B., Multicriteria Methodology for Decision Aiding. Kluwer Academic Publishers,
Dordrecht, NL, 1996, 292 pp.
von Winterfeldt, D. and W. Edwards, Decision Analysis and Behavioral Research. Cambridge
University Press, Cambridge, England, 1986.
RISK ATTITUDES APPRAISAL AND COGNITIVE
COORDINATION IN DECENTRALIZED
DECISION SYSTEMS

Bertrand Munier
GRID - CNRS, Ecole Normale Superieure de Cachan, France
munier@grid.ens-cachan.fr

Abstract: In decentralized decision systems, coordination and efficiency encounter
major difficulties. Risk management systems are particularly important cases
in corporations with multiple plants. To solve the problem, it is argued, the
analyst needs to raise a cognitive representation question, in particular the
question of the criteria according to which the problem at hand is being
assessed in view of the whole organization. This, in turn, raises the issue of
how these criteria are evaluated by the different individuals. An example based
on a subset of the risk management system, namely the maintenance system in
nuclear power plants, is used throughout the text. The paper argues that
generalizing MAUT to rank-dependent risk treatment is of utmost importance
in order to deal with such problems. One additional theorem is proved in that
perspective and an appropriate piece of software is reported upon and illustrated
on an example. Beyond the technical problems examined in the paper, the art of
using the decision analysis framework is discussed.

Key words: Cognition; Decision analysis; Industrial maintenance; Multicriteria decision
making; Rank dependent model; Risk management

1. Introduction
In group decision analysis, conflicts and fairness issues are not the only
types of problems to be dealt with. Indeed, one of the frequently encountered
problems in modern global corporations is the question of cognitive
coordination of individuals. Yet solving such coordination questions turns
out to be increasingly difficult because individuals have to work more and
more on common tasks while having different representations of the task,
indeed often different backgrounds on which they approach the task.

The example on which this paper will rely all along refers to a subset of
the risk management system within modern corporations, namely to the
maintenance system in nuclear power plants, but a similar analysis could be
relevant to quality control and to many other types of multiple agents
systems as long as (i) risk considerations play an important role for the
system and (ii) the system's members display some heterogeneity in terms of
personal risk attitude and to some extent in terms of corporate culture, the
latter being often very important.
Typically, indeed, a risk management system entails specialists in
production processes, others in organizational reliability, still others in
workforce health safety, who all make risk assessments of their specific
problems and try to take 'self protection' steps to the best of their
knowledge. For example, in the production process, machine inspection is
reinforced, or maintenance made more focused on reliability. At the same
time, organizational routines undergo substantial changes as a result of more
stringent reliability requirements [Weick, 1987]. And, finally, at the 'end' of
the system, 'risk managers' deal with the question of how to finance the
residual risks (insurance, self-financing, market financing or combinations of
these possibilities) once all the different preventive measures have been
taken.
The problem raised by such situations to the higher management of the
firm is one of efficiency-driven coordination. Investing some amount of
resources in the reduction of a probability of default of a given equipment
should bring about "equivalent" results to the ones provided by such or
such other step taken at similar cost towards greater reliability of the
organization. A large part of executives agree with that view, but another
large part of the same executives admit their company does not perform the
necessary adjustments because they reputedly represent an 'infeasible' task.
How to compare such widely different actions, originating from such widely
different agents in the corporation? Yet the costs incurred in the risk
management of modern technology are huge and must imperatively be
monitored.
Economists have a common practice of cost-benefit analysis to proceed
with such comparisons, based on the 'ability to pay' approach or one of its
numerous variants. But contingent valuation is cognitively too demanding to
be performed in cases similar to the ones just mentioned above. One is then
left with the common practice of using rules of thumb, emerging from past
experience and informal conversations between the risk manager and
engineers, to design risk management policies. Relying on government
regulations only looks like an alternative course. In fact, such regulations
raise in their turn the very same problem, for they usually fall on one or the

other category of employees. Therefore they cannot solve the coordination
and efficiency issue raised here, which is specific to each organization¹.
What could then solve such an issue would be a sufficiently consistent
way to evaluate in terms of costs and of different ends the possible moves
considered by engineers, risk managers or insurance specialists, and
organization designers. But two difficulties arise when trying to provide such
a decision support system. One difficulty is that these individuals use
different representation spaces for the outcomes of their decisions (the
cognitive representation problem) ; another difficulty is that these decisions
entail sometimes probability reductions regarding potential accidents,
sometimes reductions in adverse consequences of such accidents. Tradeoffs
to be considered are thus sometimes outcome x vs. outcome y, sometimes
outcome x vs. probability p, sometimes probability p vs. probability q. And
such probabilities vary over the whole range of the simplex, quite often
reaching very low levels². Such a diversity of tradeoffs can hardly be accurately
described by the standard decision analysis model, using the Bayesian-
Neumannian model of expected utility (EU), essentially because that model
is linear in the probabilities, which is now known to be far too restrictive
to describe actual behaviors. And, although the probabilities involved in risk
management are usually small, as has just been pointed out, it is important to
note that the observation reported here on EU is not only true for very small
probabilities, as has long been argued, but has indeed general validity
[Abdellaoui and Munier, 1994a, 1998]. In other words, we need a more
general model than EU to be able to assess differences between individuals
in terms of risk appraisal (the cognitive risk appraisal problem). The
cognitive representation problem is most severe when one includes all three
categories of corporate actors already mentioned. We will limit ourselves in
this paper to the coordination problem between engineers in different spots
of the corporation and, to some extent, between engineers and risk managers.
In section 2, we describe a way in which the representation problem can
be dealt with. In section 3, we establish how decision analysis [Keeney and
Raiffa, 1976] can be extended to meet the tradeoff requirements mentioned
above. In section 4, we provide a methodology to encode the necessary
functions. We then refer in section 5 to an example in the maintenance of
nuclear power plants, while section 6 concludes and opens some additional
perspectives to the framework presented in section 4.

¹ Indeed, they are worse, for they let employees believe that they take care of safety and keep
them from analyzing the effective risks incurred in the diverse parts of their activity, as
underlined in the remarkable 2000 Lloyd's Register Lecture [Brinded, 2000].
² One extreme case is represented by the probability of experiencing a core meltdown in a
nuclear plant within a year: such a probability is of the order of 10⁻⁵.

2. Solving the representation problem


Engineers view industrial reliability through the levels reached by
default rates of equipments. Risk managers think in terms of arbitraging and
financial cost and income. The representations these agents have are not
easy to connect - even between engineers - and it is nevertheless necessary
to design a way to coordinate them, as has been argued above, if we are in a
decentralized system (as most corporations are today).
We ask actors in the system to define what they consider to be
"meaningful dimensions" for the tasks they are supposed to fulfill. As each
criterion one can design in decision analysis is a function of one of the
variables contributing to define such a "meaningful dimension", one can
proceed in two different, not necessarily exclusive, ways:
- First, we let the actors enlarge the "space of meaning" they have in
mind through open questionnaires and discussions, and we then look for
some intersection of the evoked lists of dimensions. This is suggested by
the "value referral process" which has been used in negotiation analysis
[Shakun, 1975] and has been stressed more recently under different
forms by decision theorists [Winterfeldt & Edwards, 1986, ch. 2; Keeney,
1996, ch. 1].
If the intersection of these sets of dimensions is sufficiently large, in the
sense that the three categories of actors admit that it contains an
acceptable approximation of the meaning to be given to the maintenance
system, the analyst can then directly construct criteria on the
corresponding axes.
- Second, one can investigate the existence of functions linking the
variables considered as meaningful by one category of actors to the
variables evoked by another category, or, alternatively, link each set of
variables evoked by some category of actors to some unique set of
variables. These functions reflect "influences" in the broad sense of R.
Howard [1990]. The tool to be used here is therefore the influence
diagram. As will be seen in the example given below, engineers are here
of a great help to the decision analyst in defining such functions in the
cases where they reflect a causal physical relation.
A set of criteria can then be defined on the set of variables selected by
either one or both of these procedures (which may be used as
complementing each other). These criteria have to be validated by the largest
possible number of actors in the maintenance system. Both methods
mentioned have the advantage of avoiding any arbitrary mathematical
selection of criteria by the analyst. Rather, some type of consensus is

formed, even though the actors are in the first place heterogeneous, as has
been stressed in the introduction to this paper.

[Fig. 1 (influence chart) links the following elements: alternatives; component failure distributions; impacts on components; impacts on system and power unit; impacts on the chosen attribute; core damage frequency.]

Fig. 1. Representation of a scenario using an influence chart until impact on chosen attribute

To meet comprehensiveness and practicability requirements, one can
check (i) that at most 4 to 5 criteria are retained, once every criterion
redundant with any other one has been dropped, and (ii) that the set of
criteria thus defined has links with the largest possible set of 'values'
invoked by the different actors of the system. The choice of dimensions is
thus an art of the analyst which is of foremost importance.
The representation problem is thus "solved" in the instrumental sense
that we have tools which enable us to map the "meaningful dimensions" of
the heterogeneous actors of the system under exploration into one common
space with a limited number of dimensions (Fig. 2). These few dimensions
will be called hereafter attributes, as is common practice in decision analysis³.

³ In economic theory, as in traditional (prescriptive) decision-aid, it is often stressed that
attributes should be « end objectives ». If, however, one wants to describe behavior, the
distinction between « end » and « means » objectives is irrelevant. Note that costs and
safety are also end objectives in the prescriptive sense.

[Fig. 2 (diagram): variables meaningful to the engineers - change in default probability, change in the probability of core meltdown, safety in manipulation, fineness of tuning, change in availability of power (annual hours of plant stalled), change in effectiveness, expenditures on maintenance, change in direct and indirect annual costs of operation, radiation exposures, etc. - are mapped, via perceived consequences and dose-response relations, onto the final criteria or attributes.]

Fig. 2. Mapping of a space of variables meaningful to the engineers into a reduced space of
final common attributes. Example from the maintenance problem in a nuclear power plant.

The major question is then to use these criteria to identify potential
inconsistencies among decentralized agents. Such a use requires a
generalization of standard decision analysis.

3. Generalizing decision analysis: the rank dependent risk appraisal
The importance of the "solution" to the representation problem can best
be explained by reference to our example of maintenance within a
decentralized organization supplying energy from multiple plants. This
importance lies in the fact that maintenance strategies within corporations -
whether publicly or privately owned - are most of the time of a qualitative
type, either "risk-based" or "reliability centered", yet aiming at some type

of optimization⁴. But experience suggests that it is very difficult to assert
whether the rules set in the corporation really imply behaviors of the agents
in charge of applying them which result in global consistency. For example,
the main reason for such a global inconsistency in a multiple-plant electricity
supply corporation stems from the different risk attitudes of engineers, who,
when confronted with a similar risky situation in different plants, will not
adopt the same decision. One then has to assess the risk attitudes of these
agents before being able to set up some coordination plan.
From such a descriptive point of view, the expected utility model has
been widely questioned since the fifties [Allais, 1953] and some consensus
has emerged on the usefulness of the rank dependent model [Allais, 1988,
Quiggin, 1993]. One can indeed quote four reasons why the rank-dependent
model (RDM) is so attractive: i) it meets several converging intuitions; ii) it
provides a satisfactory compromise between simplicity and descriptive
performance requirements; iii) it contains several models already suggested,
including EU and all fuzzy expectation models abiding by first order
stochastic dominance; iv) it singles out in an intuitively meaningful way two
independent components of attitude towards risk, which is precisely what
matters here.
In the univariate case, RDM provides an evaluation of a lottery x̃ which, in
the case of a discrete three-event lottery with x₁ < x₂ < x₃, denoting by pᵢ
the probability of event i and by uᵢ the utility of xᵢ, can be written as:

$$V = u_1 + \theta(p_2 + p_3)\,(u_2 - u_1) + \theta(p_3)\,(u_3 - u_2) \qquad (1)$$

or, in the case of a continuous lottery with decumulative distribution G_u
of the utility u(x):

$$V = \int_C \theta[G_u(\tau)]\,d\tau \qquad (2)$$

if we denote by C the range of possible consequences evaluated in
utilities. In all cases, the θ(·) function is such that θ(0) = 0, θ(1) = 1, θ'(·) > 0.
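As a purely numerical sketch of formula (1) (the utility and weighting functions below are illustrative assumptions, not elicited ones), the rank-dependent evaluation of a discrete lottery can be coded as:

```python
# Rank-dependent evaluation of a discrete lottery, as in formula (1).
# Outcomes are ranked from worst to best; theta is applied to decumulative probabilities.
def rdm_value(outcomes, probs, u, theta):
    pairs = sorted(zip(outcomes, probs), key=lambda op: u(op[0]))
    xs, ps = zip(*pairs)
    value, decum = u(xs[0]), 1.0
    for i in range(1, len(xs)):
        decum -= ps[i - 1]                      # P(getting at least outcome x_i)
        value += theta(decum) * (u(xs[i]) - u(xs[i - 1]))
    return value

# Illustrative (assumed) functions: linear utility and a pessimistic power weighting.
u = lambda x: x / 100.0
theta = lambda p: p ** 2                        # theta(0)=0, theta(1)=1, increasing
print(rdm_value([0, 50, 100], [0.2, 0.5, 0.3], u, theta))   # below the expected utility 0.55
```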
The generalization of Decision Analysis to the family of rank dependent
models implies that preferences under risk can be expressed as a functional
of the form:

$$V(\tilde{x}_1, \ldots, \tilde{x}_n) = \int_C \theta\bigl(G_U(\tau)\bigr)\,d\tau \qquad (3)$$
The question raised by this functional is twofold:

⁴ Risk-based methods were developed at the Nuclear Regulatory Commission's request.
Reliability Centered Maintenance was initially developed in the civil aviation industry and
then generalized to other industries [Moubray, 1991].

a) How consistent is it with traditional multiattribute utility theory
(MAUT), or, in other words, can MAUT [Keeney and Raiffa, 1976]
be straightforwardly extended to the case envisioned in (3) above?
b) If yes, is it possible to elicit two functions, namely θ(·) and u(·),
instead of only one as in traditional expected utility theory?
Leaving part b) of the question for the next section, we proceed to answer
part a). To that effect, we use the following strategy: As in the Expected
Utility framework, we show first that it is possible to decompose the
multivariate utility function into some aggregate of univariate utility
functions. Second, we explain why the probability transformation function
θ(·) relative to the multidimensional random variable x̃ can be expressed as
a product of the n probability transformation functions θᵢ(·), each one
pertaining to the univariate random variables x̃₁, ..., x̃ₙ.
Let us first define the notations we use and recall some definitions from
MAUT [Keeney and Raiffa, 1976]:
For each potential action, there exists, in the consequence space C, a
random vector x̃ = (x̃₁, ..., x̃ₙ), x̃ᵢ denoting the random values of the Xᵢ
attributes, i = 1, 2, ..., n. When one wants to distinguish between a subset Y
of X and its complement Ȳ in X, the consequences vector x is decomposed
into (y, ȳ) = x. The symbol x* = (x₁*, ..., xᵢ*, ..., xₙ*) = (y*, ȳ*) designates the
best consequences vector, while x⁰ = (x₁⁰, ..., xᵢ⁰, ..., xₙ⁰) = (y⁰, ȳ⁰) represents
the worst. The multiattribute utility function is restricted by U(x⁰) = 0 and
U(x*) = 1. The function uᵢ(xᵢ) is the conditional utility function of the Xᵢ
attribute, normalized by uᵢ(xᵢ⁰) = 0 and uᵢ(xᵢ*) = 1, i = 1, 2, ..., n.
With the above notations, the following standard definitions are given:
Definition 1: A set of attributes Y ⊂ X is preferentially independent if the
judgements of preferences on the consequences differing only on the Y
dimensions do not depend on the values attached to Ȳ.
Definition 2: A set of attributes Y ⊂ X is utility independent if the utilities
of the lotteries differing only on the Y dimensions do not depend on the
evaluations attached to Ȳ.
Definition 3: The X₁, ..., Xₙ attributes are mutually utility independent if each
subset of {X₁, ..., Xₙ} is utility independent of its complement.
Definition 4: The X₁, ..., Xₙ attributes are additively independent if, in
X₁, ..., Xₙ, the preferences on the lotteries only depend on the marginal
probability distributions on the different attributes.

3.1 Decomposition of the multiattribute utility function under RDM
Several authors have shown that the results which were developed
within the expected utility framework could be extended to several
generalizations of expected utility. Thus, the standard multi-attribute utility
results can be obtained under Cumulative Prospect Theory, under the
different versions of the Rank-Dependent Model and, in fact, under all
generalizations of expected utility theory which differ from EU only through
a non linear treatment of probabilities. For Weighted Utility Theory and
Skew Symmetric Bilinear Utility Theory [Fishburn, 1984a], multi-attribute
decomposition results were given by Fishburn [Fishburn, 1984b]. Multi-
attribute representations extended to Rank Dependent Utility and Choquet
Expected Utility were given in Dyckerhoff (1994) and Miyamoto and
Wakker (1996). Three theorems [Dyckerhoff, 1994] give the essence of the
results which have been obtained :
Theorem 1 (Dyckerhoff, 1994)
If the decision maker uses a rank-dependent evaluation of prospects as a
decision criterion, the multi-attribute utility function is decomposable into a
multiplicative form if each non-empty subset Y of attributes, Y c X, is utility
independent.
One can then write:
$$U(x) = \frac{1}{k}\left\{\prod_{i=1}^{n}\bigl[1 + k\,k_i\,u_i(x_i)\bigr] - 1\right\}$$

where, ∀i = 1, 2, ..., n, kᵢ = U(xᵢ*, x̄ᵢ⁰) and k is a constant parameter, solution of:

$$1 + k = \prod_{i=1}^{n}(1 + k\,k_i)$$
Theorem 2 (Dyckerhoff, 1994)


If the decision maker uses a rank-dependent evaluation of prospects as a
decision criterion, the multi-attribute utility function is decomposable into a
multi-linear form if each attribute Xᵢ, Xᵢ ∈ X, is utility independent.
One can then write:
$$U(x) = \sum_{i=1}^{n} k_i u_i(x_i) + \sum_{i=1}^{n}\sum_{j>i} k_{ij}\,u_i(x_i)\,u_j(x_j) + \sum_{i=1}^{n}\sum_{j>i}\sum_{l>j} k_{ijl}\,u_i(x_i)\,u_j(x_j)\,u_l(x_l) + \cdots + k_{123\ldots n}\,u_1(x_1)\,u_2(x_2)\cdots u_n(x_n)$$

where ∀i = 1, 2, ..., n, kᵢ = U(xᵢ*, x̄ᵢ⁰), kᵢⱼ = U(xᵢ*, xⱼ*, x̄ᵢⱼ⁰), and x̄ᵢⱼ⁰ designates
the worst levels of all attributes except Xᵢ and Xⱼ.
Theorem 3 (Dyckerhoff, 1994)
If the decision maker uses a rank-dependent evaluation of prospects as a
decision criterion, the sets of attributes X and Y are additively independent if
and only if the following two conditions are met:
i) θ = id_[0,1], and ii) U is decomposable under an additive form.
One can write in this special case:

$$U(x) = \sum_{i=1}^{n} U(x_i, \bar{x}_i^{0}) = \sum_{i=1}^{n} k_i u_i(x_i)$$

3.2 Decomposition of the probability transformation function
In the one-attribute case, assessing the probability transformation
function θ(·) can be done nonparametrically [Currim and Sarin, 1989,
Abdellaoui and Munier, 1996, Abdellaoui, 2000]. Specifically, [Abdellaoui,
Munier and Leblanc, 1996] compare differences between lotteries
(improvements in safety situations) to solve the problem. It can also be solved
parametrically by specifying first the probability transformation function.
But in the multiattribute case, both types of methodology appear difficult,
either econometrically for a parametric encoding or cognitively for a
nonparametric encoding.
In the expected utility framework, the stochastic independence of the
various attributes is a simplifying assumption, which makes computation
easy. In the RDM framework, this same hypothesis is much more important,
for it will allow us to avoid encoding the θ(·) function. Indeed, it allows us to
consider this function as a composition of the θᵢ(·) functions, which will be
the only ones to require encoding⁵. It will thus ease the encoding procedure
considerably.
Beaudouin, Munier and Serquin [1999] prove the following theorem:

⁵ The θᵢ(·) functions are the respective probability transformation functions of the attributes i.
Note that, if they were all the same function, one could interpret them as perception
functions of the probabilities. In fact, they are not, which shows that they represent the way
the individual is prepared to deal with risk in view of a given attribute. This is in line with
what sociologists like Slovic have shown: risks are not all dealt with interchangeably by
individuals, depending, among other things, on which attribute they bear.

Theorem 4 (Beaudouin, Munier, Serquin, 1999)
Let the consequence space C in a rank dependent utility model be the
Cartesian product of the attribute spaces Cᵢ, i = 1, 2, ..., n. The rank dependent
utility of a multi-attribute lottery may be expressed as a multilinear
composition of the rank dependent utilities of the one-variable lotteries if and
only if the following conditions hold:
i) There exists on the decumulative probability distribution of each
attribute space Cᵢ a continuous, non-decreasing real function θᵢ(·)
satisfying θᵢ(0) = 0 and θᵢ(1) = 1.
ii) Every single attribute i, i = 1, 2, ..., n, is utility independent.
iii) The random variables x̃ᵢ, i = 1, 2, ..., n, are probabilistically independent.
Note that, if we let μ = θ ∘ P, where P denotes the probability P(U ≥ τ),
we can consider the function μ as a monotone set function μ: 2^Ω → [0,1].
Since the function μ = θ ∘ P and the multi-attribute utility function U: C →
[0,1] can be viewed, respectively, as a monotone set function and as a
positive μ-measurable function, then

$$\theta\bigl(G_U(\tau)\bigr) = \theta \circ P(U \geq \tau) = \mu(U \geq \tau)$$

is a decreasing distribution function of the function U with respect to the
set function μ.
Following Denneberg (1994), we can express, in such a situation, the
functional (1) as:

$$\int U\,d\mu = \int_C \theta\bigl(G_U(\tau)\bigr)\,d\tau$$

Similarly, for the univariate (partial) utility functions,

$$\int u_i\,d\mu_i = \int_{C_i} \theta_i\bigl(G_{u_i}(\tau)\bigr)\,d\tau$$

The proof uses property (vii) of Denneberg's Proposition 12.1 [Denneberg,
1994, pp. 147-148] and relies on the monotonicity and the continuity of the
θᵢ's as well as on the stochastic independence of the x̃ᵢ's [Beaudouin,
Munier, Serquin, 1999, pp. 349-351].
Thus,

$$V(\tilde{x}_1, \ldots, \tilde{x}_n) = \int_C \theta\bigl(G_U(\tau)\bigr)\,d\tau$$

can be expressed as:

$$V = \sum_{i=1}^{n} k_i \int \theta_i\bigl[G_{u_i}(\tau)\bigr]\,d\tau + \sum_{i=1}^{n}\sum_{j>i} k_{ij} \int \theta_i\bigl[G_{u_i}(\tau)\bigr]\,d\tau \int \theta_j\bigl[G_{u_j}(\tau)\bigr]\,d\tau + \cdots + k_{12\ldots n} \int \theta_1\bigl[G_{u_1}(\tau)\bigr]\,d\tau \int \theta_2\bigl[G_{u_2}(\tau)\bigr]\,d\tau \cdots \int \theta_n\bigl[G_{u_n}(\tau)\bigr]\,d\tau$$

which is an expression for the rank dependent utility of a multi-attribute
lottery in terms of the rank dependent utilities of the one-variable lotteries.
As continuity and monotonicity of the θᵢ's guarantee that every one of the
θᵢ(G_{uᵢ})'s can be seen as the decumulative function of a countably additive
distribution, we can write [Wakker, 1990] the global probability transformation
θ ∘ P as the product of the univariate transformations θᵢ ∘ Pᵢ, which is the
decomposition of the global probability transformation function.


This formulation of the problem allows one to compute the rank dependent
evaluation of multiattribute lotteries as a function of the rank dependent
evaluations of the univariate lotteries, each one computed separately. The
method by which this result can be used to estimate the expression of the
multi-attribute generalized expected utility mentioned above is discussed in
the following section.
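A minimal sketch of this composition for three attributes is given below; the univariate rank-dependent evaluations vᵢ are taken as given (they would come from formula (1) or (2) applied attribute by attribute), and all numerical values are illustrative assumptions:

```python
# Sketch of the multilinear composition of Theorem 4 for three attributes.
from itertools import combinations
from math import prod

def multilinear_rdu(v, k_single, k_pair, k_triple):
    """Combine univariate rank-dependent evaluations v_i (each in [0, 1]) multilinearly."""
    total = sum(k_single[i] * v[i] for i in range(len(v)))
    total += sum(k_pair[(i, j)] * v[i] * v[j] for i, j in combinations(range(len(v)), 2))
    total += k_triple * prod(v)
    return total

# Hypothetical inputs: the v_i are the univariate rank-dependent evaluations.
v = [0.55, 0.44, 0.72]
k_single = [0.5, 0.9, 0.3]
k_pair = {(0, 1): -0.6, (0, 2): -0.2, (1, 2): -0.4}
print(round(multilinear_rdu(v, k_single, k_pair, k_triple=0.5), 4))
```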

4. SERUM⁶: a software to encode multiattribute preferences under RDM

A software package can be designed to encode the probability transformation and
the utility function related to each attribute. SERUM [GRID and EdF, 1998]
goes through three different steps.

4.1 Encoding probability transformation functions.


In a first step, the different probability transformation functions θᵢ(·), i =
1, 2, ..., n, are encoded without resorting to partial utility functions. In order to
do this, SERUM uses the "Twins Method" developed by Abdellaoui and
Munier and used in Abdellaoui, Munier and Leblanc [1996].
In the Twins Method, the analyst asks an actor to compare pairs of
univariate risk reduction situations. To perform such a comparison, the actor
must be simultaneously "participant" and "observer". To cope with this
duality situation, (s)he is asked to compare the satisfaction obtained from the
risk reductions concerning two other actors (his 'twins'), A and B, who are

⁶ SERUM was first developed by GRID (CNRS/ENS de Cachan) and Électricité de France. It
stands for 'Système d'Estimation dans le Risque des Utilités Multi-attributs'.

supposed to have exactly the same system of preferences as his own. Twin A
benefits from a risk reduction which turns the lottery (X, p; 0) into the less
risky lottery (X, q; 0). Twin B faces a risk reduction which turns the lottery
(X, q; 0) into the even less risky lottery (X, s; 0). The actor is then asked
which of these twins is more satisfied by the risk reduction he experiences.
The outcome X is fixed, and the analyst varies the probabilities until the
decision maker reveals indifference between the two risk reductions. To start
with, p is set to 1 and s to 0. If the decision maker acts according to rank
dependent utility, the indifference between the two risk reductions for
q = q₁* implies an equality between the two satisfaction gains, from which
the analyst derives

θᵢ(q₁*) - θᵢ(0) = θᵢ(1) - θᵢ(q₁*),   or   θᵢ(q₁*) - 0 = 1 - θᵢ(q₁*).

Thus:

θᵢ(q₁*) = 1/2.

The same process, applied to the interval [0, q₁*], yields a point q₂* such
that θᵢ(q₂*) - θᵢ(0) = θᵢ(q₁*) - θᵢ(q₂*). This process continues until a
sufficiently wide range of probabilities is covered.
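To illustrate how the Twins Method pins down θᵢ at dyadic levels, the sketch below simulates a respondent whose (assumed) true weighting function is a power function and searches by bisection for the probability at which the two risk reductions are judged equally satisfying:

```python
# Sketch of the Twins Method: find q such that theta(q) - theta(s) = theta(p) - theta(q),
# i.e. theta(q) = (theta(p) + theta(s)) / 2, by bisection on q in [s, p].
def twins_point(theta, p=1.0, s=0.0, tol=1e-6):
    lo, hi = s, p
    while hi - lo > tol:
        q = 0.5 * (lo + hi)
        # The simulated respondent prefers the reduction with the larger satisfaction gain.
        if theta(q) - theta(s) < theta(p) - theta(q):
            lo = q          # twin B's reduction felt larger: raise q
        else:
            hi = q          # twin A's reduction felt larger: lower q
    return 0.5 * (lo + hi)

true_theta = lambda p: p ** 2                   # assumed weighting of the simulated respondent
q1 = twins_point(true_theta)                    # theta(q1*) = 1/2  ->  q1* ~ 0.707
q2 = twins_point(true_theta, p=q1, s=0.0)       # theta(q2*) = 1/4  ->  q2* ~ 0.5
print(round(q1, 3), round(q2, 3))
```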
To elicit the probability transformation functions θᵢ(·), i = 1, 2, ..., n,
through the Twins Method, we therefore have to assume that lotteries can be
mapped into some absolute interval scale, beyond the usual von Neumann
and Morgenstern utility scale. Bouyssou and Vansnick [1990] have shown
that this assumption holds if and only if:

$$\forall x, y \in C_i,\ i = 1, 2, \ldots, n:\quad v_i(x) - v_i\Bigl(\tfrac{1}{2}x + \tfrac{1}{2}y\Bigr) = v_i\Bigl(\tfrac{1}{2}x + \tfrac{1}{2}y\Bigr) - v_i(y).$$

SERUM makes use of this assumption to elicit probability
transformation functions. The methodology should be used with care, as it
rests on a somewhat difficult task for the subject. It is necessary, before
proceeding to the full investigation, to make sure the subject clearly understands
the questions. This requires a preliminary try-out period for the experimenter.

4.2 Encoding partial utility functions


Once the latter functions have been elicited, SERUM enters into a second
phase, to elicit partial utility functions. It uses the lottery equivalent method
[McCord and de Neufville, 1986]. The method looks for an indifference

(xᵢ′, p; xᵢ⁰) ~ (xᵢ*, q; xᵢ⁰), with 0 ≠ p ≠ 1 and 0 ≠ q ≠ 1. According to RDM, one
can write

uᵢ(xᵢ′) = θᵢ(q) / θᵢ(p).

As the probability transformation functions θᵢ(·) have already been elicited in
this phase of the analysis, we easily obtain a first point on the partial
utility function uᵢ(xᵢ).
In fact, the results obtained depend on the way indifferences such as
(xᵢ′, p; xᵢ⁰) ~ (xᵢ*, q; xᵢ⁰) are obtained. Direct elicitation of a lottery
"equivalent" to a given reference lottery leads to inconsistent results. Indeed,
it is a common finding that this specific valuation task is extremely difficult
for most subjects to perform and therefore entails many errors. The "Closing
in" method [Abdellaoui and Munier, 1994a] does not use direct indifference
elicitation, but rather choice tasks. These are much simpler to carry out and
lead to fewer inconsistencies, a large part of which could otherwise be
attributed to errors. It has therefore been incorporated into SERUM.
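A small numerical illustration of this step (with assumed indifference probabilities and an assumed, already elicited θᵢ) is:

```python
# One point of a partial utility function from a lottery-equivalent indifference:
# (x_i', p; x_i0) ~ (x_i*, q; x_i0)  implies  u_i(x_i') = theta_i(q) / theta_i(p).
theta_i = lambda pr: pr ** 2       # assumed previously elicited weighting function
p, q = 0.8, 0.5                    # hypothetical indifference probabilities
u_point = theta_i(q) / theta_i(p)
print(round(u_point, 4))           # u_i(x_i') ~ 0.3906 on the [0, 1] utility scale
```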

4.3 Encoding the scaling constants.


The last phase of a SERUM session is devoted to the elicitation of the
scaling constants, the kᵢ's, kᵢⱼ's, kᵢⱼₗ's, etc. These are evaluated through the
traditional techniques used in the expected utility framework [Keeney and
Raiffa, 1976], which are straightforwardly applicable to the RDM case.

5. Application to a multiple nuclear power plants operator
5.1 Prescriptive application
SERUM was used as a prototype to help make decisions in the final
phase of a reliability-centered maintenance process [Serquin, 1998]. In that
last phase, the question is to determine which efforts should be undertaken in
terms of maintenance among the several possibilities pertaining to
equipment which the first two phases have characterized as 'critical' for the
strategy of the corporation.
The ASG system was selected as one example of such 'critical'
equipment. It is a system which aims at providing sufficient cooling capacity
for the reactor under accidental conditions, consisting of a reserve of
demineralized water, a pumping set, a water degassing unit and, finally,
injection lines. The latter are particularly important, because they
also serve as an operations device, to the extent that they provide water to

the steam generators when they are started and when the reactor is stopped in
the middle of normal operations in order to push the system into a state of
cold stop.
Three strategies in preventive maintenance had been singled out:
- a gammagraphic check every 5 years;
- a gammagraphic check every 10 years;
- no gammagraphic check at all.
Engineers have long demonstrated competence in dealing with this
question, but all expressed doubts as to the efficiency of the choices they
made, for they had no way of weighing costs and benefits in this
activity. For that prototype study, organization specialists were temporarily
ignored. The idea was rather to design a workable system and test it on a
limited problem.
The OMF version of RCM used by the power plant operator in
question stresses four general objectives:
1) Maintain safety of the production system at its best possible
level
2) Obtain the best possible annual availability of production
3) Decrease radiation doses to the lowest achievable level
4) Keep maintenance costs under control and decrease
intervention costs.
As ASG is outside the confinement enclosure building, general objective
Nr. 3 is here irrelevant. Besides, step 1 of the study turned out to be
relatively trivial in this limited case, and quickly converged on three axes of
meaning which were more or less equivalent appraisals of the three other
general goals. So, the study consisted mainly in designing the three
corresponding criteria and - more importantly - the decision analysis view
and its foundations, as reported in sections 2 to 4 above, and in encoding the
probability transformation function as well as the partial utility function for
each of the three criteria and for each individual of a sample considered
representative among the people in charge of maintenance "on the spot" in
several power plants. 20 people were thus investigated under an anonymity
guarantee. Each interactive SERUM session lasted a little less than 100
minutes on average, which is close to a practical maximum in experimental
research.
Availability was represented through the number of hours lost per year in
production. It varies between 0 and 600 hours.
Safety was represented through the factor by which core meltdown
probability is increased, due to degrees of default in the ASG system. That
factor was considered as varying between 0% and 1200%.

Total maintenance cost (corrective and preventive maintenance) was
expressed in 10³ F (KF) and considered as varying, for that very specific
piece of equipment of the subsystem, between 0 and 1200 KF per year.
The curves in Figure 3 are an example of the graphical representations
obtained from a SERUM interactive session with an individual subject.

[Fig. 3 ("SERUM: An Example") displays, for each of the three attributes (loss in availability, loss in safety, costs increase), the elicited probability transformation function (top row) and the elicited utility function (bottom row), together with the subject's scaling constants k₁, k₂, k₃, k₁₂, k₁₃, k₂₃, k₁₂₃.]

Fig. 3. Results obtained from a SERUM session with a nuclear power plant maintenance
engineer (see comments in the text).

It is interesting to consider the shape of the curves shown. On the first
line, we have the probability transformation curves, whereas on the second
line utility curves appear. At the bottom of the figure, scaling constants are
shown. The first column of curves deals with the first attribute, namely
losses in availability of the plant's supply (utility is then in terms of hours
lost per year, i.e. it is a dis-utility, as the graph shows). The second column
deals with the second attribute, namely the increase in the probability of core
meltdown (dis-utility is now expressed in terms of the multiplicative factor
of the "normal" state probability of core meltdown). The last column deals

with the third and last attribute, namely maintenance cost (utility is defined
in terms of 103 F).
Fairly general findings of the study already appear on this example:

• Costs are dealt with by a large majority of subjects nearly as in
expected utility, i.e. with a probability transformation curve
following the first diagonal of the unit square, whereas this is not
the case for the two other attributes.

• Convex probability transformation functions (meaning a very
"cautious" or "pessimistic" treatment of probabilities by the
individual) appear more often for the safety attribute (core
meltdown) than for the availability issue, and they never appear
for the cost issue (see the illustrative sketch below). In other words,
the shape of the probability transformation curve expresses at least
part of the individual's psychological representation of the issue at
stake: core meltdown is subsumed under the "dreadful" and
"uncontrollable" risks, as social psychologists put it.

• Of course, the shape of the probability transformation curve
does not have any specific correlation with the shape of the
utility function. Each of these curves independently expresses its
part of the attitude towards risk (whereas Neumannian theory
considers only the part linked to the utility function).
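To make the role of the probability transformation concrete, the following sketch (ours, not the SERUM code; the rank-dependent convention, the utility shape and all numbers are illustrative assumptions) shows how a convex transformation lowers the evaluation of a loss lottery relative to the plain expected-utility treatment, which is what the "pessimistic" reading above amounts to.

```python
# Illustrative sketch only: rank-dependent evaluation of a simple "hours lost" lottery,
# comparing the expected-utility treatment (identity transformation) with a convex,
# "cautious"/"pessimistic" probability transformation. Hypothetical data throughout.

def rank_dependent_value(lottery, utility, transform):
    """lottery: list of (outcome, probability) pairs.
    Outcomes are ranked from best to worst; each receives the decision weight
    transform(P(at least this good)) - transform(P(strictly better))."""
    ranked = sorted(lottery, key=lambda op: utility(op[0]), reverse=True)
    value, cum = 0.0, 0.0
    for outcome, prob in ranked:
        weight = transform(cum + prob) - transform(cum)
        value += weight * utility(outcome)
        cum += prob
    return value

u_avail = lambda hours: 1.0 - hours / 600.0   # hypothetical dis-utility: 0 h best, 600 h worst
identity = lambda p: p                        # expected-utility treatment (cost attribute)
convex = lambda p: p ** 2                     # pessimistic treatment (safety attribute)

lottery = [(0.0, 0.7), (300.0, 0.2), (600.0, 0.1)]        # hours lost per year
print(rank_dependent_value(lottery, u_avail, identity))   # ≈ 0.80
print(rank_dependent_value(lottery, u_avail, convex))     # ≈ 0.65: good outcomes underweighted
```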
Once obtained for a given individual, these different functions allow us to
rank the alternatives on a scale ranging between 0 and 1 (Table 1). This
ranking represents the choice a specific engineer would have made if
confronted with the alternatives already mentioned and recalled in column one of
Table 1.

Table 1: Alternatives ranking by one of the subjects investigated

    Alternative                 Attribute 1   Attribute 2   Attribute 3   Generalized   Ranking
    (gammagraphy intervals)     score         score         score         MAUT score
    interval = 5 years          .987          .681          .868          .766          1
    interval = 10 years         .975          .494          .881          .685          2
    No gammagraphy              .953          .398          .906          .643          3

Source: Serquin [1998], p. 284-285.

These results are obtained with the set of scaling constants emerging from
the investigation of one subject-engineer, namely: k1 = 0.4699, k2 = 0.9944, k3 =
0.3296, k12 = -1.1382, k13 = -0.3296, k23 = -0.8566, k123 = 1.5306.
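For the record, the generalized MAUT scores of Table 1 are consistent with a multilinear aggregation of the three partial scores using these scaling constants. The sketch below is our reconstruction, not the SERUM code; the exact functional form is an assumption, but it reproduces the last column of Table 1 to three decimals.

```python
# Sketch of a multilinear aggregation of the three partial scores with the scaling
# constants quoted above. Reproduces the "Generalized MAUT score" column of Table 1;
# the functional form is our assumption, not necessarily the SERUM implementation.

def multilinear_score(u, k):
    u1, u2, u3 = u
    return (k["k1"] * u1 + k["k2"] * u2 + k["k3"] * u3
            + k["k12"] * u1 * u2 + k["k13"] * u1 * u3 + k["k23"] * u2 * u3
            + k["k123"] * u1 * u2 * u3)

k = {"k1": 0.4699, "k2": 0.9944, "k3": 0.3296,
     "k12": -1.1382, "k13": -0.3296, "k23": -0.8566, "k123": 1.5306}

alternatives = {                          # partial scores from Table 1
    "interval = 5 years":  (0.987, 0.681, 0.868),
    "interval = 10 years": (0.975, 0.494, 0.881),
    "No gammagraphy":      (0.953, 0.398, 0.906),
}
for name, u in alternatives.items():
    print(f"{name:22s} {multilinear_score(u, k):.3f}")
# interval = 5 years   0.766 ; interval = 10 years  0.685 ; No gammagraphy  0.643
```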
Of course, other individuals take another stand (they have different probability
transformation functions and different utility functions) and may rank
the options differently. Actually, some individuals rank them in an order
exactly opposite to the one shown here! Yet, all the engineers investigated
belong to the same corporation and are assumed by top management to be
acting in the same way. Clearly, this reveals a source of inefficiency for the
corporation, as we argued in section 1. How then to proceed? We come now
to what might be the most interesting and important use of the SERUM
encoding procedure.

5.2 Informational application: Cognitive coordination


One could think of a sophisticated way to produce the "right answer" by
some aggregation procedure of the risk attitudes found in the maintenance
system of the corporation. But many objections would necessarily be raised
against any such procedure. One can easily imagine that plant managers
would complain about arbitrariness or, at least, about the artificial introduction
into the debate of information that the analyst does not have. What then to
recommend to the general management of the corporation?
We feel that SERUM offers the possibility of giving the general management
an anonymous account and a detailed explanation of the meaning of
the different risk attitudes and the different shapes of curves found among
their maintenance staff. Once that explanation is given, why would the general
management not make a decision, after careful deliberation, about what ought
to be the "reference" risk attitude? This use of generalized decision
analysis amounts to a "facilitation session", as psychologists call it,
between different hierarchical levels of the corporation. The difference with
standard facilitation sessions is simply that here the 'meeting' can remain
virtual and rest on what SERUM reveals for the different individuals. The
"reference" risk attitude determined in such a way could then be justified
to anyone interested, provided the sample of actors in the system whose
preferences were elicited has been appropriately chosen. And, above all,
this reference risk attitude, once incorporated into a decision support system,
could serve as a coordinating benchmark for plant managers at different
sites7.
Just as strategic plans are a way to disseminate corporate culture and
stress cognitive coordination of the corporation's agents through a common
"spirit", spreading that "risk attitude reference" in the corporation will

7 Of course, freedom should still be left to plant managers to deviate from the reference
support system recommendations, to the extent that plant managers can have locally
specific information not contained in the system.
enhance coordination through similar channels. Besides, making sure that
maintenance choices in two different power plants are similar and obey a
deliberate common logic is important to maintain public trust.

6. Concluding remarks: decision aid or strategic consistency?
Clearly, the coordination scheme presented here can be extended to more
heterogeneous actors than just engineers and risk managers. Sociologists do
not usually 'measure' organizational reliability of functioning. But they try
to learn about past incidents to design the organization in such a way that
similar incidents become as unlikely as possible to happen again. Through
cognitive mapping, we can measure the effectiveness of their efforts.
As to the thrust of the method used here, one can say that decision-aid
has long been interpreted as the way to designate the rationally
"right" decision to the head of the corporation (substantive rationality in the
sense of Simon). We have referred above to the "prescriptive" use of
decision analysis. Yet, what this paper shows is that a descriptive use of it
might be as helpful, if not more so. Decision-aid can then be interpreted as the
way to produce hidden information (and it is no simple detail!) in the plants
and not only let everybody think about it, but enable the top management of
the corporation to coordinate decentralized agents through reference
attitudes towards risk. We do not mean this as a way to impose a rational
reference upon agents through some "maintenance textbook" of the
corporation. We conceive of such references as a way to let corporate agents,
in the very sense of corporate culture, respond to unexpected situations and
to contingencies not explicitly provided for, in some consistent way. This seems
particularly relevant to nuclear power plant management. Yet, it is quite
clearly relevant to many more situations, provided risk matters and
individuals in charge have no natural or spontaneous way to effectively
communicate and coordinate.
Of course, the extraction of decision makers' typical preferences under risk
can also be important in arbitration or conflict resolution situations,
for several reasons. This is another perspective in which
generalized decision analysis and some information extraction tool like
SERUM can be used. But this perspective then requires some additional
tools which we leave out here.

References
Abdellaoui, M., 2000, « Parameter-free Elicitation of Utilities and Probability Weighting
Functions », Management Science, 46 (11), 1497-1512.
Abdellaoui, M., Munier, B., 1994a, « The 'Closing In' Method: An Experimental Tool to
Investigate Individual Choice Patterns Under Risk », in: B. Munier and M.J. Machina,
eds., Models and Experiments in Risk and Uncertainty, Dordrecht/Boston, Kluwer
Academic Publishers, p. 141-155.
Abdellaoui, M., Munier, B., 1994b, « On the Fundamental Risk-Structure Dependence of
Individual Preferences under Risk: An Experimental Investigation », Note de Recherche
GRID no. 94-07.
Abdellaoui, M., Munier, B., Leblanc, G., 1996, « La transformation subjective de faibles
probabilités face au risque : le cas de l'exposition aux rayonnements ionisants », Note de
Recherche GRID no. 96-04.
Abdellaoui, M., Munier, B., 1998, « The Risk-Structure Dependence Effect: Experimenting
with an Eye to Decision-Aiding », Annals of Operations Research, 80, 237-252.
Allais, M., 1953, « Le comportement de l'homme rationnel devant le risque : critique des
postulats et axiomes de l'école américaine », Econometrica, 21, 503-546.
Allais, M., 1988, « The General Theory of Random Choices in Relation to the Invariant
Cardinal Utility Function and the Specific Probability Function, the (U, θ) Model: A
General Overview », in: B. Munier, ed., Risk, Decision and Rationality,
Dordrecht/Boston, Reidel, 231-289.
Beaudouin, F., Munier, B., Serquin, Y., 1999, « Multi-Attribute Decision Making and
Generalized Expected Utility in Nuclear Power Plant Maintenance », in: M.J. Machina
and B. Munier, eds., Beliefs, Interactions and Preferences in Decision Making,
Boston/Dordrecht, Kluwer Academic Publishers, p. 341-357.
Bouyssou, D., Vansnick, J.C., 1990, « Utilité cardinale dans le certain et choix dans le
risque », Revue Économique, 41, 979-1000.
Brinded, M.A., 2000, Perception versus Analysis: How to Handle Risk, The 2000 Lloyd's
Register Lecture, London, The Royal Academy of Engineering, Spring.
Currim, S.H., Sarin, R.K., 1989, « Prospect versus Utility », Management Science, 35 (1),
22-40.
Denneberg, R., 1994, Non Additive Measures and Integrals, Dordrecht/Boston, Kluwer
Academic Publishers.
Dyckerhoff, R., 1994, « Decomposition of Multivariate Utility Functions in Non Additive
Expected Utility Theory », Journal of Multi-Criteria Decision Analysis, 3, 41-58.
Fishburn, P.C., 1984a, « Dominance in SSB Utility Theory », Journal of Economic Theory,
34, 130-148.
Fishburn, P.C., 1984b, « Multiattribute Nonlinear Utility Theory », Management Science, 30,
1301-1310.
Howard, R.A., 1990, « From Influence to Relevance to Knowledge », in: R.M. Oliver and
J.Q. Smith, eds., Influence Diagrams, Belief Nets and Decision Analysis, New York,
Wiley, p. 3-23.
Keeney, R.L., 1996, Value-Focused Thinking: A Path to Creative Decision Making, Boston,
Harvard University Press.
Keeney, R.L., Raiffa, H., 1976, Decisions with Multiple Objectives: Preferences and Value
Tradeoffs, New York, Wiley.
McCord, M., de Neufville, R., 1986, « 'Lottery Equivalents': Reduction of the Certainty
Effect Problem in Utility Assessment », Management Science, 32, 56-60.
Miyamoto, J.M., Wakker, P., 1996, « Multiattribute Utility Theory Without Expected Utility
Foundations », Operations Research, 44, 313-326.
Moubray, J., 1991, Reliability-centred Maintenance, Oxford, Butterworth-Heinemann.
Munier, B., 1995, « Entre rationalités instrumentale et cognitive : contributions de la dernière
décennie à la modélisation du risque », Revue d'Économie Politique, 105, 5-70.
Quiggin, J., 1993, Generalized Expected Utility Theory: The Rank-Dependent Model,
Boston/Dordrecht, Kluwer Academic Publishers.
Serquin, Y., 1998, Gestion scientifique de la maintenance des grands systèmes : l'apport de
l'aide à la décision par utilité multiattribut généralisée, doctoral dissertation, GRID, École
Normale Supérieure de Cachan.
Shakun, M.F., 1975, « Policy Making Under Discontinuous Change: The Situational
Normativism Approach », Management Science, 22, No. 2, October.
Shakun, M.F., 1988, Evolutionary Systems Design: Policy Making Under Complexity and
Group Decision Support Systems, Oakland, Holden-Day.
Teulier, R., 1997, « Les représentations : médiations de l'action stratégique », in: M.J.
Avenier, ed., La Stratégie « chemin faisant », Paris, Economica, 95-135.
Wakker, P., 1990, « Under Stochastic Dominance, Choquet Expected Utility and Anticipated
Utility Are Identical », Theory and Decision, 29 (2), 119-132.
Weick, K.E., 1987, « Organizational Culture as a Source of High Reliability », California
Management Review, 29, No. 2, 112-127.
Winterfeldt, D. von, Edwards, W., 1986, Decision Analysis and Behavioral Research, New
York, Cambridge University Press.
LOGICAL FOUNDATION OF
MULTICRITERIA PREFERENCE
AGGREGATION

Raymond Bisdorff
Centre Universitaire de Luxembourg, Luxemburg
bisdhein@pt.lu

Abstract In this chapter, we would like to show Bernard Roy's contribution to
modern computational logic. Therefore we first present his logical approach
for multicriteria preference modelling. Here, decision aid is based upon a
refined methodological construction that provides the family of criteria with
important logical properties giving access to the concordance principle used
for aggregating preferential assertions from multiple semiotical points of view.
In a second section, we introduce the semiotical foundation of the concordance
principle and present a new formulation of the concordance principle with its
associated necessary coherence axioms imposed on the family of criteria. This
new methodological framework allows us, in a third part, to extend the classical
concordance principle and its associated coherence axioms imposed on the
family of criteria - first to potentially redundant criteria - but also to missing
individual evaluations and even partial performance tableaux.

Keywords: Multicriteria preference modelling; ELECTRE decision aid methods;
Concordance principle

Foreword
Let me thank beforehand the editors for having invited me to con-
tribute to this book in honour of Bernard Roy. When I obtained in
summer 1975 a three years NATO fellowship in Operations Research, I
got the opportunity to join, apart from several universities in the USA,
two European OR laboratories. One was directed by H.-J. Zimmer-
mann in Aachen and the other by Bernard Roy in Paris. Having made
my under-graduate studies in Liege (Belgium), I knew well the nearby
German city of Aachen and I decided therefore to preferably go to Paris
and join Bernard Roy at the newly founded Universite Paris-Dauphine.
It is only later that I realized how important this innocent choice would
be for my scientific career. Indeed, I joined the LAMSADE, Roy's OR
laboratory, at a moment of great scientific activities. We may remem-
ber that 1975 is the birth year of EURO, the Federation of European
OR Societies within IFORS and more specifically the birth year of the
EURO Working Group on Multicriteria Decision Aid coordinated by
Bernard Roy so that I became an active participant in the birth of the
European School in Operations Research. One can better understand
25 years later that joining the LAMSADE at that precise moment had
an everlasting positive effect on me. May Bernard recognize in this con-
tribution, a bit of the scientific enthusiasm he has communicated to all
his collaborators. Indeed, I rarely met any other person of such arguing
clarity when trying to match formal logical constructions with pragmatic
operational problems which often, if not always, appear uncertain and
fuzzy in nature. It is my ambition in this chapter to continue with this
tradition.
R. Bisdorff, June 2001

1. Introduction
"Du point de vue de la connaissance, nous sommes capables de connaitre
une chose au moyen de son espece et nous sommes incapables de la nom-
mer si nous ne la connaissons pas; par consequent, si nous emettons une
vox significativa, c'est que nous avons une chose a l'esprit." (Umberto
Eco, Kant et l'ornithorynque [11, p.437])

In this chapter, we would like to show Bernard Roy's contribution to
modern computational logic. Indeed, his original logical approach to
preference modelling via the concordance principle may be seen as a fruitful
attempt to answer, from a logical point of view, cognitive questions
such as: "How do we know preferences?" and "What will be the case if a
preferential situation is believed to be true?". Thus he has taken a somewhat
orthogonal position with respect to mainstream philosophical and mathematical
logic, where attention has been more and more concentrated on the
direct relation between a statement and a state of the world. By concentrating
his methodological work on this "knowledgability", he has come
to explore by what mental operations and semantic structures a decision
maker is capable of understanding what is the meaning of preferential
situations and in particular of outranking ones. In this sense, he has
shown us the way of how to naturally enrich classical truth-functional
semantics with a semiotical foundation.
First we present the logical approach for multicriteria preference modelling
as promoted by Bernard Roy. Here decision aid is based upon a
refined methodological construction that provides the family of criteria
First we present the logical approach for multicriteria preference mod-
elling as promoted by Bernard Roy. Here decision aid is based upon a
refined methodological construction that provides the family of criteria
with important logical properties giving access to the concordance principle
used for aggregating preferential assertions from multiple semiotical
points of view. Generally, these properties are discussed via represen-
tation theorems showing the kind of global preference models that it is
possible to construct from a coherent family of criteria (see [10, 9, 12]).
In this contribution we shall however concentrate on the logical founda-
tion of the concordance principle as revealed by the semiotics of Roy's
methodology.
In a second section, we will therefore introduce the semiotical foun-
dation of the concordance principle and present a new formulation of it
with its associated necessary coherence axioms imposed on the family
of criteria. A main result, a priori a negative one, will be to make even
more apparent the well known Achilles' heel of the concordance principle,
i.e. the necessarily numerical (cardinal) assessment of the importance
weights associated with the family of criteria.
But our methodological framework will allow us in a third part, and
this was our main motivation for undertaking this research, to extend
the classical concordance principle and its associated coherence axioms
imposed on the family of criteria - first to potentially redundant criteria,
- but also to missing individual evaluations and even partial performance
tableaux. These extensions, we hope, should help making the concor-
dance principle and thereby the logical approach to preference modelling
as promoted by B. Roy more convincing for applications in decision aid.

2. How to tell that a preferential assertion is true?
In this section we present the constructive approach to multicriteria
preference modelling proposed by Bernard Roy (see [17, 18]). In order
to describe the preferences a decision maker might express concerning a
given set of decision actions, we consider essentially the multiple prag-
matic consequences they involve. On the basis of these consequences
we introduce a family of criterion functions for partial truth assessment
of desired preferential assertions, namely outranking situations. Aggre-
gating multiple partial truth assessments of these outranking assertions
will be achieved via the concordance principle. To give adequate results,
this concordance principle imposes necessary coherence properties on the
underlying family of criteria.
2.1. Describing decision actions' consequences from multiple points of view
We assume at this place that in a given decision problem, a set A of
potential decision actions has been defined and recognized by the actual
decision maker. Our main interest goes now to describing the decision
maker's preferences concerning these decision actions. In our discussion
we restrict our interest to preferences expressed as pairwise outranking,
i.e. "to be at least as good as" situations on A.
In a constructive pragmatic way, Roy states that "every effect or at-
tribute characterizing a given decision action a E A which could interfere
with the operational goals or the ethic position of the decision-maker as
a primary element to elaborate, justify or transform his/her preferences
is called a consequence of a" (Roy [17]).
Definition 1. To speak of all possible such consequences of the decision
actions A before any formal decision-aid activity has been going on, Roy
introduces the concept of cloud of consequences, denoted ν(A).
Modelling this cloud of consequences consists first in identifying el-
ementary consequences, i.e. semantically well recognized effects or at-
tributes with well defined and observable states describing the conse-
quences that would occur if a potential decision action a is going to be
executed.
A strong pragmatic commitment is taken here by Roy with respect
to what kind of consequences will contain the formal model of the cloud
of consequences. Indeed, no vague impressions, intuitions or beliefs are
supposed to be taken into account.
Being principally interested in capturing an adequate semiotical ref-
erence of preferential assertions, Roy restricts his attention to such ele-
mentary consequences that support a preference dimension.
Definition 2. A preference dimension c is an elementary consequence
such that the set of its possible states may be organized as a preference
scale E_c, i.e. a total order (E_c, ≥) with the following property: considering
two ideal decision actions a and b which may be compared exactly
with the help of two states e and e' of E_c, then a and b are considered
indifferent iff e = e', whereas a is considered to be preferred to b iff
e > e'.

The complete set of preference dimensions on which all elementary


consequences of all the decision actions may be completely and opera-
tionally described is called the consequence spectra of the decision actions
and denoted Γ(A).
Two important constructive implications may be outlined at this


point: - first, the cloud of consequences is split into a generally small
number of elementary consequences, well identified and recognized as
preference dimensions by the decision-maker; - secondly, each such ele-
mentary consequence gives support to some kind of independent pref-
erence assessment on the set A through a "toute chose pareille par
ailleurs" (all other things being equal) reasoning principle.
Summarizing, we notice that the elaboration of a consequence spectra
Γ(A) follows precise methodological requirements (see Roy [17, p. 220])
that are:

• an intelligibility principle: Its components must gather as directly
as possible all imaginable consequences such that the decision
maker is able to understand them with respect to each of the m
preference dimensions.

• a universality principle: The components must ideally cover all
preference dimensions that reflect fundamental and unanimous
outranking judgments concerning the set of all decision actions
in A.

2.2. Partial truth assessment of outranking assertions

Following the exhaustive formal description Γ(a) of the individual
multiple consequences of a decision action a ∈ A, Roy now introduces
the concept of criterion function.

Definition 3. A criterion function g : A → ℝ is a real-valued function,
defined on the set of potential decision actions A, which captures operationally
the preferential description of a determined part Γ_g(A) of the
consequence spectra, called the support of g. Such a criterion function
verifies therefore the following operational conditions:

• The number g(a) is defined iff the sub-cloud of consequences ν_g(a)
taken into account by the criterion function g is effectively evaluated
in a given sub-spectra Γ_g(a).

• The decision-maker recognizes the existence of a significant preference
dimension with respect to which two decision actions a and
b may be compared relatively to the only consequences covered by
Γ_g(a) and (s)he accepts to model this comparison as follows¹:

    g(a) + d ≥ g(b)  ⇔  a S_g b,

where d ∈ ℝ⁺ represents a possible indifference threshold and S_g
stands for the semiotical restriction of an outranking relation S to
the sub-spectra Γ_g(a) covered by criterion g. (A minimal sketch of
this partial test follows the definition.)
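The following sketch illustrates this per-criterion test; the function and variable names are ours, not taken from any ELECTRE software.

```python
# Minimal sketch of the partial test of Definition 3: on criterion g, 'a S_g b' is warranted
# whenever g(a) + d >= g(b), with d an optional indifference threshold. Illustrative names.

def outranks_on_criterion(g_a: float, g_b: float, d: float = 0.0) -> bool:
    """True iff, on this single criterion, a is 'at least as good as' b."""
    return g_a + d >= g_b

print(outranks_on_criterion(12.0, 12.5, d=1.0))   # True: the small deficit lies within d
print(outranks_on_criterion(12.0, 15.0, d=1.0))   # False: b is clearly better on this criterion
```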

Roy uses the concept of 'criterion' in the sense of a formal basis,
a model for supporting preferential judgments. For any two decision
actions a and b, a given criterion function g allows one to warrant the truth
or falsity of the global outranking assertion 'a S b' with respect to the
recognized part Γ_g(a) of the consequence spectra covered by criterion g.

Definition 4. A family F of criteria constitutes a finite set of criterion
functions that cover the whole consequence spectra Γ(a), ∀a ∈ A.
Evaluating all decision actions on such a family of criteria results in
a performance tableau T = (A, F), i.e. a two-way table representing
g(a) for each decision action a ∈ A on each criterion function g ∈ F.

Here, the term 'family' refers to the fact that the considered set of
criterion functions supports exhaustively the pragmatic preferences of
the decision-maker. More generally, we notice in Definition 3 that the
universal assertion 'a 8 b' is truth or falseness warranted from multiple
points of view depending on the decomposition of its cloud of conse-
quences into separated preference dimensions.

2.3. The concordance principle


A given performance tableau T = (A, F), involving a set A of decision
actions and a family F of criteria, allows a partial truth assessment of
pairwise outranking situations along each individual criterion. It is the
refined constructive methodology that gives the decision maker the ability
to clearly acknowledge such partial outranking situations on the basis of
the performances.
In order now to aggregate these partial outranking assertions, we need
to consider the significance each individual criterion takes, in the eyes of
the decision maker, for assessing the truth of the corresponding universal
outranking situation.

1 Neglecting in this definition a possible indifference threshold, Roy normally uses a single
implication. But we prefer to work with a double implication, as it allows us to capture at the
same time the semantics of the negated assertion.
Definition 5. Let T = (A, F) be a given performance tableau and let
k_g ∈ ℚ⁺ measure numerically the significance criterion g ∈ F takes,
in the eyes of the decision maker, with respect to the truth assessment of
the universal outranking situation. Let k_F denote the universal closure
of the significance weights over the whole family of criteria, i.e. k_F =
Σ_{g∈F} k_g. We denote F⁺ the subset of criteria that clearly support the
truthfulness of a given outranking assertion and we define the credibility
r(a S b) of the universal outranking assertion 'a S b' as follows:

    r(a S b) = Σ_{g∈F⁺} (k_g / k_F).

If r(a S b) ≥ ½, 'a S b' is considered to be more or less true.


The credibility of the universal outranking situation is computed as
the sum of the relative weight of the subset F+ of criteria confirming
truthfulness of this assertion. If a majority of criteria is concordant
about supporting the given outranking situation, it may be affirmed to
be more or less true depending on the effective majority it obtained.
Following Roy, this logical concordance principle may be interpreted as
a voting mechanism in favour of the truth concerning a given outranking
assertion, each criterion g E F participating in the voting with a number
of voters equivalent to the amount kg of knowledge concerning the truth
assessment it supports.
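As an illustration of this voting reading of Definition 5, here is a minimal sketch (our own data structures and names, not Roy's notation or any ELECTRE implementation) computing the credibility r(a S b) as the relative weight of the concordant coalition.

```python
# Sketch of the concordance credibility of Definition 5: the criteria supporting 'a S b'
# contribute their relative significance weights k_g / k_F. Dict-based data are hypothetical.

def concordance(perf_a, perf_b, weights, thresholds=None):
    """Credibility r(a S b) in [0, 1] from two performance vectors keyed by criterion."""
    thresholds = thresholds or {}
    k_F = sum(weights.values())                      # universal closure of the weights
    support = sum(weights[g] for g in weights
                  if perf_a[g] + thresholds.get(g, 0.0) >= perf_b[g])
    return support / k_F

weights = {"g1": 2, "g2": 3, "g3": 5}
a = {"g1": 10, "g2": 14, "g3": 9}
b = {"g1": 12, "g2": 11, "g3": 9}
print(concordance(a, b, weights))   # g2 and g3 support 'a S b': (3 + 5) / 10 = 0.8
```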

2.4. Necessary coherence of the family of criteria


The performance tableau, representing a synthetic description of the
consequence spectra, thus appears as an essential, but also very difficult
step in a practical decision aid problem. Indeed, as pointed out by Roy
(see [17] p. 310), besides a cognitive problem of acceptance of the criteria
by the decision-maker, there are the following logical requirements to
respect when constructing a family of criteria:
• Exhaustivity of the family of criteria: All individual consequences,
out of Γ(a) and Γ(b) for two decision actions a and b and of rele-
vance for their mutual comparison in terms of preference or indif-
ference, have to be taken into account. This requirement takes its
origin in the universal closure of the relative significance weights
of the criteria over the whole family of criteria used in Definition
5.
• Cohesion between local preferences, modelled at the level of the
individual criterion, and global preferences modelled by the whole
family of criteria: Global preference judgments must coherently


reflect themselves when transposed into individual criteria based
preferences. The decision maker recognizes a clear universal out-
ranking situation' as b' whenever the performance level of action
a is significantly better than that of action b on one of the criteria
of positive significance, performance levels of these actions staying
the same on each of the remaining criteria. This requirement guar-
antees separability of the individual preference dimensions which
in term allows the additive computation of the credibility degrees.
• Non-redundancy of the criteria: The family is minimal with re-
spect to both preceding properties. Again the importance, that
each criterion will take in the truth assessment of an outranking
situation via the concordance principle, is coherently measured
only if no redundant consequences are taken into account.
Definition 6. A family of criteria, verifying the exhaustivity, the cohe-
sion and the minimality requirement is called a coherent family.
Before discussing these coherence properties more thoroughly from a
semiotical point of view in Section 3, let us first turn our attention
to the logical denotation that the credibility calculus resulting from the
concordance principle transfers to outranking assertions.

2.5. Truth assessment by balancing reasons


"The rule for the combination of independent concurrent arguments
takes a very simple form when expressed in terms of the intensity of
belief ... It is this: Take the sum of all the feelings of belief which would
be produced separately by all the arguments pro, subtract from that
the similar sum for arguments con, and the remainder is the feeling of
belief which ought to have the whole. This is a proceeding which men
often resort to, under the name of balancing reasons.", (C.S. Peirce, The
probability of induction, [16]).

Inspired by Peirce's proceeding of balancing reasons as quoted above,


we may reformulate the concordance principle in the following way:
Definition 7. Let A be a set of decision actions evaluated on a coherent
family of criteria. Let S denote an outranking relation defined on A. For
all a, b ∈ A, let F⁺ denote the subset of criteria in favour of the universal
assertion 'a S b' and F⁻ = F − F⁺ the complementary subset in F. We
define the credibility r'(a S b) of assertion 'a S b' as follows:

    r'(a S b) = Σ_{g∈F⁺} (k_g / k_F) − Σ_{g∈F⁻} (k_g / k_F).
Following this definition, the degree of credibility of an outranking


assertion implements a rational function on A x A varying between -1
and 1. If r' (a S b) = 1 there is unanimity in favour of 'a S b' and if
r'(a S b) = -1 there is unanimity in disfavour of it. If r'(a S b) = 0 both
the reasons in favour and those in disfavour balance each other and there
appears no clear denotational result.
Definitions 5 and 7 are linked through the following proposition.
Proposition 1. Let r : A × A → [0,1] and r' : A × A → [−1,1] represent
the computation of the degrees of credibility of the outranking relation S
on A following Definition 5 and Definition 7 respectively. Then the following
relation holds between r and r':

    r' = 2r − 1.    (1)


Proof. Equation 1 results immediately from the following development:

    r'(a S b) = Σ_{g∈F⁺} (k_g/k_F) − Σ_{g∈F⁻} (k_g/k_F) = r(a S b) − (1 − r(a S b)) = 2 r(a S b) − 1.    □
Proposition 1 evidently relies again upon the three properties of the
coherent family of criteria and it has an interesting logical corollary.
Corollary 1. Let A be a set of decision actions evaluated on a given
coherent family of criteria F and let r(a S b), Va, b E A denote the de-
gree of credibility of a pairwise outranking situation computed following
Definition 5.
• if r(a S b) ≥ ½, then 'a S b' is considered to be more or less true,

• if r(a S b) ≤ ½, then 'a S b' is considered to be more or less false and
finally,

• if r(a S b) = ½, then 'a S b' is considered to be logically undetermined.
Proof. The linear transformation of Equation 1, representing an order
isomorphism between credibility degrees r and r', gives a faithful
transformation from the truth denotation of Definition 7 to the truth
denotation of Definition 5, in the sense that

    r(p) ≥ ½  ⇔  r'(p) ≥ 0.

In its truth denotation, the concordance principle is therefore isomorphic
to the balancing reasons proceeding.    □
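A small self-contained sketch (hypothetical weights and performances, our own naming) makes the link between Definitions 5 and 7 and the split truth denotation explicit:

```python
# Sketch of the 'balancing reasons' credibility of Definition 7 and of its link with the
# concordance credibility, r' = 2r - 1 (Proposition 1). All data are hypothetical.

weights = {"g1": 2, "g2": 3, "g3": 5}
a = {"g1": 10, "g2": 14, "g3": 9}
b = {"g1": 12, "g2": 11, "g3": 9}

k_F = sum(weights.values())
pro = sum(w for g, w in weights.items() if a[g] >= b[g])   # criteria in favour of 'a S b'
con = k_F - pro                                            # criteria in disfavour

r = pro / k_F                        # Definition 5: concordance credibility, here 0.8
r_prime = (pro - con) / k_F          # Definition 7: balancing reasons, here 0.6
assert abs(r_prime - (2 * r - 1)) < 1e-12   # Proposition 1

# Corollary 1: split truth denotation around the half point.
if r == 0.5:
    print("'a S b' is logically undetermined")
elif r > 0.5:
    print("'a S b' is more or less true")    # printed here, since r = 0.8
else:
    print("'a S b' is more or less false")
```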
It is important to notice that the refutation of an outranking situation
'a S b', in case we observe its credibility to be below ½, does not necessarily
imply that the converse outranking situation, i.e. 'b S a', should
be automatically affirmed. On the contrary, even when we may observe
complete preorders on A on every single criterion, the concordance principle
commonly generates universal outranking relations on A that are
no longer complete preorders, or even partial preorders, as no global
transitivity is formally implied by Definition 5.
The split truth versus falseness denotation installed by the concordance
principle appears as a powerful natural fuzzification of Boolean
logic (see Bisdorff [6]). Indeed, the algebraic framework of the credibility
calculus, coupled to its split logical denotation, allows us to solve
selection, ranking and clustering problems (see Bisdorff [3, 5, 8]) directly
on the basis of a more or less credible pairwise outranking relation,
without using intermediate cut techniques as is usual in the classic Electre
methods (see Roy & Bouyssou [18]).
As mentioned earlier, the concordance principle requires the assessment
of cardinal significance weights for all criteria. This requirement
represents a well known weak point when it comes to practical decision
aid. Numerous theoretical and empirical efforts have therefore
been devoted to developing adequate methodologies for helping define these
weight coefficients for different preference aggregation methods (see Roy
& Mousseau [19, 15]), but few have considered the essentially semiotical
nature of the credibility calculus installed through the balancing reasons
proceeding.

3. Semiotical foundation of the concordance principle
In this section we will therefore explore in depth the relationship be-
tween a given family of criteria and its semiotical interpretation in terms
of the underlying cloud of consequences. Our approach closely follows
the classical measure-theoretical axiomatization of probability theory.
Where random events support the probability measure, we use semi-
otical interpretations to support the credibility calculus. The amount
of truth assessment knowledge carried by the family of criteria is thus
supported by the denotational semantics of the family of criteria with
respect to the given consequence spectra. In this way we axiomatize the
concordance principle in a measure-theoretical way and a new version of
the coherence axioms of the family of criteria is presented.
3.1. Credibility versus state of belief


Indeed, following the Peircian discrimination between degree of cred-
ibility and state of belief ([16, p.175]) gives us a broader approach to
the problem of evaluating the credibility a decision-maker should have
in the proposition that a certain action a outranks another action b on
behalf of a given performance tableau. Indeed, Peirce states that: "to
express the proper state of our belief, not one number but two are
requisite, the first depending on the inferred [credibility], the second on
the amount of knowledge on which that [credibility] is based"².
When the exhaustivity of the family of criteria is given, a single degree
of credibility is solely sufficient for expressing our belief in a given out-
ranking assertion. But when such exhaustivity is not given, the second
number, the actual amount of knowledge used to assess the truthfulness
of this assertion becomes important.
What Peirce means here is that solely considering a relative credibility
degree, or ratio, is necessarily restricted to
the condition that a universal, i.e. constant, amount of truth assessment
knowledge underlies all arguments.
To illustrate the point, we may indeed consider that the family of
criteria represents a global voting assembly with a certain number of
individual voters, each one representing one of the given criteria. This
assembly is split into sub-assemblies, each representing the preference
dimension modelled by one of the possible criteria.
Following this metaphor, the three basic requirements a coherent
family of criteria has to meet in order to comply with the concordance
principle may be understood as follows: - first, concerning exhaustivity,
we have to assume that the union of all sub-assemblies completely
returns the global assembly. No significant voters concerning the truth
assessment of a given assertion are missing in the global assembly; -
secondly, to guarantee the necessary separability condition, all voters
must participate in at most one sub-assembly; - finally, the minimality
condition imposes that each partial point of view must be represented
by at most one sub-assembly.
Under these conditions, we may compute the credibility of a given
outranking situation simply by dividing the sum of positive votes col-
lected in each sub-assembly by the sum of voters of the global assembly.
The concordance principle thus appears as a weighted average of truth
assessments from multiple points of view.

2 We have added the credibility term ([16, p. 179]).


More fundamentally, the concept of significance, understood as the


amount of truth assessment knowledge modelled by a single criterion, a
coalition of criteria, or even by the exhaustive family of criteria appears
to be of utmost importance.
We now introduce an explicit axiomatization for measuring this knowl-
edge in the context of the multicriteria preference aggregation via the
concordance principle.

3.2. Basic semiotics for a logical credibility calculus
Let A be a set of potential decision actions upon which a decision
maker M wishes to describe his outranking preferences S ⊆ A × A.
Let T(A, F) represent the performance tableau elaborated in a decision
aid process. In order to simplify our presentation we may assume that
each criterion-function g E F is modelling a different single elementary
preference dimension.
Let 'a S b' be an affirmative outranking assertion. What we have to
axiomatize is the precise measurement of the amount of truth assessment
knowledge each preference dimension, identified in the consequence
spectra Γ(A), brings in.
Definition 8. We call referential evaluation of 'a S b' with respect to
the subset J ⊆ F of criteria, the interpretation of the pair (g_J(a), g_J(b))
of performances of decision actions a, b ∈ A on the subset J of criteria in
terms of the practical significance of the concerned subset of elementary
consequences, for warranting truth or falseness of 'a S b'. We call semi-
otical reference and denote R(J), the result of a referential evaluation
restricted to a subset J of criteria.
In accordance with the universality principle of the construction of
the consequence spectra r(A) (see Section 2.1), we assume in the sequel
that a referential evaluation remains universally constant for any given
subset J of criteria over all possible pairs of decision actions 3.
Definition 8 installs the cognitive process of interpreting criterial per-
formances in terms of pragmatic consequences. In this way, we introduce
into the decision aid model the pragmatic goal of the decision maker as
well as his (her) subjective value system, i.e. his(her) proper subjective
preference judgments.

3The universality of the semiotical reference over all possible pairs of decision actions, as
assumed in Definition 8, is a highly problematic assumption from a cognitive point of view,
but due to space limitations, we do not discuss this issue within this contribution.
Definition 9. We call reference family R_F the set of all semiotical
references R(J) for J ⊆ F associated with a referential evaluation of
an outranking relation through the family F of criteria. R(J) is called
an elementary reference if J is confined to a single criterion-function,
whereas we call empty reference a reference R(J) such that its significance
is zero. Otherwise R(J) is called a composed reference. We call
exhaustive reference a composed reference R(J) such that the significance
of the set of criteria J covers the whole consequence cloud.
It is important to notice that a semiotical reference in our sense is
different from the material states of the consequences we actually observe
in the performance tableau via the corresponding criterion-functions.
Here we are interested essentially in the significance of these states, i.e.
the semiotics4 of the relational formula 'a S b' in fact supported by the
pairs (gJ(a), gJ(b)) of evaluations. These formulas are seen, in the sense
of Peirce, as iconic signs for the presence or the absence of an outranking
situation between the corresponding pair of decision actions.
Definition 10. A finite set R = {R_i ∈ R_F | i = 1..m} of m semiotical
references, such that R_i ∩ R_j gives an empty reference ∀ i ≠ j, is
called a mutually exclusive reference class.
If these references significantly cover the whole consequence spectra
Γ(A), i.e. ⋃_{i=1}^m R_i gives an exhaustive reference, we call this class an
exhaustive one. A mutually exclusive and exhaustive reference class is
also called a complete semiotical reference system.
We are now prepared for introducing the measure of the significance,
i.e. the amount of truth assessment knowledge carried by each possible
semiotical reference.
Definition 11. Let p denote an affirmative outranking assertion associated
with a given reference family R_F.
w : R_F → ℚ measures the amount of truth assessment knowledge
captured by each possible semiotical reference concerning the potential
truth of assertion p, as a rational number verifying the following structural
conditions:
1. ∀R ∈ R_F : w(R) ≥ 0,
2. If R = {R_i ∈ R_F | i = 1...m} constitutes a mutually exclusive
reference class, then w(R) = Σ_{i=1}^m w(R_i).

4Generally speaking, semiotics only apply to social interpretations of signs. But in accordance
with a Peircian approach, we may very well specialize semiotical considerations to local
cognitive contexts, here the social working context of the decision maker. In this way, we
explicitly introduce a social dimension into the decision aid model.
For any semiotical reference, the w measure may be interpreted as its


absolute significance measure, not in the sense of the utility of the pragmatic
consequences referenced per se, as is the case for instance in classic
utility theory, but in the sense of the amount of truth assessment knowledge
that the preferential argument, modelled by the criterion-functions,
provides. The relative version of this significance measure is defined as
follows.
Definition 12. Let p denote an affirmative assertion associated with
a reference family R_F containing an exhaustive and mutually exclusive
reference class U of strictly positive universal measure w(U). We denote
k : R_F → ℚ the relative version of the w measure:

    ∀R ∈ R_F : k(R) = w(R) / w(U).

k(R) represents the relative significance that reference R takes in the
truth assessment of assertion p. Thus k models a weight distribution
on all possible partial arguments concerning the truth assessment of
assertion p.
Proposition 2. The measure k on R_F verifies the following conditions:
1. ∀R ∈ R_F : k(R) ≥ 0.
2. The weight of any exhaustive reference U equals 1.
3. The weight of an empty reference equals 0.
4. If R = {R_1, ..., R_m} constitutes a mutually exclusive reference class,
then k(R) = Σ_{i=1}^m k(R_i).
Proof. All these conditions trivially follow from the defining properties
(Definition 11) and the normalization (Definition 12) of the w measure.    □
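For concreteness, the following sketch represents semiotical references as frozensets of elementary reference labels (a representation of ours, not Bisdorff's notation) and checks the additivity and normalization properties of the w and k measures on hypothetical weights.

```python
# Sketch of Definitions 11-12: references are frozensets of elementary labels, w is additive
# over mutually exclusive references, and k is w normalized by an exhaustive reference U.

from fractions import Fraction

w_elementary = {"R1": Fraction(2), "R2": Fraction(3), "R3": Fraction(5)}  # hypothetical weights

def w(reference):
    """Additive significance of a (possibly composed) reference."""
    return sum(w_elementary[e] for e in reference)

U = frozenset(w_elementary)            # exhaustive reference

def k(reference):
    """Relative significance: k(R) = w(R) / w(U)."""
    return w(reference) / w(U)

print(k(frozenset({"R1"})), k(frozenset({"R2", "R3"})), k(U))   # 1/5, 4/5, 1
# Additivity over a mutually exclusive class (Proposition 2, condition 4):
assert k(frozenset({"R1"})) + k(frozenset({"R2", "R3"})) == k(U)
```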
This last proposition gives us the necessary elements for reformulating
the concordance principle in terms of semiotical references.

3.3. Reformulating the concordance principle


Let p represent an affirmative assertion associated with a reference
family R_F. We denote p|R the semiotical restriction of assertion p to a
given reference R ∈ R_F.
The semiotical restriction principle generates for any affirmative assertion
p a family of partial assertions p|R, one associated with each
possible semiotical reference R ∈ R_F.
Let us first concentrate on the truth assessment of such partial assertions
that are restricted to elementary references.
Definition 13. Let p represent an affirmative assertion associated with
a reference family R_F containing a set of elementary references.
If R_e ∈ R_F represents such an elementary reference, the degree of
credibility r''(p|R_e) of assertion p|R_e is defined as follows:

    r''(p|R_e) = 1 if R_e certainly confirms assertion p|R_e, and 0 otherwise.

Assertion p|R_e is warranted to be:

    true if r''(p|R_e) = 1,
    false if r''(p|R_e) = 0.
Indeed, restricted to elementary preference dimensions, the constructive
methodology allows us to assume that the relative measure of significance
of the argument restricted to an elementary reference is 1, in the
sense that it is precisely the operational purpose of a criterion-function
to signify most clearly, under the "toute chose pareille par ailleurs"
principle, which of the two possible truth values (true or false) is actually
the case when looking at a given outranking situation.
Based upon these elementary references, we may now recursively define
the degree of credibility of the universal assertion.
Definition 14. Let p represent an affirmative outranking assertion associated
with a reference family R_F supporting a weight distribution
k and let {R_i ∈ R_F | i = 1...n} denote a complete semiotical reference
system. The degree r'' of credibility of assertion p is given by the
following recursive definition:

    r''(p) = Σ_{i=1}^n (k(R_i) × r''(p|R_i)).

Assertion p is warranted to be:

    more or less true if r''(p) > ½,
    more or less false if r''(p) < ½,
    logically undetermined if r''(p) = ½.
Proposition 3. Definitions 13 and 14 above are identical to the classic
definition of the concordance principle (see Definition 5).
Proof. Indeed, Definition 13 implements the split of the family of cri-
teria into a subset F+ of criteria in favour of the universal assertion,
and the complementary subset F⁻ of criteria in disfavour. Definition
14 implements the balancing reasons proceeding, which we saw to be
isomorphic to the classic concordance principle (see Proposition 1).    □

The elementary reference associated with each individual criterion g E


F allows a clear partial truth assessment. In case of mutual exclusiveness
and universal closure of the elementary references, universal outranking
assertions may be truth assessed through a weighted mean of credibilities
associated with the involved elementary consequences.
We may thus reformulate the coherence properties of the underlying
family of criteria.

3.4. Reformulating the coherence axioms of the family of criteria
Aggregating the credibility degree for assertions with a composed ref-
erence, requires decomposing this reference into an exhaustive class of
mutually exclusive elementary references.

Proposition 4. Let A be a set of decision actions evaluated on a family


F of criteria. F is coherent (in the sense of Definition 6) only if it
provides each affirmative outranking assertion on A with a semiotical
reference family containing a set of elementary references which consti-
tutes a complete system.

Proof. Indeed, Roy's coherence properties, i.e. exhaustiveness, cohesive-


ness and minimality are all three implied by the fact that the elementary
references associated with each individual criterion-function constitute
an exhaustive and mutually exclusive reference class. 0

It is worthwhile noticing that Proposition 4 shows a single implication


from the conditions imposed on the semiotical reference family towards
Roy's coherence properties of the family of criteria. The semiotical con-
ditions appear as antecedent conditions for a possible coherence of the
family of criteria, whereas the latter formulate consequent conditions that
constrain, mainly via the cohesiveness axiom, the resulting universal
outranking relation.
We illustrate the semiotical foundation of the concordance principle
with the following didactic example5 .

5taken from Marichal [14, p. 192].


Table 1. Students performance tableau

student la ca st
a 12 12 19
b 16 16 15
c 19 19 12

Table 2. Credibility of the pairwise outranking assertions

    r(x S y)    a       b       c
    a           1.00    0.42    0.42
    b           0.58    1.00    0.42
    c           0.58    0.58    1.00

3.5. Practical example: Ranking statistics students
Three students in a Mathematics Department, specializing in statis-
tics and denoted {a, b, c}, are to be ranked with respect to their compe-
tencies in the following subjects: linear algebra (la), calculus (ca) and
statistics (st). The performances of the students in these three sub-
jects are shown in Table 1. We suppose that in the eye of the assessor,
each subject appears as an elementary reference for assessing the truth
of his ranking assertions. We may notice here that the performance
tableau shows in fact two opposite rankings, one common for the two
pure mathematical subjects and one for statistics. Well, the two math
results clearly support the ranking a < b < c whereas the results in
statistics support the opposite ranking: a > b > c. Let us furthermore
suppose that the assessor admits the following significance weights for
these elementary references in the truth assessment of his (her ) global
ranking: Wl a = 0.29, Wca = 0.29 and Wst = 0.42.
Under the hypothesis that the three elementary references constitute
a complete semiotical reference system, we are indeed in presence of a
coherent family of criteria and we may compute the credibilities of the
pairwise outrankings shown in Table 2.
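The following sketch (our own code, not part of the original example) reproduces Table 2 from the marks in Table 1 and the three significance weights, under the stated hypothesis of a complete semiotical reference system.

```python
# Sketch reproducing Table 2 from the students' performance tableau (Table 1) and the
# significance weights w_la = 0.29, w_ca = 0.29, w_st = 0.42.

marks = {"a": {"la": 12, "ca": 12, "st": 19},
         "b": {"la": 16, "ca": 16, "st": 15},
         "c": {"la": 19, "ca": 19, "st": 12}}
weights = {"la": 0.29, "ca": 0.29, "st": 0.42}

def credibility(x, y):
    """r(x S y): total weight of the subjects on which x scores at least as well as y."""
    return sum(w for subject, w in weights.items() if marks[x][subject] >= marks[y][subject])

for x in "abc":
    print(x, [round(credibility(x, y), 2) for y in "abc"])
# a [1.0, 0.42, 0.42]
# b [0.58, 1.0, 0.42]
# c [0.58, 0.58, 1.0]
```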
This valued global outranking relation clearly denotes the ranking
supported by the pure math subjects, a result that may not really
convince the given assessor, for instance a professor in statistics.
Indeed, (s)he would perhaps rather expect the best student in statistics
to come first. In this hypothetical case, the family of criteria would
not verify one or the other of the three coherence requirements, i.e.
exhaustiveness, cohesiveness and minimality. Following Proposition 4,
we know now that an incoherent family of criteria implies in fact that


the criteria don't provide in this case a complete semiotical reference
system.
And indeed, let's suppose for instance that both statistics and calcu-
lus subjects present some overlapping with respect to their respective
significance! Indeed, calculus and statistics subjects are typically not
mutually exclusive with respect to their semantic content, at least in
a Mathematics Department. A student who gets very high marks in
statistics and relatively low ones in calculus therefore presents a somewhat
ambiguous profile. The statistician would tend to extend the high
marks in statistics to the universal evaluation, whereas a pure mathematician
would rather have the reflex to extend the low marks in calculus
and linear algebra to his (her) universal evaluation.
We investigate such typical cases of incoherences in the next Section
and show possible extensions to the concordance principle.

4. Extensions of the concordance principle


From the closing example of the previous section, we recognize that
possible origins for incoherences in the family of criteria may be the
following:

• Overlapping criteria: some elementary semiotical references are
actually not mutually exclusive, i.e. the corresponding criteria
appear to be partly redundant;

• Incomplete performance tableau: the set of elementary references
supported by the criterion-functions does not provide an exhaustive
reference class and/or we observe missing performances on some
criteria.

Besides these inconsistencies, one may naturally question the precise
numerical measurability of the relative significance of each criterion.
This is a well known weak point of the logical approach to multicriteria
preference aggregation and many work-arounds have been proposed
(see Mousseau [15]). We do not have the space here to discuss this
issue, therefore we postpone this topic to a future publication6 and
concentrate our attention now, first, on a situation where we may observe
partly redundant criteria.

6We have presented a purely ordinal version of the concordance principle in a communication
at the 22nd Linz Seminar on Fuzzy Set Theory on Valued Relations and Capacities in Deci-
sion Theory, organised by E.P. Klement and M. Roubens, February 2001 (see Bisdorff [7]).
4.1. Pairwise redundant criteria


To illustrate the problem, we reconsider the evaluation of the statistics
students. Let's assume that the assessor admits for instance some 50%
overlap between the statistics and calculus subjects, i.e. 50% of the
truth assessment knowledge involved in the statistics performances is also
covered by the calculus performances. Numerically expressed, 50% ×
0.42 = 0.21 of the weight of statistics or, the other way round, about 72% ×
0.29 ≈ 0.21 of the weight of calculus is in fact shared by both arguments.
More anchored in statistics, for instance, our assessor exhibits, for judging
this overlapping part, a tendency in favour of the very positive outcome
of the statistics test. Whereas a more pure-mathematics oriented assessor
would consider first the less brilliant result of the same overlapping part
in the context of the calculus test, thereby motivating his (her) more
sceptical appreciation of student a. Overlapping of elementary references
therefore seems to introduce unstable and conflicting relative weights of
semiotical references.
We now formally introduce this potential overlapping of semiotical
references.
Definition 15. Let p represent an affirmative assertion associated with
a reference family R_F. Let R_i and R_j be any two references from R_F.
We denote R_ij = R_i ∩ R_j the semiotical reference shared between R_i and
R_j.
Theoretically, any possible figure of overlapping criteria may be de-
scribed by the preceding formalism, but in practice we are only interested
in partly pairwise overlapping criteria.
Definition 16. If no semiotical reference may be shared by more than
two elementary references, i.e. overlapping between elementary refer-
ences is reduced to pairs of elementary references, we say that the family
of criteria is pairwise decomposable.
In a pairwise decomposable family of criteria, elementary references
may be split into pairs of mutually exclusive elementary references.
Adding to these shared references the exclusive part of each elemen-
tary reference, we obtain again a complete semiotical reference system,
i.e. a mutually exclusive and exhaustive reference class.
Proposition 5. Let R_F be the reference family associated with a pairwise
decomposable exhaustive family of criteria F. Let R_ii, ∀i = 1...n,
represent the exclusive parts of each elementary reference R_i ∈ R_F, i.e.
R_ii = R_i − (⋃_{j≠i} R_ij). Then the pairwise decomposed reference class
R² = {R_ij : i, j = 1...n} renders a complete semiotical reference system.
Proof. Indeed, R² constitutes a partition covering completely all given
elementary references:

    ⋃_{i=1}^n ⋃_{j=1}^n R_ij = ⋃_{i=1}^n R_i,    (2)
    ∀ i ≠ j : R_ii ∩ R_jj = ∅,    (3)
    ∀ (i, j) ≠ (k, l) : R_ij ∩ R_kl = ∅.    (4)
                                                □
We may now evaluate the pairwise decomposed weight distribution
supported by the new complete reference system R2.
Definition 17. Let R_i and R_j be two different elementary references
from R_F supporting respectively w(R_i) and w(R_j) amount of truth
assessment knowledge concerning assertion p. The conditional weight
coefficient

    k_{j|i} = w(R_ij) / w(R_i)

captures formally the overlapping of reference R_j with respect to reference
R_i.
Knowing thus the overlapping part between two elementary refer-
ences, we are able to compute the amount of truth assessment knowledge
shared between them. It is important to notice, following Proposition 5,
that such a decomposition of the elementary semiotical references returns
in fact an exhaustive and mutually exclusive reference class. Therefore
we are able to compute a relative weight distribution on the pairwise
decomposed elementary references.
Definition 18. Let F be a pairwise decomposable exhaustive family
of criteria and let R2 (see Proposition 5) represent the set of pairwise
decomposed elementary references. We denote kij, i < j, the relative
weight associated with a shared semiotical reference Rij and kii, i =
1..n, the relative weight associated with Rii, the exclusive part of each
elementary reference Ri. Let w(R2) represent the global amount of truth
assessment knowledge supported by the complete system R2. Formally,
∀i, j = 1..n and i ≤ j:

$$k_{ii} = \frac{w(R_i) - \sum_{k \neq i} w(R_{ik})}{w(R^2)} \qquad (5)$$

$$k_{ij} = \frac{w(R_{ij})}{w(R^2)} \qquad (6)$$
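To make Definitions 17 and 18 concrete, the short Python sketch below (ours, purely illustrative and not part of the author's own software) recomputes the pairwise decomposed weights for the grading example used in this chapter: weights 0.29, 0.29 and 0.42 for linear algebra, calculus and statistics, with 50% of the statistics reference shared with the calculus reference. All function and variable names are hypothetical.

```python
# Illustrative sketch of Definitions 17-18 (assumed reading of the formulas).
def pairwise_decomposition(weights, overlaps):
    """weights: {criterion: w(Ri)}; overlaps: {(i, j): w(Rij)} with i != j."""
    # Each shared part w(Rij) is counted twice in sum(weights),
    # so the global knowledge w(R^2) subtracts it once.
    total = sum(weights.values()) - sum(overlaps.values())
    k = {}
    for i, w_i in weights.items():
        shared_i = sum(w for pair, w in overlaps.items() if i in pair)
        k[(i, i)] = (w_i - shared_i) / total          # exclusive part, Eq. (5)
    for pair, w_ij in overlaps.items():
        k[pair] = w_ij / total                        # shared part, Eq. (6)
    return k

weights = {"la": 0.29, "ca": 0.29, "st": 0.42}
overlaps = {("ca", "st"): 0.5 * 0.42}                 # 50% of the statistics weight
for key, value in sorted(pairwise_decomposition(weights, overlaps).items()):
    print(key, round(value, 3))
# ('ca','ca') 0.101, ('ca','st') 0.266, ('la','la') 0.367, ('st','st') 0.266,
# i.e. (up to rounding) the relative weights reported in Table 3 below.
```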
In Table 3 we show the corresponding decomposition for the three
subjects underlying the evaluation of the statistics students under the
hypothesis that the calculus reference presents a 50% overlap with re-
spect to the statistics reference. The marginal distributions ki. and k.j shown

Table 3. Example of relative pairwise decomposed truth assessment weights

topics      la       ca       st       Σ
ki          0.29     0.29     0.42     1.00

kij         la       ca       st       ki.
la          0.37     0        0        0.37
ca                   0.10     0.265    0.365
st                            0.265    0.265
k.j         0.37     0.10     0.53     1.00

in Table 3 allow two different semiotical interpretations of the pairwise
decomposition of the elementary references: the first more statistics
oriented and the second more general-mathematics oriented. A more statistics
oriented assessor could, on the one hand, adopt the weights k.la = 0.37,
k.ca = 0.10 and k.st = 0.53, with the consequence that the statistics
results would prevail in the global ranking. The math oriented assessor,
on the other hand, could adopt the other limit weights, i.e. kla. = 0.37,
kca. = 0.365 and kst. = 0.265, and thereby stress even more the ranking
shown by both mathematics subjects.
Such ambiguous interpretations thus appear as a sure sign of par-
tial redundancy between criteria. In order to stay faithful to
our decision aid methodology, we will promote, in the absence of other
relevant information, a neutral interpretation, situated in the middle
between both extreme ones. To do so, we first extend the credibility
calculus to pairwise shared references.
Definition 19. Let p represent an outranking assertion associated with
a reference family RF containing a set of pairwise decomposable elemen-
tary references. If Rij ∈ RF represents a shared reference between elemen-
tary references Ri and Rj, the degree of credibility r(p|Rij) of assertion
p|Rij is given as follows:

$$r(p|R_{ij}) = \frac{r(p|R_i) + r(p|R_j)}{2}$$

If both elementary references give unanimous results, either zero or
one, the resulting credibility will be the same as the credibilities of the
underlying elementary references. If they disagree, the degree of cred-
ibility of their shared reference part will be put to 1/2, i.e. the logically
undetermined value.

Table 4. Outranking index from pairwise decomposable family of criteria

r(x S y)    a      b      c
a           1.0    0.4    0.4
b           0.6    1.0    0.4
c           0.6    0.6    1.0

Now we may reformulate the general definition of the concordance
for a universal outranking assertion based on a pairwise decomposable
family of criteria.
Definition 20. Let p represent an outranking assertion evaluated on a
pairwise decomposable and exhaustive family of criteria F. The corre-
sponding pairwise decomposed elementary references are associated with
a relative weight distribution kij.
The credibility r(p) of assertion p is computed as follows:

$$r(p) = \sum_{i}\left(k_{ii}\cdot r(p|R_i)\right) + \sum_{i<j}\left(k_{ij}\cdot r(p|R_{ij})\right)$$

On the exclusive parts of the elementary references Ri, we keep the


standard two-valued credibility denotation, as introduced in Definition
13. On the shared references however, we take the mean of both elemen-
tary credibilities, as formulated in Definition 19.
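As a hedged illustration of Definitions 19 and 20 (again our own sketch, using the decomposed weights rounded as in Table 3 and hypothetical single-criterion credibilities, not data from the original example), the extended concordance index can be computed as follows.

```python
# Hypothetical illustration of the extended concordance principle.
k = {("la", "la"): 0.367, ("ca", "ca"): 0.101,
     ("st", "st"): 0.266, ("ca", "st"): 0.266}        # rounded values of Table 3

def extended_concordance(k, r_elem):
    """k: pairwise decomposed weights; r_elem: {criterion: r(p|Ri)} in {0, 1}."""
    r = 0.0
    for (i, j), k_ij in k.items():
        if i == j:
            r += k_ij * r_elem[i]                      # exclusive part (Definition 13)
        else:
            r += k_ij * (r_elem[i] + r_elem[j]) / 2.0  # shared part (Definition 19)
    return r

# An assertion supported only by the statistics reference obtains about 0.4 ...
print(round(extended_concordance(k, {"la": 0, "ca": 0, "st": 1}), 2))   # 0.4
# ... while an assertion supported only by the two mathematics references
# obtains about 0.6, the shared calculus/statistics part counting half each way.
```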
Reconsidering the ranking of the statistics students, we may notice in
Table 4 that the original global outranking shown by the mathematical
tests has been stressed slightly more than before (see Table 2). This
result, perhaps disappointing for the statistics oriented assessor, is nevertheless
'logical', as we consider that a large part of the calculus test is now
in fact interpreted as a statistical test. Let us close this section by
showing that our extension is a compatible extension of the classical
concordance principle.
Proposition 6. The extended concordance principle of Definitions 19
and 20 is identical to the classic concordance principle (see Definition
5) if no overlapping is observed, i.e. if Ri ∩ Rj = ∅, ∀i, j = 1..n.

Proof. Indeed, in this case, the exclusive part Rii becomes identical to
the original elementary reference Ri and we recover completely Defini-
tions 13 and 14 of the concordance principle. □
Finally, we may notice that this extension of the concordance princi-
ple to pairwise decomposable families of criteria still requires an exhaus-
tive performance tableau. There exist, however, quite commonly decision
problems where not all decision actions have been evaluated on all cri-
teria (see [4, 5]). The classical concordance principle does not admit
such missing evaluations. We now present a compatible extension for
handling such situations.

4.2. Incomplete performance tableaux


We have extensively published our approach to incomplete perfor-
mance tableaux (see Bisdorff [5, 8]), so that we only briefly sketch
this topic in the sequel.
Our idea is that, in the limit, if two decision actions a, b ∈ A have not
both been evaluated on a given criterion function g ∈ F, the credibility
r(a Sg b) given to the outranking assertion 'a S b' must take the logically
undetermined value 1/2.
Now, the more a decision action is missing common evaluations with
all the others, the more its universal outranking with respect to all the
others tends towards a credibility of 1/2. Formally, we adjust Def-
inition 13, giving the degree of credibility of an outranking situation
observed on a single criterion, as follows.
Definition 21. Let 'a S b' represent an outranking assertion evaluated
on a performance tableau involving a family F of criteria. For all g ∈ F,
the degree of credibility r(a Sg b) of the outranking situation restricted
to the semiotical reference of criterion g is defined as follows:

$$r(a\,S_g\,b) = \begin{cases} 1 & \text{if } (g(a), g(b)) \text{ confirms assertion } 'a\,S_g\,b', \\ 1/2 & \text{if } g(a) \text{ and/or } g(b) \text{ is undefined}, \\ 0 & \text{otherwise.} \end{cases}$$

Assertion 'a Sg b' is warranted to be:
true if r(a Sg b) = 1,
undetermined if r(a Sg b) = 1/2,
false if r(a Sg b) = 0.
With this extension of the concordance principle, we weight the uni-
versal outranking index r(a S b) with the relative frequency of common
evaluations, and we add half of the relatively missing evaluations as
confirming and the other half as not confirming the given assertion.
This technique allows us to take into account at the same time spo-
radic missing evaluations, but also completely missing criteria. In the
latter case, all decision actions will compare on a missing criterion with
a credibility of 1/2, i.e. the logical denotation of the outranking assertion,
restricted to the missing criterion, will be undetermined for all couples
of decision actions.
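A minimal sketch of how Definition 21 can be operationalised is given below; it is our own illustration (not the author's implementation), and it simplifies the condition '(g(a), g(b)) confirms the assertion' to g(a) ≥ g(b) for a criterion to be maximised, with missing evaluations encoded as None.

```python
# Illustrative sketch of Definition 21 with missing evaluations (None).
def r_single(g_a, g_b):
    """r(a S_g b) on one 'higher is better' criterion g (simplifying assumption)."""
    if g_a is None or g_b is None:
        return 0.5                      # logically undetermined
    return 1.0 if g_a >= g_b else 0.0

def outranking_index(eval_a, eval_b, weights):
    """Weighted concordance over the family of criteria."""
    total = sum(weights.values())
    return sum(w * r_single(eval_a[g], eval_b[g])
               for g, w in weights.items()) / total

a = {"g1": 14, "g2": None, "g3": 11}    # a has no evaluation on g2
b = {"g1": 12, "g2": 16, "g3": 13}
print(round(outranking_index(a, b, {"g1": 0.5, "g2": 0.3, "g3": 0.2}), 2))  # 0.65
```

The missing evaluation on g2 thus contributes half of its weight in favour of, and half against, the assertion, as described above.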

5. Conclusion
In this chapter, we have investigated the logical and semiotical foun-
dation of the concordance principle, i.e. the logical approach to multi-
criteria preference aggregation promoted since 1970 by Bernard Roy.
We first showed the denotational isomorphism which exists between
this concordance principle and the procedure of balancing reasons
promoted by C.S. Peirce. This result illustrates the split truth versus
falseness denotation installed by the concordance principle.
Relying furthermore on the Peircian distinction between credi-
bility and state of belief concerning a preferential assertion, we propose a
semiotical foundation for the numerical determination of the significance,
i.e. the truth assessment knowledge, carried by the family of criteria.
This approach makes apparent the semiotical requirements guarantee-
ing the coherence of a family of criteria.
Finally, based on these semiotical requirements, we propose an exten-
sion of the concordance principle in order to support pairwise overlapping
criteria and/or incomplete performance tableaux.

References
[1] R. Bisdorff and M. Roubens (1996), On defining fuzzy kernels from C-valued
simple graphs, in: Proceedings Information Processing and Management of Un-
certainty, IPMU'96, Granada, 593-599.
[2] R. Bisdorff and M. Roubens (1996), On defining and computing fuzzy kernels
from C-valued simple graphs, in: Da Ruan et al. (eds.), Intelligent Systems
and Soft Computing for Nuclear Science and Industry, FLINS'96 workshop, pp
113-123, World Scientific Publishers, Singapore.
[3] R. Bisdorff (1997), On computing kernels from C-valued simple graphs. In Pro-
ceedings 5th European Congress on Intelligent Techniques and Soft Computing
EUFIT'97, vol. 197-103, Aachen.
[4] R. Bisdorff (1998), Bi-pole ranking from pairwise comparisons by using initial
and terminal L-valued kernels. In Proceedings of the conference IPMU'98, pp
318-323, Editions E.D.K., Paris.
[5] R. Bisdorff (1999), Bi-pole ranking from pairwise fuzzy outranking. Belgian Jour-
nal of Operations Research, Statistics and Computer Science 37(4) 53-70.
[6] R. Bisdorff (2000), Logical foundation of fuzzy preferential systems with applica-
tion to the Electre decision aid methods. Computers & Operations Research 27
pp 673-687.
[7] R. Bisdorff (2001), Semiotical Foundation of Multicriteria Preference Aggregation.
In E.P. Klement and M. Roubens (eds), Abstracts of the 22nd Linz Seminar on
Fuzzy Set Theory, pp 61-63, Universitätsdirektion, Johannes Kepler Universität,
Linz, Austria.
[8] R. Bisdorff (2001), Electre like clustering from a pairwise fuzzy proximity index,
European Journal of Operational Research, (to appear).
[9] D. Bouyssou and M. Pirlot (1999), Non-transitive decomposable conjoint mea-
surement. In N. Meskens and M. Roubens, Advances in Decision Analysis,
Kluwer, Dordrecht
[10] D. Dubois, H. Fargier, P. Perny and H. Prade (2001), On Concordance Rules
Based on Non-Additive measures: An Axiomatic Approach. In E.P. Klement and
M. Roubens (eds), Abstracts of the 22nd Linz Seminar on Fuzzy Set Theory, pp
44-47, Universitätsdirektion, Johannes Kepler Universität, Linz, Austria.
[11] U. Eco (1999), Kant et l'ornithorynque, Grasset, Paris.
[12] H. Fargier and P. Perny (2001), Modélisation des préférences par une règle de
concordance généralisée. In M. Parruccini, A. Colorni and B. Roy (eds), Selected
Papers from the 49th and 50th meetings of the EURO Working Group on MCDA,
European Union, forthcoming.
[13] J. Fodor and M. Roubens (1994), Fuzzy preference modelling and multi-criteria
decision support, Kluwer Academic Publishers, Dordrecht.
[14] J.-L. Marichal (1999), Aggregation operators for multicriteria decision aid, PhD
Thesis, University of Liege, Belgium.
[15] V. Mousseau (1995), Eliciting information concerning the relative importance
of criteria. In P.M. Pardalos, Y. Siskos and C. Zopounidis (eds.), Advances in
Multicriteria Decision Aid. Kluwer, Nonconvex Optimization and its Application,
vol. 5 pp 17-43.
[16] C. S. Peirce (1878), The probability of induction, Popular Science Monthly, in
J. Buchler (ed.), Philosophical writings of Peirce, Dover, New York, 1955.
[17] B. Roy (1985), Méthodologie Multicritère d'Aide à la Décision, Economica, Paris.
[18] B. Roy and D. Bouyssou (1993), Aide Multicritère à la Décision, Economica,
Paris.
[19] B. Roy and V. Mousseau (1996), A theoretical framework for analysing the no-
tion of relative importance of criteria. Journal of Multicriteria Decision Analysis,
Wiley, vol. 5 pp 145-159.
V

APPLICATIONS
OF MULTI-CRITERIA DECISION-AIDING
A STUDY OF THE INTERACTIONS
BETWEEN THE ENERGY SYSTEM
AND THE ECONOMY USING TRIMAP

Carlos Henggeler Antunes


INESC and University of Coimbra, Portugal
cantunes@pombo.inescc.pt

Carla Oliveira
INESC, Coimbra, Portugal
coliv@pombo.inescc.pt

Joao Climaco
INESC and University of Coimbra, Portugal
jclimaco@pombo.inescc.pt

Abstract In this paper a study is presented which is aimed at modelling the inter-
actions of the energy system with the economy on a national level. For
this purpose a multiple objective linear programming (MOLP) model
is developed, which is based on an input-output (10) table where the
energy sectors as well as the relevant economic activity sectors are dis-
aggregated. The results provided by the MOLP model with a set of
data representative of the Portuguese situation are analyzed using the
TRIMAP interactive environment.

Keywords: Energy system; Input-output analysis; Multiple objective linear pro-


gramming; TRIMAP

1. Introduction
The problem of energy dependence is a crucial issue, in the case of
Portugal, since, except for some renewable resources (such as hydro,
wind, solar and biomass) and some industrial by-products, all types of
primary energy are imported. In fact fossil fuel imports account for more

than 70% of the total primary energy. Therefore it is very important


to provide planners and decision makers with well-founded information
concerning the problem of economic development constrained by limited
energy resources.
Input-output analysis has proved to be a very useful tool for planning
purposes, enabling the interactions among the different sectors of an
economy to be taken into account. An input-output table disaggregates
an economic system into a number of sectors, each of which produces a
particular type of output, with the output structure assumed to be fixed
and no substitution between the outputs of the different sectors. In this
study it is used to model the interactions between the economy and the
energy sector on a national level, in order to determine the amount of
fuels needed by the producing sectors (that is, in intermediate consumption)
or directly in final demand. Therefore, the amount of primary energy
to produce a good or service can be computed. In the framework of
input-output analysis, the use of fossil fuels is then associated with the
activity level of each sector to compute the resulting amount of emissions
of atmospheric pollutants.
In modern technologically developed societies, strategic decisions must
be made in an increasingly complex and turbulent environment, char-
acterized by a fast pace of technological evolution, changes in market
structures and new societal concerns. These problems inherently in-
volve multiple, conflicting, incommensurate aspects of evaluation of the
merits of alternative policies. Therefore, mathematical models for deci-
sion support become more representative of the actual decision context
if the distinct evaluation aspects are explicitly taken into account. So-
cial, environmental, technical and economic aspects must be explicitly
considered in mathematical models rather than aggregated in a single
economic indicator. However, the relevance of a multiple objective ap-
proach goes beyond the "realism argument" and it has intrinsically a
value-added role in the modelling process and in model analysis, sup-
porting reflection and creativity in face of a larger universe of potential
solutions (see also Roy, 1990, and Bouyssou, 1993).
Multiple objective models enable planning bodies and decision mak-
ers (DM) to rationalize the comparisons among different alternative solu-
tions, providing them with a better perception of the conflicting aspects
under evaluation and the ability to grasp the nature of trade-offs to
be made in the process of selecting satisfactory solutions. The DM's
preference structure is then of particular relevance, understood as the
construct on which the DM leans for evaluating and selecting a sat-
isfactory plan from the set of nondominated solutions.

Objective functions consistent with economic growth, citizens' well-


being and energy conservation have been considered in the model. Con-
straints refer to balance of payments, gross added value, production
capacity, bounds on imports and exports, public deficit, pollutant emis-
sions, self-production of electricity, storage capacity and security stocks
for hydrocarbons. The MOLP problem considered in this study is based
on a static input-output model, with actual data from Portugal.
The interest and motivation of the study have been provided in this
introduction. In section 2, the TRIMAP interactive environment is de-
scribed. A MOLP model for energy planning based on input-output
analysis is presented in detail in section 3. Some illustrative results are
presented in section 4, which have been obtained by using TRIMAP
to provide decision support to a hypothetical DM in establishing ade-
quate energy-economy policies, based on the input-output MOLP model
supplied with actual data. Finally, conclusions are drawn in section 5.

2. The TRIMAP interactive environment


The development of TRIMAP interactive environment laid its founda-
tions on the conclusions drawn from studies in power generation expan-
sion planning regarding the difficulties associated with both generating
methods (computational and cognitive burdens) and some interactive
methods (inability to perform a strategic search and the requirement of
dichotomic responses before any prior knowledge about the shape of the
nondominated region is available).
TRIMAP is aimed at assisting the DM in a selective and progressive
search of nondominated solutions, by using a user-friendly human-
computer interface environment. Using Roy's words (Roy, 1987), in
TRIMAP the interactive process is a constructive process rather than
discovering the optimum of any pre-existing implicit utility function,
where convergence is replaced by "creation". TRIMAP aims to support
the DM in the learning of the characteristics of the nondominated region,
helping him/her to identify and progress towards the solution or set of
solutions which more closely correspond to his/her preferences. This is
done in a rather natural manner, promoting a selective and progressive
search based on a free exploitation, allowing backtracking, and avoiding
the exhaustive search of the nondominated region. A key issue is sup-
porting the DM in reducing the scope of the search and focussing on
the sub-region(s) of the nondominated region which better suit his/her
expectations, which are clarified and shaped throughout the process.
This approach generally begins with a strategic search which serves
essentially to progressively eliminate the regions which reveal no inter-

est and, in a second phase, to search in more detail the regions which
the initial screening revealed as potentially interesting. In this way the
interactive environment contributes to stimulate the DM to structure
and stabilize his/her own preferences by bringing him/her arguments
able to strengthen or weaken his/her own convictions (Roy, 1987, 1990).
TRIMAP is designed for problems with three objective functions, which
allows for the use of graphical tools that are suited for an open communi-
cation scheme, capitalizing on the DM's main strengths, namely pattern
recognition through visual inspection. A block diagram of TRIMAP is
depicted in figure 1.
TRIMAP combines three main procedures: weight space decompo-
sition, introduction of constraints on the objective function space, and
introduction of constraints on the weight space. Moreover, the introduc-
tion of constraints on the objective function values is translated into the
weight space.
The dialogue with the DM is made mainly in terms of the objective
function values, which is the type of information that does not place
an excessive burden on the DM. In general, the weight space is used in
TRIMAP as a valuable means for collecting and presenting the infor-
mation, and not as a way for eliciting the preferences of the DM. The
weight space is filled with the indifference regions corresponding to the
(basic) nondominated solutions already computed. Direct limitations
introduced on the variation of the weights or inferior bounds on the
objective values translated into the weight space are also displayed. A
2-dimensional projection of the objective function space shows all non-
dominated solutions already computed and also enables the nondominated
edges and faces to be identified. By comparing the two graphs (weight
space and projection of the objective function space) it is possible, by
simple visual inspection, to avoid the search of sub-regions of the weight
space which are of no interest.
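The elementary operation behind such a weight-space based exploration can be sketched as follows: for any strictly positive weight vector, optimizing the corresponding weighted sum of the objectives yields a nondominated basic solution. The snippet below is a toy illustration with invented data (it is not the energy-economy model of Section 3), using the scipy linear programming solver; all names and values are ours.

```python
# Toy sketch: one nondominated basic solution per weight vector,
# obtained by weighted-sum scalarization of a tricriteria LP.
import numpy as np
from scipy.optimize import linprog

C = np.array([[3.0, 1.0],     # three objectives to maximize: f_k(x) = C[k] @ x
              [1.0, 2.0],
              [1.0, 3.0]])
A_ub = np.array([[1.0, 1.0],  # x1 + x2 <= 10
                 [1.0, 0.0],  # x1 <= 6
                 [0.0, 1.0]]) # x2 <= 8
b_ub = np.array([10.0, 6.0, 8.0])

def weighted_sum_solution(weights):
    """Strictly positive weights summing to one yield a nondominated solution."""
    w = np.asarray(weights)
    res = linprog(c=-(w @ C), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * 2, method="highs")
    return res.x, C @ res.x

for w in [(0.8, 0.1, 0.1), (0.1, 0.8, 0.1), (0.1, 0.1, 0.8)]:
    x, f = weighted_sum_solution(w)
    print(w, x.round(2), f.round(2))
```

In TRIMAP, as described above, each basic solution found in this way is associated with the indifference region of weight vectors for which it remains optimal, and these regions are displayed on the weight space.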
In each step of the Human-computer dialogue (when a new phase of
computation is being prepared) the DM needs only to express some indi-
cations about the options to be considered in the progressive search for
nondominated solutions (region of the weight space to search, bounds
on the objective function values, etc.). The interactive process con-
tinues until the DM has gathered "sufficient knowledge" about the set
of nondominated solutions, not requiring irrevocable intermediate deci-
sions, until a final satisfying solution (or a set of solutions for further
screening) is made explicit in face of the DM's evolutionary preferences
(Climaco and Antunes, 1987 and 1989).

Figure 1. TRIMAP block diagram

3. A Multiple Objective Model for Energy


Planning based on Input-Output Analysis
The classical approach of Leontieff is based on the construction of an
input-output table which represents the economic flows and is structured
in a way that provides a systematic view of all activities in a country

or region (Leontieff, 1951). An input-output table disaggregates an eco-


nomic system into a number of sectors, each of which produces a par-
ticular type of output, with the output structure assumed to be fixed,
and no substitution between the outputs of the different sectors. The
inputs into each sector are assumed to be simple proportions of the level
of output of that sector, and the total effect of producing in several sec-
tors is the sum of separate effects (external economies or diseconomies
do not operate). This formalization of the economy has been used as a
valuable tool in national accounts as well as in other domains, namely in
macro-economics. Therefore input-output analysis is of great interest in
the formulation of economic planning models, providing a detailed view
of macro-economic aggregates and economic flows.
Macro-economic policy is generally a resource allocation problem, in
which limited resources must be allocated to several inter-related eco-
nomic activities, in order to achieve specific goals consistent with con-
cerns of distinct nature, not only economical but also environmental and
social. Linear programming has been coupled with input-output analy-
sis to develop models for providing recommendations in macroeconomic
policy problems (Dorfman et al., 1987).
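As a brief, hedged reminder of the quantity model that underlies this coupling (a generic textbook sketch with invented three-sector data, not the Portuguese table used later in the chapter), total output is obtained from final demand through the Leontief inverse:

```python
# Generic (static, open) Leontief quantity model: x = A x + y  =>  x = (I - A)^(-1) y
import numpy as np

A = np.array([[0.10, 0.05, 0.02],   # technical coefficients a_ij: input of sector i
              [0.20, 0.15, 0.10],   # per unit of output of sector j (invented data)
              [0.05, 0.10, 0.08]])
y = np.array([100.0, 50.0, 80.0])   # final demand by sector (invented data)

x = np.linalg.solve(np.eye(3) - A, y)          # total sector outputs
energy_per_unit = np.array([0.30, 0.05, 0.10]) # e.g. toe per unit of output (invented)
print(x.round(1), round(float(energy_per_unit @ x), 1))
```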
The application of IO analysis to the energy system focuses on the
primary energy requirements of production and consumption in an econ-
omy, enabling the embodied energy required to manufacture a good or
service to be evaluated. The use of fossil fuels is associated with the level of activ-
ity (output) in each sector. This analytical structure is then extended
to account for emissions of air pollutants resulting from the burning of
fossil fuels. Since this analysis already incorporates the requirements of
primary energy for the economic activities, the carbon content of a fuel,
and hence its carbon dioxide production, is linked to its calorific value.
By using these coefficients (the amount of carbon dioxide produced per
unit of fuel consumed), total emissions from each sector and from the
economy as a whole may be determined.
The input-output table used in this model disaggregates the energy
sector components, allowing the distinction between primary and sec-
ondary energy sources. In this matrix, energy flows are considered in
physical units of energy (tons of oil equivalent - toes) and the remain-
ing flows are considered in monetary units (Portuguese escudos - Pte)
(figure 2).
The table organizes the economy in 44 activity sectors (21 economic
sectors and 23 artificial sectors used for distributing the output of the
oil refining sector and the by-products through the consuming sectors),
consisting of: a (44x44) matrix which represents the inter- and intra-
sector flows, 6 column vectors with the components of final demand
Figure 2. Input-Output coefficients matrix

(private consumption, collective consumption, gross fixed capital forma-


tion, positive and negative stock changes and exports), 1 column vector
for the competitive imports (imports which have endogenous equivalent)
and 3 row vectors for the primary inputs (wages, net indirect taxes, op-
erating surplus). The sectors are classified as follows: energy sectors,
hydrocarbons, by-products and industrial sectors in broad sense. The
total output of each sector is represented by a decision variable. The
technical coefficients matrix is obtained from the transactions table of a
given year taken as the basis of the study (1995 in our case, since this is
the most recent year for which national statistical information suitable
for our model is available). This matrix shows the relationships among
the different sectors of the economy which are used to define coherence
constraints of the MOLP model.
The oil sector has several outputs which possess common input struc-
tures. Therefore a separate cost allocation is not possible within this
framework. In order to make the distribution of the several outputs of
the oil sector through all the activity sectors, its production becomes an
input of an artificial sector, which does not represent an actual produc-
tion but is only created to enable the distribution of hydrocarbons.
Another issue refers to the production and consumption of by-products
not being taken into account in the national transactions matrix. How-

ever, in the tables of self-production of electricity available in the energy


statistics, the consumption and the value of production of these by-
products are considered. In this sense, it is possible to take into account
the value of production of these by-products which is then added to the
value of the total output of its producing sector, and artificial sectors are
created to allow their distribution through all the economic activities.
The notation used in figure 2 is developed below, where u.e and u.m
denote energy units and monetary units, respectively.
The energy sectors and the industrial sectors are broadly classified as
follows (see figure 2):
-E, energy sectors (coal, oil, electricity, city gas and self-production of
electricity);
-R, by-products used in self-production of electricity (incondensable
gases, hydrogen, black liquors, other by-products, pitch, coke oven gas,
coke gas and biogas);
-P, hydrocarbons (crude oil, shale oil, propylene, LPG, gasoline, pe-
troleum, jets, diesel oil, fuel oil, naphtha, lubricants, bitumen, paraffin,
solvents and petroleum coke);
-I, industrial sectors in the broad sense (Industry and Services).
-A, technical coefficients matrix for the base year.
-A, technical coefficients matrix for the base year.
-AEE, input-output coefficients describing sales of the output of one
energy supply sector to another energy supply sector (e.g., coal con-
sumption in electricity production).
-AER, input-output coefficients describing the contribution of the en-
ergy sectors for the production of by-products (e.g., amount of incon-
densable gases produced in an oil refinery).
-AEP, input-output coefficients which allow the disaggregation of hy-
drocarbons, enabling their distribution through all sectors of the econ-
omy.
-AEI, input-output coefficients describing the sales of the output of
the energy supply sectors to the industrial sectors (e.g., electricity con-
sumption in the services sector). Notice that hydrocarbon sales are not
taken into account in this matrix.
-ARE, input-output coefficients describing the consumption of by-
products in the energy supply sectors (e.g., the consumption of incon-
densable gases in self-production of electricity).
-ARI, input-output coefficients describing the consumption of by-
products in the industrial sectors (e.g., the consumption of black liquors
in the production of steam).
-ApE, input-output coefficients describing the distribution of hydro-
carbons through the energy supply sectors (e.g., the consumption of fuel
oil in thermo-electricity).

-API, input-output coefficients describing the distribution of hydro-


carbons through the industrial sectors (e.g., the consumption of propy-
lene in the chemical industry).
-AIE, input-output coefficients describing the sales of the output of
the industrial sectors to the energy supply sectors (e.g., consumption of
paper in the electricity sector).
-AIR, input-output coefficients describing the contribution of the in-
dustrial sectors for the production of by-products (e.g., production of
incondensable gases by the chemical industry).
-All, input-output coefficients describing sales of the output of one
industrial sector to another industrial sector (e.g., consumption of agri-
culture products in the chemical industry).
-p is the private consumption vector for the base year.
-pc is the computed value for the private consumption.
-apc is the coefficients vector of the private consumption for the base
year (ratio between the consumption of a good and the total private
consumption of the base year).
-apcE is the coefficients vector of the energy private consumption.
-apcP is the coefficients vector of the hydrocarbons private consump-
tion.
-apcI is the coefficients vector of the industrial private consumption.
-cc* is the vector of the collective consumption.
-CCI* is the vector of the collective consumption of non mercantile
goods (of public administration, education and research, health, etc.).
-gcff* is the vector of gross fixed capital formation.
-gcffI* is the vector of gross fixed capital formation of the industrial
sectors.
-sc is the vector of stock changes.
-SCE is the vector of stock changes in the energy sector.
-SCp is the vector of stock changes in the hydrocarbon sector.
-SCI is the vector of stock changes in the industrial sector.
-exp is the vector of exports.
-eXPE is the vector of energy exports.
-expp is the vector of hydrocarbon exports.
-exPI is the vector of industrial exports.
-mc is the vector of competitive imports.
-mcE is the vector of competitive imports of energy.
-mcI is the vector of competitive imports of industrial products.
-x is the vector of total output for the economic activity sectors.
-y is the vector of final demand.
The symbol * denotes an exogenous component of the model.
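Following the block descriptions in the list above, and only as an illustrative sketch (toy sector counts and random placeholder coefficients, not the actual Portuguese coefficients), the full technical coefficients matrix can be assembled block-wise; the zero blocks reflect flows that, according to the descriptions, do not occur, such as the distribution of hydrocarbons outside the energy and industrial sectors.

```python
# Toy sketch of the block structure of the technical coefficients matrix A.
import numpy as np

rng = np.random.default_rng(1)
nE, nR, nP, nI = 2, 2, 3, 4                           # toy sector counts
blk = lambda rows, cols: rng.uniform(0.0, 0.1, (rows, cols))
zero = lambda rows, cols: np.zeros((rows, cols))

A = np.block([
    [blk(nE, nE), blk(nE, nR), blk(nE, nP), blk(nE, nI)],    # A_EE A_ER A_EP A_EI
    [blk(nR, nE), zero(nR, nR), zero(nR, nP), blk(nR, nI)],  # A_RE  0    0   A_RI
    [blk(nP, nE), zero(nP, nR), zero(nP, nP), blk(nP, nI)],  # A_PE  0    0   A_PI
    [blk(nI, nE), blk(nI, nR), zero(nI, nP), blk(nI, nI)],   # A_IE A_IR  0   A_II
])
print(A.shape)                                        # (11, 11)
```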

3.1. Objective functions


Employment - Emp. The level of employment, perceived as a
measure of social well-being, must be maximized.

$$\max f_1 = a'_{emp}\, x$$

in which a'emp is the vector of the number of employees (per unit output
of each sector).

Private Consumption - pc. The private consumption in a country


is given by the residents' total consumption plus the consumption made
in the country by non-residents less the consumption of residents made
outside the country. The residents' total consumption is considered to
be linearly dependent on the gross available household income:

$$pc = \frac{1-k_2}{1-k_1}\left[\frac{k_3}{1-k_5}\,(1-t_f)\,\left(a'_w x + k_4\, a'_o x\right) - k_3\, s_f\, a'_w x\right]$$

in which k1 is a constant (non-residents' consumption in the country/total
private consumption), k2 is a constant (residents' consumption out of the
country/total residents' consumption), k3 is the average propensity to
consume (household consumption/gross household disposable income),
tf is the average household income tax rate and sf is the social contri-
bution rate.
The objective function is

$$\max f_2 = pc$$
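For illustration only, the private consumption expression can be evaluated numerically as in the sketch below; every constant and coefficient is an invented placeholder (not Portuguese data), and the formula follows our reading of the reconstructed expression above.

```python
# Numeric sketch of the private consumption expression (placeholder data).
import numpy as np

def private_consumption(x, a_w, a_o, k1, k2, k3, k4, k5, t_f, s_f):
    wages = a_w @ x                   # a'_w x
    op_surplus = a_o @ x              # a'_o x
    residents = k3 * ((1 - t_f) * (wages + k4 * op_surplus) / (1 - k5)
                      - s_f * wages)  # residents' total consumption
    return (1 - k2) / (1 - k1) * residents

x = np.array([120.0, 80.0, 60.0])     # sector outputs (toy)
a_w = np.array([0.30, 0.25, 0.40])    # wage coefficients (toy)
a_o = np.array([0.20, 0.30, 0.15])    # operating surplus coefficients (toy)
print(round(private_consumption(x, a_w, a_o, k1=0.05, k2=0.04, k3=0.85,
                                k4=0.6, k5=0.2, t_f=0.15, s_f=0.11), 2))
```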

Energy Imports. Energy imports must be minimized, given the


energy dependence of the country:

$$\min f_3 = i'_3\, m_E$$

in which i3 is a vector of ones with convenient dimensions and mE is the
energy imports vector.

3.2. Model constraints

Model coherence constraints. The use of a specific good or service


(for intermediate consumption and final demand) cannot exceed the
resources available (resulting from national production and competitive
imports) of that good or service:

$$x + m^c - Ax - a_{pc}\, pc - sc^+ + sc^- - exp \geq cc^* + gcff^*$$
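As a purely illustrative aside (our own sketch, with toy dimensions and random placeholder coefficients), the coherence constraint can be written in the standard "A_ub z ≤ b_ub" form used by generic LP solvers, stacking the endogenous quantities into a single variable vector:

```python
# Toy sketch: the coherence constraint in A_ub z <= b_ub form,
# with z = (x, m_c, pc, sc_plus, sc_minus, exp) stacked (invented data).
import numpy as np

n = 3                                                  # toy number of sectors
rng = np.random.default_rng(0)
A = rng.uniform(0.0, 0.2, (n, n))                      # technical coefficients
a_pc = np.array([0.1, 0.3, 0.6])                       # private consumption coefficients
cc_star = np.array([5.0, 2.0, 3.0])
gcff_star = np.array([4.0, 6.0, 2.0])

I = np.eye(n)
# x + m_c - A x - a_pc*pc - sc_plus + sc_minus - exp >= cc* + gcff*
# <=>  -(I - A) x - m_c + a_pc*pc + sc_plus - sc_minus + exp <= -(cc* + gcff*)
A_ub = np.hstack([-(I - A), -I, a_pc.reshape(-1, 1), I, -I, I])
b_ub = -(cc_star + gcff_star)
print(A_ub.shape, b_ub.shape)                          # (3, 16) (3,)
```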



Balance of Payments. The Balance of Payments (bp) has three


main components: the Current Accounts (balance of trade, income ac-
counts and current transfer accounts), the Capital Accounts (capital
transfers of non-produced and non-financial goods) and the Financial
Accounts (international transactions that involve the exchange of finan-
cial assets or liabilities).
Interest rates determine the capital flows in the short run, which can
be observed in the Financial Accounts. The interest rate, from a national
DM's point of view, is an exogenous variable (monetary policies are
driven by the Eurosystem). Therefore, Financial Accounts have been
considered as exogenous. In fact, the balance of trade is the only en-
dogenous component of the Balance of Payments:

$$bp = pe'\, exp - pm'\, m^c + bp^*$$


in which pe is the vector of average prices for exported energy products
(per toe), with convenient dimensions (physical units are converted into
monetary units for energy goods); in a similar way, pm is the vector of
average prices for imported energy products (per toe), with convenient
dimensions; bp* is the exogenous component of the Balance of Payments.

A constraint on the balance of payments is imposed in order to guar-


antee a certain level of external equilibrium:

$$bp \geq lb_{bp}$$
in which lbbp is the lower bound for the balance of payments.

Gross Added Value - GAV (gav). The GAV is an important indi-


cator for economic planning purposes, enabling the resources generated
within the country to be quantified.

$$gav = a'_w x + a'_o x + a'_t x$$
in which aw, a o and at are coefficient vectors for wages, operating surplus
of each activity sector and net indirect taxes (linked to production),
respectively.

Gross Domestic Product - GDP (gdp). The performance of the


national economy is, in general, measured by its gross domestic product
(GDP).
The GDP can be determined according to three different definitions
(expense, product and income). The first two definitions (product and
income definitions are similar) are considered:

$$gdp = pc + i'_1\, cc^* + i'_2\, gcff^* + ps'(sc^+ - sc^-) + pe'\, exp - pm'\, m^c$$

in which ps, by analogy with pm and pe, is the vector of average prices
for stock changes of hydrocarbons (per toe) and i1 and i2 are vectors of
ones, both with convenient dimensions, and

in which atm is the vector of average tax rates for imports, and atg is
the average gross added value tax rate.
The intrinsic coherence of the model does not guarantee, however,
that the two definitions lead to the same results. Notice that the first
definition has two exogenous components, and that imports, exports
and stock changes are not connected to the model coefficients. In this
context, both have been explicitly considered.

Production capacity. The total production of each activity sector


has its production capacity as an upper bound, and a lower bound is
also imposed:

$$lb \leq x \leq ub$$
in which lb and ub are the vectors of lower bounds and upper bounds
for sector outputs, respectively.

Upper bounds on exports and imports. Since imports and ex-


ports are not connected to the model coefficients, upper limits for both
have been considered to prevent an over-specialization:

$$exp \leq ub_{exp}, \qquad m^c \leq ub_{mc}$$

in which ub exp and ubmc are the vectors of upper bounds for total
exports and imports, respectively.

Public deficit. According to European Union requirements the pub-


lic deficit (pd) shall not be higher than 3 % of GDP. The public deficit
(in accordance with public accounts) is obtained after subtracting the
public administration revenues (current revenues - direct and indirect
taxes, social security revenues and others; capital revenues - selling cap-
ital assets, capital transfers and others) to the public administration

expenditure (current expenditure - public consumption, effective inter-


ests, subsidies and others; capital expenditure - acquisition of capital
assets and capital transfers).
Considering that indirect taxes (linked to production) less subsidies,
direct taxes and social security revenues are the only endogenous items
of the public deficit, it can be written:

$$-\left[t_f\,\frac{1}{1-k_5}\left(a'_w x + k_4\, a'_o x\right) + t_e\, a'_o x + s_f\, a'_w x + a'_t x + a_1\, a'_{tm}\, m^c + a_2\, a_{tg}\, gav\right] + pd^* \leq 0.03\, gdp$$

in which k5 is a constant ([gross household income - (wages + entrepre-
neurial income)]/gross household income), k4 is the percentage of opera-

neurial income)]/gross household income), k4 is the percentage of opera-
tional surplus obtained by the families (the most significant components
of gross household income are wages and entrepreneurial income), te is
the average entrepreneurial income tax rate, al is the percentage of im-
port taxes of the country, a2 is the percentage of gross added value taxes
of the country and pd* is the exogenous component of the public deficit.

C02 emissions. The impact of energy resources on the environment


is measured through the emissions of CO 2 (carbon dioxide) resulting
from the burning of fossil fuels.
A methodology based on the principles of combustion and composi-
tion of fuels is used to quantify the potential C02 emissions from various
energy sources. After primary energy requirements necessary for each
economic activity have been computed, the carbon content and C02 gen-
erated by unit consumption of the fuel are related to the fuel's calorific
value. Since the carbon content can be linked to the energy value of
the fuel, the total pollutant emissions are calculated by multiplying the
amount of fuel burned (proportional to the level of activity) by emission
factors expressed as mass of pollutant per energy unit consumed.
In order to obtain more realistic results the top-down methodology of
IPCC has been followed. The estimation process can be divided into six
steps (IPCC, 1996): 1) estimate consumption of fuels by fuel type; 2)
convert the fuel data to TJ; 3) select carbon emission factors of each fuel
and estimate the total carbon content of fuels; 4) estimate the amount of
carbon stored in products (fuel used as a raw material or in non-energy
use); 5) account for carbon not oxidized during combustion (due to inef-
ficiencies in the combustion process); 6) convert emissions of carbon to
the full molecular weight of CO2. So, in a generic way, CO2 emissions may
be defined as:

$$C_t = f_o'\left[\,F\,C\,(A_E\, x + a_{epc}\, pc)\,10^{-3} - S\,F\,(N_E\, x + na_{epc}\, pc)\,\right]\frac{44}{12}$$

in which fo is the vector of fractions of carbon oxidized, F is the diago-
nal matrix of carbon emission factors (tC/TJ), C is the diagonal matrix
of conversion factors (toes into TJ),

$$A_E = \begin{bmatrix} A_{EE} & A_{ER} & A_{EP} & A_{EI} \\ A_{RE} & 0 & 0 & A_{RI} \\ A_{PE} & 0 & 0 & A_{PI} \end{bmatrix}$$

is a sub-matrix of A with the coefficients of the energy use of each sector
(toes/output unit), S is the diagonal matrix of stored carbon factors, NE
is the matrix of non-energy use coefficients for each sector (TJ/output
unit), aepc = [apcE  apcP]' is the vector of coefficients of private consump-
tion of energy, 44/12 is the molecular weight ratio of CO2 to carbon,
naepc is the vector of coefficients of non-energetic use of energy (TJ/total
consumption unit), and Ct denotes total emissions of CO2 (Gg).
A constraint on the level of CO2 emissions is imposed:

$$C_t \leq ub_{ct}$$

in which ubct is the upper bound for CO2 emissions.
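The six IPCC steps listed above can be illustrated, for a single hypothetical fuel, by the small sketch below; all numerical values are placeholders chosen for the example and are not the emission or conversion factors used in the model.

```python
# Illustrative walk through the six IPCC steps for one hypothetical fuel.
def co2_emissions_gg(consumption_toe, tj_per_toe, emission_factor_tc_per_tj,
                     stored_carbon_tc, fraction_oxidized):
    energy_tj = consumption_toe * tj_per_toe              # steps 1-2: fuel data in TJ
    carbon_tc = energy_tj * emission_factor_tc_per_tj     # step 3: carbon content
    net_carbon_tc = carbon_tc - stored_carbon_tc          # step 4: minus stored carbon
    oxidized_tc = net_carbon_tc * fraction_oxidized       # step 5: oxidized fraction
    return oxidized_tc * (44.0 / 12.0) / 1000.0           # step 6: Gg of CO2

print(round(co2_emissions_gg(consumption_toe=2.0e6, tj_per_toe=41.868e-3,
                             emission_factor_tc_per_tj=25.8,
                             stored_carbon_tc=5.0e4, fraction_oxidized=0.99), 1))
```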

Storage capacity and security stocks for hydrocarbons. These


constraints guarantee that positive stock changes never exceed the storage
capacity, also imposing that negative stock changes never fall below the se-
curity stocks:

$$ss^* - e^{0*} \leq sc_P^+ - sc_P^- \leq sc^* - e^{0*}$$

in which e0* is the vector of initial stocks, sc* is the vector of storage
capacity and ss* is the vector of security stocks (the last two vectors are
both exogenous).

4. Some illustrative results using TRIMAP


This multiple objective model for energy planning based on an input-
output table has been supplied with statistical data published by INE
(National Statistics Institute) and DGE (General Directorate for En-
ergy), for the base year 1995, as well as estimates for the year 2000
(Oliveira, 2000). The MOLP model has 141 decision variables, 201 con-
straints and 3 objective functions. Some illustrative results are herein
described.

The solution search process began with the computation of the non-
dominated solutions which optimize individually each objective function
(table 1). This information is useful to have a first overview of the range
of variation of the objective values within the nondominated region as
well as the main characteristics of (in principle) well dispersed solutions.
Figure 3 shows the weight space filled with indifference regions cor-
responding to 20 nondominated (basic) solutions. These solutions have
been computed in a selective and progressive way. The DM indicates the
region of the weight space where a new solution shall be computed. This
action results from the analysis of the information regarding the (basic)
nondominated solutions already known, whose indifference regions are
displayed.
For instance, the analysis of the objective function values of solutions
4, 9 and 10 may certainly lead the DM to opt for not computing new
solutions in the not yet searched region located between the indifference
regions corresponding to those solutions. In fact, the DM now knows
that for nondominated solution therein computed, the corresponding it
values would be better than in solution 9 but worst than in solution 4.
This selectivity of the search, which is enhanced by the interactive visual
information displayed on the weight space, contributes to minimize the
computational effort and the number of irrelevant solutions generated
during the exploitation of the problem. This type of assistance has
been particularly helpful in avoiding the search in the region between
solutions 2 and 5, in which a huge number of (basic) nondominated
solutions exists. Let us suppose that, as a result of the knowledge on the
nondominated solution set acquired throughout the interactive process,
the DM wants to impose further limitations (reservation levels) on the
objective functions. The additional constraints f1 ≥ 5 500 000 and
f3 ≤ 19 950 000 restrict the search to regions above solutions 3 and 13,
and below solutions 4 and 5, respectively. The result of this feature of
TRIMAP, which is particularly useful in order to reduce the scope
of the search (in a revocable manner) according to the indications of the
DM, is displayed in figure 4 (see Climaco and Antunes, 1987 and 1989
for technical details). New nondominated solutions computed by using
sets of weights within this region, which the initial screening revealed
as potentially interesting, are also displayed in figure 3. Let us consider
that after the computation of these 20 nondominated solutions the DM
finds them sufficiently representative of the universe of different policies.
Although several important economic indicators (GAV, GDP, etc.)
are computed in the framework of this model, allowing a detailed anal-
ysis of the characteristics of the solutions, the illustrative results herein

Figure 3. Weight Space Decomposition

briefly described will be limited to their most relevant aspects. However,


a further level of insights into the solutions is provided by the model.
The objective functions have been re-scaled in order to avoid the ef-
fects of their different orders of magnitude in the decomposition of the
weight space.
The objective function values of those nondominated solutions are
presented in table 1. The level of employment is given in number of
workers, private consumption in 106 Ptes, and energy imports are ex-
pressed in toes (tons of oil equivalent).
Three main regions can be distinguished in the weight space corre-
sponding to solutions with different characteristics. In the lower left
part of the weight space (in which more attention is paid to the energy
imports objective function), comprising solutions 3 and 13, the values
of the three objective functions are similar. In solution 13 the value
of f3 is very close to the optimum of f3, whereas the other objective
functions do not vary significantly with respect to solution 3 (employ-
ment and private consumption only have positive variations of about
2% and 0.5%, respectively). In solution 3, which minimizes the level of
energy imports, both private consumption and employment achieve the
worst levels (12 378 740 and 5 285 947, respectively). The global output

reaches the lowest level, denoting that the country is highly dependent
on the energy obtained externally. Although total energy imports are
at the lowest level, the structure of this solution possesses some char-
acteristics worth noting: the level of electricity imports is the highest
(476 512 toes) representing a difference of about 35% from the nearest
value obtained with the optimization of private consumption. This is
explained by the fact that electricity production and self-production of
electricity are highly hydrocarbon intensive and the level of electricity
imports compensates the imports of hydrocarbons that would be neces-
sary to produce the same amount of electricity. A significant difference
exists in the values of f1 and f2 regarding solutions in this lower left
part of the weight space and solutions immediately above (14 and 6, for
which more attention is paid to employment and private consumption
objective functions).

Solution    f1               f2                        f3
            (employment)     (private consumption)     (energy imports)
1 5795435 13444893 20885591
2 5788730 13453733 20891 825
3 5 285 947 12378740 19356817
4 5770 183 13390334 20 002 774
5 5764580 13394417 19 999 096
6 5668910 13136696 19658520
7 5672 009 13147573 19664468
8 5748408 13343734 19 835 499
9 5754016 13358331 19863325
10 5751091 13347213 19841 769
11 5742511 13314886 19808328
12 5745124 13318433 19814536
13 5403214 12444103 19 383 808
14 5663068 13129939 19 653 483
15 5666981 13141724 19660118
16 5752297 13378746 19912 142
17 5681 873 13154143 19675466
18 5700706 13249027 19739093
19 5718186 13 219 655 19740774
20 5751662 13376254 19903953

Table 1. Objective function values for some selected nondominated solutions

In the region of the weight space above solutions 4 and 5, in which


less importance is given to the energy imports objective function, only
slight variations occur concerning the values of f1, f2 and f3. However,
in solutions within this region f3 is worsened by about 8%, when com-
pared to its optimum. In solution 1, which maximizes employment, both

private consumption (13 444 893) and energy imports (20 885 591) reach
high values. Metallurgy, construction, water/steam and electricity out-
puts reach their highest level in this solution. The level of production
of hydrocarbons is slightly below the one obtained with the optimiza-
tion of private consumption (as the result of lower outputs reached for
diesel-oil and naphtha). The corresponding level of self-production of
electricity is below the one obtained in solution 2. The lower level at-
tained for energy imports, when compared to solution 2, is due to the
imports of the following types of energy: electricity, crude oil, shale oil,
paraffin, solvents and petroleum coke. The structure of production of
by-products has more representative values for pitch, coke oven gas and
coke gas which are inputs of steam production. In solution 2, which max-
imizes private consumption, the level of employment is a reasonable one
(5 788 730) and energy imports reach the worst value (20 891 825). The
non-energetic outputs which have particularly high values (when com-
pared to solutions 1 and 3) are metal-electrical-mechanics, food, services
and city gas (notice that services are highly intensive in city gas). The
production of hydrocarbons reaches the top level (due to diesel-oil and
naphtha). The highest level of energy imports obtained in this solution is
the result of the level of imports of crude oil, shale oil, paraffin, solvents
and petroleum coke. The level of electricity imports is not the highest
obtained, but is nevertheless higher than the one reached with the opti-
mization of employment. The output of self-production of electricity is
the highest (385 572 toes). This fact is also consistent with the highest
outputs of black liquors, other by-products, biogas and hydrogen, which
are important inputs of self-production of electricity.
The other region which can be identified on the weight space, regard-
ing the main characteristics of the corresponding solutions therein, is
between solutions 14 and 6 and solutions 4 and 5. Solutions within this
region present slight changes between them for the values of the three
objective functions. Solution 10 is representative of the main character-
istics of the solutions comprised in this region, and it is a good candidate
for a compromise solution. In this solution the level of employment is
high (5 751 091), and private consumption (13 347 213) and energy im-
ports (19 841 769) achieve reasonable values. Hydrocarbon production
achieves levels between those attained in solutions 1 and 2 and solution
3. LPG, fuel oil and naphtha reach output values below those of solutions
1 and 2 and above the one obtained in solution 3. A similar behaviour is
verified in the analysis of the non-energetic output. Nevertheless some
exceptions can be pointed out, namely for metal-electrical-mechanics
which reaches a level above solutions 1 and 2. The output of electricity
has the same value reached in solution 3 (minimization of energy im-
Figure 4. Additional limitations on the objective functions (f1 ≥ 5 500 000; f3 ≤ 19 950 000)

ports) and the value obtained for city gas is between those of solutions 1
and 2 and solution 3. Self-production of electricity is only 2% above the
value achieved in solution 3, and 11% and 19% less than the values obtained
in solutions 1 and 2, respectively. The output of by-products is always
above the ones in solution 3 and below solutions 1 and 2. Energy im-
ports, concerning crude oil, shale oil, paraffin and petroleum coke follow
the same pattern. For the same reasons pointed out in solution 3, elec-
tricity imports reach the highest value (when compared to solution 1, 2
and 3), although in solution 10 it must be considered, in addition, that
the level of global output is higher than in solution 3.
Although the analysis has been made in terms of basic (vertex) solu-
tions only, these solutions convey sufficient information to broadly char-
acterize the policies corresponding to distinct regions in the weight space.
Nondominated edges and faces can also be identified by visual inspection
of the weight space decomposition: a nondominated face corresponding
to a point shared by several indifference regions, and a nondominated
edge corresponding to a common frontier between two indifference re-
gions. Moreover, nondominated solutions located on edges and faces can
be obtained by making a convex combination of basic (vertex) nondom-

inated solutions. For instance, a nondominated face where interesting


compromise solutions are located is defined by vertex solutions 8, 10, 11
and 12 (see figure 3).
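As a small numerical aside (our own sketch): since the model is linear, a nondominated solution lying on the face defined by vertex solutions 8, 10, 11 and 12 can be generated as a convex combination of those vertices, and its objective values are the same convex combination of the vertex values reported in Table 1.

```python
# Convex combination of the objective vectors of vertex solutions 8, 10, 11, 12.
import numpy as np

vertices = np.array([[5748408, 13343734, 19835499],   # solution 8
                     [5751091, 13347213, 19841769],   # solution 10
                     [5742511, 13314886, 19808328],   # solution 11
                     [5745124, 13318433, 19814536]])  # solution 12
alpha = np.array([0.25, 0.25, 0.25, 0.25])            # any alpha >= 0 summing to 1
print((alpha @ vertices).round())
```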

5. Conclusions
The energy sector is of great importance to the analysis of an economy
on a national level, because of its direct impact on final demand and the
production system, with consequences on the employment level, internal
provisions, international trade relationships and, in general, on social
well-being. A multiple objective model based on input-output analysis
has been presented, which is suited for the study of the energy sector in
the context of the economic system.
This study is aimed at outlining how the multiple objective input-
output model and the TRIMAP interactive environment can be used
to provide decision support to DMs and planners for evaluating poli-
cies taking into account macro-economic indicators, energy flows and
environmental impacts. The model is designed to accommodate new
fuels, forms of electricity production and air pollutants. A description of
the main results achieved with some experiments using actual data has
been presented, namely paying attention to the characterization of the
distinct policies. The availability of statistical information has been the
main difficulty found in the experiments which have been carried out.
The TRIMAP interactive environment has been used to perform an
overview of the main characteristics of nondominated solutions. A pro-
gressive and selective search based on the weight space made it possible to
identify regions where solutions sharing the same features are located. This
approach overcomes the difficulties associated with generating methods
(by avoiding an exhaustive computation of nondominated solutions) and
some interactive methods (not requiring dichotomic responses before a
global knowledge of the nondominated solution set). The interactive
process is therefore associated with "creation" rather than "discovery"
of an optimal solution to an underlying preference structure (Roy, 1987,
1990).

References
Banco de Portugal, New Presentation of the Balance of Payments' Statistics, 1995.
(in Portuguese)
Bouyssou, D. Décision multicritère ou aide multicritère?, Newsletter of the European
Working Group Multicriteria Aid for Decisions, no. 2, 1993.
http://www.inescc.pt/~ewgmcda/Bouyssou.html (in French)
Climaco, J., and Antunes C. H., TRIMAP - an interactive tricriteria linear program-
ming package, Foundations of Control Engineering, vol. 12, 101-119, 1987.
Climaco, J., and Antunes C. H., Implementation of a user-friendly software package -
a guided tour of TRIMAP, Mathematical and Computer Modelling, vol. 12, 1299-
1309, 1989.
Dorfman, R., Samuelson, P., and Solow, R., Linear Programming and Economic Anal-
ysis, Dover Publications, New York, 1987.
INE, National Accounts, 1995. (in Portuguese)
IPCC, Revised 1996 IPCC Guidelines for National Greenhouse Gas Inventories Ref-
erence Manual. http://www.iea.org/ipcc/inv6.htm. 1996.
Leontieff, W., The Structure of the American Economy - 1919-1939, Oxford University
Press, New York, 1951.
Oliveira, C., A Multiple Objective Model for Energy Planning Based on Input-Output
Analysis, Dissertation of M.Sc. on Information Management in Organizations, Fac-
ulty of Economics, University of Coimbra, 2000. (in Portuguese)
Roy, B. Meaning and validity of interactive procedures as tools for decision making,
European Journal of Operational Research, Vol. 31(3), 297-303, 1987.
Roy, B. Decision-aid and decision-making, In C. Bana e Costa (Ed.), Readings in
Multiple Criteria Decision Aid, Springer Verlag, 17-35, 1990.
Steuer, R., Multiple Criteria Optimization: Theory, Computation and Application.
Wiley, 1986.
MULTICRITERIA APPROACH
FOR STRATEGIC TOWN PLANNING
The Case of Barcelos

Carlos A. Bana e Costa


Instituto Superior Tecnico, Technical University of Lisbon, Portugal
Department of Operational Research, London School of Economics, United Kingdom
carlosbana@netcabo.pt

Manuel L. da Costa-Lobo
Instituto Superior Tecnico, Technical University of Lisbon, Portugal
cesur@civil.ist.utl.pt

Isabel A. Ramos
University of Evora, Portugal
iar@uevora.pt

Jean-Claude Vansnick


University of Mons-Hainaut, Mons, Belgium
Jean-Claude.Vansnick@umh.ac.be

Abstract: Barcelos was one of the medium-sized Portuguese towns selected to be
included in a governmental program aiming at restructuring the country's urban
network. Each town had to prepare a "City Strategic Plan" as a precondition
for obtaining financial support for the implementation of its strategy. This
paper describes how a multicriteria methodology, enhanced with problem
structuring techniques, supported the construction of a strategy for Barcelos
in direct interaction with planners and the local elected politicians in a
decision-conferencing framework. This was a socio-technical learning process
that successfully implemented, in a strategic town-planning context, what
Bernard Roy defined as "decision aiding".

Key words: Decision aiding; Strategic planning; Decision conferencing; Mixing method


1. Introduction: Problem definition


1.1 Background: Town planning context of the case
Modern town planning started in Portugal around 1940. From then
onwards its practice has been based on plans fixing rigid norms for land use.
In 1982, a new law prescribed a more flexible planning scheme (Municipal
Master Plan - MMP) covering all the municipal territory instead of the
previous town plans that included only the urban space. Despite the revision
of the MMP law in 1990, its implementation still did not really answer to
the uncertainty and ever-changing conditions of development.
To face these difficulties of planning implementation, in 1994 the
government published legislation introducing the concept of "strategic
plan", to be developed as a complement ofMMP by the main medium sized
urban centres selected with a view to reinforcing the country's network of
regional "development poles". The objective was to give physical plans an
operative base for implementation, trying to co-ordinate them, setting
priorities and stimulating citizens' participation. The public authorities, in
connection with other initiatives to be negotiated with private partners, were
to launch a program of investments for the short and medium term. A
special program (the "PROSIURB" Program) was created, defining
conditions for governmental financial support for cities to implement their
strategic plans.
Barcelos was one of the medium sized towns chosen to perform an active
role as a development pole. It belongs to the urban system of the Oporto
Region (Fig. 1). Barcelos is an urban centre within a municipal territory
(Fig. 2) of 112 thousand inhabitants and 379 square kilometres subdivided
into 89 "freguesias" (an administration level below the municipal one).
Barcelos is noted for the fertility of its land and the high dispersion of its
inhabitants in the territory. Many families supplement insufficient
agricultural revenues with domestic textile activities. The old urban centre
of Barcelos has about 15 thousand inhabitants, but the actual urban
agglomeration has about three times this population.

Figure 1. Location of Barcelos Municipality in Portugal and the regional context of the town

Figure 2. The Strategic City of Barcelos



1.2 Framing the problem: Establishing the conditions for intervention
In October 1994, a consulting company called "Espaços e Redes" (E&R)
was chosen to conduct the studies for the Strategic Plan of Barcelos. In turn,
E&R asked CESUR - the Centre of Urban and Regional Systems of the
Engineering Institute (IST) of the Technical University of Lisbon - for
methodological support. Our task was to aid the planners of E&R in
selecting the "best" program of policy actions or investment projects for the
development of Barcelos. However, Costa-Lobo (the planning leader of our
team) stressed that the intervention area of the Strategic Plan should not be
limited by the Urban Perimeter but enlarged to a so-called "Strategic City"
with an adequate dimension to guarantee the achievement of the purpose of
creating an operative development pole. E&R and the local politicians
accepted his suggestion of a planning-space of 40,000 inhabitants (Fig. 2).
E&R had already developed efforts to get a comprehensive
understanding of Barcelos reality and to get information from influential
groups, authorities and technicians of Barcelos. From this preliminary work,
they were able to identify some important concerns, to derive an extensive
list of projects and to estimate the respective implementation costs.
As stakeholders' concerns are diverse, conflicts between their value
systems are to be expected. An effective approach to this problem requires
communication and compromise, as emphasised by authors from both the
decision aiding and the strategic planning arenas - see, for instance, (Roy, 1999)
or (Lorange, 1980), respectively. Our basic idea was to apply a Multicriteria
Decision Aid Methodology (MCDA) (Roy, 1996; Bana e Costa, 1990)
within an open system of direct interaction with the local politicians. The
case was for us an excellent opportunity to perform a real-world intervention
testing the adequacy of MCDA to answer urban strategic planning needs.
The challenge was in the design of a potential Multicriteria Approach to
Strategic Planning (MASP). Two key pre-intervention conditions had to be
fulfilled, however:
1. The methodology should be so attractive as to get local elected
politicians to be willing to participate in the process as representatives
of the population and be prepared to be present in open discussion
sessions.
2. The methodology should be applied through workshops or decision
conferences organised in such a way that the effects of preferences
and choices taken by participants during the sessions would be
quickly reported in a friendly way, so that those effects could be
easily understood by all the participants, thus enabling collective
learning and the generation and debate of new ideas.
We conceptualised MASP as a socio-technical and learning process.
From the social viewpoint, to develop the process of strategic planning, it is
important to get a good relation between decision-makers, planners and
facilitators and a permanent open table for debates with all the other
partners. From the technological perspective, decision support systems
(DSS) can speed up the treatment of new data and this can be crucial to
support successful debates between all actors involved.
The merging issue is the adjustment of the technicians' language to the
actors' language, to guarantee mutual understanding. Facilitators
and planners must deal with these aspects. Acting as mediators, they have to
build the bridge between partners, having in view the generation of a
common language for learning and arguing about each one's preferences, in
a framework contributing to legitimise a decision on the strategy to be
adopted - a framework for "concertation", using the French designation of
Roy (1999). This also requires planners to develop "3rd solutions" (Costa-
Lobo, 1997; Bana e Costa and Costa-Lobo, 1999) with creativity, to
overcome conflicts. Decision Conferencing, originally proposed by Cam
Peterson (Phillips, 1984b, 1990; Watson and Buede, 1987) is the adequate
framework for developing that socio-technical process in view of achieving
"a shared understanding of the issues, a sense of common purpose and a
mutual commitment to action" (cf. Phillips and Phillips, 1993).
At the time of the intervention, the Political Executive Board of Barcelos
was composed of nine (including the Mayor) local elected politicians from
three different parties. The ruling one had an absolute majority of five
members. In spite of that, all of them agreed to be partners in the study and
participate together as a group in the workshops. Incidentally, time pressure
to get an agreement, needed to conclude the Strategic Plan within the
deadlines established by the PROSIURB Program, might have had an
influence.

1.3 MASP modelling: Activities in a multicriteria strategic planning process
The implementation of MASP was a constructive, interactive and cyclic
socio-technical process of intertwined activities grouped in the three main
phases of Structuring, Evaluation and Elaboration of Recommendations, as
suggested by Bana e Costa (1992). The Structuring phase (S) comprised the
activities of: S1) problem-structuring or definition of the problem, including
the set-up of time horizon and space planning boundaries (see above); S2)
model-structuring, namely the identification of strategic objectives and
actions; and S3) impact assessment, i.e. qualitative analysis of options. The
Evaluation phase (E) consisted in: E1) building a quantitative model of
values, E2) options-evaluation using the model, together with extensive
sensitivity and robustness analyses essential to guarantee that the model
built was "requisite" (Phillips, 1984a). Elaboration of Recommendations (R)
developed during all the decision-aiding process and consisted in the
substantive interpretation of the qualitative and / or the quantitative
evaluations in terms of guiding the behaviour of the group towards the
achievement of a commitment to action.
The ultimate objective of our intervention was to help to define a
strategy for Barcelos. The building of a "helping relationship" (Schein,
1999) started with the understanding and elucidation of stakeholders'
viewpoints and concerns and their discussion with politicians and planners.
Objectives and actions by which they could be achieved were identified and
structured. Sectors for planning intervention were defined and the
investment projects were arranged in action-packages in each sector (see
section 2.2). A "strategy" was considered to be any portfolio of packages
resulting from selecting one and only one package from each sector. The
elaboration of a final recommendation, consisting in the choice of one of the
very many strategies that could be formed, was held in a final decision
conference. An Equity cost-benefit model (Barclay, 1988; Goodwin and
Wright, 1998) was developed and explored during that conference, to help the search for a
satisfying relation between lower costs and higher benefits (see section 4).
Many process activities should take place before the final conference. A
model to evaluate the benefits of alternative strategies should be built
(section 3). "Benefits" are the degree to which each objective was achieved
by each action-package (section 3.4). To synthesise all benefits in an overall
benefit value required balancing advantages and disadvantages across
objectives. For this purpose, an additive value model (Keeney, 1992; Belton,
1999) was built during several workshops. The MACBETH approach (Bana e
Costa and Vansnick, 1999, 2000) supported the definition of a quantitative
value scale to measure the packages' attractiveness in each objective (see
section 3.2). The Swing Weighting technique (Von Winterfeldt and
Edwards, 1986) was used to derive weights appraising the relevance of each
objective and making it possible to harmonise the partial values of each package
across all objectives (see section 3.3).
Due to the complexity of the case, a strict application of MCDA was not
enough to tackle all structuring activities, namely the identification and
organisation of objectives and actions. In light of this, we made use of
"problem structuring techniques" (Rosenhead, 1989), namely Cognitive
Mapping (Eden and Ackermann, 1998; see also Bana e Costa et al., 1999b)
and AIDA - the Analysis of Interconnected Decision Areas included in the
Strategic Choice approach (Friend and Hickling, 1987). Eventually, MASP
is a type of "multimethodology" (Mingers and Gill, 1997) named
"methodology enhancement" by Mingers and Brocklesby (1997). Actually,
we enriched the multicriteria methodology with contributions borrowed
from other decision support methodologies, under a single paradigm - the
learning paradigm - that guaranteed the theoretical and practical cohesion
and consistency of the process at all stages of development (Midgley, 1997).
(See brief introductions to Cognitive Mapping, Strategic Choice and
Decision Conferencing in (Eden and Radford, 1990, appendix).)

2. Structuring
2.1 Identifying and structuring objectives
Structuring started in the middle of November 1994 with a one-month
search of development priorities and their framing in the planning process.
Objectives and proposals in previous urban plans and policy documents
were collected and systematised in collaboration with E&R. The main
references were the 1977 Oporto Region Plan, the Master Plan of Barcelos,
the ongoing planning studies for the Urban Centre of Barcelos, and also the
political measures and programs settled by the Municipal Council.
We tried to put all the ideas together and check their coherence, to see if
they would be able to promote Barcelos as a meaningful regional pole, as
required. This first observation allowed us to detect some lack of basic
conditions (such as in technical infrastructures) essential to get the results
aimed at and to avoid unbalanced developments from the social viewpoint.
As said above, E&R had already discussed with stakeholders in Barcelos
their main concerns. We also conducted several individual interviews with
politicians. To make objectives and their relationships emerge, we made use
of a "question-answer" procedure - what do you think it is important to
achieve? Why and for what is this objective important? How can it be
achieved? (Keeney, 1994) - while hand-drawing a cognitive map.
The (individual) maps and the notes taken by the planners of E&R
during their interviews were analysed together, in a workshop that took
place on 21 December 1994. It revealed that the local politicians were
mainly concerned with local strategic objectives and were neglecting the fact
that Barcelos could be an important development pole for the Oporto region. The
list of strategic objectives so far identified was therefore far from being
exhaustive. To take into consideration also the regional scale of strategic
planning (cf. Filiatre, 1993), the planners added to the local development
objectives of the local politicians some other objectives having in view to
strengthen the regional structure. The politicians accepted those additional
strategic objectives as they developed a shared understanding of the
importance of a correct integration of Barcelos within the regional urban
network. Using features offered by the Decision Explorer DSS (Banxia,
1997), redundancies were eliminated and all objectives assembled in
clusters in the synthesis map shown in Fig. 3. The map has a "means-ends
network-like structure" (Belton et al., 1997) with arrows indicating that one
objective influences, leads to or has implications for another.

Figure 3. An intermediate map in the process of identifying Barcelos' strategic objectives

During its subsequent discussion, with the participation of the Mayor, the map
was redesigned several times, until four coherent, comprehensive and logical
key Barcelos' strategic development objectives were identified:
1. To improve the quality of human settlements, to create conditions to
stabilise the population.
2. To develop human resources, as a way to face future challenges.
3. Strengthening Barcelos' importance in the regional context.
4. To improve access to and circulation capacity within Barcelos, to
allow flexible land planning.
The strategic development objectives 1 and 4 were decomposed into
more operational objectives (most of them already identified in the maps).

Fig. 4 shows the final tree of objectives for Barcelos' development, as
validated by the local politicians of all political parties. The set of all the
bottom objectives in the value tree would be the "family of criteria" (Roy,
1996) for the evaluation of potential strategies.
Figure 4. Value tree of Barcelos' strategic development objectives (the tree includes, under objective 1, the sub-objectives 1.1 Sustainability and 1.2 Planning of human settlements, the latter comprising 1.2.1 Planning of housing areas, 1.2.2 Location planning for industry, 1.2.3 Improvement of social infrastructures, covering leisure, education, health, culture and social support, 1.2.4 Urban technical infrastructures and 1.2.5 Rehabilitation of the historic city centre; objective 4 is decomposed into regional and local sub-objectives)

2.2 Structuring the actions: Defining intervention sectors and action-packages
Many of the actions identified were oriented to overcoming important
deficits in basic social and technical infrastructures. Others were policy
actions that constituted a set of essential strategic conditions to raise
Barcelos to the stage of a regional development pole, which was the main
purpose to achieve through the Strategic Plan. Nevertheless, it would be
nonsense to develop a strategy for the future without ensuring the
achievement of basic well-being. A decision was taken to organise scenarios
combining both types of actions. The process revealed a lack of strategic
planning actions able to upgrade complementary effects and synergies for the
development of the Oporto Region; such actions were therefore created.
The Strategic Choice approach stresses a common pitfall of municipal
management: to consider and evaluate a proposal in itself independently of
other proposals. The big mistake is to ignore that decisions in one area bring
results that are strongly connected with those in other areas. This shows the
importance of taking systems of actions, making coherent packages, instead
of considering isolated options and evaluations. Having this in mind, planners
and technicians of municipal departments characterised all the proposed
intervention actions (114 in total) and clustered them into 10 "decision
areas" designated by intervention sectors S 1 to S lOin Table 1. Some sectors
are basic, others strategic (mainly sectors S8, S9, and S 10), others have
elements of both. Secondly, the actions were classified, as shown in Fig. 5
for sectors S 1 and S2, accordingly to four priorities for implementation.

Table 1. Intervention sectors

S1 - Accessibility and transportation
S2 - Urban technical infrastructures
S3 - Social and technical infrastructures to support economic activities
S4 - Leisure and sports infrastructures
S5 - Cultural and social infrastructures
S6 - Rehabilitation of the old city centre
S7 - Urban qualification
S8 - Education and training
S9 - Synergetic factors for Barcelos' development as a "pole" in the Oporto Region
S10 - Landscape framing / sustainability

Next, following an Analysis of Interconnected Decision Areas, it was
possible to identify and study interrelations between actions, their
compatibility and implementation dependencies, represented by arrows in
Fig. 5. Based on this analysis, the intervention priorities of the actions could
then be re-grouped in action-packages. As shown in Table 2 for sector S1,
the packages are cumulative, meaning that the j-th package includes all the
actions of the (j-1)-th package plus some additional ones of higher priority.
Finally, eight major interconnections between actions included in
packages of different sectors were identified - see the STRAD DSS (Friend,
1994) snapshot in Fig. 6. For later use (see section 4), they were registered
in a database, prepared by E&R, with individual records per package (41 in
total) with costs and entities involved.
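
The cumulative structure of the packages can be made explicit with a small sketch. The following Python fragment (ours; only the action codes and priorities of Table 2, sector S1, are used as data) derives packages P1.1 to P1.4 by accumulating all actions up to each priority level.

from collections import OrderedDict

# (action code, priority) pairs for sector S1, as in Table 2 (1 = first priority).
S1_ACTIONS = [
    ("a1.1", 1), ("a1.2", 2), ("a1.3", 2), ("a1.4", 2), ("a1.5", 2), ("a1.6", 2),
    ("a1.7", 3), ("a1.8", 3), ("a1.9", 3), ("a1.10", 3), ("a1.11", 4), ("a1.12", 4),
]

def cumulative_packages(actions, sector="1"):
    """Package P<sector>.j contains every action with priority <= j."""
    packages = OrderedDict()
    for j in sorted({p for _, p in actions}):
        packages[f"P{sector}.{j}"] = [a for a, p in actions if p <= j]
    return packages

for name, content in cumulative_packages(S1_ACTIONS).items():
    print(name, content)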
The described activities, carried out to identify and structure policy
actions, were held during the whole month of January 1995.

Figure 5. Analysis of intervention priorities and interdependencies between actions (the actions of sectors S1 and S2, classified into four implementation priorities, with arrows representing their interdependencies)

Table 2. Action-packages in sector S1 - Accessibility and Transportation

Actions                                                    Priority   Package
a1.12  ROAD SYSTEM - 4th Phase (North Circ.)               fourth     P1.4
a1.11  IMPLEMENTATION OF PGCTE - 2nd Phase                 fourth
a1.10  ROAD SYSTEM - 3rd Phase (EN306 Var.)                third      P1.3
a1.9   IMPLEMENTATION OF PGCTE - 1st Phase                 third
a1.8   URBAN ROADS - 1st Phase (South Circ. Barcelinhos)   third
a1.7   URBAN ROADS - 1st Phase (S. Jose Av. Ext.)          third
a1.6   CENTRAL BUS STATION                                 second     P1.2
a1.5   URBAN ROADS - 1st Phase (Olivença St. Ext.)         second
a1.4   CENTRAL BUS STATION DETAILED PLAN                   second
a1.3   CIRCULATION AND TRAFFIC PLAN (PGCTE)                second
a1.2   ROAD SYSTEM - 2nd Phase                             second
a1.1   ROAD SYSTEM - 1st Phase (Bridge, new accesses)      first      P1.1

Figure 6. Direct major interdependencies between action-packages of different sectors.

(A link between packages Pk.i and Pk'.j means that at least one action with priority i ≤ j in sector
Sk must be implemented before implementing package Pk'.j.)

2.3 Impact-assessment
An expert-based procedure was followed to appraise the packages'
effects. "Expert-panels" (Wenstap et al., 1997; Wenstap and Carlsen, 1998)
of planners and municipal technicians in each intervention area reflected on
how each package would improve the achievement of each (bottom)
objective and selected from Table 3 the reference impact-level that in their
opinion best appraises the perceived effect.

Table 3. Qualitative descriptor of impact-levels on the status quo (SQ)

Impact levels   Description: contribution towards a ...
VI              direct, necessary, sufficient and complementary improvement of the SQ
V               direct, necessary and sufficient improvement of the SQ
IV              direct, necessary but not sufficient improvement of the SQ
III             direct improvement of the SQ
II              indirect but significant improvement of the SQ
I               indirect and tenuous improvement of the SQ
0               no improvement of the SQ

From this process resulted the profile (a line of the impact Table 4) of
qualitative impacts of each package on the full family of objectives. Note in
Table 4 that, in general, a package contributes to improving the status quo only
on a few objectives.

Table 4. Table of impacts (for each action-package of sectors S1 to S10, the qualitative impact-level, from 0 to VI as defined in Table 3, on each bottom objective of the value tree in Fig. 4)

3. Evaluation activities: Building a value model


3.1 Foreword
In the Strategic Management literature (Thompson and Strickland, 1999,
chapter 8), it is often suggested to measure the attractiveness of options by
rating the options directly on each strategic factor, assigning directly a numerical
"weight" to each factor to reflect its relative "importance", and then
calculating the weighted attractiveness score of each option by multiplying
the option's rating on each factor by the factor's weight and summing up.
The pitfalls of this (unfortunately popular) evaluation method are well
known in the MCDA field. In particular, weights in the additive model
cannot be assessed by directly comparing factors in terms of their relative
importance, as additive weights correspond to the concept of trade-off: how
much someone would be willing to give up in one factor to achieve more on
another. Keeney (1992, pp. 147-148) calls it the "most common critical
mistake", and this is the reason why we have decided it is worthwhile to
give detailed attention in this paper to the weighting process (section 3.3).
As far as the direct rating of options is concerned, the problem has a different
nature. Of course, it can give meaningful results if conveniently conducted
(von Winterfeldt and Edwards, 1986; Belton, 1999). From the measurement
perspective (Vansnick, 1990), the issue is the construction of an interval
scale (i.e. a scale unique up to a positive linear transformation) on the set of
options, that quantifies their attractiveness (for someone). This requires
proper expression of cardinal value judgements, which is far from being an
easy task. Therefore, as said elsewhere (Bana e Costa and Vansnick, 1999)
we think that direct scoring of options often leads to unreliable and
semantically meaningless information. However, our basic conviction is
that, by means of an adequate interactive process based on verbal
judgements about the differences in attractiveness between options, it is
possible to aid a person to progressively evolve from ordinal to cardinal
preference information in view of constructing a requisite cardinal value
model. Conceived precisely for this purpose, MACBETH (Measuring
Attractiveness by a Categorical Based Evaluation Technique) was used in
the case to score the packages in each criterion, as described in section 3.2.
MACBETH can also be used to weight criteria or objectives (see Bana e
Costa and Vansnick, 1997; Bana e Costa et al., 1999a; Bana e Costa,
forthcoming). Nevertheless, given the hierarchical structure of the value tree
in Fig. 4, in this case we opted for the Swing Weighting Technique.

3.2 Measuring packages' attractiveness with MACBETH


To assign a value score to each package, measuring quantitatively its
attractiveness with respect to each objective, a 0-100 cardinal value scale
was developed upon the descriptor in Table 3 using MACBETH. Briefly
described, MACBETH is an interactive approach to guide the construction, on
a set X of stimuli, of a numerical (interval) scale that quantifies the
attractiveness of the elements of X for an evaluator (or group of evaluators).
The evaluator is asked to pair-wise compare stimuli and to give, for each
two stimuli such that the first is more attractive than the second, an absolute
qualitative judgement about the difference of attractiveness between them.
The answer can be "very weak", "weak", "moderate", "strong", "very
strong", or "extreme" difference of attractiveness (the MACBETH semantic
categories). When a certain judgement is inconsistent with previous ones,
MACBETH detects the problem and gives suggestions to overcome it (for
details see Bana e Costa and Vansnick, 1999). Cases of group judgemental
disagreement or hesitation can also be considered, by choosing more than
one category for a specific pair-wise comparison of options (rather than to
force an agreement on the choice of a single category). For a given set of
consistent judgements, the MACBETH DSS proposes a numerical scale Φ on
X (the MACBETH scale) that satisfies the following measurement rules:
Rule 1 (ordinal conditions): ∀ x, y ∈ X:
1.1. if x and y are equally attractive, then Φ(x) = Φ(y);
1.2. if x is more attractive than y, then Φ(x) > Φ(y).
Rule 2 (semantic condition): ∀ x, y, w, z ∈ X, with x more attractive
than y and w more attractive than z:
if it results from the evaluator's judgements that the difference in
attractiveness between x and y is greater than the difference in attractiveness
between w and z, then
Φ(x) - Φ(y) > Φ(w) - Φ(z).
In Barcelos, the planners compared seven well defined and well-known
packages, each one selected from Table 4 among those with the same
impact. They systematically judged, in the light of the goal of the strategic
plan, the difference in attractiveness between each two of those selected
packages. The matrix of pair-wise judgements is shown in Fig. 7. For
example, the judgement "very weak" between impact-levels "I" and "0"
means that the selected package was considered to contribute very weakly to
the improvement of the status quo. The numerical scale proposed by
MACBETH is the left thermometer scale in Fig. 7. The validation of the scale
and its progressive transformation into a cardinal scale was done with the

support of the MACBETH DSS, by visually comparing scale intervals between
impact-levels, as explained below.
As some distances were considered not to represent adequately the
respective differences in attractiveness, they were changed by dragging
intermediate levels, one at a time, with the mouse, within the intervals
indicated by MACBETH. For example, Fig. 7 shows that the value score of
level II can vary between 8.70 and 17.39, keeping the other values
unchanged and without violating any judgement. The final value scale that
resulted from this interactive assessment is (see the right thermometer in
Fig. 7): value(0) = 0, value(I) = 2.5, value(II) = 10, value(III) = 40,
value(IV) = 65, value(V) = 85 and value(VI) = 100. Based on it, the impacts
of the packages in Table 4 could be translated into value scores, measuring
their (partial) attractiveness in each (bottom) objective, as shown for Sector
S5 (Cultural and Social Infrastructures) in Table 5.
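
The translation step itself is straightforward; a minimal Python sketch (ours) is given below, in which the qualitative impact-levels of Table 3 are mapped to the value scores of the final scale of Fig. 7. The example profile is hypothetical, not a row of Table 4.

MACBETH_SCALE = {"0": 0.0, "I": 2.5, "II": 10.0, "III": 40.0,
                 "IV": 65.0, "V": 85.0, "VI": 100.0}          # final scale of Fig. 7

def to_value_scores(impact_profile):
    """Map a package's impact-levels per bottom objective to partial value scores."""
    return {obj: MACBETH_SCALE[level] for obj, level in impact_profile.items()}

hypothetical_profile = {"1.2.3.1": "IV", "1.2.3.2": "V", "1.3": "I", "4.2.1": "0"}
print(to_value_scores(hypothetical_profile))
# {'1.2.3.1': 65.0, '1.2.3.2': 85.0, '1.3': 2.5, '4.2.1': 0.0}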

Figure 7. Building a value scale with MACBETH: visual construction of the value scale

Table 5. Packages' values (Sector S5) and objectives' weights (see section 3.3). For each package p5.1 to p5.5, the table gives its value score on each bottom objective; the last row gives the objectives' relative weights (%): 3.7, 1.9, 1.9, 1.0, 1.0, 0.3, 0.4, 0.4, 6.1, 5.6, 14.7, 11.0, 24.0, 4.9, 4.4, 11.0, 4.4 and 3.3, in the order of the bottom objectives of the value tree.

3.3 Weighting objectives


The evaluation of the overall benefit from each package took place
during a half-day decision conference with the participation of local
politicians, senior municipal technicians and E&R and CESUR planners.
Two of us acted as facilitators, one in MCDA and the other in urban
planning. The meeting was held at Barcelos City Hall on 16 February 1995.
Fig. 8 shows the layout of the meeting room. (Note that the arrangement of
the environment can have a profound influence on the effectiveness of group
working - cf. Hickling, 1990; Phillips and Phillips, 1993.)

Figure 8. Layout of the conference room (showing the positions of the politicians, the facilitator, the municipal technicians and the planners)

The meeting started with a short description of the work done so far and
the validation of the value scores in Table 5. Next, the MCDA-facilitator made a brief
methodological explanation about value trade-offs based on simple
examples and the group was invited to discuss the acceptability of the
principle of compensation behind an additive aggregation of value-scores. A
few comparisons of adequate combinations of impact-levels by pairs of
objectives served to validate the working hypothesis of additivity. Once
established the value aggregation framework, all the participants understood
the necessity of "harmonising" in some way the packages' value scores
across objectives before summing them up.
A very interesting key conceptual issue was that, after some discussion,
all the politicians accepted naturally the interpretation of the intuitive notion
of relative importance of an objective as the importance of a "direct,
necessary, sufficient and complementary improvement" (swing, hereafter) of
the status quo on that objective. The facilitator started by projecting exhibit
1 (Fig. 9) and asking the group which of the two swings on "health
infrastructures" or on "leisure infrastructures" - a) or b), respectively, in
exhibit 1 - would be more attractive (if any) for Barcelos' development.
The Mayor was the first to answer and chose the leisure swing b). This
immediately provoked a comment from another politician, who said that she
was surprised to hear the Mayor consider "the investment on sports
infrastructures more important than investing on hospitals!". The Mayor
replied: "Sorry, I did not say that! My reasoning is that a strong improvement
on leisure infrastructures is much more important than one on health
infrastructures, because the present Barcelos' coverage of health care
facilities is very good and, on the contrary, there is an enormous lack of
leisure facilities!" This dialogue contributed very much to making clear to
everybody the context-dependent nature of the notion of relative importance.
The facilitator used this example to test the adequacy of adopting a
swing-weighting questioning procedure. He asked: "Suppose the leisure
swing b) is worth 100 value units. Having this in mind, how much would
you value the health swing a)? If you feel uncomfortable with this question,
you can alternatively think in terms of how many times the leisure swing is
more important than the health one." The Mayor suggested: "Something
between 3 and 4 times more important." Here, the facilitator remarked:
"This means that the health swing would be valued between 1/3 and 1/4 of the
leisure one, that is to say, the leisure swing valuing 100 value units, the
health swing would be worth between 25 and 33.3 value units." All participants
seemed to understand this and after some discussion an agreement was
reached on 30 value units for the health swing (see exhibit 2 in Fig. 9).
Then, the three other social infrastructure objectives were introduced in
the weighting process. The five swings were first ranked and afterwards
rated by the group as shown in exhibit 3 (Fig. 9). Then, the same
questioning procedure was followed in establishing swing weights for the
bottom objectives that specify objective 4 (Improve access to and
circulation capacity within Barcelos ...) - see exhibit 4 in Fig. 9.

Exhibits 1 to 6 compare "direct, necessary, sufficient and complementary" improvements (swings) of the status quo on different objectives, showing for each swing its rank and the value agreed by the group:
Exhibits 1 and 2: Health (2nd, 30) and Leisure (1st, 100).
Exhibit 3: Education (1st, 100), Leisure (1st, 100), Culture (3rd, 45), Social support (3rd, 45), Health (5th, 30).
Exhibit 4: Public transports - Regional (2nd, 45) and Local (3rd, 40); Accessibility - Regional (1st, 100) and Local (3rd, 40); Road protection (5th, 30).
Exhibit 5: objectives 1.2.1 (4th, 30), 1.2.2 (4th, 30), 1.2.3.1 (3rd, 50), 1.2.4 (1st, 100), 1.2.5 (2nd, 90).
Exhibit 6: objectives 1.1 (6th, 15), 1.2.4 (5th, 25), 1.3 (2nd, 60), 2 (3rd, 45), 3 (1st, 100), 4.2.1 (3rd, 45).

Figure 9. Swing weighting process

In order to be sure that the participants were appraising the substantive
meaningfulness of swing weights, the facilitator noted that contributions to a
"direct" improvement of "regional accessibility" and to a "direct, necessary,
sufficient and complementary" improvement of "local accessibility" should
cause the same benefit for Barcelos, given the value scale in Fig. 7 and the
weights in exhibit 4 (40 × 100 = 100 × 40). Similar cross-checks were made
within each of the sub-sets of objectives in exhibits 3 and 4.
The swing weighting process was then moved a level up on the value
tree, and questions similar to the above ones were asked for the objectives
on exhibit 5 (Fig. 9). The swing on objective (1.2.3.1) Improvement of
leisure infrastructures, one of the most attractive (bottom) swings on social-
infrastructures, was included in exhibit 5, instead of a complex swing on the
objective (1.2.3) Improvement of (all) social infrastructures, to ensure an
easy judgemental transition between the two levels. Finally, moving again
the process a level up, the swing weights in exhibit 6 (Fig. 9) were obtained.
The relative weights of all objectives (shown in the last row of Table 5)
were computed by re-scaling the swing weights and were validated by the
politicians. They accepted, in particular, the weight for strategic
development objectives 1 and 2 together (37% + 11%) to be (almost) equal
to the weight for objectives 3 and 4 together (24% + 28%). Substantively,
this means that a strategy strongly improving simultaneously the (then)
quality of the human settlements and human resources would contribute to
Barcelos' strategic development as much as a strategy strongly
strengthening Barcelos' (then) regional role and accessibility.
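
This re-scaling can be verified with a few lines of Python (ours). Assuming that the weights of Table 5 are listed in the order of the bottom objectives of the value tree (the eleven bottom objectives under objective 1, then objectives 2 and 3, then the five bottom objectives under objective 4), they sum to 100% and reproduce the shares quoted above.

weights = [3.7, 1.9, 1.9, 1.0, 1.0, 0.3, 0.4, 0.4, 6.1, 5.6, 14.7,  # objective 1
           11.0,                                                     # objective 2
           24.0,                                                     # objective 3
           4.9, 4.4, 11.0, 4.4, 3.3]                                 # objective 4

groups = {"1": weights[:11], "2": weights[11:12],
          "3": weights[12:13], "4": weights[13:]}

print(round(sum(weights), 1))                        # 100.0
for obj, ws in groups.items():
    print(f"objective {obj}: {round(sum(ws), 1)}%")  # 37.0, 11.0, 24.0 and 28.0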

3.4 Measuring packages' benefits


Using the weights as scaling constants to harmonise scores across
objectives, the packages' scores and the objectives' weights were additively
combined in a spreadsheet model, to get a measure of the relative overall
attractiveness of each package. The sum of the overall scores of the full
packages of the ten sectors corresponds to 100% of benefit in terms of
Barcelos' strategic development, within a scope limited to the totality of the
proposed actions. Hence, dividing the overall score of each package by that
sum, the relative benefit of each package to improve the status quo could be
appraised in percentage of the maximum possible benefit (see Fig. 10).
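
A minimal Python sketch (ours; the two-objective data at the end are hypothetical, not the Barcelos figures) of this spreadsheet computation: the overall score of a package is the weighted sum of its partial value scores, and its relative benefit is that score divided by the sum of the overall scores of the "full" packages of all sectors.

def overall_score(partial_values, weights):
    """Additive model: sum of weight_j * value_j over the objectives j."""
    return sum(weights[obj] * partial_values.get(obj, 0.0) for obj in weights)

def relative_benefit(package_values, full_package_values, weights):
    """Benefit of a package as a percentage of the benefit of all proposed actions."""
    total = sum(overall_score(v, weights) for v in full_package_values)
    return 100.0 * overall_score(package_values, weights) / total

weights = {"A": 0.6, "B": 0.4}                                   # hypothetical
full_packages = [{"A": 100, "B": 40}, {"A": 20, "B": 100}]       # one per sector
some_package = {"A": 40, "B": 40}
print(round(relative_benefit(some_package, full_packages, weights), 1))  # 31.2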
The decision conference finished with an animated "what-if" discussion
of these results, supported by several sensitivity analyses on value scores
and weights performed with the spreadsheet model. A new decision
conference was scheduled to take place two weeks later with the same
participants.
In between the two conferences, a report of the weighting conference
was sent to Barcelos, together with the calling note for the next workshop
and a record prepared by E&R with several figures and tables synthesising
the results so far obtained and with costs and benefits of each action-
package. Moreover, we created with the EQUITY DSS a model for financial
resource allocation over sectors; the model included all the action-packages
(see Fig. 11), each one characterised by its cost and its benefit.

Figure 10. Benefit of each action-package as a percentage of the overall benefit of all actions

Figure 11. Snapshot of the main screen of the EQUITY DSS showing the structure of the intervention sectors and the respective action-packages

4. Elaborating a final recommendation: Designing and evaluating potential strategies for Barcelos
On 3 March 1995, the final one-day decision conference was held. The
respective calling note defined its objective: to choose a strategy viewed as
the best compromise between costs and benefits for the development of
Barcelos. This was expected to result from the group's interactive exploitation
of the EQUITY model. With Fig. 11 projected on the room screen, the
MCDA-facilitator introduced the participants to the basic elements of the
model. He recalled that a "strategy" was to be operationally defined as any
portfolio formed by one and only one action-package from each sector, and
he noted that an exhaustive analysis of all possible strategies would be
impracticable and, in any case, would not be the best path towards
effective analysis. Alternatively, the discussion could be focused on specific
strategies selected in the flow of the process, directly in EQUITY. He
selected with the mouse the "full" strategy, with 100% of benefit if 15 × 10⁹
PTE were available to be invested. The planners of E&R showed that it was
unrealistic to implement all the actions, because the financial support
expected under the PROSIURB Program would be insufficient.
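
The kind of exploration supported by EQUITY can be illustrated with the Python sketch below (ours, not the EQUITY algorithm; the two sectors, package costs and benefits are hypothetical). It enumerates the strategies obtained by choosing one package per sector, computes their total cost and benefit under the additivity assumption, and keeps the cost-benefit efficient ones.

from itertools import product

sectors = {                                    # package -> (cost, benefit %)
    "S1": {"p1.1": (10, 3.0), "p1.2": (25, 6.0)},
    "S2": {"p2.1": (8, 2.0), "p2.2": (20, 7.5)},
}

def enumerate_strategies(sectors):
    names, options = zip(*sectors.items())
    for choice in product(*(opts.items() for opts in options)):
        packages = {s: pkg for s, (pkg, _) in zip(names, choice)}
        cost = sum(cb[0] for _, cb in choice)
        benefit = sum(cb[1] for _, cb in choice)
        yield packages, cost, benefit

def dominates(o, s):
    """o dominates s if it costs no more, gives no less benefit and improves one."""
    return o[1] <= s[1] and o[2] >= s[2] and (o[1] < s[1] or o[2] > s[2])

strategies = list(enumerate_strategies(sectors))
efficient = [s for s in strategies if not any(dominates(o, s) for o in strategies)]
for packages, cost, benefit in efficient:
    print(packages, cost, benefit)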
Then the planning-facilitator asked to select all the "execution" packages
(composed of all the actions already underway). He called this portfolio the
"tendency scenario" for obvious reasons. The table with costs and benefits
in Fig. 12 was then exhibited (packages are numbered, in this table, one unit more
than in Table 4). It showed that the "tendency scenario" represents an
investment of 1.4 × 10⁹ PTE for 18% of benefit, and the facilitator
emphasised that its implementation would imply a significant delay in
strengthening the regional importance of Barcelos and in actions for
landscape protection, according to the record prepared by E&R.
The cost-benefit window in Fig. 12 (where the point "P" represents the
"tendency scenario") was opened for discussion. Everybody seemed
surprised with strategy "B", which requires an investment similar to "P" but
would give an overall benefit of 45%. And, more important, it would
improve the status quo in terms of regional development and landscape
protection. One of the politicians noticed that strategy "B" would however
imply a decrease of investment in sectors S2, S3, S4 and S7, that is, fewer
projects to cover basic needs in infrastructures and urban qualification. The
Mayor commented: "That 45% minus 18% is a huge difference and shows
clearly the price we have been paying for years of so scarce financial
resources!" The discussion was on its way.

Figure 12. The tendency scenario (for each sector, the selected package with its cost and benefit; the cost-benefit window locates the point "P", representing the tendency scenario, together with the reference portfolios "B" and "C")

Note that, in the EQUITY-type analysis, the benefit associated with implementing
a package in a given sector is assumed to be independent of the benefit of any
other package of another sector (see, for details, Kirkwood, 1997;
Bresnick et al., 1997). Therefore, interdependencies between actions
included in packages of different sectors, such as those previously identified
during model-structuring (see Fig. 6), could not be directly taken into
account with the EQUITY DSS. Consequently, each time a strategy was
under analysis, we had to verify manually whether it respected the packages'
interdependencies. It is easy to see that there is no problem for the
"tendency scenario", nor for the alternative portfolio "B" in Fig. 12, but
the same is not necessarily true for other ones. (In particular, for readers
familiar with the "order-of-buy" EQUITY analysis (Barclay, 1988), let us
note that in this case it violated some interdependencies.)
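
Such a verification lends itself to automation; a minimal Python sketch (ours) is given below. The precedence pairs are hypothetical stand-ins for the eight interconnections of Fig. 6: a tuple (k, i, k2, j) is read as "package Pk2.j may only be selected if the strategy includes at least package Pk.i", exploiting the cumulative structure of the packages.

def selected_level(strategy, sector):
    """Priority level of the package chosen for a sector (0 if none is selected)."""
    return strategy.get(sector, 0)

def respects_interdependencies(strategy, constraints):
    for k, i, k2, j in constraints:
        if selected_level(strategy, k2) >= j and selected_level(strategy, k) < i:
            return False
    return True

constraints = [("S1", 2, "S3", 3), ("S2", 1, "S7", 4)]     # hypothetical
strategy = {"S1": 1, "S2": 2, "S3": 3, "S7": 2}            # hypothetical choice
print(respects_interdependencies(strategy, constraints))   # False: S3 needs S1 >= 2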
The facilitators then suggested that the participants decide upon the
minimally acceptable package, as a threshold of activity, in each sector.
Using the EQUITY model, several strategies were generated and their pros
and cons debated. After almost one hour of intense discussion, the Mayor
suggested that portfolio "P" in Fig. 13 was a good compromise between the
different viewpoints and arguments presented. This so-called "Mayor's
scenario" requires a total investment of 10.1 * 109 PTE and achieves 81.7 %
of benefit.

Figure 13. EQUITY analysis of the strategy proposed by the Mayor (the proposed portfolio "P" achieves 81.7% of benefit; the efficient portfolio "B", of similar cost, achieves 94.3%)

In face of the "inefficiency" of the strategy, revealed by the EQUITY


analysis, the group started comparing the differences between it and the
efficient portfolio "B" in Fig. 13, which has a similar cost but increases the
benefit (94.3% against 81.7%). However, it was found that this alternative
portfolio would violate a consensual decision previously taken by the group:
to implement all the urban qualification actions (sector S7). A participant
then suggested selecting from each of the two portfolios under analysis ("P"
and "B" in Fig. 13) the packages with the highest priorities. This gave rise to
the so-called "enveloping strategy" (note in Fig. 14 that this strategy is
almost efficient) with 96.7% of benefit, 15% more than the Mayor's strategy
but also requiring about 2 × 10⁹ PTE more of investment, therefore only
feasible with an optimistic financial support from the Central Government.
A commitment to action was finally reached: to present to the Central
Government a Strategic Plan based on the "enveloping strategy." Even if
this strategy involves the request for unrealistic financial support, the quality
of the decision aid process would permit a very robust justification of the
Plan. Moreover, the politicians agreed on keeping the "Mayor's strategy" as
the one to be implemented in a first phase. Then the Mayor closed the
meeting, expressing his satisfaction to the facilitators for their success in
helping to reach a solution accepted by everybody.

Figure 14. The enveloping strategy (the packages selected in each sector and the position of this almost efficient strategy in the cost-benefit window)

5. Conclusion
The Multicriteria Approach for Strategic Planning (MASP) followed in
Barcelos' strategic planning differs from the traditional planning models in
that it allowed us to deal both with the dynamic nature of the process and its
uncertainties, and with qualitative and subjective aspects related to actors'
value systems. Careful structuring was essential to identify strategic
objectives and actions. The requisite evaluation model built in direct
interaction with politicians allowed the appraisal of actions' attractiveness
in terms of the degree to which the objectives are achieved. Above all, the
process enabled the final choice to emerge naturally from the flow of the
interaction, and offered the politicians a well founded and justified strategy
for the future.

The success of the case rests mainly on the constructive nature of MASP:
it allowed planners to work together with politicians and to develop a shared
understanding of the issues, a sense of common purpose and a mutual
commitment to action.
Designing and implementing MASP successfully was, ultimately,
nothing but putting into practice, in a context of strategic town planning,
what Bernard Roy has been for so long wisely teaching us to be the decision
aid activity:
Decision aiding is the activity of the person who, through the use of
explicit but not necessarily completely formalized models, helps obtain
elements of responses to the questions posed by a stakeholder of a decision
process. These elements work towards clarifying the decision and usually
towards recommending, or simply favouring, a behaviour that will increase
the consistency between the evolution of the process and this stakeholder's
objective and value system. (Roy, 1996, p. 10.)

References
Bana e Costa, C.A. (ed.) (1990), Readings in Multiple Criteria Decision Aid, Springer-
Verlag, Berlin.
Bana e Costa, C.A. (1992), Structuration, Construction et Exploitation d'un Modèle
Multicritère d'Aide à la Décision, PhD thesis, Technical University of Lisbon.
Bana e Costa, C.A. (forthcoming), "The use of multicriteria decision analysis to support the
search for less conflicting policy options in a multi-actor context: case-study", Journal of
Multi-Criteria Decision Analysis - Special Issue: Multi-Criteria Decision Analysis and
Environmental Management.
Bana e Costa, C.A., Costa-Lobo, M.L. (1999), "How to help 'jumping into the darkness'?",
Journal of Multi-Criteria Decision Analysis, 8, 1 (12-14).
Bana e Costa, C.A., Ensslin, L., Correa, E.C., Vansnick, J.C. (1999a), "Decision support
systems in action: Integrated application in a multicriteria decision aid process", European
Journal of Operational Research, 113,2 (315-335).
Bana e Costa, C.A., Ensslin, L., Correa, E.C., Vansnick, J.C. (1999b), "Mapping critical
factors for the survival of firms: A case-study in the Brazilian textile industry", in G.
Kersten, Z. Mikolajuk, A. Yeh (eds.), Decision Support Systems for Sustainable
Development, Kluwer Academic Publishers, Dordrecht (197-213).
Bana e Costa, C.A., Vansnick, J.C. (1997), "Applications of the MACBETH approach in the
framework of an additive aggregation model", Journal of Multi-Criteria Decision Analysis,
6, 2 (107-114).
Bana e Costa, C.A., Vansnick, J.C. (1999), "The MACBETH approach: Basic ideas, software
and an application" in N. Meskens, M. Roubens (eds.), Advances in Decision Analysis,
Kluwer Academic Publishers, Dordrecht (131-157).
Bana e Costa, C.A., Vansnick, J.C. (2000), "Cardinal value measurement with MACBETH",
in S.H. Zanakis, G. Doukidis, C. Zopounidis (eds.), Decision Making: Recent
Developments and World-wide Applications, Kluwer Academic Publishers, Dordrecht
(317-329).

Banxia Software Ltd (1997), Decision Explorer User Manual, Glasgow.


Barclay, S. (1988), A User's Manual to EQUITY, London School of Economics, London.
Belton, V. (1999), "Multi-criteria problem structuring and analysis in a value theory
framework", in T. Gal, T. Stewart, T. Hanne (eds.), Multicriteria Decision Making,
Advances in MCDM - Models, Algorithms, Theory, and Applications, Kluwer Academic
Publishers, Dordrecht (12-1-12-32).
Belton, V., Ackermann, F., Shepherd, I. (1997), "Integrated support from problem structuring
through to alternative evaluation using COPE and V.I.S.A", Journal of Multiple Criteria
Decision Analysis, 6, 3 (115-130).
Bresnick, T.A., Buede, D.M., Pisani, A.A., Smith, L.L., Wood, B.B. (1997), "Airborne and
space-borne reconnaissance force mixes: a Decision Analysis approach", Military
Operations Research, 3,4 (65-78).
Costa-Lobo, M.L. (1997), "Sharing responsibilities", presented at the Joint Seminar
IsoCaRP/AESOP 'Planning for the 3rd Millennium: From Knowledge to Action', Ascona,
Switzerland.
Eden, C., Ackermann, F. (1998), Making Strategy: The Journey of Strategic Management,
SAGE Publications, London.
Eden, C., Radford, J. (eds.) (1990), Tackling Strategic Problems: The Role of Group Decision
Support, SAGE Publications, London.
Filiatre, J.P. (1993), "La planification strategique regionale: quelques concepts
methodologiques", Revue d'Economie Regionale et Urbaine, I (161-168).
Friend, J. (1994), STRAD, the Strategic Adviser (User's Manual), Stradsplan Ltd, Sheffield.
Friend, J., Hickling, A. (1997), Planning under Pressure: The Strategic Choice Approach (2nd
Edition), Butterworth-Heinemann, Oxford.
Goodwin, P., Wright, G. (1998), Decision Analysis for Management Judgement (2nd
Edition), John Wiley & Sons, Chichester.
Hickling, A. (1990), "'Decision spaces': a scenario about designing appropriate rooms for
group decision management", in C. Eden, J. Radford (eds.), Tackling Strategic Problems:
The Role of Group Decision Support, SAGE Publications, London (169-177).
Keeney, R.L. (1992), Value-Focused Thinking: A Path to Creative Decisionmaking, Harvard
University Press, Cambridge, MA.
Keeney, R.L. (1994), "Creativity in decision making with value-focused thinking", Sloan
Management Review (33-41).
Kirkwood, C. (1997), Strategic Decision Making: Multiobjective Decision Analysis with
Spreadsheets, Duxbury, Belmont, CA.
Lorange, P. (1980), Corporate Planning: An Executive Viewpoint, Prentice Hall, New York.
Midgley, G. (1997), "Mixing methods: Developing systemic intervention", in J. Mingers, A.
Gill (eds.), Multimethodology: The Theory and Practice of Combining Management
Science Methodologies, John Wiley & Sons, Chichester (249-290).
Mingers, J., Brocklesby, J. (1997), "Multimethodology: towards a framework for mixing
methodologies", OMEGA, 25, 5 (489-509).
Mingers, J., Gill, A. (eds.) (1997), Multimethodology: The Theory and Practice of Combining
Management Science Methodologies, John Wiley & Sons, Chichester.
Phillips, L.D. (1984a), "A theory of requisite decision models", Acta Psychologica, 56
(29-48).
Phillips, L.D. (1984b), "Decision support for managers", in H. Otway, M. Peltu (eds.), The
Managerial Challenge of new Office Technology, Butterworths, London (80-98).
Phillips, L.D. (1990), "Decision analysis for group decision support", in C. Eden, J. Radford
(eds.), Tackling Strategic Problems, SAGE Publications, London (142-150).
Phillips, L.D., Phillips, M.C. (1993), "Facilitated work groups: Theory and practice", Journal
of the Operational Research Society, 44, 6 (533-549).
Rosenhead, J. (ed.) (1989), Rational Analysis for a Problematic World: Problem Structuring
Methods for Complexity, Uncertainty and Conflict, John Wiley & Sons, Chichester.
Roy, B. (1996), Multicriteria Methodology for Decision Aiding, Kluwer Academic
Publishers, Dordrecht.
Roy, B. (1999), "Decision-aiding today: What should we expect?", in T. Gal, T. Stewart, T.
Hanne (eds.), Multicriteria Decision Making, Advances in MCDM - Models, Algorithms,
Theory, and Applications, Kluwer Academic Publishers, Dordrecht (1-1-1-35).
Schein, E.H. (1999), Process Consultation Revisited - Building the Helping Relationship,
Addison-Wesley, Reading, MA.
Thompson Jr., A.A., Strickland III, A.J. (1999), Strategic Management: Concepts and Cases
(Eleventh Edition), Irwin/McGraw-Hill, Boston.
Vansnick, J.C. (1990), "Measurement theory and decision aid", in C.A. Bana e Costa (ed.),
Readings in Multiple Criteria Decision Aid, Springer-Verlag, Berlin (81-100).
Watson, S.R., Buede, D.M. (1987), Decision Synthesis: The Principles and Practice of
Decision Analysis, Cambridge University Press, Cambridge.
Wenstøp, F., Carlsen, A.J. (1998), "Using decision panels to evaluate hydropower
development projects" in E. Beinat, P. Nijkamp (eds.), Multicriteria Analysis for Land-
Use Management, Kluwer Academic Publishers, Dordrecht (179-195).
Wenstøp, F., Carlsen, A.J., Bergland, O., Magnus, P. (1997), "Valuation of environmental
goods with expert panels", in J. Climaco (ed.), Multicriteria Analysis, Springer-Verlag,
Berlin (539-548).
von Winterfeldt, D., Edwards, W. (1986), Decision Analysis and Behavioral Research,
Cambridge University Press, Cambridge.
MEASURING CUSTOMER SATISFACTION
FOR VARIOUS SERVICES USING
MULTICRITERIA ANALYSIS

Yannis Siskos
Technical University of Crete, Greece
siskos@hercules.ergasya.tuc.gr

Evangelos Grigoroudis
Technical University of Crete, Greece
dsslab@dias.ergasya.tuc.gr

Abstract: Quality evaluation and customer satisfaction measurement is a necessary
condition for applying continuous improvement and total quality management
philosophies. This justifies the need for developing modern operational
research and management tools that are powerful enough to analyse
customer satisfaction in detail. The original applications presented in this
paper implement the MUSA method, a preference disaggregation model
following the principles of ordinal regression analysis. These applications
concern customer satisfaction surveys from both the public and the private
sector, and they have been selected so as to indicate the contribution
of multicriteria analysis to the quality evaluation problem. Furthermore, the
presented analyses demonstrate in practice the implementation process of
satisfaction measurement projects in different types of business organisations.

Key words: Customer satisfaction; Preference disaggregation; Ordinal regression; Multicriteria analysis

1. Introduction
Customer satisfaction is one of the most important issues concerning
business organisations of all types, which is justified by the customer-
orientation philosophy and the main principles of continuous improvement
of modern enterprises. For this reason, customer satisfaction should be
measured and translated into a number of measurable parameters. Customer
satisfaction measurement may be considered as the most reliable feedback,
considering that it provides in an effective, direct, meaningful and objective
way the clients' preferences and expectations. In this way, customer
satisfaction is a baseline standard of performance and a possible standard of
excellence for any business organisation (Gerson, 1993).
The aim of this paper is to present original customer satisfaction
evaluation projects in different business organisations from the public and
the private sector. The objectives of the customer satisfaction surveys are
focused on the assessment of the critical satisfaction dimensions, by means
of qualitative questions, and the determination of customer groups with
distinctive preferences and expectations.
The methodological approach is based on the principles of multicriteria
modelling, while the preference disaggregation MUSA (MUlticriteria
Satisfaction Analysis) method is used for data analysis and interpretation.
The paper consists of six sections. Section 2 is devoted to the
contribution of multicriteria analysis to the customer satisfaction evaluation
problem, while an analytical presentation of the MUSA method is discussed
in section 3. The next two sections present five original customer satisfaction
surveys in public services (post office, university department) and the private
sector (mobile phone service provider, airline company, and fast food
company). Finally, section 6 presents some concluding remarks, as well as
future research in the context of the proposed methodological approach.

2. Customer satisfaction and multicriteria analysis


Although extensive research has defined several alternative approaches
for the customer satisfaction evaluation problem, all these proposed models
and techniques, so far, adopt the following main principles (Grigoroudis,
1999):
a) The data of the problem are based on the customers' judgements and
should be directly collected from them.
b) Customer satisfaction measurement is a multivariate evaluation problem
given that customer's global satisfaction depends on a set of variables
representing service characteristic dimensions.
c) Usually, an additive formula is used in order to aggregate partial
evaluations in a global satisfaction measure.
Based on these assumptions the customer satisfaction evaluation problem
can be formulated in the context of multicriteria analysis, assuming that
client's global satisfaction depends on a set of criteria or variables
representing service characteristic dimensions (Figure 1).
Figure 1. Aggregation of customer's judgements

The preference disaggregation MUSA method is an ordinal regression-
based approach (Jacquet-Lagreze and Siskos, 1982; Siskos, 1985; Siskos and
Yannacopoulos, 1985) in the field of multicriteria analysis. The method is
used for the assessment of a set of marginal satisfaction functions in such a
way that the global satisfaction criterion becomes as consistent as possible
with customer's judgements. Thus, the main objective of the MUSA method
is the aggregation of individual judgements into a collective value function.
The MUSA method assesses global and partial satisfaction functions Y*
and Xi* respectively, given customers' judgements Y and Xi (for the i-th
criterion). The ordinal regression analysis equation has the following form:

\[ Y^{*} = \sum_{i=1}^{n} b_i X_i^{*} \tag{1} \]

where the value functions Y* and Xi* are normalised in the interval
[0, 100], n is the number of criteria, and bi is a positive weight of the i-th
criterion.
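As a purely illustrative example (the weights and partial values below are invented and do not come from the surveys of Sections 4-5), consider n = 2 criteria with weights b1 = 0.6 and b2 = 0.4; a customer whose partial value functions give X1* = 80 and X2* = 50 obtains, according to equation (1), a global value of

\[ Y^{*} = 0.6 \cdot 80 + 0.4 \cdot 50 = 68. \]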
In several cases, as presented in Sections 4-5, it is useful to assume a
tree-like (hierarchical) structure of criteria, also referred to as a "value tree" or
"value hierarchy" (Keeney and Raiffa, 1976; Keeney, 1996; Kirkwood,
1997).

3. The MUSA method


3.1 Basic inference procedure
The MUSA method proposed by Grigoroudis and Siskos (2001) infers an
additive collective value function Y* and a set of partial satisfaction
functions Xi*. The main objective of the method is to achieve the maximum
consistency between the value function Y* and the customers' judgements
Y. Based on the modelling approach presented in the previous section and
introducing a double-error variable (see Figure 2), the ordinal regression
equation becomes as follows:

\[ \tilde{Y}^{*} = \sum_{i=1}^{n} b_i X_i^{*} - \sigma^{+} + \sigma^{-} \tag{2} \]

where Ỹ* is the estimation of the global value function Y*, and σ+ and σ-
are the overestimation and the underestimation errors, respectively.

Figure 2. Error variables for the j-th customer

In order to reduce the size of the mathematical program, removing the
monotonicity constraints for Y* and the Xi*, the following transformation
equations are used:

\[ z_m = y^{*m+1} - y^{*m} \quad \text{for } m = 1, 2, \ldots, \alpha-1 \tag{3} \]
\[ w_{ik} = b_i x_i^{*k+1} - b_i x_i^{*k} \quad \text{for } k = 1, 2, \ldots, \alpha_i-1 \text{ and } i = 1, 2, \ldots, n \]

where y*^m and x_i*^k denote the values of the global and partial value functions at the
m-th and k-th satisfaction level, and α and αi are the numbers of levels of the
corresponding ordinal scales.

According to the aforementioned definitions and assumptions, the basic
estimation model can be written in a linear program formulation, as follows:

\[ [\min] \; F = \sum_{j=1}^{M} \left( \sigma_j^{+} + \sigma_j^{-} \right) \]
subject to
\[ \sum_{i=1}^{n} \sum_{k=1}^{x_i^j - 1} w_{ik} - \sum_{m=1}^{y^j - 1} z_m - \sigma_j^{+} + \sigma_j^{-} = 0 \quad \text{for } j = 1, 2, \ldots, M \tag{4} \]
\[ \sum_{m=1}^{\alpha - 1} z_m = 100, \qquad \sum_{i=1}^{n} \sum_{k=1}^{\alpha_i - 1} w_{ik} = 100 \]
\[ z_m \geq 0, \; w_{ik} \geq 0 \quad \forall\, m, i, k \]
\[ \sigma_j^{+} \geq 0, \; \sigma_j^{-} \geq 0 \quad \text{for } j = 1, 2, \ldots, M \]

where M is the size of the customer sample, n is the number of criteria, and
x_i^j, y^j are the levels on which the variables Xi and Y are estimated for the j-th customer.
The preference disaggregation methodology also includes a post-
optimality analysis stage in order to overcome the problem of model
stability. The final solution is obtained by exploring the polyhedron of
multiple or near optimal solutions, which is generated by the constraints of
the previous linear program. This solution is calculated by n linear programs
(equal to the number of criteria) of the following form:

\[ [\max] \; F' = \sum_{k=1}^{\alpha_i - 1} w_{ik} \quad \text{for } i = 1, 2, \ldots, n \]
under the constraints
\[ F \leq F^{*} + \varepsilon \tag{5} \]
all the constraints of LP (4)

where ε is a small percentage of F*, the optimal value of the objective function of LP (4).


The average of the solutions given by the n LPs (5) may be taken as the
final solution. In case of non-stability, this average solution is less
representative, due to the large variation among the solutions of LPs (5). A
more detailed discussion about post-optimality analysis in ordinal regression
modelling is given in Jacquet-Lagreze and Siskos (1982).
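To make the above formulation concrete, the following minimal sketch sets up and solves LP (4) for a small hypothetical data set; the scale sizes, criteria and customers' judgements are invented for illustration and do not come from the surveys of Sections 4-5, SciPy's general-purpose linprog routine is used as the solver, and the post-optimality programs (5) are omitted.

    # Minimal sketch of the basic MUSA estimation LP (4); all data are hypothetical.
    import numpy as np
    from scipy.optimize import linprog

    alpha = 5                              # number of levels of the global satisfaction scale
    alpha_i = [5, 5, 5]                    # number of levels per criterion scale
    n = len(alpha_i)

    # Hypothetical judgements: (global level, [level per criterion]), levels coded 1..alpha.
    judgements = [(4, [4, 3, 5]), (2, [2, 2, 3]), (5, [5, 4, 5]), (3, [3, 4, 2])]
    M = len(judgements)

    # Variable vector: z_1..z_{alpha-1}, then all w_ik, then (sigma_j^+, sigma_j^-) per customer.
    nz = alpha - 1
    nw = sum(a - 1 for a in alpha_i)
    w_offset = [nz + sum(a - 1 for a in alpha_i[:i]) for i in range(n)]
    nvar = nz + nw + 2 * M

    c = np.zeros(nvar)
    c[nz + nw:] = 1.0                      # minimise the sum of all error variables

    A_eq, b_eq = [], []
    for j, (y_j, x_j) in enumerate(judgements):
        row = np.zeros(nvar)
        for i, level in enumerate(x_j):
            row[w_offset[i]: w_offset[i] + level - 1] = 1.0   # sum of w_ik up to x_i^j - 1
        row[:y_j - 1] = -1.0                                   # minus sum of z_m up to y^j - 1
        row[nz + nw + 2 * j] = -1.0                            # -sigma_j^+
        row[nz + nw + 2 * j + 1] = 1.0                         # +sigma_j^-
        A_eq.append(row); b_eq.append(0.0)

    row = np.zeros(nvar); row[:nz] = 1.0                       # normalisation of Y*
    A_eq.append(row); b_eq.append(100.0)
    row = np.zeros(nvar); row[nz:nz + nw] = 1.0                # normalisation of the Xi*
    A_eq.append(row); b_eq.append(100.0)

    res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=[(0, None)] * nvar, method="highs")
    w = res.x[nz:nz + nw]
    weights = [w[w_offset[i] - nz: w_offset[i] - nz + alpha_i[i] - 1].sum() / 100
               for i in range(n)]
    print("criteria weights b_i:", np.round(weights, 3))
    print("global value steps z_m:", np.round(res.x[:nz], 2))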
3.2 Satisfaction indices


The assessment of a performance norm may be very useful in customer
satisfaction analysis. The average global and partial satisfaction indices are
used for this purpose and are assessed through the following equations:

\[ S = \frac{1}{100} \sum_{m=1}^{\alpha} p^{m}\, y^{*m} \tag{6} \]
\[ S_i = \frac{1}{100} \sum_{k=1}^{\alpha_i} p_i^{k}\, x_i^{*k} \quad \text{for } i = 1, 2, \ldots, n \]

where S and Si are the average global and partial satisfaction indices, and p^m
and p_i^k are the frequencies of customers belonging to the y^m and x_i^k
satisfaction levels, respectively.
It can be easily observed in equation (6) that the average satisfaction
indices are basically the mean value of the global and partial satisfaction
functions. So, these indices give the average level of satisfaction, both
globally and per criterion.

3.3 Demanding indices


The shape of global and partial satisfaction functions can indicate
customers' demanding level. The average global and partial demanding
indices, D and Di respectively, are assessed through the following equations
(see Figure 3):

\[ D = \frac{\sum_{m=1}^{\alpha-1} \left( \frac{100(m-1)}{\alpha-1} - y^{*m} \right)}{\sum_{m=1}^{\alpha-1} \frac{100(m-1)}{\alpha-1}} \quad \text{for } \alpha > 2 \tag{7} \]
\[ D_i = \frac{\sum_{k=1}^{\alpha_i-1} \left( \frac{100(k-1)}{\alpha_i-1} - x_i^{*k} \right)}{\sum_{k=1}^{\alpha_i-1} \frac{100(k-1)}{\alpha_i-1}} \quad \text{for } \alpha_i > 2 \text{ and } i = 1, 2, \ldots, n \]

where α and αi are the numbers of satisfaction levels in the global and partial
satisfaction functions, respectively.
It should be mentioned that these indices are normalised in the interval
[-1, 1] while the following possible cases hold:
a) D = 1 or Di = 1: customers have the highest demanding index.
b) D = 0 or Di = 0: this case refers to "neutral" customers.
c) D = -1 or Di = -1: customers have the lowest demanding index.

Figure 3. Calculating average demanding indices

These indices represent the average deviation of the estimated value
functions from a "normal" (linear) function. The average demanding indices
can be used for customer behaviour analysis, and they can also indicate the
extent of company's improvement efforts: the higher the value of the
demanding index, the more the satisfaction level should be improved in
order to fulfil customers' expectations.
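As a small numerical illustration of equations (6) and (7), the sketch below computes the average satisfaction index and the demanding index for a single five-level scale; the value-function points and the level frequencies are hypothetical.

    # Illustrative computation of the satisfaction index (6) and demanding index (7)
    # for one 5-level ordinal scale; all figures are hypothetical.
    import numpy as np

    alpha = 5
    y_star = np.array([0.0, 15.0, 35.0, 70.0, 100.0])   # estimated value function y*^m
    p = np.array([0.05, 0.10, 0.25, 0.40, 0.20])         # observed frequencies p^m

    S = (p * y_star).sum() / 100                          # equation (6)

    m = np.arange(1, alpha)                               # m = 1, ..., alpha-1
    linear = 100 * (m - 1) / (alpha - 1)                  # the "neutral" (linear) value function
    D = (linear - y_star[:-1]).sum() / linear.sum()       # equation (7)

    print(f"average satisfaction index S = {100 * S:.1f}%")
    print(f"average demanding index D = {100 * D:+.1f}%")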

3.4 Action diagrams


Combining weights and average satisfaction indices, a series of action
diagrams can be developed (Figure 4). These diagrams indicate the strong
and the weak points of customer satisfaction, and define the required
improvement efforts. Each of these maps is divided into quadrants,
according to performance (high/low) and importance (high/low) that may be
used to classify actions:
a) Status quo (low performance and low importance): Generally, no action
is required.
b) Leverage opportunity (high performance/high importance): These areas
can be used as an advantage against competition.
c) Transfer resources (high performance/low importance): Company's
resources may be better used elsewhere.
d) Action opportunity (low performance/high importance): These are the
criteria that need attention.
Figure 4. Action diagram (Customers Satisfaction Council, 1995)

In several cases, it is useful to assess the relative action diagrams, which
use the relative variables bi' and Si' in order to overcome the assessment
problem of the cut-off level for the importance and the performance axes.
The normalised variables bi' and Si' are assessed as follows:

for i = 1, 2, ..., n (8)

where b̄ and S̄ are the mean values of the criteria weights and the average
satisfaction indices, respectively.
This way, the cut-off level for the axes is recalculated as the centroid of all
points in the diagram. This type of diagram is very useful if points are
concentrated in a small area because of the low variation that appears for the
average satisfaction indices (e.g. the case of a highly competitive market).
These diagrams are also referred to as decision, strategic, perceptual, and
performance-importance maps (Dutka, 1995; Customers Satisfaction
Council, 1995; Naumann and Giel, 1995), or gap analysis (Hill, 1996;
Woodruff and Gardial, 1996; Vavra, 1997), and they are similar to SWOT
analysis.
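The quadrant logic of the (relative) action diagram is easy to reproduce once the weights and the average satisfaction indices are available; in the toy classification below the criteria, weights and indices are invented, and the cut-off levels are simply taken at the centroid (mean weight, mean satisfaction), in line with the idea described above, rather than through the exact normalisation of equation (8).

    # Toy classification of criteria into the four action-diagram quadrants,
    # using the centroid of all points as cut-off levels; all figures are hypothetical.
    criteria = {            # name: (weight b_i, average satisfaction index S_i in %)
        "criterion 1": (0.40, 68.0),
        "criterion 2": (0.30, 90.0),
        "criterion 3": (0.20, 62.0),
        "criterion 4": (0.10, 88.0),
    }

    b_mean = sum(b for b, _ in criteria.values()) / len(criteria)
    s_mean = sum(s for _, s in criteria.values()) / len(criteria)

    for name, (b, s) in criteria.items():
        if b >= b_mean and s >= s_mean:
            quadrant = "Leverage opportunity"
        elif b >= b_mean:
            quadrant = "Action opportunity"      # important but low performance
        elif s >= s_mean:
            quadrant = "Transfer resources"      # high performance, low importance
        else:
            quadrant = "Status quo"
        print(f"{name:12s} -> {quadrant}")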
A detailed presentation of the mathematical development of the MUSA
method may be found in Grigoroudis and Siskos (2001) and Siskos et al.
(1998).
4. Evaluating customer satisfaction in public services


4.1 Application to a post office
4.1.1 Satisfaction criteria
The assessment of a consistent family of criteria representing customers'
satisfaction dimensions is one of the most important stages of the implemented
methodology. This assessment can be achieved through an extensive
interactive procedure between the analyst and the decision-maker
(company). In any case, the reliability of the set of criteria/subcriteria has to
be tested on a small indicative set of customers.
The hierarchical structure of customers' satisfaction dimensions is
presented in Figure 5 and indicates the set of criteria and subcriteria used in
this survey. The main satisfaction criteria include:

Figure 5. Hierarchical structure of satisfaction dimensions

personnel (skills and knowledge, responsiveness, behaviour, etc),


product/service (pricing, variety of provided services),
access (location and internal disposition of stores, working hours, waiting
time, etc), and
credibility (on time delivery, confidence and responsibility, etc).
4.1.2 Global satisfaction analysis


Company's customers seem to be quite satisfied with the provided
service, given that the average global satisfaction index is almost 90%.
Moreover, criteria satisfaction analysis shows that customers are quite
satisfied according to the total set of criteria (average satisfaction indices
80-91%). According to the results presented in Table 1 and Figure 6, the
following remarks can be made:

Table 1. Global satisfaction results

Criteria Weight Average satisfaction index Average demanding index


Personnel 5.48% 91.07% -27.03%
Product/Service 5.73% 88.90% -30.19%
Access 80.96% 89.82% -90.12%
Credibility 7.83% 80.91% -48.88%
Global satisfaction 89.20% -76.41%

Figure 6. Relative Action diagram (global satisfaction level)

The "Access" criterion is the most important one, with a significant


weight of almost 81 %. This can be justified by the fact that the
satisfaction survey was not conducted to the total clientele of the
company, but it was oriented only to the customers visiting the stores.
Although the average satisfaction indices for all criteria are relatively
high, it seems that there is a significant potential for further
improvement, given the highly competitive conditions in the market (new
private companies offering express mail services).
Customers do not seem to be demanding according to the total set of
criteria.
The action diagram shows that there are no critical satisfaction
dimensions requiring immediate improvement efforts. However, if the
company wishes to create additional advantages against competition, the
credibility criterion should be improved.

4.1.3 Criteria satisfaction analysis


The criteria satisfaction analysis confirms the conclusions of the previous
section. In general, the company's performance is quite high in almost all
satisfaction dimensions, which are considered important by the customers.
This fact justifies the satisfactory level of the distinctive satisfaction indices.
On the other hand, however, there are several areas where the company has
significant margins for improvement.
The detailed results of Table 2 indicate the following points:
Company's competitive advantages seem to be personnel's
responsiveness and behaviour, variety and prices of provided products
and services, waiting time and company's responsibility.

Table 2. Criteria satisfaction analysis

Subcriteria               Weight   Average satisfaction index   Average demanding index
Skills/Knowledge           8.5%          74.6%                       -53.2%
Responsiveness            26.0%          92.5%                       -64.2%
Behaviour                 39.5%          93.2%                       -21.3%
Understanding             26.0%          91.8%                       -69.2%
Variety                   50.0%          86.2%                       -92.0%
Pricing                   50.0%          91.6%                       -92.0%
Location of stores        10.0%          61.8%                       -20.0%
Internal disposition       4.6%          38.9%                          •
Working hours              2.5%          71.5%                       -13.3%
Service system troubles    3.2%          88.5%                          •
Waiting time              79.7%          96.9%                       -89.9%
Responsibility            77.1%          80.5%                       -94.8%
On time delivery          14.1%          89.4%                       -71.5%
Delivery frequency         8.8%          70.9%                       -54.4%
• criteria with a 2-level ordinal satisfaction scale

Although "Access" criterion is the most important strength of the post


office, there are several aspects of this particular satisfaction dimension
468 AIDING DECISIONS WITH MULTIPLE CRITERIA

with large improvement margins (like working hours, location and


internal disposition of stores). Customers seem less demanding in these
subcriteria, and thus, improvement efforts may have an immediate
impact.

4.2 Application to a university department


4.2.1 Criteria assessment
The case of measuring satisfaction in a university department can also be
considered as an internal service quality evaluation process (Siskos et al.,
2001). This application refers to a public and business administration
department. Although it is focused on students' satisfaction, department's
global evaluation should be oriented to all academic personnel (professors,
administrative personnel, etc), as well as to external evaluators (business
organisations, community, etc).
The main set of students' satisfaction criteria used in this particular
survey consists of:
1. Academic personnel: this criterion refers to the educational skills and the
knowledge of the academic personnel, their communication and
collaboration with students, as well as the number of professors in the
department.
2. Educational process: this criterion includes all aspects of the educational
process, like provided textbooks and notes, the student evaluation process,
educational approach chosen for each course, etc.
3. Syllabus: this criterion refers mainly to the number of provided
courses, the ability to adapt the course of study to students' needs, etc.
4. Labour market: this criterion refers to the professional placement of
graduates (career services after graduation, adaptation of
courses to labour market needs).
5. Administration: secretariat, academic advisor, administration service
process, etc.
6. Additional services: this criterion includes the additional services that are
provided to the students, like library, tutorial courses, subscription to
journals and Internet services, audio-visual equipment, computer labs,
etc.
Additional analysis has also been conducted, based on a detailed
hierarchical structure of evaluation dimensions proposed by Siskos et al.
(2001) for the case of a university department.
4.2.2 Global satisfaction analysis

The average global satisfaction index is relatively low (61%), mainly
because students are not satisfied with the opportunities offered in the labour
market (26%), the syllabus (26%), and the provided administrative service
(39%), as shown in Table 3.

Table 3. Criteria satisfaction results

Criteria Weight Average satisfaction index


Academic personnel 15% 72%
Educational process 29% 83%
Syllabus 15% 26%
Labour market 13% 26%
Administration 11% 39%
Additional services 17% 77%
Global satisfaction 61%

Additionally, the form of the global satisfaction function indicates that
students are not particularly demanding (Figure 7). On the other hand,
students seem to be quite satisfied according to the criterion of educational
process (83%), which is also the most important satisfaction dimension
(weight 29%). Finally, it should be noted that although the rest of the criteria
have higher satisfaction indices (72-77%) compared to the global satisfaction
level, they appear to have significant improvement margins.

Figure 7. Global satisfaction function (added value curve), plotted over the five-level scale: unsatisfied, moderately satisfied, satisfied, very satisfied, completely satisfied


4.2.3 Segmentation satisfaction analysis


The main aim of this particular analysis is to determine students' clusters
with distinctive preferences and expectations in relation to the total set. The
discriminating variables that have been used for identifying special groups of
students are the year of studies, the sector of studies, and the average grades.
The most important distinctive results relate to the segmentation
according to the year of studies. The results of this analysis are presented in
Tables 4-5, and reveal the following:
Globally, 3rd and 4th year students are very dissatisfied with the
university department. These students are the main reason for the low
global satisfaction level.
1st and 2nd year students are less demanding, and thus they have a
relatively higher average satisfaction index.
The satisfaction level of the 1st year students is higher compared to the
other groups according to almost all of the criteria.
The academic personnel, the syllabus, and the provided administrative
and additional services have the lowest satisfaction level for 3rd and 4th
year students. As students get closer to graduation, they seem to be more
demanding on these particular satisfaction dimensions.

Table 4. Global satisfaction analysis per year of studies

Year of studies    Average satisfaction index    Average demanding index
1st year                   78.6%                        -55.3%
2nd year                   72.1%                        -44.8%
3rd year                   24.5%                         50.3%
4th year                   35.2%                         27.7%

Table 5. Average partial satisfaction indices per year of studies

Criteria               1st year    2nd year    3rd year    4th year


Academic personnel 61% 65% 59% 33%
Educational process 67% 64% 29% 72%
Syllabus 62% 91% 31% 18%
Labour market 91% 26% 25% 21%
Administration 70% 59% 31% 22%
Additional services 73% 66% 13% 40%

All these results can be explained by the way the course of studies is
implemented:
a) During the 1st year, students are basically taught elementary subjects
(mathematics, sociology, etc).
b) At the beginning of the 3rd year, students have to choose the sector of
studies they will follow. This will affect to a great extent all of their subsequent
choices.
Segmentation satisfaction analysis is performed through the
implementation of the MUSA method in each student cluster separately.
For this reason, the fitting and the stability level of the results may vary,
causing a problem of "inconsistency" when trying to compare global with
segmentation analysis results. In this particular application, the problem
mainly concerns the average satisfaction indices, due to the high error level
in the global satisfaction analysis (the global set is less homogeneous than the
segments of students).

5. Evaluating customer satisfaction in the private sector
5.1 Application to a mobile phone service provider
5.1.1 Satisfaction criteria and survey conduct
The implementation of the MUSA method includes a preliminary
customer behavioural analysis in which the assessment of the set of
satisfaction criteria is made, as presented in the previous applications.
In this particular case, customers were asked to evaluate/express their
satisfaction according to the following criteria:
1. Stores (network expansion, location and appearance of stores, etc).
2. Service in stores (personnel, service processes, working hours, waiting
time, etc).
3. Service by the call centre (personnel, service processes, waiting time,
etc).
4. Products/Services (variety, mail phone, customer service, roaming, and
additional info services)
5. Pricing (mobile phone device, fixed rate, prices per type of services, etc).
6. Image (technological excellence, credibility, ability to satisfy future
needs, etc).
7. GSM network (expansion, signal, communication quality, and
disturbances).
8. Customer loyalty services (phone device replacement, lower fixed rates,
etc).
The presented customer satisfaction survey took place in two retail stores
of the business organisation located in different areas. The survey was
conducted during the summer of 2000 on a randomly selected customer sample.
5.1.2 Global satisfaction analysis


The average global satisfaction index is not particularly high (79.1%),
given the highly competitive conditions of the mobile phone service sector.
Moreover, Table 6 shows that customers are quite satisfied according to the
service provided in stores and to the offered loyalty services, while lower
satisfaction indices appear for the rest of the criteria (60%-79%). The most
important criteria seem to be "Loyalty services" (24.9%), "Service in store"
(19.1%), and "Products/Services" (11.4%). This justifies the relatively fair
value of the global satisfaction index. Customers are more satisfied
according to the most important criterion and less satisfied on the
dimensions that seem to play a less important role in their preferences.

Table 6. Global satisfaction results


Criteria    Weight    Average satisfaction index    Average demanding index
Stores 9.1% 74.1% -12.3%
Service in store 19.1% 88.4% -58.0%
Service by the call centre 8.2% 60.6% -2.7%
Products/Services 11.4% 79.2% -29.6%
Pricing 9.1% 67.5% -12.3%
Image 9.1% 74.5% -12.3%
GSM network 9.1% 70.2% -12.3%
Customer loyalty services 24.9% 85.9% -67.8%
Global satisfaction 79.1% -45.7%

Figure 8. Relative action diagram


The action diagram shows that there are no critical satisfaction
dimensions requiring immediate improvement efforts, as presented in Figure
8. However, if the company wishes to create additional advantages against
competition, the criteria with the lowest satisfaction indices should be
improved. These improvement efforts should focus on service by the call
centre, pricing, GSM network, stores, and company's image.

5.1.3 Segmentation satisfaction analysis


The most important results from customer satisfaction segmentation
analysis are mainly focused on the discrimination of the total clientele into
customers having or not having previous experience with other competitors.
As shown in Table 7, experienced users are more satisfied according to
almost all of the criteria set. The higher demanding level that appears in this
particular customer group may confirm this result (Table 8).

Table 7. Average satisfaction indices per segment of customers


Criteria Experienced users Inexperienced users
Stores 75.5% 70.6%
Service in store 84.7% 73.8%
Service by the call centre 63.0% 54.6%
Products/Services 75.0% 72.8%
Pricing 64.1% 68.9%
Image 89.5% 73.2%
GSM network 68.2% 91.0%
Customer loyalty services 70.4% 75.8%
Global satisfaction 79.3% 79.3%

Table 8. Average demanding indices per segment of customers


Criteria Experienced users Inexperienced users
Stores -14.2% -4.4%
Service in store -36.0% -4.42%
Service by the call centre -1.1% 3.0%
Products/Services -14.2% -8.5%
Pricing -7.6% -8.5%
Image -25.5% -8.5%
GSM network -5.2% -77.9%
Customer loyalty services -14.2% -34.5%
Global satisfaction -46.9% -42.5%

The detailed results of the previous segmentation analysis reveal the
following:
- When the company's efforts are oriented to customers with previous
experience from other mobile phone service providers, the advantages
appearing in the "Image" and "Service in stores" criteria should be used.
- On the other hand, the company should take advantage of the "GSM
network" criterion, when its efforts are oriented to new customers with
no experience in mobile phone services.
- In any case, the company's improvement efforts should include "Pricing"
and "Service by the call centre". These criteria present relatively low
satisfaction indices in both customer segments.
The varying level of homogeneity between the global set and the
customer segments also causes in this case an "inconsistency" problem when
trying to compare global with segmentation analysis results (see also §4.2.3).
A detailed discussion on how to deal with possible implementation problems
of the MUSA method (e.g. modifications of the LP formulation) is presented
by Grigoroudis and Siskos (2001).

5.2 The case of an airline company


5.2.1 Preliminary analysis
The application presented in this section refers to a pilot customer
satisfaction survey for an airline company. Passengers on board and
customers visiting the airline's agencies have both participated in this survey.
The set of main satisfaction criteria consists of:
1. Tidiness (delays, booking system, timetable, etc).
2. Service (personnel, service time, waiting time, etc).
3. Pricing (ticket price and special discounts)
4. Credibility (safety of trip, baggage claim, damages, etc).
5. Comfort/Service quality (seats on board, quality of food, additional
services, etc).

5.2.2 Global satisfaction analysis


The main results of the MUSA method are presented in Table 9, from
which the following points arise:
- Globally, customers are quite satisfied with the provided service (global
average satisfaction index 88.6%).
- Nevertheless, based on the high competitive conditions of the market,
there are significant improvement margins for several satisfaction
dimensions.
Table 9. Global satisfaction results

Criteria    Weight    Average satisfaction index    Average demanding index
Tidiness 36.3% 94.2% -89.0%
Service 5.3% 70.6% -22.8%
Pricing 4.4% 58.0% -9.0%
Credibility 6.6% 76.0% -34.3%
Comfort/Service quality 47.3% 90.1% -91.5%
Global satisfaction 88.6% -51.1%

The highest satisfaction indices appear for the criteria of "Tidiness" and
"Comfort/Service quality", 94.2% and 90.1% respectively. Also,
customers seem to give higher importance to these criteria.
The rest of the criteria have a low level of importance for the customers
(4.4-6.6%), while the performance of the company is rather modest
(average satisfaction indices 58-76%).
Regarding the improvement efforts for the airline company, an inspection
of the action diagram (Figure 9) reveals that there is no particularly critical
satisfaction dimension calling for an immediate improvement. Nevertheless,
the improvement priorities should be focused on the criteria with the lowest
satisfaction indices. Assuming that the company has small improvement
margins for the "Price" criterion due to the competition, its efforts should be
focused on the credibility and the provided service.

Figure 9. Relative action diagram


5.2.3 Segmentation satisfaction analysis


The satisfaction analysis in different customer groups has been focused
on the purpose of the trip. It should be noted that this particular customer
satisfaction survey includes only international flights.
According to Tables 10-11, the comparative analysis of the customer
clusters reveals the following:
Passengers travelling for business give significant importance to the
quality of the provided service, while they are not satisfied according to
the company's pricing, service, and credibility. These customers can
also be characterised as frequent users.
On the other hand, passengers travelling for tourism give higher
importance to company's credibility, while they are not satisfied
according to criteria of "Pricing" and "Service". Usually these customers
are not frequent users.
In any case, the lowest satisfaction indices appear for the company's
prices, a fact that justifies the main results presented in the previous section.

Table 10. Criteria weights per purpose of trip

Criteria Business Tourism Personal Other


Tidiness 20.0% 19.2% 51.5% 35.8%
Service 5.2% 4.5% 4.3% 14.7%
Pricing 4.4% 4.2% 4.2% 9.5%
Credibility 5.2% 49.2% 20.0% 20.0%
Comfort/Service quality 65.2% 22.9% 20.0% 20.0%

Table 11. Average satisfaction indices per purpose of trip

Criteria Business Tourism Personal Other


Tidiness 90.00% 92.40% 97.10% 96.00%
Service 71.90% 65.70% 60.90% 93.00%
Pricing 48.60% 48.30% 60.50% 45.40%
Credibility 67.40% 97.20% 83.10% 82.00%
Comfort/Service quality 94.30% 90.50% 89.40% 91.00%
Global satisfaction 89.50% 91.60% 92.60% 92.60%

5.3 Application to a fast food company


5.3.1 Satisfaction criteria
The presented customer satisfaction survey refers to a fast food company,
which uses franchising in order to operate more than 150
restaurants in four countries. The survey was conducted on a randomly
selected customer sample and it took place in three restaurants of the fast
food company located in different towns.
The value hierarchy of customers' satisfaction dimensions presented in
Figure 10 indicates the set of criteria and subcriteria used in the analysis.
The main satisfaction criteria include:
1. Personnel: this criterion includes all the characteristics concerning
personnel (skills and knowledge, responsiveness, friendliness,
communication and collaboration with customers, etc).
2. Product: this criterion refers mainly to the offered products (quality and
quantity of food, variety of dishes, and prices).
3. Service: this criterion refers to the service offered to the customers; it
includes the appearance and the cleanliness of the stores, the waiting time
during busy and non-busy hours, and the service time.
4. Access: the location and number of stores, as well as parking availability
are included in this criterion.

Figure 10. Hierarchical structure of satisfaction dimensions

5.3.2 Global satisfaction analysis


The average global satisfaction index is approximately 90%, while the
company's performance according to the whole set of criteria varies between
86% and 92%. Given the highly competitive conditions of the market, this
performance is not considered particularly high.
The detailed results of Table 12 reveal the following:
The most important criterion, with a significant importance level of
45.2%, is "Product". Customers do not consider important the rest of the
criteria.
The low weight for the "Access" criterion can be explained by the fact
that the main competitors have no better performance in this particular
criterion.

Table 12. Global satisfaction results


Criteria Weight Average satisfaction index Average demanding index
Personnel 22.00% 92.44% -75.20%
Product 45.20% 86.56% -68.61%
Service 25.00% 88.08% -66.21%
Access 10.70% 87.19% -85.01%
Global satisfaction 90.81% -73.20%


Figure 11. Relative action diagram (global satisfaction level)

Combining weights and satisfaction indices, the action diagram can be
formulated, as shown in Figure 11. In this diagram, the "Product" criterion
appears as a critical satisfaction dimension requiring immediate
improvement efforts: it has the lowest average satisfaction index compared
to the rest of the criteria, while it is considered as the most important
criterion by customers.

5.3.3 Criteria satisfaction analysis


The analysis of the partial satisfaction dimensions allows for the
identification of the criteria characteristics that constitute the strong and the
weak points of the company. The detailed results of Table 13 reveal the
following:
Personnel's friendliness constitutes a significant competitive advantage
for the fast food company.
The quality of food appears as one of the strongest points of the
company, although customers do not seem to be satisfied according to the
quantity of food. This result is related to the low satisfaction index
appearing for the "Price" criterion.
Particular attention should be paid to the waiting time during busy hours
and the service time as well. On the other hand, the appearance of the
restaurants seems to be one of the competitive advantages for the fast
food company.
The satisfaction level with respect to the "Access" criterion could have
been higher, if customers were more satisfied according to the provided
parking facilities.

Table 13. Criteria satisfaction results

Subcriteria                      Weight   Average satisfaction index   Average demanding index
Skills/Knowledge                  34.0%          93.10%                      -71.3%
Responsiveness                    15.9%          83.60%                      -62.1%
Friendliness                      50.1%          94.80%                      -82.0%
Quality of food                   49.8%          90.40%                      -84.4%
Quantity of food                  13.3%          77.30%                      -40.9%
Variety (menu)                    25.0%          90.90%                      -68.5%
Prices                            11.9%          71.70%                      -33.7%
Appearance of stores              42.8%          90.80%                      -81.0%
Waiting time (busy hours)          8.5%          68.60%                      -29.8%
Waiting time (non-busy hours)     19.2%          90.90%                      -69.4%
Service time                       8.3%          74.10%                      -28.8%
Cleanliness                       21.2%          93.30%                      -62.7%
Location of stores                87.1%          90.70%                      -93.4%
Number of stores                   6.8%          81.10%                      -42.3%
Parking                            6.1%          43.90%                      -12.9%
6. Conclusions
The original applications presented in this paper illustrate the
implementation of the preference disaggregation MUSA method in several
business organisations from the public and the private sector. The most
important results include:
- the determination of the weak and the strong points of the business
organisation,
- the performance evaluation of the company (globally and per
criteria/subcriteria), and
- the identification of the distinctive critical groups of customers.
The applications show that the MUSA method can measure and analyse
customer satisfaction in a very concrete way, and thus it may be integrated in
any business organisation's total quality approach. Several applications of
the method in original customer satisfaction surveys can be found in
Grigoroudis et al. (1999a, 1999b), Mihelis et al. (2001), and Siskos et al.
(2001). Also, the MUSA method may be used in a similar way to measure
and analyse employee satisfaction (Grigoroudis, 1999). Furthermore,
analysing clients' preferences and expectations is the basic step to evaluate
customer loyalty.
The installation of a permanent customer satisfaction barometer is
considered necessary, given that it allows the establishment of a
benchmarking system (Edosomwan, 1993). Thus, the implementation of the
MUSA method through a period of time can serve the concept of continuous
improvement.
Grigoroudis and Siskos (2001) propose several extensions and future
research regarding the MUSA method. Among others, the comparative
analysis between the results of the MUSA method and the financial indices
(market share, profit, etc) of a business organisation can help the
development of business strategies and the evaluation of the cost of quality.
It should be mentioned that, although customer satisfaction is a necessary
but not a sufficient condition for financial viability, several studies
have shown that there is a significant correlation among satisfaction level,
customer loyalty, and profitability (Dutka, 1995; Naumann and Giel, 1995).

Acknowledgements
The authors are thankful to all the students of the National Technical
University of Athens and the University of Cyprus who helped in conducting
and analysing the customer satisfaction surveys. In particular, the authors would like
to thank P. Apseros, N. Eteokleous, T. Georgiou, L. Giannoudiou,
A. Grigoriou, E. Karekla, M. Konstantinidou, K. Kotsonis, L. Morfitou,


A. Mousa, D. Nikolaou, L. Panagidou, E. Papadopoulou, Y. Papathanasiou,
M. Pimpisii, E. Pissaridi, M. Spirou, G. Stathopoulos, A. Tamani, and
D. Venezis.

References
Customers Satisfaction Council (1995). Customer Satisfaction Assessment Guide, Motorola
University Press.
Dutka A. (1995). AMA Handbook of customer satisfaction: A complete guide to research,
planning and implementation, NTC Business Books, Illinois
Edosomwan J. A. (1993). Customer and market-driven quality management, ASQC Quality
Press, Milwaukee.
Gerson R. F. (1993). Measuring customer satisfaction: A guide to managing quality service,
Crisp Publications, Menlo Park.
Grigoroudis E. (1999). Measuring and analysing satisfaction methodology: A multicriteria
aggregation-disaggregation approach, Ph.D. Thesis, Technical University of Crete,
Department of Production Engineering and Management, Chania (in greek).
Grigoroudis E. and Y. Siskos (2001). Preference disaggregation for measuring and analysing
customer satisfaction: The MUSA method, European Journal of Operational Research (to
appear).
Grigoroudis E., A. Samaras, N. F. Matsatsinis and Y. Siskos (1999a). Preference and
customer satisfaction analysis: An integrated multicriteria decision aid approach,
Proceedings of the 5th Decision Sciences Institute's International Conference on
Integrating Technology & Human Decisions: Global Bridges into the 21st Century,
Athens, Greece, (2), 1350-1352.
Grigoroudis E., Y. Malandrakis, Y. Politis and Y. Siskos (1999b). Customer satisfaction
measurement: An application to the Greek shipping sector, Proceedings of the 5th
Decision Sciences Institute's International Conference on Integrating Technology &
Human Decisions: Global Bridges into the 21st Century, Athens, Greece, (2), 1363-1365.
Hill, N. (1996). Handbook of customer satisfaction measurement, Gower Publishing,
Hampshire.
Jacquet-Lagreze E. and J. Siskos (1982). Assessing a set of additive utility functions for
multicriteria decision-making: The UTA method, European Journal of Operational
Research, (10), 2, 151-164.
Keeney R. L. (1996). Value-focused thinking: A path to creative decision making, Harvard
University Press.
Keeney R. L. and H. Raiffa (1976). Decisions with multiple objectives: Preferences and value
tradeoffs, John Wiley and Sons, New York.
Kirkwood G. W. (1997). Strategic decision making, Duxbury Press, Belmont.
Mihelis G., E. Grigoroudis, Y. Siskos, Y. Politis and Y. Malandrakis (2001). Customer
satisfaction measurement in the private bank sector, European Journal of Operational
Research, (130), 2, 347-360.
Naumann E. and K. Giel (1995). Customer satisfaction measurement and management: Using
the voice of the customer, Thomson Executive Press, Cincinnati.
Siskos J. (1985). Analyse de régression et programmation linéaire, Revue de Statistique
Appliquée, 23 (2), 41-55.
Siskos J. and D. Yannacopoulos (1985). UTASTAR: An ordinal regression method for
building additive value functions, Investigação Operacional, 5 (1), 39-53.
Siskos Y., E. Grigoroudis, C. Zopounidis and O. Saurais (1998). Measuring customer
satisfaction using a collective preference disaggregation model, Journal of Global
Optimization, 12, 175-195.
Siskos Y., Y. Politis and G. Kazantzi (2001). Multicriteria methodology for the evaluation of
higher education systems: The case of an engineering department, HELORS Journal (to
appear).
Vavra T. G. (1997). Improving your measurement of customer satisfaction: A guide to
creating, conducting, analyzing, and reporting customer satisfaction measurement
programs, ASQC Quality Press, Milwaukee.
Woodruff R. B. and S. F. Gardial (1996). Know your customer: New approaches to
understanding customer value and satisfaction, Blackwell Publishers, Oxford.
MANAGEMENT OF THE FUTURE
A System dynamics and MCDA approach

Jean-Pierre Brans
Vrije Universiteit Brussel - Center for Statistics and OR, Belgium
jpbrans@vub.ac.be

Pierre L. Kunsch
Vrije Universiteit Brussel - Center for Statistics and OR, Belgium
p.kunsch@nirond.be

Bertrand Mareschal
Universite Libre de Bruxelles - Mathematiques de la Gestion, Belgium
bmaresc@ulb.ac.be

Abstract In this paper we propose a Decision-Aid procedure to pilot Socio-
Economic systems. System Dynamics is used for modelling the com-
plex structure of such systems and the MCDA PROMETHEE-GAIA
procedure to select appropriate management strategies. SD long term
forecasts provide a view on the possible evolution of the system; they
are used to validate strategies and to define possible objectives. Short
term forecasts are used to control, to monitor and to pilot the system
as time evolves. Selected strategies are never fixed forever;
they are adapted as a function of the observed evolution in order to pro-
vide the best possible management. A watchdog procedure makes it
possible to reconsider the management of the system in case of chaotic
evolution or when a catastrophic event emerges. The computer programme
VENSIM is used for SD modelling, and PROMCALC and DECISION LAB
for the MCDA selection of management strategies. This methodology is now
applied in various economic fields. The obtained results are extremely
promising.

Keywords: Socio-economic systems; Decision aid; System dynamics; Complex and
chaotic structures; MCDA; PROMETHEE-GAIA; Reengineering
To BERNARD ROY, our friend, our guide, our mentor.
With all our gratitude and our admiration.

1. Introduction - The Past, The Present, The Future
The methodology proposed in this paper is applicable to the manage-
ment of many socio-economic processes, in the micro- as well as in the
macroeconomic fields (Figure 1). The management of such systems is
extremely difficult for several reasons. The limits of a system are never
clearly defined. In a production unit for instance, should the suppli-
ers and the customers, the competitors and other stakeholders on the
market be included in the system or not?

Figure 1. Real world situation

The past and the present of a system allow historical data to be
accumulated. It is then possible to obtain correct values for critical parameters.
It is the field of strong and efficient tools such as Information Systems
or Information Technology. On the one hand, the structures of a sys-
tem are complex and hypercomplex. Complex means that an infinity
of independent equations would be necessary to describe the entire
system, while hypercomplex means that each small neighbourhood (an
employee or a customer for instance) is already an independent complex
system in itself. On the other hand, the future is unpredictable. The
structures are moving and the evolution is chaotic. In some cases it
is even catastrophic because some strong discontinuities and some non-
reversible events can take place. In most of the cases decisions are made
for the present moment. But the present is already the past, so that
decisions should always be considered for the future, in the short
as well as in the long term.
For these reasons, and for many others, decision-making is rather del-
icate. It is essential to include the future, to select appropriate strategies
and to have tools available to monitor the system and to control the evolu-
tion in real time. The management of socio-economic processes requires
appropriate decisions. In order to solve the problem, objectives must be
formulated, and the question is then how to reach them. Information on
the future (or at least on possible future evolutions) is requested. How
to obtain it? In many cases the managers treat the problems by intu-
ition, by feeling, by appreciation. It is the field of perceptions based on
qualitative approaches. However, due to the increasing complexity, the
limitation of the human brain and the volume of information to treat,
quantitative approaches, such as in physics and in engineering, are more
and more required for the management of socio-economic systems. It is
the field of measurements giving the means to model Real-World situa-
tions, and of Decision Aid to influence and possibly to change the future.
H.A. Simon can be quoted here: "The only reasonable way to face the
complexity of socio-economic processes is modelling" (Simon, 1990).
When a quantitative approach is considered, analysts translate the
Real World into a mathematical model. According to its complexity, the
model is obviously a reduction of reality. It is up to the analysts to take
into account the most significant components. The purpose is to describe
and understand as well as possible the Real World, and finally to provide
decision assistance. During this procedure the decision-maker remains
entirely responsible of which "Real World" will become true. He is
defining the objectives, and makes the final decision. He keeps the power
on the decision stick. As soon as the decision is made, the objectives for
the future should be kept in mind, the evolution should be monitored
and further control actions should be taken when necessary Brans (1996).
Operational Research (OR) is the science of Decision Aid. As all human
features cannot be modelled, OR provides an harmonious combination of
qualitative and quantitative approaches. Its contributions are supported
as well by perceptions as by measurement.
The right-hand side of Figure 2 shows the steps of the "Decision aid"
by the analyst; the left-hand side describes the process of "Decision-
making". The management process starts with data management and
ends with modelling and decision aid. Both fields have been extensively
developed, by Information Technology on the one hand, by Operations
Figure 2. Scheme of the quantification approach for decision-making in the Real World.

Research (OR) on the other hand. However, up to now, both fields have
not been properly bridged. An area of interest and useful research lies in front
of us, perhaps for several decades. It is essential for the management
process as a whole. It is also the purpose of this paper.

2. Instruments to investigate the future


According to the previous introduction, it is essential to have tools at
our disposal to investigate the future. Many tools are currently used,
the most important ones being time series analysis, econometric models,
neural networks, etc. Describing these tools is beyond the scope of the
present paper. We limit our very simplified discussion to pointing out basic
differences between their approaches and System Dynamics.

2.1. Time Series Analysis


This technique is sketched for the 1D-case in Figure 3. It consists
in identifying a variable of interest in the socio-economic system and in
representing this variable in a plane (time, variable). The past evolution is
represented in the plane and approached by extrapolating a trend into
the future.

Figure 3. Time Series Analysis in the 1D-case

This technique, though simple and often useful, is of course not very
reliable as it depends on the chosen extrapolating process.
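A minimal sketch of such a trend extrapolation (the yearly observations below are invented) fits a linear trend to the past values and projects it a few periods ahead.

    # Minimal sketch of 1D trend extrapolation; the past observations are hypothetical.
    import numpy as np

    years = np.arange(1995, 2001)                          # past: 1995..2000
    values = np.array([10.2, 11.0, 11.9, 12.3, 13.4, 14.1])

    slope, intercept = np.polyfit(years, values, deg=1)    # least-squares linear trend

    for year in range(2001, 2004):                         # project three periods ahead
        print(f"{year}: forecast {intercept + slope * year:.2f}")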

2.2. Econometric Models


The principle is the same as for the graphical representation, except
that the trend is obtained by a more or less sophisticated regression
procedure. Figure 4 illustrates a basic drawback of this approach: it ignores
any structural aspects of the considered process. A "train" propelled in
the future delivers information on the future evolution. For each variable
of the system a new train is considered. Usually no links between the
trains are considered, so that the structures inside the system are not
taken into account. More refined multivariate approaches with time lags
are available, but do not explicitly take feedback structures into account.

2.3. Artificial Neural Networks


A neural network consists of several layers of linked neurons. Each
neuron, as shown in Figure 5, acts as a black box in which internal connec-
tions are not explicitly defined. The neuron is stimulated by weighted
inputs, thereby producing some outputs that are further dispatched along the
network. The weights are calibrated and validated by means of the past
evolution in order to obtain the observed outputs. As soon as the learn-
ing process has calibrated the neural network, the latter can be used for
making forecasts into the future.
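As a minimal sketch of the forward pass of a single neuron (the inputs, the weights and the sigmoid activation below are illustrative choices, not a description of any particular network), the weighted inputs are simply combined and passed through a non-linear function:

    # Minimal sketch of a single neuron: weighted inputs through a sigmoid activation.
    # In a real network the weights would be calibrated on past data during learning.
    import numpy as np

    def neuron(inputs, weights, bias):
        return 1.0 / (1.0 + np.exp(-(np.dot(weights, inputs) + bias)))

    x = np.array([0.5, 1.2, -0.3])       # stimuli entering the neuron (hypothetical)
    w = np.array([0.8, -0.4, 1.5])       # weights w_i (hypothetical)
    print(neuron(x, w, bias=0.1))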
Figure 4. Econometric model

Figure 5. Action of a neuron. The Wi's are weights to be calibrated during the learning process.

2.4. System Dynamics


This approach for decision-making has been developed by Forrester
(see for example Forrester, 1968; Forrester, 1992). It consists essen-
tially of 4 steps (Richardson et al., 1981):
1 The variables of the socio-economic system are identified.
2 The relations and causal links between these variables are estab-
lished.
3 The relations and links are transformed into a system of differential
equations with given initial values.
4 The whole socio-economic system is projected in the future by
integration of the system of differential equations, so that a forecast
is obtained for each variable of the system.
Of course, in all four techniques the prediction of the future is unreli-
able because the structures are moving, and the evolution shows chaotic
and possibly catastrophic features. The System Dynamics approach
seems to be, in our opinion, the most appropriate approach because
it considers the structure of the socio-economic systems while all
other approaches consider the variables one by one without pay-
ing attention to the existing links between them. In order to develop
an appropriate dynamic management of the socio-economic systems, it
is essential to control their structures, to adapt the strategies and to
operate in real time in order to master the chaotic evolution. System
Dynamics allows these goals to be achieved.
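To illustrate steps 3 and 4 above in the simplest possible setting, the sketch below integrates a single-stock model with one positive (reinforcing) loop and one negative (balancing) loop by forward Euler steps; the model and its parameters are invented and far simpler than any real socio-economic model, the numerical integration being done in the same spirit as in packages such as VENSIM or STELLA II.

    # Deliberately simple System Dynamics sketch: one stock, a reinforcing inflow
    # and a balancing outflow, integrated forward in time with Euler steps.
    # All parameter values are hypothetical.
    growth_rate = 0.08        # reinforcing loop: inflow proportional to the stock
    capacity = 1000.0         # balancing loop: outflow rises as the stock approaches capacity
    dt = 0.25                 # integration time step
    horizon = 40.0            # simulated time span

    stock = 100.0             # initial level at t = 0
    t = 0.0
    while t < horizon:
        inflow = growth_rate * stock                          # flow of the positive loop
        outflow = growth_rate * stock * (stock / capacity)    # flow of the negative loop
        stock += dt * (inflow - outflow)                      # Euler step for dStock/dt
        t += dt
        if abs(t % 5.0) < 1e-9:                               # report every 5 time units
            print(f"t = {t:4.1f}  stock = {stock:8.1f}")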

Figure 6. Schematic view of the integration process of the equations of movement in System Dynamics

3. An overview of the management procedure


The dynamic management approach we propose is an iterative proce-
dure combining System Dynamics and multicriteria analysis. It consists
of a succession of short-term iterations, each iteration including 4 phases
and 11 steps (Brans et al., 1998), as shown in Figure 7.
Phase I: Information System
   Step 1: Mental Model.
Phase II: System Dynamics Modelling
   Step 2: Variables. Relations. Influence Diagram.
   Step 3: Positive and Negative Feedback Loops.
   Step 4: Core of Control Variables. Watchdogs.
   Step 5: System Dynamics Integration.
Phase III: Strategies
   Step 6: Generation of Strategies.
   Step 7: Simulation of the Strategies. Tests.
   Step 8: Selection of an Appropriate Strategy (Multicriteria PROMETHEE-GAIA procedure).
Phase IV: Monitoring and Control
   Step 9: Implementation of the Strategy.
   Step 10: Short-term Control.
   Step 11: Analysis of the Deviations between Evolution and Forecast.
-> Branching to the Next Iteration.

Figure 7. Overview of the 11-step management procedure starting with the set-up
of the Information System (IS)

4. The four phases of the management procedure
4.1. Phase I: Information System - Data
Step 1: Mental Model

The modelling of complex socio-economic systems requires a suitable
Information System (IS) including all possible data from the present and
the past. This Information System will be consulted whenever data are
required for further developments. The mental model consists of a
qualitative appreciation of the real-world situation. The structures of
the system are analysed, and possible objectives and expectations for the
future are considered. In many cases the mental model is translated into
a graphical representation consisting of a rich picture diagram (Checkland
(1981)). The most significant elements of the system are represented,
as well as the links between them. The mental model expresses in a
qualitative way the knowledge and the maturity accumulated by the
responsible decision-makers and the analysts.

4.2. Phase II: System Dynamics Modelling


Step 2: Influence Diagram.

The structure of the system is now investigated more systematically:
all the variables describing the system at time t = 0 are enumerated.
Three types of variables are considered:

- Levels or stocks: accumulating the variation rates

- Flows: variation rates associated with the levels

- Auxiliary variables.

Among these variables, some are decision variables on which the
decision-maker can exert an action. These variables are particularly
important because their positioning defines a management strategy.
The relations between the variables are then considered to obtain the
influence diagram, or causal loop diagram. It is a directed graph whose
nodes are the variables and whose edges represent the relations
between them. The relations can be of various types, such as qualitative
tables, numerical tables or analytical functions.
The initial structure can easily be translated into a mathematical
model. The dynamics in continuous time is represented by a system of
differential equations with initial conditions. Depending on the complexity
of the socio-economic system, the number of variables considered
and the nature of the relations between them, the influence diagram is
often very intricate. Fortunately, very effective and user-friendly software
available for PC, such as VENSIM (Ventana Systems) or STELLA
II (High Performance Systems), provides assistance to develop the influence
diagram and to numerically integrate the differential equations. It
is no longer a problem to simulate large-size models with a theoretically
unlimited number of variables and links in the influence diagram.
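To make the translation of an influence diagram into differential equations concrete, the following Python sketch integrates a deliberately simple two-variable stock-flow model with a fixed-step Euler scheme; the model (a stock adjusted towards a target at a given rate) and all parameter values are invented for illustration, and dedicated tools such as VENSIM or STELLA II use more refined integration schemes.

    def simulate(stock0=100.0, target=40.0, adjustment_rate=0.2, dt=0.25, horizon=20.0):
        """Euler integration of d(stock)/dt = adjustment_rate * (target - stock)."""
        trajectory = [(0.0, stock0)]
        t, stock = 0.0, stock0
        while t < horizon:
            flow = adjustment_rate * (target - stock)   # flow derived from the influence diagram
            stock += flow * dt                          # the level accumulates its net flow
            t += dt
            trajectory.append((t, stock))
        return trajectory

    for t, s in simulate()[::8]:                        # print every 8th point of the forecast
        print(f"t = {t:5.2f}  stock = {s:6.2f}")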

Step 3: Positive and negative feedback loop

The influence diagram often includes closed cycles, so-called feedback
loops. The latter can have either a positive or a negative polarity.

A feedback loop is positive when, after each iteration, an amplification
(increasing or decreasing) of the values of the variables of the loop is
observed. Such a positive loop can be virtuous when the
amplification is favourable, for instance a permanent increase of profit
or a permanent decrease of unemployment. A positive loop can be
vicious, however, when the amplification has dangerous consequences,
for example a decrease of the company profit or an increase of
unemployment. A feedback loop is negative when it is goal-seeking and
has a damping effect on the variables of the loop. A convergence is
observed, oscillating or not, which keeps the system within reasonable
limits.
It is extremely interesting to detect the positive and the negative
feedback loops in advance. In this way a first prediction can be made of
how important variables will presumably evolve. Moreover, if some decision
variables are included in feedback loops, it is possible to change
their values from the beginning, in order to emphasize the amplification
effects on some variables if the loop is positive, or the damping effects if
the loop is negative.
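The polarity of a feedback loop can be read directly off a signed influence diagram: a loop is positive when the product of the signs of its links is positive, and negative otherwise. The Python sketch below applies this rule to a hypothetical signed digraph encoded as a dictionary; the variable names and link signs are invented for the example.

    def loop_polarity(loop, signs):
        """Return +1 (amplifying) or -1 (damping) for a closed cycle of variables."""
        polarity = 1
        for a, b in zip(loop, loop[1:] + loop[:1]):     # consecutive links, closing the cycle
            polarity *= signs[(a, b)]
        return polarity

    # Hypothetical signed links of an influence diagram: +1 reinforcing, -1 opposing.
    signs = {("sales", "revenue"): +1, ("revenue", "advertising"): +1,
             ("advertising", "sales"): +1, ("stock", "orders"): -1,
             ("orders", "stock"): +1}

    print(loop_polarity(["sales", "revenue", "advertising"], signs))  # +1: positive loop
    print(loop_polarity(["stock", "orders"], signs))                  # -1: negative loop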

Step 4: Economic core. Control variables. Watchdogs

In order to ensure an appropriate dynamic management of the system,
it is recommended to consider a core of important variables, such as
objectives, decision variables, control variables, watchdogs, etc., and to
follow their evolution carefully.
A watchdog is an alarm variable whose role is to detect perverse
or noxious evolutions, the emergence of new structures, strong deviations
with regard to the objectives, etc. Usually a threshold is associated
with each watchdog variable. As soon as the threshold is crossed, the
watchdog is activated and starts flickering: the short-term iteration
is stopped and the management policy has to be revisited. Depending
on the seriousness of the deviation from the expected behaviour,
the decision variables will be modified, the management strategies will
be adapted, the system dynamics modelling will be modified or, in case
of emergence of new structures, the mental model itself could possibly be
redesigned. The flickering of watchdogs thus indicates the need for a
reappraisal of the control conditions of the managed system. Two types of
control have to be considered with respect to their intrinsic or extrinsic
nature (see Figure 8).

Figure 8. Influence diagrams of intrinsic and extrinsic control (W = watchdog variable; D = decision variable). Signs indicate the loop polarities

In many instances intrinsic control is available. This means that
an existing negative feedback loop includes a decision variable which, by
strengthening, will exert a damping effect on the diverging behaviour and
thus deactivate the flickering watchdog. An appropriate modification of
this decision variable is then requested. In a non-negligible number of
cases, extrinsic control will be necessary because no such damping negative
feedback loop yet exists in the structure. This approach consists
in effectively changing the structure, first in the model and then in the
real system (when this possibility is available to the analysts). A non-trivial
re-engineering operation then has to be applied to the system. We
have called it control of structure or, more adequately given the nature
of this control, control by structure. This type of operation is common
in mechanical regulation; we feel that transposing this approach to
the management of socio-economic systems is quite new.
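A watchdog can be implemented as a simple threshold check applied at each short-term control point. The sketch below is a minimal Python illustration with invented variable names and threshold values; it deliberately leaves the corrective action (intrinsic adjustment of a decision variable or extrinsic restructuring) to the surrounding management procedure.

    class Watchdog:
        """Alarm variable: flickers as soon as the monitored value crosses its threshold."""
        def __init__(self, name, threshold, direction="above"):
            self.name, self.threshold, self.direction = name, threshold, direction

        def flickers(self, value):
            return value > self.threshold if self.direction == "above" else value < self.threshold

    # Hypothetical watchdogs on two core variables.
    watchdogs = [Watchdog("unemployment_rate", 0.12, "above"),
                 Watchdog("company_profit", 0.0, "below")]

    observed = {"unemployment_rate": 0.14, "company_profit": 3.5}
    alarms = [w.name for w in watchdogs if w.flickers(observed[w.name])]
    if alarms:
        print("Interrupt the short-term iteration; revisit the policy for:", alarms)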

Step 5: System dynamics integration. Long term forecasts

Suppose the system includes $R$ decision variables ($D_r$, $r = 1, 2, \ldots, R$).
A strategy $S$ is a numerical positioning of these decision variables:

$S(D_1, D_2, \ldots, D_r, \ldots, D_R)$

As soon as a strategy is defined, the influence diagram obtained
in Step 2 is completed. According to the nature of the relations and
the number of variables, a generally rather intricate system of nonlinear
differential equations will result. It is most of the time totally excluded
to provide analytical solutions to this system of differential equations.
Fortunately, the numerical integration schemes available in modern computer
programs provide fast and accurate solutions to most problems
(VENSIM and STELLA II have been used for our purpose).
Consequently, long-term forecasts can be obtained for each variable of
the system and for each defined strategy. This has considerable advantages
for decision-makers as it helps them identify achievable target
levels for particular variables, on which realistic long-term objectives can
be defined. Appropriate dynamic management policies are thus developed
with a better knowledge of the potentialities of the future.

4.3. Phase III: Strategies

Step 6: Generation of strategies

As the $R$ decision variables ($D_r$, $r = 1, 2, \ldots, R$) may vary continuously
and each strategy is a numerical positioning of the decision variables,
the set of possible strategies is usually a continuous set in an $R$-dimensional
space. In most cases it will not be appropriate to consider all the possible
strategies but only a suitable discrete subset of them. Each strategy is a
different managerial option, a scenario of virtual reality.
Let us suppose that a set of $N$ possible strategies is being considered by the
decision-makers, the analysts, the consultants or the experts, as a function
of realistic dynamic values given to the decision variables. In order
to avoid unnecessary complexity, it is recommended to keep $N$ rather
small, say for instance $N < 20$. The most appropriate strategy for the
management of the system must then be selected, at $t = 0$, among the
considered strategies ($S_i$, $i = 1, \ldots, N$):

$S_1(D_1^{(1)}, D_2^{(1)}, \ldots, D_r^{(1)}, \ldots, D_R^{(1)})$
$S_2(D_1^{(2)}, D_2^{(2)}, \ldots, D_r^{(2)}, \ldots, D_R^{(2)})$
$\vdots$
$S_i(D_1^{(i)}, D_2^{(i)}, \ldots, D_r^{(i)}, \ldots, D_R^{(i)})$
$\vdots$
$S_N(D_1^{(N)}, D_2^{(N)}, \ldots, D_r^{(N)}, \ldots, D_R^{(N)})$

Step 7: Validation of the strategies

All the considered strategies will be simulated as foreseen in Step 5.
For each of them, the mathematical model will be integrated and long-term
forecasts will be obtained for each variable. Some strategies will be
rejected on the basis of validation tests such as unfavourable comparison with
the long-term objectives, extreme-values tests, long-term evolution tests,
trajectory tests, etc. For instance, if the profit of a company is attractive
in the long term after having passed through a bankruptcy situation
(unacceptable negative value of the profit), the corresponding strategy will
not pass the trajectory test. Among the $n$ remaining strategies ($n \le N$),
the most appropriate one for the management of the system must now
be selected.

    STR    f1(.)      f2(.)      ...   fj(.)      ...   fk(.)
    S1     f1(S1)     f2(S1)     ...   fj(S1)     ...   fk(S1)
    S2     f1(S2)     f2(S2)     ...   fj(S2)     ...   fk(S2)
    ...
    Si     f1(Si)     f2(Si)     ...   fj(Si)     ...   fk(Si)
    ...
    Sn     f1(Sn)     f2(Sn)     ...   fj(Sn)     ...   fk(Sn)

Table 1. Multi-criteria evaluation table of the considered $n$ strategies $S_i(t)$ ($i = 1, \ldots, n$).
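As an illustration of the trajectory test mentioned above, the following Python sketch rejects any simulated strategy whose profit trajectory ever falls below an acceptability floor, even if the final value is attractive; the strategy trajectories and the floor value are invented for the example.

    def passes_trajectory_test(trajectory, floor=0.0):
        """Reject a strategy if the simulated variable crosses the unacceptable floor."""
        return all(value >= floor for value in trajectory)

    # Hypothetical simulated profit trajectories (one list per strategy).
    simulated_profit = {
        "S1": [2.0, 1.5, 3.0, 6.0],     # stays acceptable throughout
        "S2": [2.0, -1.0, 4.0, 9.0],    # attractive in the end, but passes through bankruptcy
    }

    remaining = [s for s, traj in simulated_profit.items() if passes_trajectory_test(traj)]
    print("Strategies passing the trajectory test:", remaining)   # ['S1']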

Step 8: Selection of an appropriate strategy

Let the $n$ remaining time-dependent strategies ($S_i(t)$, $i = 1, \ldots, n$) be
evaluated on $k$ criteria ($f_j(\cdot)$, $j = 1, 2, \ldots, k$). This gives the evaluation
table shown in Table 1.
It is quite natural to consider several evaluation criteria. In the
authors' opinion, at least five categories of criteria have to be
systematically considered for piloting complex socio-economic systems:
- Financial and economic criteria
- Technological criteria
- Social criteria
- Ecological criteria
- Ethical criteria
Of course, such criteria are not necessarily independent. On the one
hand the criteria should be selected so as to cover all relevant dimensions
and, on the other hand, so as to avoid redundancies (Roy (1985)).
Usually such an evaluation table does not include a strategy that is
optimal for all criteria simultaneously. No optimal solution exists.

Only compromise solutions can be considered, and the best compromise
should be identified. This is a (dynamic) multi-criteria problem
for which several MCDA procedures have been proposed. In principle
any discrete multi-criteria method could be used, such as the outranking
methods developed by the French school of MCDA (Roy (1985)), for
example ELECTRE or PROMETHEE. Both methods model the preferences
of the decision-makers on the basis of additional information they
provide to the analysts.
For applying the proposed methodology it was quite natural to use the
PROMETHEE-GAIA method (Brans et al. (1985); Brans et al. (1994)),
which was developed with major contributions of two of the authors at
their research institutes in Brussels. In addition, this method seems to
be particularly appropriate for treating such problems, not only because the
required additional information is easy to define and well understood
by the decision-makers, but also because the results are easy to interpret.
They can easily be fed back into the dynamic decision process, as
explained below.
The additional information requested by the PROMETHEE-GAIA
procedure consists of:

- Information between the criteria: weights of relative importance
  are allocated to each criterion (the higher the weight, the more
  important the criterion).
- Information within the criteria: for each criterion a preference
  function has to be defined in order to express the preference of
  one strategy over another when pairwise comparisons are made.

The PROMETHEE-GAIA procedure can easily be applied by using
the dedicated software packages PROMCALC (Mareschal et al. (1988))
or DECISION LAB 2000 (VISUAL DECISION, 2000).
Both packages provide assistance to the users to easily define the
preference functions. Results are displayed in a user-friendly way and
can be analysed with efficient sensitivity tools, among them:

- The PROMETHEE II ranking: a ranking of the strategies from the
  best one to the worst one, according to the additional information
  considered.
- The profiles of each strategy: a strategy profile sets its evaluations
  clearly in perspective with respect to those of the $n - 1$ remaining
  strategies. This tool is particularly appropriate when the
  decision-maker is hesitating between different strategies.
- The GAIA plane: this plane is obtained by principal components
  analysis. In this plane (see Figure 9) the strategies are
  displayed as points and the criteria as axes. It is easy to appreciate
  how good the strategies are on each criterion and to detect the
  conflicting character of the criteria. In addition, a decision vector
  $\pi$ indicates the direction in which, according to the selected
  weights, it is recommended to look for the most appropriate strategy.

Figure 9. The geometric representation in the GAIA plane (A = representation of
actions; C = representation of criteria)
Several particularly user-friendly sensitivity tools make it possible to modify the
weights and to appreciate how the best choice would be affected.
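To fix ideas, the following Python sketch computes a PROMETHEE II ranking for a small evaluation table, using a linear ("V-shape") preference function with a preference threshold on each criterion; the strategies, evaluations, weights and thresholds are invented for illustration, and the analyses described in this chapter rely on the PROMCALC and DECISION LAB packages rather than on this sketch.

    def promethee_ii(evaluations, weights, thresholds):
        """Net outranking flows with a linear preference function, all criteria maximized."""
        names = list(evaluations)
        wsum = sum(weights)
        def pref(a, b):                                    # aggregated preference of a over b
            total = 0.0
            for j, (w, p) in enumerate(zip(weights, thresholds)):
                d = evaluations[a][j] - evaluations[b][j]
                total += w * min(1.0, max(0.0, d / p))
            return total / wsum
        n = len(names)
        phi = {a: sum(pref(a, b) - pref(b, a) for b in names if b != a) / (n - 1)
               for a in names}
        return sorted(phi.items(), key=lambda item: -item[1])   # best strategy first

    # Hypothetical evaluation table: three criteria (economic, social, ecological), maximized.
    evals = {"S1": [8.0, 5.0, 6.0], "S2": [6.0, 7.0, 7.0], "S3": [9.0, 4.0, 3.0]}
    print(promethee_ii(evals, weights=[0.5, 0.3, 0.2], thresholds=[2.0, 2.0, 2.0]))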

4.4. Phase IV: Monitoring and Control

Step 9: Implementation of the selected strategy in practice

As soon as it has been identified, the most appropriate strategy can be
implemented in practice to pilot the socio-economic system. All the decision
variables are fixed and the system is put to work. Several strategic, tactical
and operational measures have to be taken. All these measures of
course depend on the nature of the socio-economic system itself and on
the type of application considered.

Step 10: Short-term control

As soon as the selected strategy has been implemented, the system
evolves in time in a continuous way. The purpose is not to let the
system evolve indefinitely without any control and monitoring. New
structures could emerge, catastrophic evolutions could take place, and
unforeseen events could disturb the system, so that it could be necessary
to revise the strategy.
Usually it is not possible, or not appropriate, to control the evolution
in a continuous, permanent way. Therefore realistic short-term time
intervals ($\tau$) will be defined, at the end of which the evolution of the
system will be reconsidered.
The elementary time step $\tau$ of course depends on the nature of the
socio-economic system. For a microeconomic system it will usually be
one day, one week or one month. For macroeconomic systems it will
often range from one month to half a year or a year. Consequently
the long-term interval $[0, T]$ will be divided into a sequence of short-term
intervals:

$[0, T]: (0, \tau, 2\tau, 3\tau, \ldots, T)$

In any case, and for all systems, the short-term interval will be interrupted
as soon as a watchdog is flickering. It is the role of the watchdogs
to rapidly inform the decision-makers of a perverse or noxious evolution
and to request from them a suitable re-engineering of their management
policy.

Step 11: Analysis of the short-term deviations

At the end of each short-term time interval, or when a watchdog is
flickering, the evolution of the variables of the core will be analysed.
At that time, the deviations between the foreseen evolution (the evolution
resulting from the system dynamics integration of Step 5) and the
observed real-life evolution will be noted and a new short-term iteration
will be initiated. The branching to a particular step of the new
iteration will depend on the amplitude of the observed deviations. The
larger these deviations and the more catastrophic the evolution, the more
globally the management policy will have to be revisited.
Branching to the next short-term iteration

In most cases it is not necessary to reconsider the system as a whole,
so that the branching to advanced phases can take place immediately.
In any case a large flexibility of branching is offered according to
the observed situation. Let us consider some standard cases:

a) Deviations are not significant

A1. If the deviations of Step 11 are considered small or negligible,
the system is under control, the evolution is satisfactory
and the managers can maintain the same strategy. The
branching to the next iteration can take place immediately
within the current Phase IV.
A2. Even if the deviations are small, it is possible that the managers,
due to the learning process, want to modify some preferences,
to add or withdraw some criteria, or to modify some
weights. As a result, another strategy could possibly be selected.
In other situations, decision-makers may want to anticipate
probable changes in the environmental conditions,
necessitating adaptive policies. In all these cases, the branching
will be back to the previous Phase III.

b) Deviations are critical

B1. Large deviations could be due to too poor a modelling: the
system dynamics model does not fit the Real World properly,
so that deviations occur. In this case the whole SD modelling
has to be reconsidered. The influence diagram is then modified,
as well as the mathematical model. New validations and
strategies are considered after a branching to Phase II has
taken place.
B2. Large deviations could also be due to the emergence of some
chaotic or catastrophic events. The whole system should then
be reconsidered, starting from the Information System. An
updated modelling should take place and the management
should be completely re-engineered. It is then necessary to
branch back to Phase I, at the very top of the flowchart.

Of course, any other branching could be considered depending
on the observed situation. The whole dynamic management procedure
is displayed schematically in the flowchart of Figure 10 at the end of this
paper.

5. Conclusion

In this paper we describe a dynamic decision-aid procedure to pilot
and restructure socio-economic systems. System Dynamics is
used to model the complex structures of the systems and the MCDA
PROMETHEE-GAIA procedure to select appropriate strategies. Long-term
forecasts are used to obtain a view of the evolution of the system
and to define strategic objectives. In many cases profound re-engineering
becomes indispensable. Extrinsic control feedback loops are then created
to achieve control by structure. Short-term evolutions are monitored for
the short-term on-line control of the system.
The selected strategies are never definitively fixed, as they can be
adapted or changed according to the observed evolution. A watchdog
alarm procedure imposes a reconsideration of the management policy in
emergency cases, in the presence of uncontrolled chaotic events or of catastrophic
changes in the environment. Note that in case the adopted strategies
cannot be implemented for real, control by structure can still be
used to investigate potential scenarios. It then provides a particularly
useful prospective tool in turbulent and uncertain environments (Kunsch
et al. (1999)).
In our Statistics and OR Lab at the VUB in Brussels we are applying
this management procedure in various fields:

- Pollution: CO2 emissions and greenhouse-gas impacts on climate,
  imposing the development of a taxation system to reduce this pollution
  (Kunsch et al. (1998); Kunsch et al. (1999)).
- Energy: analysis of the future production of electricity in Europe,
  taking into account various production means: nuclear, oil, coal,
  gas, wind, solar energy, combined heat production, etc.
- Transport: intermodal boat-way terminals for the transportation
  of containers; management of traffic congestion in urban areas
  (Springael et al. (2000)).
- Population: division of the world into seven characteristic areas and,
  for each of them, analysis of the population dynamics.
- Corporate planning: the presented framework has been compared
  to the practical approach of management in a large food company
  (Kunsch et al. (2000)).

The results obtained up to now are extremely promising.


Figure 10. Flow-chart of the 4-stage methodology for the management of complex
socio-economic systems (Phase I: Information System - Data, Step 1: Mental Model;
Phase II: System Dynamics Modelling, Steps 2-5; Phase III: Strategies, Steps 6-8;
Phase IV: Monitoring and Control, Steps 9-11; with branchings back to the earlier phases).

References
[Brans (1996)] Brans J.P. (1996), "The space of freedom of the decision-maker modelling the human brain", European Journal of Operational Research 92(1996), 593-602.
[Brans et al. (1998)] Brans J.P., Macharis C., Kunsch P.L., Chevalier A., and Schwaninger M., "Combining multicriteria decision aid and system dynamics for the control of socio-economic processes. An iterative real-time procedure", European Journal of Operational Research 109(1998), 428-441.
[Brans et al. (1994)] Brans J.P. and Mareschal B., "The PROMCALC and GAIA Decision Support System", Decision Support Systems 12(1994), 297-310.
[Brans et al. (1985)] Brans J.P. and Vincke P., "A preference ranking organization method. The PROMETHEE method for MCDM", Management Science 31(1985), 647-656.
[Checkland (1981)] Checkland P. (1981), Systems Thinking, Systems Practice, Chichester, Wiley.
[Forrester (1968)] Forrester J.W. (1968), Principles of Systems, Wright Allen Press.
[Forrester (1992)] Forrester J.W., "Policies, Decision and Information Sources for Modelling", European Journal of Operational Research 59(1992), 42-63.
[Kunsch et al. (1998)] Kunsch P.L. and Brans J.P. (1998), "Une méthodologie multicritère et dynamique d'élaboration, d'évaluation et de contrôle de politiques et de projets du développement durable. Application à un cas de réduction des émissions de CO2", Proceedings of the 1st international symposium of the APREMA, Université de Corse, Corte.
[Kunsch et al. (2000)] Kunsch P.L., Chevalier A., and Brans J.P. (2000), "Comparing the 'Adaptive Control Methodology' (ACM) to a case study in financial planning", VUB working paper STOOTW/294, May 2000, accepted for publication by the European Journal of Operational Research.
[Kunsch et al. (1999)] Kunsch P.L., Springael J. and Brans J.P., "An adaptive multicriteria control methodology in sustainable development. Case study: a CO2 ecotax", to appear in Colorni, Parrucini, and Roy (Eds), Euro Courses Environmental Management Volume, Kluwer Academic Publishers for the Commission of the European Communities.
[Mareschal et al. (1988)] Mareschal B. and Brans J.P., "GAIA. Geometrical representation for MCDM", European Journal of Operational Research 34(1988), 69-77.
[Richardson et al. (1981)] Richardson G.P. and Pugh III A.L. (1981), Introduction to System Dynamics Modelling, Productivity Press, Portland, OR.
[Roy (1985)] Roy B. (1985), Méthodologie multicritère d'aide à la décision, Economica, Paris, 423 p.
[Simon (1990)] Simon H.A., "Prediction and prescription in systems modelling", Operations Research 38(1990), 7-14.
[Springael et al. (2000)] Springael J., Kunsch P.L. and Brans J.P. (2000), "Traffic crowding in cities: Control with urban tolls and flexible working hours. A group multicriteria aid and system dynamics approach", presented at the 14th ORBEL conference at FUCAM, Mons, January 2000. Submitted for publication to JORBEL, Brussels.
VI

MULTI-OBJECTIVE MATHEMATICAL PROGRAMMING

METHODOLOGIES FOR SOLVING MULTIOBJECTIVE COMBINATORIAL OPTIMIZATION PROBLEMS

Jacques Teghem
Faculté Polytechnique de Mons, Belgium
teghem@mathro.fpms.ac.be

Abstract  We give an overview of approaches developed by our research team
to tackle multi-objective combinatorial optimization problems. We first
describe two methodologies - direct methods and two-phase methods -
to generate the set of efficient solutions; they are illustrated respectively
on the multi-objective knapsack problem and the multi-objective assignment
problem. As it is unrealistic to extend these exact methods to problems
with more than two criteria or more than a few hundred variables, we
then analyze how to adapt metaheuristics to generate an approximation
of the set of efficient solutions; the so-called MOSA and MOTAS methods
are presented, based respectively on Simulated Annealing and Tabu
Search. Finally, we describe how to apply these heuristic methods
in an interactive way.

Keywords: Multiple objectives; Combinatorial optimization; Metaheuristics

1. Introduction
It is well known that, on the one hand, combinatorial optimization
(CO) provides a powerful tool to formulate and model many optimization
problems and that, on the other hand, a multi-objective (MO) approach is
often a realistic and efficient way to treat many real-world applications.
Nevertheless, until recently, Multi-Objective Combinatorial Optimization
(MOCO) did not receive much attention in spite of its potential
applications. One of the reasons is probably the specific difficulties
of MOCO models. We can distinguish three main difficulties.
The first two are the same as those existing for the Multi-Objective Integer
Linear Programming (MOILP) problem [Teghem86a; Teghem86b], i.e.

• the number of efficient solutions may be very large;

• the non-convex character of the feasible set requires specific
techniques to generate the so-called "non-supported" efficient
solutions [LoukilOO; Teghem86a].

A particular single-objective CO problem is characterized by some specificities,
generally a special form of the constraints; the existing
methods for such a problem use these specificities to define efficient ways
to obtain an optimal solution. For MOCO problems, it appears interesting
to do the same to obtain the set of efficient solutions. Consequently,
and contrary to what is often done in MOLP and MOILP methods, a
third difficulty is to elaborate methods that avoid introducing additional
constraints, so that the particular form of the constraints is preserved
throughout the procedure.

The general form of a MOCO problem is

$\text{"min"}\ z_k(X) = c^k X, \quad k = 1, \ldots, K, \qquad X \in S$

where $S = D \cap B^n$, with $X$ an $(n \times 1)$ vector and $B = \{0, 1\}$.

$D$ is a specific polytope characterizing the CO problem: assignment
problem, knapsack problem, travelling salesman problem, etc.

The set $SE(P)$ of supported efficient solutions can be generated by
solving the single-objective parametric problem

$\min_{X \in S}\ \sum_{k=1}^{K} \lambda_k z_k(X)$

for values of the weight vector $\lambda \in \Lambda = \{\lambda \mid \lambda_k > 0 \text{ and } \sum_{k=1}^{K} \lambda_k = 1\}$.
$SE(P)$ is generally a proper subset of the set $E(P)$ of all the efficient
solutions. This is still true even if the combinatorial optimization problem
satisfies the so-called "total unimodularity" property, as is the case
for the assignment problem (see [TuyttensOO]).

We denote by $NSE(P) = E(P) \setminus SE(P)$ the set of non-supported
efficient solutions.

A few years ago, Ulungu and Teghem [Ulungu94] published a survey on
MOCO problems, examining successively the literature on MO assignment
problems, knapsack problems, network flow problems, travelling
salesman problems, location problems and set covering problems.

In the present article we focus on the existing methodologies
for MOCO. Our aim is to survey the intensive work done by the
research team MATHRO in this field.
We first examine how to determine the set $E(P)$ of all the efficient
solutions, and we distinguish three approaches: direct methods, two-phase
methods and heuristic methods. Afterwards we describe how to tackle
MOCO problems in an interactive way.
We will not give here the results of numerical experiments; only general
comments on these results will be presented. The reader interested in
more details can refer to the particular papers cited in the bibliography.

2. Direct methods
The first idea is to make intensive use of the classical methods existing
in the literature for the single-objective problem $P$ in order to determine
$E(P)$. Of course, each time a feasible solution is obtained, the $K$ values
$z_k(X)$ are calculated and compared with the list $\hat{E}(P)$ containing all
the feasible solutions already obtained and not dominated by another
generated feasible solution. Clearly $\hat{E}(P)$ - called the set of potential
efficient solutions - plays the role of the so-called "incumbent solution"
in single-objective methods. At each step, $\hat{E}(P)$ is updated, and at the
end of the procedure $E(P) = \hat{E}(P)$. Such an extension of a single-objective
method is especially designed for enumerative procedures based on a
Branch and Bound (BB) approach. Unfortunately, in a MO framework,
a node of the BB tree is less often fathomed than in the single-objective
case, so that such MO procedures are logically less efficient.
We describe below an example of such a direct method - extending the
well-known Martello and Toth procedure - in the context of the multi-objective
knapsack problem, as developed in [Ulungu97]. Problem (P) is thus
$\text{"max"}\ z_k(X) = \sum_{j=1}^{n} c_j^{(k)} x_j, \quad k = 1, \ldots, K$
$\sum_{j=1}^{n} w_j x_j \le W$
$x_j \in \{0, 1\}.$

The following typical definitions are used ($k = 1, \ldots, K$):

• $O_k$: order of the variables according to decreasing values of $c_j^{(k)} / w_j$.

• $r_j^{(k)}$: the rank of variable $j$ in order $O_k$.



• $\Theta$: order of the variables according to increasing values of $\sum_{k=1}^{K} r_j^{(k)}$.

We assume that the variables are indexed according to the order $\Theta$.


At any node of the BB tree, variables are set to 0 or 1; let $B_0$ and
$B_1$ denote the index sets of the variables assigned the values 0 and 1
respectively. Let $F$ be the index set of free variables, which always follow,
in the order $\Theta$, those belonging to $B_1 \cup B_0$. If $i-1$ is the last index of
the fixed variables, we have $B_1 \cup B_0 = \{1, \ldots, i-1\}$; $F = \{i, \ldots, n\}$.
Initially $i = 1$. Let

• $\bar{W} = W - \sum_{j \in B_1} w_j \ge 0$ be the leftover capacity of the knapsack.

• $\bar{Z} = (\bar{z}_k = \sum_{j \in B_1} c_j^{(k)};\ k = 1, \ldots, K)$ be the vector of criteria values
obtained with the already fixed variables.
$\hat{E}(P)$ contains the non-dominated feasible values $\bar{Z}$ and is updated at
each new step.
Initially $\bar{z}_k = 0\ \forall k$ and $\hat{E}(P) = \emptyset$.

• $Z = (Z_k)$ be the vector whose components are upper bounds on the
feasible values of each objective at the considered node.
These upper bounds are evaluated separately, for instance as in
the Martello-Toth method.
Initially $Z_k = \infty\ \forall k$.
A node is fathomed in the following two situations:

(i) if $\{j \in F \mid w_j < \bar{W}\} = \emptyset$;
(ii) or if $Z$ is dominated by some $z^* \in \hat{E}(P)$.

When a node is fathomed, the backtracking procedure is performed:
a new node is built by setting to zero the variable corresponding to
the last index in $B_1$. Let $t$ be this index:

$B_1 \leftarrow B_1 \setminus \{t\}$
$B_0 \leftarrow (B_0 \cap \{1, \ldots, t-1\}) \cup \{t\}$
$F \leftarrow \{t+1, \ldots, n\}.$

When the node is not fathomed, a new node of the BB tree is built for
the next iteration in the following way.

• Define the index $s$ such that:
  if $w_i > \bar{W}$, set $s = i-1$;
  else $s = \max\{l \in F \mid \sum_{j=i}^{l} w_j < \bar{W}\}$.

  - If $s \ge i$:    $B_1 \leftarrow B_1 \cup \{i, \ldots, s\}$,   $B_0 \leftarrow B_0$,   $F \leftarrow F \setminus \{i, \ldots, s\}$
  - If $s = i-1$:  $B_1 \leftarrow B_1 \cup \{r\}$,   $B_0 \leftarrow B_0 \cup \{i, \ldots, r-1\}$,   $F \leftarrow F \setminus \{i, \ldots, r\}$,
    with $r = \min\{j \in F \mid w_j < \bar{W}\}$.

The procedure stops when the initial node is fathomed, and then $E(P) = \hat{E}(P)$.
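Central to this direct method is the update of the list $\hat{E}(P)$ of potential efficient solutions each time a feasible criteria vector is obtained. The following Python sketch shows one straightforward way (for maximization, as in the knapsack problem) to perform this update by pairwise dominance comparisons; it only illustrates the bookkeeping, not the Martello-Toth extension itself.

    def dominates(z1, z2):
        """True if z1 dominates z2 (maximization): >= everywhere and > somewhere."""
        return all(a >= b for a, b in zip(z1, z2)) and any(a > b for a, b in zip(z1, z2))

    def update_potential_efficient(e_hat, z_new):
        """Insert z_new into the list of potential efficient vectors, if not dominated."""
        if any(dominates(z, z_new) or z == z_new for z in e_hat):
            return e_hat                                  # z_new brings nothing new
        return [z for z in e_hat if not dominates(z_new, z)] + [z_new]

    e_hat = []
    for z in [(10, 4), (8, 7), (9, 3), (10, 4), (12, 2)]:  # criteria vectors of feasible solutions
        e_hat = update_potential_efficient(e_hat, z)
    print(e_hat)   # [(10, 4), (8, 7), (12, 2)]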
In comparison with the two-phase approach described in the next section,
direct methods consume more CPU time. Nevertheless, an advantage
of direct methods is that it is easier to tackle problems with
more than two objectives.

3. Two-phase method
Such an approach is particularly well suited for bi-objective MOCO
problems.

3.1. The first phase


The first phase consists of determining the set $SE(P)$ of supported
efficient solutions. Let $S \cup S'$ be the list of supported efficient solutions
already generated; $S$ is initialized with the two efficient solutions optimal
respectively for objectives $z_1$ and $z_2$. The solutions of $S$ are ordered by
increasing value of criterion 1; let $X^r$ and $X^s$ be two consecutive solutions
in $S$, thus with $z_{1r} < z_{1s}$ and $z_{2r} > z_{2s}$, where $z_{kl} = z_k(X^l)$. The
following single-criterion problem $P_\lambda$ is considered:

$(P_\lambda)\quad \min z_\lambda(X) = \lambda_1 z_1(X) + \lambda_2 z_2(X), \quad X \in S = D \cap B^n, \quad \lambda_1 \ge 0,\ \lambda_2 \ge 0.$

This problem is optimized with a classical single-objective CO algorithm
for the values $\lambda_1 = z_{2r} - z_{2s}$ and $\lambda_2 = z_{1s} - z_{1r}$; with these values
the search direction $z_\lambda(X)$ corresponds, in the objective space, to the line
defined by $Z^r$ and $Z^s$. Let $\{X^t, t = 1, \ldots, T\}$ be the set of optimal solutions
obtained in this manner and $\{Z^t, t = 1, \ldots, T\}$ their images in the
objective space. There are two possible cases.

• $\{Z^r, Z^s\} \cap \{Z^t, t = 1, \ldots, T\} = \emptyset$.
Solutions $X^t$ are new supported efficient solutions. $X^1$ and $X^T$ -
provided $T > 1$ - are put in $S$ and, if $T > 2$, $X^2, \ldots, X^{T-1}$ are put
in $S'$. It will be necessary at further steps to consider the pairs
$(X^r, X^1)$ and $(X^T, X^s)$.

• $\{Z^r, Z^s\} \subset \{Z^t, t = 1, \ldots, T\}$.
Solutions $\{X^t; t = 1, \ldots, T\} \setminus \{X^r, X^s\}$ are new supported efficient
solutions giving the same optimal value as $X^r$ and $X^s$ for $z_\lambda(X)$;
we put them in list $S'$.

This first phase is continued until all pairs $(X^r, X^s)$ of $S$ have been
examined without extension of $S$.
Finally, we obtain $SE(P) = S \cup S'$, as illustrated in Figure 1.

Figure 1. Illustration of $SE(P) = S \cup S'$
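The first phase can be organized as a recursive dichotomy in the objective space. The Python sketch below illustrates the weight computation and the recursion for a bi-objective minimization problem; it assumes a user-supplied routine solve_weighted(l1, l2) returning one optimal criteria vector of $P_\lambda$ (here a hypothetical placeholder over a small explicit list), and it ignores the case of multiple optima handled by the list $S'$ in the text.

    def supported_points(solve_weighted):
        """Supported non-dominated points found by recursive dichotomy (minimization)."""
        zr = solve_weighted(1.0, 0.0)          # extreme point of objective 1
        zs = solve_weighted(0.0, 1.0)          # extreme point of objective 2
        points = {zr, zs}

        def explore(zr, zs):
            l1, l2 = zr[1] - zs[1], zs[0] - zr[0]          # weights of the search direction
            zt = solve_weighted(l1, l2)
            if zt not in (zr, zs) and l1 * zt[0] + l2 * zt[1] < l1 * zr[0] + l2 * zr[1]:
                points.add(zt)
                explore(zr, zt)
                explore(zt, zs)

        explore(zr, zs)
        return sorted(points)

    # Hypothetical solver: simply the best point of a small explicit list of outcomes.
    candidates = [(2, 9), (4, 6), (5, 5), (7, 3), (9, 2), (8, 8)]
    solve = lambda l1, l2: min(candidates, key=lambda z: l1 * z[0] + l2 * z[1])
    print(supported_points(solve))

On this toy instance the recursion recovers the extreme supported points; points lying exactly on a face of the convex hull (ties of $P_\lambda$) would require the handling of multiple optima described above.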

3.2. The second phase

The purpose of the second phase is to generate the set $NSE(P) = E(P) \setminus SE(P)$
of non-supported efficient solutions. Each non-supported
efficient solution has its image inside the triangle $\triangle Z^r Z^s$ determined by
two successive solutions $X^r$ and $X^s$ of $SE(P)$ (see Figure 1). So each of
the $|SE(P)| - 1$ triangles $\triangle Z^r Z^s$ is successively analysed. This phase
is more difficult to manage and depends on the particular MOCO
problem analysed; generally this second phase is achieved using, in part, a
classical single-objective CO method. Examples of such a second phase
are given in [Visee98] for the bi-objective knapsack problem.
We present here this second phase for the bi-objective assignment
problem [TuyttensOO]:

$\text{"min"}\ z_k(X) = \sum_{i=1}^{n} \sum_{j=1}^{n} c_{ij}^{(k)} x_{ij}, \quad k = 1, 2$
$\sum_{j=1}^{n} x_{ij} = 1, \quad i = 1, \ldots, n$
$\sum_{i=1}^{n} x_{ij} = 1, \quad j = 1, \ldots, n$
$x_{ij} \in \{0, 1\}.$

We note $c_{ij}^{(\lambda)} = \lambda_1 c_{ij}^{(1)} + \lambda_2 c_{ij}^{(2)}$.
It is well known that the single-objective assignment problem satisfies
the integrality property; nevertheless, in the multi-objective framework
there exist non-supported efficient solutions, as indicated in the following
didactic example defined by two $4 \times 4$ cost matrices $C^{(1)}$ and $C^{(2)}$.

The values of the feasible solutions are represented in the objective
space in Figure 2.

Figure 2. The feasible points in the $(z_1, z_2)$-space for the didactic example

There are four supported efficient solutions, corresponding to the points
$Z^1$, $Z^2$, $Z^3$ and $Z^4$; two non-supported efficient solutions corresponding
to the points $Z^5$ and $Z^6$; the eighteen other solutions are not efficient.
In the first phase, the objective function $z_\lambda(X)$ has been optimized
by the Hungarian method, giving

• $z_\lambda = \lambda_1 z_{1r} + \lambda_2 z_{2r} = \lambda_1 z_{1s} + \lambda_2 z_{2s}$, the optimal value of $z_\lambda(X)$;

• the optimal values of the reduced costs $\bar{c}_{ij}^{(\lambda)} = c_{ij}^{(\lambda)} - (u_i + v_j)$,
where $u_i$ and $v_j$ are the dual variables associated respectively with
constraints $i$ and $j$ of problem $P_\lambda$.

At optimality, we have $\bar{c}_{ij}^{(\lambda)} \ge 0$ and $x_{ij} = 1 \Rightarrow \bar{c}_{ij}^{(\lambda)} = 0$.


First step. We consider $L = \{x_{ij} : \bar{c}_{ij}^{(\lambda)} > 0\}$. To generate non-supported
efficient solutions in the triangle $\triangle Z^r Z^s$, each variable $x_{ij} \in L$ is a candidate to
be set to 1. Nevertheless, a variable can be eliminated if we are sure that
the reoptimization of problem $P_\lambda$ will provide a dominated point in the
objective space. If $x_{ij} \in L$ is set to 1, a lower bound $l_{ij}$ on the increase
of $z_\lambda$ is given by

$l_{ij} = \bar{c}_{ij}^{(\lambda)} + \min\Big( \bar{c}_{i_r j_r}^{(\lambda)},\ \min_{k \ne j} \bar{c}_{i_r k}^{(\lambda)} + \min_{k \ne i} \bar{c}_{k j_r}^{(\lambda)},\ \bar{c}_{i_s j_s}^{(\lambda)},\ \min_{k \ne j} \bar{c}_{i_s k}^{(\lambda)} + \min_{k \ne i} \bar{c}_{k j_s}^{(\lambda)} \Big),$

where the indices $i_r$ and $j_r$ ($i_s$ and $j_s$) are such that in solution $X^r$ ($X^s$)
we have $x_{i_r j} = x_{i j_r} = 1$ ($x_{i_s j} = x_{i j_s} = 1$).

Effectively, to reoptimize problem $P_\lambda$ with $x_{ij} = 1$, with regard to its
optimal solution $X^r$ ($X^s$), it is necessary to determine - at least - a new
assignment in row $i_r$ ($i_s$) and in column $j_r$ ($j_s$). But clearly, to
be inside the triangle $\triangle Z^r Z^s$, we must have (see Figure 3)

$z_\lambda + l_{ij} < \lambda_1 z_{1s} + \lambda_2 z_{2r}.$


Consequently, we obtain the following fathoming test:
Test 1. $x_{ij} \in L$ can be eliminated if $z_\lambda + l_{ij} \ge \lambda_1 z_{1s} + \lambda_2 z_{2r}$ or, equivalently,
if $l_{ij} \ge \lambda_1 \lambda_2$.
So in this first step the lower bound $l_{ij}$ is determined for all $x_{ij} \in L$;
the list is ordered by increasing values of $l_{ij}$.
Only the variables not eliminated by Test 1 are kept. Problem $P_\lambda$ is
reoptimized successively for each non-eliminated variable; let us note
that only one iteration of the Hungarian method is needed. After the
optimization, the solution is eliminated if its image in the objective space
is located outside the triangle $\triangle Z^r Z^s$. Otherwise, a non-dominated
solution is obtained and put in a list $NS_{rs}$; at this point the second step
is applied.

Figure 3. Test 1
Second step. When non-dominated points $Z_1, \ldots, Z_m \in NS_{rs}$ are found
inside the triangle $\triangle Z^r Z^s$, Test 1 can be improved. Effectively (see
Figure 4), in this test the value $\lambda_1 z_{1s} + \lambda_2 z_{2r} = \lambda_1 z_{1,m+1} + \lambda_2 z_{2,0}$
can be replaced by the lower value

$\max_{i=0, \ldots, m} (\lambda_1 z_{1,i+1} + \lambda_2 z_{2,i}), \quad \text{where } Z_0 \equiv Z^r \text{ and } Z_{m+1} \equiv Z^s.$

Figure 4. Test 2

The new value corresponds to an updated upper bound of $z_\lambda(X)$ for non-dominated
points. This yields the new test:
Test 2. $x_{ij} \in L$ can be eliminated if $z_\lambda + l_{ij} \ge \max_{i=0, \ldots, m} (\lambda_1 z_{1,i+1} + \lambda_2 z_{2,i})$.

More variables of $L$ can thus be eliminated. Each time a new non-dominated
point is obtained, the list $NS_{rs}$ and Test 2 are updated. The procedure
stops when all the $x_{ij} \in L$ have been either eliminated or analysed.
At that moment the list $NS_{rs}$ contains the non-supported solutions
corresponding to the triangle $\triangle Z^r Z^s$.
When each triangle has been examined, $NSE(P) = \bigcup_{rs} NS_{rs}$.

This two-phase method has been applied to bi-objective assignment
problems of dimension varying from $n = 5$ to $n = 50$ (see [TuyttensOO]).
It appears that:

a) the CPU time used by the method increases exponentially
with the size of the problem;
b) the number of supported solutions and the number of non-supported
solutions increase at approximately the same rate. For instance:

    n      E(P)    SE(P)    NSE(P)
    5         8        3         5
    20       54       13        41
    35       82       27        55
    50      156       57        99

We note that this second fact is different for the bi-objective knapsack
problem (see [Visee98]), in which the number of non-supported solutions
increases faster than the number of supported efficient solutions.
For instance:

    n      E(P)    SE(P)    NSE(P)
    10        4        2         1
    100     134       18       116
    200     410       36       374
    300     834       55       778
    400    1198       69      1129
    500    1778       86      1689

4. Heuristics
As pointed out in [TuyttensOO; Ulungu94; Ulungu97; Visee98], it is
unrealistic to extend the exact methods described above to MOCO problems
with more than two criteria or more than a few hundred variables;
the reason is that these methods are too time-consuming. Because
metaheuristics - Simulated Annealing (SA), Tabu Search (TS), Genetic
Algorithms (GA), etc. - provide, for the single-objective problem, excellent
solutions in a reasonable time, it appeared logical to try to adapt
these metaheuristics to a multi-objective framework.
The seminal work in this direction is the Ph.D. thesis of Ulungu E.L.
in 1993, giving rise to the so-called MOSA method to approximate $E(P)$

(see in particular [Ulungu99]). After this pioneering study, this direction
has been taken up by other research teams: Czyzak and Jaszkiewicz
[Czyzak98] proposed another way to adapt SA to a MOCO problem;
independently, Hansen [HansenOO], Gandibleux et al. [Gandibleux97]
and Ben Abdelaziz et al. [Abdelaziz99] did the same with TS, the latter
also combining TS and GA; GA is also used by Viennet et al.
[Viennet96]. Very recently Loukil [LoukilOO], in collaboration with our
research team, also proposed an adaptation of tabu search - called MOTAS -
to a multi-objective framework.

Hereafter we successively present the MOSA and MOTAS approaches.

4.1. MOSA method

The basic idea of the MOSA method can be summarized in a few words.
One begins with an initial iterate $X_0$ and initializes the set of potentially
efficient points $PE$ to contain just $X_0$. One then samples a point $Y$ in
the neighborhood of the current iterate. But instead of accepting $Y$ if
it is better than the current iterate on an objective, we now accept
it if it is not dominated by any of the points currently in the set $PE$.
If it is not dominated, we make $Y$ the current iterate, add it to $PE$,
and throw out all points in $PE$ that are dominated by $Y$. On the other
hand, if $Y$ is dominated, we still make it the current iterate with some
probability. In this way, as we move the iterate through the space, we
simultaneously build up a set $PE$ of potentially efficient points. The
only complicated aspect of this scheme is the method for computing the
acceptance probability for $Y$ when it is dominated by a point in $PE$.
We now describe the MOSA method mathematically.

4.1.1 Preliminaries.
• A widely diversified set of weights is considered: different weight
vectors $\lambda^{(l)}$, $l \in L$, are generated, where $\lambda^{(l)} = (\lambda_k^{(l)}, k = 1, \ldots, K)$
with $\lambda_k^{(l)} \ge 0\ \forall k$ and $\sum_{k=1}^{K} \lambda_k^{(l)} = 1\ \forall l \in L$.
This set of weights is uniformly generated:

$\lambda_k^{(l)} \in \Big\{ 0, \frac{1}{r}, \frac{2}{r}, \ldots, \frac{r-1}{r}, 1 \Big\}.$

The number $r$ can be defined by the DM (a sketch of this weight grid is
given after this list).

• A scalarizing function $s(z, \lambda)$ is chosen. As specified in [Visee98],
the effect of this choice on the procedure is small, due to the stochastic
character of the method. The weighted sum is very well known
and is the easiest scalarizing function:

$s(z, \lambda) = \sum_{k=1}^{K} \lambda_k z_k.$

• Three classic parameters of a SA procedure are initialized:
  - $T_0$: the initial temperature (or alternatively an initial acceptance
    probability $P_0$);
  - $\alpha$ (< 1): the cooling factor;
  - $N_{step}$: the length of the temperature step in the cooling schedule.

• Two stopping criteria are fixed:
  - $T_{stop}$: the final temperature;
  - $N_{stop}$: the maximum number of iterations without improvement.

• A neighborhood $V(X)$ of feasible solutions in the vicinity of $X$ is
defined. This definition is problem-dependent.
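As announced above, the uniformly generated weight grid can be enumerated as follows; this Python sketch simply lists all weight vectors with components in $\{0, 1/r, \ldots, 1\}$ summing to one, for values of K and r chosen purely for illustration.

    from itertools import product
    from fractions import Fraction

    def weight_grid(K, r):
        """All weight vectors with components in {0, 1/r, ..., 1} summing to 1."""
        steps = [Fraction(i, r) for i in range(r + 1)]
        return [w for w in product(steps, repeat=K) if sum(w) == 1]

    for w in weight_grid(K=3, r=4):
        print(tuple(float(x) for x in w))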

4.1.2 Determination of $PE(\lambda^{(l)})$, $l \in L$.   For each $l \in L$
the following procedure is applied to determine a list $PE(\lambda^{(l)})$ of potentially
efficient solutions. Similarly to a single-objective heuristic, in which
a potentially optimal solution emerges, in the MOSA method the set
$PE(\lambda^{(l)})$ will contain potentially efficient solutions.

(a) Initialization
  • A greedy step is considered to produce an initial solution $X_0$.
    This step is problem-dependent.
  • Evaluate $z_k(X_0)\ \forall k$.
  • $PE(\lambda^{(l)}) = \{X_0\}$; $N_{count} = n = 0$.

(b) Iteration n
  • Draw at random a solution $Y \in V(X_n)$.
  • Evaluate $z_k(Y)$ and determine $\Delta z_k = z_k(Y) - z_k(X_n)\ \forall k$.

  • Calculate $\Delta s = s(z(Y), \lambda) - s(z(X_n), \lambda)$.
    If $\Delta s \le 0$, we accept the new solution: $X_{n+1} \leftarrow Y$, $N_{count} = 0$.
    Else we accept the new solution with a certain probability $p = \exp(-\Delta s / T_n)$:

    $X_{n+1} \leftarrow \begin{cases} Y & \text{with probability } p,\ \ N_{count} = 0,\\ X_n & \text{with probability } 1-p,\ \ N_{count} = N_{count} + 1.\end{cases}$

  • If necessary, update the list $PE(\lambda^{(l)})$ with regard to solution $Y$.
  • $n \leftarrow n + 1$
    - If $n \bmod N_{step} = 0$ then $T_n = \alpha T_{n-1}$; else $T_n = T_{n-1}$;
    - If $N_{count} = N_{stop}$ or $T_n < T_{stop}$ then stop; else iterate.
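A minimal Python sketch of this acceptance scheme for one weight vector is given below (minimization setting); the neighborhood routine, the objective functions and the solution encoding (assumed hashable, e.g. tuples) are hypothetical placeholders to be supplied for a concrete MOCO problem, and this is not the authors' implementation.

    import math, random

    def mosa_single_weight(x0, objectives, neighbor, weights,
                           T0=10.0, alpha=0.9, n_step=50, t_stop=0.01, n_stop=200):
        """One MOSA run (minimization) for a fixed weight vector; returns PE(lambda)."""
        z = lambda x: tuple(f(x) for f in objectives)
        dominates = lambda a, b: all(u <= v for u, v in zip(a, b)) and a != b

        x, zx, T, n_count, n = x0, z(x0), T0, 0, 0
        pe = {x0: zx}                                   # potentially efficient solutions
        while T >= t_stop and n_count < n_stop:
            n += 1
            y = neighbor(x)
            zy = z(y)
            ds = sum(w * (a - b) for w, a, b in zip(weights, zy, zx))
            if ds <= 0 or random.random() < math.exp(-ds / T):
                x, zx, n_count = y, zy, 0               # accept the move
            else:
                n_count += 1
            if not any(dominates(v, zy) for v in pe.values()):
                pe = {s: v for s, v in pe.items() if not dominates(zy, v)}
                pe[y] = zy                              # update PE(lambda) with Y
            if n % n_step == 0:
                T *= alpha
        return pe

For a concrete problem one would supply, for instance, a knapsack or assignment neighborhood and the corresponding objective functions, run the routine for each weight vector of the grid, and then apply the filtering operator of Section 4.1.3 across the resulting lists.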

4.1.3 Generation of $\hat{E}(P)$.   Because of the use of a scalarizing
function, a given set of weights $\lambda^{(l)}$ induces a privileged direction on
the efficient frontier. The procedure generates only a good subset of
potentially efficient solutions in that direction, and these solutions are
often dominated by some solutions generated with other weight sets.
To obtain a good approximation $\hat{E}(P)$ of $E(P)$, it is thus necessary to
filter the set $\bigcup_{l=1}^{|L|} PE(\lambda^{(l)})$. This operation is very simple and consists
only in making pairwise comparisons of all the solutions contained in
the sets $PE(\lambda^{(l)})$ and removing the dominated solutions. This filtering
procedure is denoted by $\wedge$, so that

$\hat{E}(P) = \bigwedge_{l=1}^{|L|} PE(\lambda^{(l)}).$
Examples of the use of MOSA are presented in [Ulungu99] and [TuyttensOO],
respectively for the multi-objective knapsack problem and the
multi-objective assignment problem.
Some measures are designed to evaluate the proximity and the uniformity
of the approximation set with respect to the exact efficient set obtained
with the two-phase method.
The numerical tests show that MOSA provides a good approximation of
the efficient set and that the results are stable with respect to the size
of the problem.
The MOSA method remains valid for a larger number of objectives and
for large-scale problems.

4.2. MOTAS method

The tabu search method can also be adapted to tackle MOCO problems.
We present here the so-called MOTAS method (see [LoukilOO]).

The aim of the MOTAS method is also to determine a good approximation,
denoted by $\hat{E}(P)$ and called the set of potentially efficient solutions,
i.e. the generated solutions which are not dominated by any other generated
solution.
As in the MOSA method, MOTAS requires the consideration of weight vectors
$\lambda \in \Lambda = \{\lambda \mid \lambda_k \ge 0 \text{ and } \sum_{k=1}^{K} \lambda_k = 1\}$ to aggregate, in a way defined
below, the different objective functions.

4.2.1 Basic concepts.

Let $X_n$ be a current solution at iteration $n$.

$V(X_n)$ is a neighborhood of $X_n$.

A subneighborhood $SV(X_n)$ is made by randomly selecting $K_1$ neighbors.

The tabu list length is $K_2$, with $K_2 < K_1$.

Let $Y$ be a solution in $SV(X_n)$.

$\delta_k(Y) = z_k(Y) - z_k(X_n)$ is the variation of objective function $k$.

Among the non-dominated solutions in $SV(X_n)$, it is necessary to define
a method for selecting the "best" neighbor of $X_n$.

Solution $Y_i$ is "better" than solution $Y_j$ if its modification vector $\delta(Y_i)$
is smaller than the modification vector $\delta(Y_j)$ in the sense of the following
weighted infinite norm:

$\max_{1 \le k \le K} \frac{\lambda_k \delta_k(Y_i)}{R_k} < \max_{1 \le k \le K} \frac{\lambda_k \delta_k(Y_j)}{R_k},$

where $R_k$ is the range of the $k$th objective function over all non-dominated
neighbors of $X_n$: $R_k = m_k - M_k$.

An aspiration value $A(Y)$ is defined for each solution, with aspiration level

$A^* = \min_{X \in PE(\lambda)} A(X).$
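The selection of the "best" non-dominated neighbor by the weighted infinite norm above can be sketched as follows in Python; the neighbor variations, weights and ranges are invented for the example, and the tabu and aspiration logic of the full algorithm is deliberately left out here.

    def best_neighbor(deltas, weights, ranges):
        """Pick the neighbor whose weighted, range-scaled worst variation is smallest."""
        score = lambda d: max(w * dk / r for w, dk, r in zip(weights, d, ranges))
        return min(deltas, key=lambda item: score(item[1]))

    # delta_k(Y) = z_k(Y) - z_k(X_n) for three hypothetical non-dominated neighbors.
    deltas = [("Y1", (2.0, -1.0)), ("Y2", (-0.5, 0.5)), ("Y3", (1.0, 0.0))]
    name, _ = best_neighbor(deltas, weights=(0.6, 0.4), ranges=(4.0, 2.0))
    print(name)   # 'Y2'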

4.2.2 Principles of the MOTAS algorithm.

a) Determination of $PE(\lambda)$.
The following procedure is applied to generate a set of potentially efficient
solutions $PE(\lambda)$.

• Initialization:
  Draw at random an initial solution $X_0$.
  Evaluate $z_k(X_0)\ \forall k$.
  $PE(\lambda) = \{X_0\}$.
  Parameters $K_1 > K_2$.
  Parameter $N$ (maximum number of iterations).
  $T = \emptyset$ (tabu list); $n = 0$.

• Iterative procedure:
  $X_n$ is the current solution; $\delta = 0$.
  Generate randomly $K_1$ neighbors: $Y_i$ ($i = 1, \ldots, K_1$) is a neighbor of $X_n$.
  For each $i \le K_1$:
  - If $Y_i$ is dominated by some $X \in PE(\lambda)$, do $i = i + 1$.
  - If $Y_i$ is non-dominated by all $X \in PE(\lambda)$, then update
    $PE(\lambda)$ by including $Y_i$.
    * If $Y_i$ is not tabu:
      - If $\delta = 0$: $X_{n+1} \leftarrow Y_i$ and $\delta = 1$;
      - If $\delta = 1$ and $A(Y_i) < A(X_{n+1})$: $X_{n+1} \leftarrow Y_i$.
    * If $Y_i$ is tabu:
      - If $\delta = 0$ and $A(Y_i) < A^*$: $X_{n+1} \leftarrow Y_i$;
      - If $\delta = 1$ and $A(Y_i) < \min\{A^*, A(X_{n+1})\}$: $X_{n+1} \leftarrow Y_i$.
    Do $i = i + 1$.
  - If $i = K_1$: update $M_k$, $m_k$, $A^*$ and the tabu list.
  - While $n < N$, do $n = n + 1$;
  - End if $n = N$.

b) Generation of $\hat{E}(P)$.
The procedure is similar to the one described in Section 4.1.3.
• A widely diversified set of weights is considered: different weight
vectors $\lambda^{(l)}$, $l \in L$, are generated, where $\lambda^{(l)} = (\lambda_k^{(l)}, k = 1, \ldots, K)$
with $\lambda^{(l)} \in \Lambda\ \forall l \in L$. This set of weights is uniformly generated
as in the MOSA method. For each of them, the procedure described
in paragraph 4.2.2 a) is applied to obtain $|L|$ lists $PE(\lambda^{(l)})$.
• The set $\bigcup_{l=1}^{|L|} PE(\lambda^{(l)})$ is filtered by pairwise comparisons in order
to remove the dominated solutions:

$\hat{E}(P) = \bigwedge_{l=1}^{|L|} PE(\lambda^{(l)}).$

5. An interactive MOSA method

For large-dimension problems it appears unrealistic and useless to
completely generate $\hat{E}(P)$, which is too large a set. So we only seek
a small number of solutions satisfying the preferences of the DM. The
DM will progressively express the characteristics that the solutions must
satisfy.

We present here an interactive MOSA method, but the MOTAS method
can be adapted similarly.

5.1. Initializations
• The SA parameters: $P_0$, $\alpha$, $N_{step}$, $T_{stop}$ and $N_{stop}$.
• Application of SA to each objective $z_k$ individually.
  The best solution found for objective $z_k$ is denoted $B_k$.
• Definition of the variation interval $[M_k, m_k]$ of each objective $z_k$:
  $M_k = z_k(B_k)$ and $m_k = \max_{l=1, \ldots, K} z_k(B_l)$. $M_k$ and $m_k$ are approximations,
  respectively, of the coordinates of the ideal point and of the
  nadir point. The definitions of these values correspond to the case
  of a minimization problem.

• The list of solutions proposed to the DM: $L_0 = \emptyset$.

• The restrictive goals $g_k$ on each objective. We suggest initializing
  these quantities with values contained in the variation intervals of
  the objectives $z_k$ and given by the DM.

  Nevertheless it is possible, even for this initialization, to set these
  quantities as upper bounds for the objectives, i.e. minimal satisfaction
  levels with respect to the worst values of the objectives $z_k$
  ($g_k = m_k$, $k = 1, \ldots, K$), so that the whole efficient frontier will be
  explored at the first iteration.

• The set of $|L|$ weight vectors: $W = \{(\lambda_k^{(l)}, k = 1, \ldots, K),\ l \in L\}$.

5.2. Iteration m: dialogue phase with the DM (m > 1)

A list $L_{m-1}$ is proposed to the DM, who:
• discards the unsatisfying solutions from $L_{m-1}$, keeping only the
  preferred solution(s);

• modifies the satisfaction levels $g_k$, taking into account the information
  furnished by the preferred solution(s). Based on the goals
  expressed by the DM, and when $\exists\, k \in \{1, \ldots, K\}$ such
  that $g_k \ne m_k$, a new weight set $\lambda^{(|L|+1)}$ is defined;

• updates the parameters of SA.
  After the analysis of the list $L_{m-1}$, and based on his/her preferences,
  the DM can modify the parameters $N_{step}$ and $\alpha$ of SA.
  At each iteration, the DM can choose either to intensify (namely by
  increasing the parameters $N_{step}$ and $\alpha$) or not (by decreasing the
  parameters $N_{step}$ and $\alpha$) the search for new potentially interesting
  solutions in directions towards the efficient frontier.

5.2.1 Iteration m: computation phase of the MOSA method.

• Based on the goals expressed by the DM, new bounds $w_k$, with
  $0 \le w_k \le 1\ \forall k$, are defined, and a restricted list of weights
  $W^{(m)} \subset W$ is used. We have the relation $0 \le \sum_{k=1}^{K} w_k \le K$.
K
  When $\sum_{k=1}^{K} w_k = 0$, or $g_k = m_k\ \forall k$ (the DM defines no aspiration
  levels), no new set of weights is defined, because the complete set of
  weights $W$ is used; in this case the DM explores the whole efficient
  frontier.

  The larger $\sum_{k=1}^{K} w_k > 0$, the greater the reduction of
  the explored part of the efficient frontier. The restricted set $W^{(m)}$
  allows the elimination of some weight sets that become useless with respect to
  the aspiration levels defined by the DM. In order not to become too
  restrictive in the elimination of the weight sets, and to be sure
  to cover the reduced part of the efficient frontier the DM wants to
  explore, a not too small value has to be assigned to the parameter $\gamma$.
  Fixing $\gamma$ to its lowest value is equivalent to restricting the set $W^{(m)}$
  to the unique weight set $\lambda^{(|L|+1)}$. We used the value $\gamma = 0.9$ in
  our numerical experiments.

  In order not to obtain an empty set of weights (namely when the
  aspiration levels of the DM become too strong, or when too small a value
  is assigned to $\gamma$), we introduce the new weight set $\lambda^{(|L|+1)}$.

• The MOSA method is then executed with each set of weights in
  $W^{(m)}$. During the execution, the following values are updated:

  if $\exists\ z_k(X) < M_k$ then $M_k = z_k(X)$;
  if $\exists\ z_k(X) > m_k$ then $m_k = z_k(X)$.
• The MOSA method then proposes the list of solutions $L_m$, obtained by
  filtering together the previous list $L_{m-1}$ and the newly generated
  sets $PE(\lambda^{(l)})$. This filtering procedure (denoted by $\wedge$) consists in making
  pairwise comparisons of all the solutions contained in the sets $PE(\lambda^{(l)})$
  and the solutions in the previous list $L_{m-1}$; the dominated solutions
  and the solutions not respecting the satisfaction levels $g_k$
  are removed.
  When the satisfaction levels $g_k$ defined by the DM are too strong,
  the list of solutions becomes empty, $L_m = \emptyset$. In this case the
  DM is asked to redefine new satisfaction levels. Instead of just
  proposing the previous list $L_{m-1}$, and in order to take into account
  the information obtained during the last computation phase of the
  MOSA method, we then propose a list combining $L_{m-1}$ with the newly
  generated potentially efficient solutions.
This interactive MOSA procedure stops when the DM is completely
satisfied by one solution or a subset of solutions in $L_m$.
Some large-scale multi-objective assignment problems and knapsack
problems - with more than two objectives - are treated by such an interactive
approach in [TeghemOO]. It appears particularly interesting and
well designed for solving real case studies (see [Ulungu98]).

6. Conclusions
MOCO appears as a general model with many potential applications,
and in this field there is still much room for intensive research.

Some exact methods to generate the set $E(P)$ of efficient solutions
can be designed, but their complexity induces a large computing time,
so that only instances of small dimension can be tackled.

The metaheuristics can be adapted to a multi-objective framework
and they provide a powerful tool to determine an approximation $\hat{E}(P)$
of $E(P)$; they are able to treat easily large-dimension problems with
more than two objectives.

For real applications, it appears unrealistic and useless to completely
generate $\hat{E}(P)$, which is too large a set. Only a small number of solutions
satisfy the preferences of the decision maker. So it is interesting to
develop interactive methods in which the decision maker progressively
expresses the characteristics that the solutions must satisfy. The
interactive MOSA method has been designed in this spirit.

References
Ben Abdelaziz F., Chaouachi J. and Krichen S. (1999). A hybrid heuristic for multiobjective knapsack problems, in Voss S., Martello S., Osman I. and Roucairol C. (Eds), Metaheuristics, Advances and Trends in Local Search Paradigms for Optimization, Kluwer Academic Publishers, 205-212.
Czyzak P. and Jaszkiewicz A. (1998). Pareto simulated annealing - a metaheuristic technique for multiple objective combinatorial optimization, Journal of Multi-Criteria Decision Analysis, Vol. 7, 34-47.
Gandibleux X., Mezdaoui N. and Freville A. (1997). A tabu search procedure to solve multiobjective combinatorial optimisation problems, in Caballero R. and Steuer R. (Eds), Proceedings volume of MOPGP'96, Springer Verlag, 291-300.
Hansen M.P. (2000). Tabu search for multiobjective combinatorial optimization: TAMOCO, Control and Cybernetics, Vol. 29, (3), 799-818.
Loukil T., Teghem J. and Fortemps Ph. (2000). Solving multi-objective production scheduling problems with tabu search, Control and Cybernetics, Vol. 29, (3), 819-828.
Teghem J. and Kunsch P. (1986). A survey of techniques for finding efficient solutions to multi-objective integer linear programming, Asia-Pacific Journal of Operations Research, Vol. 3, 1195-1206.
Teghem J. and Kunsch P. (1986). Interactive methods for multi-objective integer linear programming, in Fandel G. et al. (Eds), Large Scale Modelling and Interactive Decision Analysis, Springer, 75-87.
Teghem J., Tuyttens D. and Ulungu E.L. (2000). An interactive heuristic method for multi-objective combinatorial optimization, Computers and Operations Research, Vol. 27, 621-634.
Tuyttens D., Teghem J., Fortemps Ph. and Van Nieuwenhuyse K. (2000). Performance of the MOSA method for the bicriteria assignment problem, Journal of Heuristics, 6, 295-310.
Ulungu E.L. and Teghem J. (1994). Multi-objective combinatorial optimization problems: a survey, Journal of Multi-Criteria Decision Analysis, Vol. 3, 83-104.
Ulungu E.L. and Teghem J. (1997). Solving multi-objective knapsack problem by a branch and bound procedure, in Climaco J. (Ed.), Multicriteria Analysis, Springer-Verlag, 269-278.
Ulungu E.L., Teghem J., Fortemps Ph. and Tuyttens D. (1999). MOSA method: a tool for solving MOCO problems, Journal of Multi-Criteria Decision Analysis, Vol. 8, 221-236.
Ulungu E.L., Teghem J. and Ost Ch. (1998). Efficiency of interactive multi-objective simulated annealing through a case study, Journal of the Operational Research Society, Vol. 49, 1044-1050.
Viennet R. and Fontex M. (1996). Multi-objective combinatorial optimization using a genetic algorithm for determining a Pareto set, International Journal of Systems Science, Vol. 27, (2), 255-260.
Visee M., Teghem J., Pirlot M. and Ulungu E.L. (1998). Two-phases method and branch and bound procedures to solve the bi-objective knapsack problem, Journal of Global Optimization, Vol. 12, 139-155.
OUTCOME - BASED NEIGHBORHOOD
SEARCH (ONS)
A Concept for Discrete Vector Optimization Problems

Walter Habenicht
University of Hohenheim. Stuttgart. Germany
habenich@uni-hohenheim.de

Abstract: In this paper, a concept for an enumerative approach in vector optimization is


developed. This concept is based on a two-stage-procedure, identifying
efficient solutions in the first stage, and performing a local search in outcome
space in the second stage. The paper focuses on the discussion of different
neighborhood concepts in outcome space. They are discussed under the
aspects of convergence and complexity.

Key words: Discrete vector optimization; Local search; Quad tree; Neighborhood search

1. Basic concepts
This paper deals with the discrete vector optimization problem (DVOP),
defined as follows: "minimize" y = f(x), x E X. With a countable decision
set X, and a vector valued outcome function f: X~ Y c mm. Restricting
attention to the outcome space, one gets the equivalent formulation of
(DVOP): "minimize" y E Y. Where Y is the countable set of feasible
outcomes. In solving DVOP one is interested in the nondominated set
N:={YEYI y'~y => Y'(;!: V}, and the efficient set eff(X):={XEXI f(X)EN},
respectively. Throughout this paper we assume, that the efficient set still has
a huge cardinality. In fact, this is the case in many combinatorial problems,
like complex routing problems or in scheduling.
The main approach in vector optimization is an interactive one.
Transforming the vector optimization problem to a classical optimization
problem by means of some scalarizing function, one realizes an interactive
search in the efficient set. The most prominent approach for discrete
528 AIDING DECISIONS WITH MULTIPLE CRITERIA

problems is the reference point method introduced by Wierzbicky(1986), a


good overview can be found in Steuer(1985).
The aim of solving the optimization problem, which, in fact, is used in a
parameterized form, is twofold. On the one hand one tries to identify some
efficient solution, on the other hand the parameters of the optimization
problem are used to control the iterative searching process in the set of
efficient outcomes. In other words, in each iteration of a classical interactive
approach, the problem of identifying efficient solutions and the problem of
selecting some specific efficient solution are treated simultaneously. We call
such type of procedure a one-stage procedure.
Alternatively, one may define a two-stage procedure, by enumerating the
set of efficient solutions in a first stage, and performing a searching process
in the enumerated set in the second stage. Such a two-stage procedure is
sketched infigure 1.

. ....... .
......... (partial) Enumeration
(param.) Optimization
Metaheuristics

Structure of tbe
Emdent set:
• Linear Ust
• Efficient Quadtree

Optimizationprocedures:
• Weighting
• Goalprogramming
• Referencepoint Approach

Neighborhood Searching:
• Tree-Neighborhood
• ~-Neighborhood
• p-Neighborhood
• cone-Neighborhood

Figure 1. Two-stage concept

The rationale for this approach lies in the idea, that discrete optimization
problems are often NP-hard. Hence, in the one-step procedure, we would
Outcome - Based Neighborhood Search (ONS) 529

have to solve a NP-hard problem in any iteration of the interactive process,


producing very long response times. This may be unacceptable for a decision
maker, involved in this process. On the other hand, in some cases the
difference in the amount of computation between solving a single NP-hard
optimization problem to optimality, and enumerating the efficient set of the
related vector optimization problem may be moderate.
Moreover, in the enumeration process we do not need any information
from the decision maker. Therefore, it can be done as some kind of pre-
processing, without involving the decision maker. The crucial point of this
strategy to be successful, lies in the fact, that one can realize an interactive
searching process in the enumerated (efficient) set with reasonable response
times.
Normally, at the end of the enumeration process in stage 1, we have
generated a set of points in the m-dimensional outcome space without any
structure. If this set is stored in some arbitrary manner, probably as a linear
list, most searching operations will lead to a more or less complete
enumeration of this set. This is even true for the application of optimization
approaches based on weightings, goal programming or reference points, if
they are operating on this unstructured set. From a computational point of
view, this may cause no problems as long as the cardinality of the
nondominated set is moderate, say some thousands points. But if we have a
huge number of nondominated points we may get into troubles with
computation time, in trying to identify specific subsets of the nondominated
set.
In the sequel we will show, that quad trees provide a structure, that may
be helpful in analyzing the nondominated set in this situation. This data
structure is well suited to the evaluation of dominance relations. This
property may be used to speed up the identification process for efficient
solutions which, in fact, is performed in the first stage (see Habenicht
(1984), pp. 38 and Habenicht(1983)). The main purpose of this paper,
however, is to show, that the quad tree structure may also support the
identification of specific subsets of the nondominated set, namely of some
types of neighborhoods. The definitions of neighborhood used in this paper
are predominantly motivated by the aspect of identifying neighbors using the
quad tree structure.

2. Efficient quad trees


The data structure quad tree was first introduced by Finkel and
Bentley(1974) as a means for associative searching in multidimensional
spaces. Habenicht(1983) was the first, who applied it to the identification of
530 AIDING DECISIONS WITH MULTIPLE CRITERIA

efficient solutions in discrete vector optimization problems. He developed


the System ENUQUAD (see Habenicht(l991» which has been the first quad
tree based decision support tool for discrete vector optimization problems.
Meanwhile, there exist other quad tree based systems like InterQuad,
developed by Sun, Steuer(1996), and CHESS by Borges(1999).
Here, we will give a short informal introduction to the quad tree concept,
a detailed description can be found in Habenicht(l984). A quad tree in m-
dimensional space is a 2m_ary rooted tree. The nodes of the tree represent
points in the m-dimensional space. The structure is given by the notion of k-
successor, defined as follows:

Let x, y E mm, x:;t:y. x is called a k-successor ofy, iff k=Li: Xj>Yj i-I.

If x and yare nodes of a quad tree, x is called a k-son of y, iff

i) x is a k-successor ofy, and


ii) x is a son ofy, i.e. there is an edge from y to x in the tree.

Now we can introduce the quad tree structure by adding an algorithm for
the insertion of a vector into a quad tree.

Algorithm: Insertion into a quad tree.

a. Prerequisites: Let x E mm, Q be a quad tree in m dimensions


with root r.
h. Starting condition: If the node set of Q is empty, then
let r:= x, stop, else, let y:=r.
c. Iteration: Determine k, such that x is a k-successor ofy.
Ifthere exist no k-son ofy in Q, then
x becomes the k-son ofy, stop,
else, let y:= k-son ofy, repeat iteration.

Figure 2 shows a quad tree in two dimensional space, storing the set Y =
{(5,6),(7, 7),(2,4),(6,3),(8,4),(3,7),(7, I)}. Obviously, the structure is not
unique, for a given set. It depends on the order the elements are inserted. The
numbers at the edges are the "k's" of the k-son-relation. In brackets we have
noticed them as dual numbers. This is of some interest, since the dual
representation is related to the relative position between the k-successor and
his predecessor in the following way. In the i-th position of the dual
representation of k, counted from right to left, we find aI, if and only if the
i-th component of the k-successor is greater then its predecessor. In the
Outcome - Based Neighborhood Search (ONS) 531

lower part of figure 2 we show the separation of the space induced by the
quad tree structure.
There is a strong relation between k-successorship and dominance, as
stated in the following theorem:
Theorem 1 (Habenicht(1984),p.43):

Let x, y E mm, x*y, and x be a k-successor ofy. Then


i) x dominates y, iffk = 0.
ii) Y dominates x, iff Xj = Yj ViE Sock).

I
1 •

:H .•..
.7 H..... I:
··1:
1 ~
:


1
:- - . - - - - - - - - - - - -
1
-6 -"'---,..-:----~ :
.............I····r········.. ···~······················ ...
1 : 1
1 : 1
1 : 1

:~-.------------
:
1
1 1
-------,---~
: 1
: 1

1 2

Figure 2. Quad tree

With So(k):= {i I bj=O, i :;;m} with bi E {O,l}, such that k=Lj bji- I, i.e. iE
SoCk) ¢::> Xj :;; Yj. Accordingly, we define Sl(k):= {i I bj=l, i :;; m}, i.e. iE
SI(k) ¢::> Xi > Yi. Then, we can formulate another theorem, concerning
dominance relations among certain triples of vectors. This, in fact, is the
central result for the evaluation of dominance relations in quad trees.
532 AIDING DECISIONS WITH MULTIPLE CRITERIA

Theorem 2 (Habenicht(1984),p.44):

Let x, y, Z E 9t m, x be a k-successor of y, and z be a I-successor of y,


then
i) z dominates x ~ 8 0(k) c 80(1).
ii) z is dominated by x ~ 8 1(k) c 8 1(1).

From theorem 1 it follows, that in an efficient quad tree, i.e. a quad tree
without any dominance relations among nodes, there exist no O-successors
and no (2m-I)-successors. Hence an efficient quad tree is a (2m-2)-ary rooted
tree. The implication of theorem 2 is demonstrated in figure 3. If, for
example, we want to evaluate the dominance relations of some vector z E
9t4 , which is a 5-successor of some node y, then, of course, it can be
dominated by the 5-successors of y, but additionally only by the 1- and 4-
successors ofy. On the other hand, it may dominate some 5-successors ofy,
and additionally only some 7- and 13-successors. Assuming, that all
successors of y are contained in the tree, everyone being the root of a
subtree, then, from number theoretic arguments, it can easily be shown, that
we have to view at most half of those subtrees (see Habenicht(1984),p.52).

~____~____L-~II~____- L____________~~____________~
I I
may dominate z may be dominated by z

Figure 3. k as dual numbers


Outcome - Based Neighborhood Search (ONS) 533

In figure 4 we show an efficient quad tree in three-dimensional space


together with a representation of the outcomes in that space. For
presentational reasons, we use an example, where all efficient solutions lie
on the hyperplane L Yi = 100. Here, the "k"'s are printed in italics. On the
first layer of the tree, they are amplified by their dual representations. We
will use this example to demonstrate the ONS-approach, which will be
developed in the sequel.

Figure 4. Efficient Quad tree


534 AIDING DECISIONS WITH MULTIPLE CRITERIA

3. Neighborhood search in outcome space


The Outcome-based Neighborhood Searching(ONS) approach that we
propose in this paper, is, in fact, a local search algorithm. The framework of
this approach is sketched in figure 5.

Identify eff(Y)
(Create an efficient quad tree).
Choose a starting point y*

Identify some neighborhood


N(y*)

Decision maker chooses best outcome


y' e N(y*)

( Stop ~ ,--~ Let y* := y'

Figure 5. Outcome based Neighborhood Search (ONS)

The main difference to other well known local search algorithms (see
Czyzak, Jaszkiewicz(1998), Glover, Laguna(1997), Osman, Kelly(1997),
Vogt, Kull, Habenicht(1996)) like simulated annealing or taboo search, lies
in the fact that in this approach the searching process is based on
neighborhoods defined in outcome space, whereas the classical approaches
use neighborhoods in decision space. Since neighborhood definitions in
outcome space, in general, do not rely on special properties of the underlying
decision problem, the applicability of this approach is quite general. The
most prominent existing approach using neighborhoods in outcome space,
too, is the Light Beam Searching(LBS) approach, introduced by
Jaszkiewicz, Slowinski(1997). In this approach special attention is given to
the derivation of a neighborhood corresponding to a certain preference
information derived from an outranking approach. The neighborhood used in
this approach is a polyhedral one. The authors do not deal with the problem
Outcome - Based Neighborhood Search (ONS) 535

of identifying the members of the neighborhood. As already mentioned, this


only causes no problems as long as the cardinality of the nondominated set is
moderate. Otherwise, response times may become too long for an interactive
process. Therefore, the neighborhood definitions proposed in the ONS
approach, explicitly reflect computational aspects of the identification of the
neighborhood members, taking advantage of the support provided by the
quad tree structure in analyzing dominance relations. In fact, all
neighborhoods used, except the tree neighborhood, can be defined by
dominance relations.

Identify eff(Y)
(Create an efficient Quadtree).
Choose as the starting point y*
the root of the quadtree

Identify as neighborhood N(y*)


the sons ofy*

DM chooses best solution y'


from N(y*)

rl
Detennine as neighborhood N(y*)
the (1'" -l-i)-sons of the
i-th nodes ofN(y*) Lety* :=y'

Figure 6. Tree-search
536 AIDING DECISIONS WITH MULTIPLE CRITERIA

3.1. Tree Neighborhood


Assuming that the efficient set is given as a quad tree, it is obvious to
define a neighborhood on the basis of the topology of the quad tree. We call
it tree-neighborhood, and the search based on this neighborhood a tree-
search. The tree-search is sketched out infigure 6. Starting with the root, we
are traversing the tree top-down in a straight forward way. Normally, we
present to the decision maker a node of the tree together with its sons. If one
of the sons is chosen, the process continues with the chosen node together
with its sons. But if the decision maker chooses the node itself, we choose as
neighbors those sons of the sons, that lie between the node and its sons.
These are the (2ffi-I-i)-sons of the i-sons of the node. The search terminates,
when there are no more neighbors.

• y. I. & 2. Iteration

o
• N(y*) I. Iteration

Chosen I. ~on
• (y*) 2. Iteration
•••
• : chosen 2. Iteration
•••

Figure 7. Example tree-search

In figure 7 the tree-search process is sketched out under the assumption,


that the decision maker behaves according to the indifferent curves. It can be
seen that in this case, the search ends in a local optimum. Since the search is
straight forward, we are not able to leave a subtree, which we have entered
Outcome - Based Neighborhood Search (ONS) 537

before. The search starts with the root. The neighbors are the six sons of the
root (#2 to #7). In this case the DM prefers the root. Therefore, in the 2nd
iteration the neighborhood is formed by the (2m-I-i) -sons of the i-sons of
the root (#3, #39, #16 ,#20, #12, #35). Among these nodes #12 is chosen.
Now the process stops, because #12 has no successors. In this case the
search stops in a local optimum. Obviously, #11 would have been the best
solution.
Under the aspect of complexity, tree-search is obviously optimal, since
the maximal number of steps to be performed corresponds to the longest
path in the quad tree. But there is a great risk of running into a local
optimum. Nevertheless, tree search may be a good means to reach in few
iterations at a fairly good starting solution.

3.2. P- Neighborhood
The 13 - Neighborhood N~(y*) is a distance based neighborhood. It
contains all points, whose Tschebytscheff -<iistance from y* is not greater
then 13.

NI3(Y*) := { y E Y : <lo(y,y*):=rnax IYi* - Yil :S ~ }

The I3-Neighborhood is sketched out in figure 8. Obviously, the members


of this neighborhood can be identified by the evaluation of two dominance
relations. A I3-Neighbor dominates the point 0:= y*+I3, and it is dominated
by u:= y*-I3.

Y dominates 0 }
y, ~(y.) = { y e Y: y is dominated by u

o
OJ - Yi· + P
u,- Yi· W ~

o
y,

Figure 8. (j - Neighborhood
538 AIDING DECISIONS WITH MULTIPLE CRITERIA

Infigure 9, we can see that quad trees support the identification of the 13-
Neighborhood. In this example, only the shaded nodes have to be examined,
if we use the results of theorems 1 and 2.

y. : E)(amined region I p(y*)

Figure 9. Identifying Nft(y*) in a quad tree


Outcome - Based Neighborhood Search (ONS) 539

The disadvantage of this type of neighborhood lies in the fact, that the
cardinality of Np(y*) depends on the choice of 13. If we choose it too large,
we may get too much neighbors. On the other hand, if we choose it too
small, the neighborhood may be empty.

3.3. P - Neighborhood
We can avoid this disadvantage of the I3-Neighborhood by using a p-
Neighborhood NP(y*), which is defined as follows:

NP(y*) :=N~(y*) with ~ = max {P : card(N~(y*» ~p }

Here, we replace the choice of 13 by the choice of p, which is the


cardinality of the neighborhood. To identify NP(y*) we have to modify the
identification procedure of Np(y*). We start with p points (i.e. the first p
points of the quad tree) as a first approximation of NP(y*), and we compute
the maximal Tschebytscheff-distance of all these points from y*. Then we
search for points not chosen so far, with a smaller distance. Shrinking the
searching region step by step. Obviously, the identification of this
neighborhood has a higher complexity then Np(y*).
Figure 10 gives an example for the p-neighborhood still to run in a local
optimum. This occurs particularly, if the distribution of the efficient points in
outcome space is very heterogeneous. In this we can use another type of
neighborhood, that we call cone-neighborhood.

..
Figure 10. Local Optimum, using p - neighborhood (p = 5)
540 AIDING DECISIONS WITH MULTIPLE CRITERIA

3.4. Cone-Neighborhood
Cone-neighborhood is based on the consideration of the 2m_2 cones
originating at y*, that can be built by the separation of the set {1 ,... ,m} into
two non-empty subsets Sand R, as defined in figure J J. The neighborhood
consists the points nearest to y* in each cone, based on the Tschebytscheff-
distance. Hence, we have at most 2m_2 neighbors. Again, we have in general
a higher complexity for the identification of the neighborhood, because we
have to identify N 1(y*) for each of the 2m_2 cones.

kilt neighborhood-cone:

o
o
let (S,R) be a seperation of {1,2, .. ,m} and
k: = L ie R 2;-1, S,R ..0

NJ«Y*):= {y =argminy e J<~y'} d...(y,y*) lk=1 ,.. ,2m _2}

Figure JJ. Cone-Neighborhood

4. Summary
In this paper, we introduced the concept of output-based neighborhood
search (ONS). As a core feature of this concept, we discussed a sequence of
neighborhood definitions with improving convergence properties, but
increasing complexity. Some experimental tests on complex routing
problems indicate, that using the following sequence of neighborhoods- tree-
neighborhood~p-neighborhood~cone-neighborhood- form a good compro-
mise between complexity and convergence of the ONS approach.
Outcome - Based Neighborhood Search (ONS) 541

References
Borges, P.C.(1999): CHESS - Changing Horizon Efficient Set Search. A simple principle for
multiobjective optimization. European Journal of Operations Research, 1999.
Czyzak,P., Jaszkiewicz,A.(l998): Pareto Simulated Annealing-a Metaheuristic Technique for
Multiple-Objective Combinatorial Optimization. In: 1. of Multi-Criteria Deciision
Analysis, vol. 7, 1998, pp. 34-47.
Finkel, R.A., Bentley, 1.L.(1974): Quad-Trees. A Data Structure for Retrieval on Composite
Keys. In: Acta Informatica, vol. 4, 1974, pp.I-9.
Glover, F, Laguna, M. (1997): Tabu Search. Boston et al. 1997.
Habenicht, W. (1983): Quad Trees, a Datastructure for Discrete Vector Optimization
Problems. In: Hansen, P (Hrsg.): Essays and Surveys on Multiple Criteria Decision
Making. (Lecture Notes in Economics and Mathematical Systems 209), Berlin et al. 1983,
pp. 136-145.
Habenicht, W. (1984): Interaktive Losungsverfahren flir diskrete Vektoroptimierungs-
probleme unter besonderer Beriicksichtigung von Wegeproblemen in Graphen.
(Mathematical Systems in Economics, Bd. 90), Konigstein 1984. (In German)
Habenicht, W. (1991): ENUQUAD - A DSS for discrete vector optimization problems.
Arbeitspapier 3, Lehrstuhl flir Industriebetriebslehre, Universitat Hohenheim 1991 (In
German)
Jaszkiewicz, A., Slowinski, R.(1997): Outranking-based interactive exploration of a set of
multicriteria alternatives. In: 1. of Multi-criteria Decision Analysis, vol. 6, 1997, pp. 93-
106.
Osman, I. H., Kelly, J. P.(Hrsg.) (1997): Meta-Heuristics: Theory & Applications, 2nd print,
Boston et al. 1997
a
Roy, B. (1985): Methodologie Multicritere d' Aide la Decision. Paris 1985 (In French)
Steuer, R.E. (1985): Multiple Criteria Optimization: Theory, Computation, and Application.
New York et al. 1985.
Sun, M., Steuer, R.E.(l996): InterQuad: An interactive quad tree based procedure for solving
the discrete alternative multiple criteria problem. In: European Journal of Operations
Research, vol. 89, 1996, pp.462-472.
Vogt, 1., Kull, H., Habenicht, W.(1996): Konzeption eines Tabu-Search-Algorithmus flir
multikriterielle Optimierungsprobleme. Arbeitspapier 21, Lehrstuhl flir Industri-
ebetriebslehre, Universitat Hohenheim 1996. (In German)
Wierzbicki, A. (I 986): On the completeness and constructiveness of parametric characteri-
zations to vector optimization problems. In: OR Spektrum vol. 8, 1986, pp. 73-87.
SEARCHING THE EFFICIENT FRONTIER
IN DATA ENVELOPMENT ANALYSIS

Pekka Korhonen
Helsinki School of Economics and Business Administration, Finland
korhonen@hkkk.fi

Abstract: In this paper, we deal with the problem of searching the efficient frontier in
Data Envelopment Analysis. The approach developed to make a free search on
the efficient frontier in multiple objective linear programming can also be used
in DEA. The search is useful, when preference information is desired to
incorporate into efficiency analysis. For instance, the DM may want to use
some flexibility in locating a reference unit for an inefficient unit or (s)he
would like to find the most preferred input- and output-values on the efficient
frontier. Using a numerical example, we will demonstrate how to make the
search and how to use the results.

Key words: Data envelopment analysis; Multiple objective programming; Efficient


frontier; Free search; Pareto race

1. Introduction
Data Envelopment Analysis (DEA), originally proposed by Chames,
Cooper and Rhodes [1978 and 1979], has become one of the most widely
used methods in operations research/management science. It is easy to agree
with Bouyssou [1999] that "DEA can safely be considered as one of the
recent "success stories" in OR. .. ". An obvious reason for this success is that
DEA is a problem-oriented approach and focuses on an important task: to
evaluate performance of Decision Making Units (DMU). The analysis is
based on the evaluation of relative efficiency of comparable decision making
units. Based on information about existing data on the performance of the
units and some preliminary assumptions, DEA forms an empirical efficient
surface (frontier). If a DMU lies on the surface, it is referred to as an
efficient unit, otherwise inefficient. DEA also provides efficiency scores and
reference units for inefficient DMUs. Reference units are hypothetical units
544 AIDING DECISIONS WITH MULTIPLE CRITERIA

on the efficient surface, which can be regarded as target units for inefficient
units. A reference unit is traditionally found in DEA by projecting an
inefficient DMU radially! to the efficient surface.
From a managerial point of view, there may be a need sometimes to have
some flexibility to choose a reference unit. The reference unit found for an
inefficient unit through radial projection dominates the inefficient unit, but it
is not the only one on the efficient frontier with that property. Dominance is
an important and often desirable property, but why always choose one
specific unit. Why not to take into account the preferences of the DM? There
are several possibilities to incorporate preference information into the
analysis, for instance:

- to allow a OM to specify a priori a projection direction or


to restrict acceptable input- and output-values, or
to assist the DM to make a search on the efficient frontier and let him/her
choose the unit which (s)he prefers the most.

A widely used way to incorporate preference information in DEA is to


restrict the flexibility of the variables of the so-called multiplier model.
Several weight flexibility restriction schemes have been proposed by
Chames, Cooper, Wei and Huang [1989, 1990], Dyson and Thanassoulis
[1988], Thompson, Langemeier, Lee, Lee and Thrall [1990], Thompson,
Singleton, Thrall and Smith [1986], and Wong and Beasley [1990], among
others.
However, not many papers have been written on the other ideas of
incorporating preference information into DEA. A few exceptions are
Golany [1988], Thanassoulis and Dyson [1992] and Zhu [1996]. A common
feature in all these analyses is that first preference information is gathered
from the DM, and then this information is used to produce a target unit for
an inefficient unit on the efficient frontier.
In this paper, our purpose is provide the OM with an interactive method
which allows him/her to incorporate preference information into the efficient
frontier analysis by enabling him/her to make a free search on the frontier.
(S)he may search for the most preferred input- and output-values over the
whole frontier, or (s)he may desire to find the most preferred target unit for
each inefficient unit - separately. Sometimes, the purpose might be only to
have a better overall impression on the values of the inputs and outputs on

I) Term "radial" means that an efficient frontier is tried to reach by increasing the values of
the current outputs and/or decreasing the values of the current inputs in the same
proportion.
Searching the Efficient Frontier in Data Envelopment Analysis 545

the frontier. A requisite technique has already been developed for Multiple
Objective Linear Programming (MOLP) models. Because DEA- and
MOLP-models can be presented structurally in the same formulation (see,
Joro, Korhonen and Wallenius [1998]), we may apply the technique to DEA
problems as well.
The approach recommended is based on a so-called reference direction
approach originally proposed by Korhonen and Laakso [1986]. Korhonen
and Wallenius [1988] developed from the original reference direction
approach a dynamic version called Pareto Race. Actually, Pareto Race is an
interface, which makes it possible to move along the efficient surface and
freely search various target units in DEA as well. We demonstrate with a
numerical example and discuss various situations in which the approach
might be useful in DEA.
The rest of this paper is organized as follows. In Section 2, we review the
reference point/direction approach to multiple objective programming, and
demonstrate its similarity with the approaches used in the data envelopment
analysis. In Section 3, using a numerical example we demonstrate how to
make a free search on the efficient frontier in DEA. Section 4 concludes the
paper.

2. MOLP and DEA


Assume we have n DMUs each consuming m inputs and producing p
outputs. Let X E 9l~.1C71 and Y E~.1C71 be the matrices, consisting of
nonnegative elements, containing the observed input and output measures
for the DMUs. We further assume that there are no null-columns in the
matrices and no duplicated units in the data set. We denote by Xj (the jib
column of X) the vector of inputs consumed by DMUjo and by Xii the
quantity of input i consumed by DMUj. A similar notation is used for
outputs. When it is not necessary to emphasize the different roles of inputs
and outputs, we denote u = (!) and U = Ci), where y and x refer to
columns of the matrices Y and X, respectivel~. 2)
Furthermore, we denote 1 = [1, ... , 1] and refer by ej to the i lb unit
vector in mD.

2) Because the results concerning u and U are valid for (~) and G) as well, for simplicity,
we often refer to u and U, although we are factually interested in results concerning (~)
and G}
546 AIDING DECISIONS WITH MULTIPLE CRITERIA

2.1 Multiple Objective Linear Programming


Consider the following Multiple Objective Linear Program (MOLP):

Max u =VA= Ci) A


s.t. (2.1)
A E A = {A IAE 9t: and AA .s b},

where A c 9lft is a feasible set, matrix A E 9l k ..,. is of full row rank k and
vector b E 9l k • The set of feasible values of vector u c 9l m +p is called a
I
feasible region and denoted by T = {u U = VA, A E A}. In the model (2.1),
we try to maximize the linear combination of the outputs and minimize the
linear combination of the inputs.
The efficient solutions of the model (2.1) are defined in a usual way:

Definition 1. A point u* = VA E T is efficient (nondominated) iff (if and


only it) there does not exist another u E T such that u ~ u*, and u "* u*.

Definition 2. A point u* = VA E T is weakly efficient (weakly


nondominated) iff there does not exist another u E T such that u > u *.

In the MCDM-literature (see, e.g., Steuer [1986]), the concept of


efficiency is often used to refer to the solutions in the decision variable space
9l n (set A) and the concept of dominance is often used to refer to the
solutions in the criterion space 9lP+m (set T). In the DEA-literature, this
distinction is not made. The nondominated solutions are called efficienf----
To generate efficient solutions the achievement (scalarizing) function
(ASF) (Wierzbicki [1980]) can be used:

min s(g, u, w, 8) = min { ".lax [(gi - u) / wJ + 8 ~)gi - u)} (2.2)


leP ieP

s.t. u E T,

where s is the ASF, w= (;) > 0, w E 9l m+p is a vector of weights, 8> 0 is


"Non-Archimedean" (see for more details, Arnold et al. [1997]) and P = {l, 2,
... , m+p}. Vector g = C~ E 9l m +p is a reference point, the components of
which are called aspiration levels. The reference point can be feasible or
infeasible. Using (2.2), point g E 9l m +p can be projected onto the set of
Searching the Efficient Frontier in Data Envelopment Analysis 547

efficient solutions of (2.1). Varying the aspiration levels, all efficient


solutions of (2.1) can be generated (Wierzbicki [1986]). We may also
generate efficient solutions by using a fixed reference point and varying the
weighting vector w. However, the changes in aspiration levels are easier to
handle, because they can be implemented as changes in the rhs-values in a
linear programming formulation (see, 2.3a,b).
The reference point method is easy to implement. The minimization of
the achievement scalarizing function is an LP-problem. In Joro, Korhonen
and Wallenius [1998], we have shown that the projection problem can be
written in the following form:

Reference Point Model Primal Reference Point Model Dual


(REFp) (REFD)

max Z = 0' + c(lTs + + 1Ts) min W= vTg" - pTl! + uTb


S.t. (2.3a) S.t. (2.3b)
Yl - aw"-s+=l! -,l"Y + V"x + u TA ~ 0 T
Xl + ow' + s· =g" Jlw+V"W< =1
lEA P. v~ &1
s· ,s+ ~ 0 U ~O

0 (UNon-Archimedean") 0 (UNon-Archimedean")
!R:
&> &>

A = {l l EI and Al Sb}

Vector II consists of the aspiration levels for outputs and If the aspiration
levels for inputs. Vectors w > 0 and w'" > 0 (w= (;) ) are the weighting
vectors for outputs and inputs, respectively. Let's denote the optimal value
of the models Z* and W*.
Korhonen and Laakso [1986] further developed the reference point
approach by parameterizing the reference point using a reference direction.
In the approach, a direction r specified by the DM is projected onto the
efficient frontier (see, e.g. Korhonen and Laakso [1986]):

max 0" + c(ITs + + ITS)


s.t. (2.4)
Y A - ow" - s+ = II + tr
Xl + ow' + s-= If + tr'
AEA
s-,s+~O
548 AIDING DECISIONS WITH MULTIPLE CRITERIA

&>0
A = {A. IA. E 91: and AA. Sb},

when t: 0 -+ 00. As a result, we will generate an efficient path starting from


the projection of the reference point and traversing through the efficient
frontier until reaching a boundary. Korhonen and Wallenius [1988]
developed a dynamic version from the reference direction approach. The
implementation was called Pareto Race. In Pareto Race, a reference direction
is determined by the system on the basis of preference information received
from the DM. By pressing number keys corresponding to the ordinal
numbers of the objectives, the DM expresses which objectives (s)he would
like to improve and how strongly. In this way (s)he implicitly specifies a
reference direction. Figure 1 provides the Pareto Race interface for the
search, embedded in the VIG software (Korhonen [1987]).
As we will see in Section 2.2, the reference point model (2.3a) is a
generalization of the DEA- models. In the DEA-model, the reference point
is one of the existing units, and the weighting vector is composed of the
components of the input- and/or output-values of the unit under
consideration.

2.2 Data Envelopment Analysis


Charnes, Cooper and Rhodes [1978, 1979] developed their initial DEA-
models by considering the following problem formulation:
p
L,ukYtO
max ho = .::.:k==~ _ _

LV;x;o
;=1
s.t.: (2.5)
p
L,ukYAj
.::...k:~1_ _ :$; 1, j = 1,2, ... , n
LV;Xij
1=1

J.4, Vi'? &. k = 1, 2, ..., p; i = 1, 2, ... , m, & > O.

The unit under consideration is referred by subscript '0' in the functional,


but its original SUbscript is preserved in the constraints.
At the optimum of the model (2.5), the outputs and inputs of the unit
under consideration are weighted such that the ratio of the weighted sum of
its outputs to the weighted sum of the inputs is maximal. At the first glance,
Searching the Efficient Frontier in Data Envelopment Analysis 549

model (2.5) seems to have nothing to do with the reference point model.
However, it turns out that the model (2.5) can be first transformed into a
linear model, then the dual of that model (called an envelopment model) is
structurally like a reference point model (see, e.g. Joro, Korhonen, and
Wallenius [1998]). The weighting vectors w" and ~ and reference points If
and It are determined in a certain specific way in DEA-models as shown in
Table 1.
Let's consider now (feasible) decision making units U E T from the
perspective of data envelopment analysis. Set T is called a Production
Possibility Set in the data envelopment analysis literature. Note that in this
paper, we assume T is convex. Other assumptions are possible as well. In
DEA, we are interested to recognize efficient DMUs, which are defined as a
subset of points of set T satisfying the efficiency condition defined below:

Definition 3. A solution (YA.*, XA.*) = (y*, x*), A.* E A, is efficient iffthere


does not exist another (y, x) E T such that y ~ y*, x ::;; x*, and (y, x) *- (y*,
x*).

Definition 4. A point (y*, x*) E T is weakly efficient iff there does not exist
another (y, x) E T such thaty > y* and X <x*.

By comparing the Definitions 1 and 2 to Definitions 3 and 4, we notice


that the Definitions 1 & 3 and 2 & 4 coincide. Thus the reference point
model (2.3) can also be used to make search on the efficient frontier in DEA,
because the frontiers for the same T are the same in the both analyses.
The type of the models used in DEA depends on the definition of set T.
The original DEA-models as introduced by Charnes et at. [1978] were so-
called constant returns to scale models. Set A = 9t: is for those models.
Later Banker, Charnes and Cooper [1984] developed variable returns to
scale models, where A ={A. IA. E 9t:
and ITA. = J}. Those basic DEA -
models are currently referred to as CCR- and BCC-models, respectively.
Moreover, in DEA the term orientation is used to refer to the projection
direction a DMU onto the efficient frontier. When the projection is made
improving (proportionally) of the values of outputs, the term output-oriented
is used. The corresponding terminology is used for inputs.
In Table 1, we demonstrate how the basic DEA-models can be presented
as special cases of model (2.3). We consider envelopment models and refer
to the unit under consideration by superscript '0'.
550 AIDING DECISIONS WITH MULTIPLE CRITERIA

Table I: Modifications of Model (2.3a) for Different (Envelopment) DEA-Mode\s.

Model Type w" g' w" It A


Output-Oriented CCR-model
(Chames et al. [1978])
0 XO
l 0 9l n
+
Input-Oriented CCR-model XO 0 0 yO 9l n
(Chames et al. [1978]).3) +
Combined CCR-model
(Joro et al. [1998])
XO XO
l yO 9l n
+
Output-Oriented BCC-model
r
(Banker et al. 1984D
0 XO
l 0 AC
Input-Oriented BCC-model
(Banker et al. [19841) 2)
XO 0 0 l AC
Combined BCC-model
(Joro et aI. rt 9981)
XO XO
l l AC
General Combined model - XO - l -
AC={A, Lte91:andl TA,=J}

Each modified model 1-7 produces an efficient solution corresponding to


a given specific (;:). Which efficient solution is obtained depends on the

model type (= constraint set) and the weighting vector w = (:.) ~ O. We


assumed earlier that w > O. However, the presence of the term c(]TS + + ]TS)
in the objective function of model (2.3a) according to a classical theorem by
Geoffrion [1968] guarantees that all solutions of models are efficient, even if
some components w... i E P, were zeroes. Thus the assumption w ~ 0, w ~ 0,
is sufficient to guarantee that a solution is efficient.
Referring to the value of the objective function, we can define that unit
DMUo with UO = (~~ is efficient iff

for models 1 and 4


z*= W*= { ~ for models 3; 6 and 7
-1 for models 2 and 5
otherwise it is inefficient (see, e.g. Charnes et aI., [1994]). For an efficient
unit all slack variables s-, s+ equal zero. The efficient units lie on the frontier,
which is defined as a subset of points of set T satisfying the efficiency

3)The input-oriented models are usually in DEA solved as a minimization problem by


writing w" =- XO and modifying the objective function accordingly.
Searching the Efficient Frontier in Data Envelopment Analysis 551

condition above. The projection of an inefficient unit on the efficient frontier


is called a reference unit in the DEA-terminology.
Using the general weighting vector w = (~) > 0 in model (2.4), we any
part of may search the efficient frontier in DEA-problems as well.

3. Illustrating the search of the efficiency frontier in


DEA
In the previous section, we have shown that the theory and the
approaches developed in mUltiple objective programming for finding the
most preferred solution can also be used in DEA to search the efficient
frontier. Even if the efficient frontier as such is playing an important
conceptual role in DEA, the main interest is to project inefficient units onto
the frontier and to evaluate the need of improvement of the input- and
output-values to reach that frontier. In DEA, we also refer to those
projections on the efficient frontier by term "solution" like in MOLP. The
projection technique is simple: the current input- and/or output-values are
used to specify the projection direction. The approach is possible, because all
values were assumed to be non-negative, and at least one input- and one
output-value is strictly positive. It means that the projection of an inefficient
unit is made without taking into account the preference information of the
DM. However, in some problems the DM may want to have more flexibility
to projection. For instance, (s)he may be willing to consider all solutions
dominating an inefficient unit under consideration, or (s)he may set some
additional restrictions.
There are also problems, in which a DM would like to find the most
preferred solution on the efficient frontier. It is a solution that pleases the
DM most. (S)he might be willing to use that solution as an "ideal" example
for all other units. The most preferred solution also plays a key role in the
approach developed by Halme, Joro, Korhonen, Salo, and Wallenius [1999]
to incorporate preference information into DEA. The approach is called
Value Efficiency Analysis (VEA). Value efficiency analysis is based on the
assumption that the DM compares alternatives using an implicitly known
value function. The unknown value function is assumed to be
pseudoconcave and strictly increasing for outputs and strictly decreasing for
inputs. It is assumed to reach its maximum at the most preferred solution.
The purpose of value efficiency analysis is to estimate a need to increase
outputs and/or decrease inputs for reaching the indifference contour of the
value function at the optimum. Because the value function is not assumed to
be known, the indifference contour cannot be defined precisely. However,
552 AIDING DECISIONS WITH MULTIPLE CRITERIA

the region consisting of the points surely less or equally preferred to the most
preferred solution can be specified. This region is used in value efficiency
analysis.
Let's consider the following simple numerical example, in which the data
in Table 2 are extracted and modified from a real application. Four large
super-markets A, B, C, and D are evaluated on four criteria: two outputs
(Sales, Profit) and two inputs (Working Hours, Size). "Working Hours"
refers to labor force available within a certain period and "Size" is the total
area of the super-market. (The super-markets are located in Finland.)

Table 2: A Multiple Objective Model

A B e D
Sales (106 Fim) 225 79 66 99 max
Profit (106 Fim) 5.0 0.2 1.2 1.9 max
Working Hours (10 3 h) 127 50 48 69 min
Size (103 m2) 8.1 2.5 2.3 3.0 min

A managerial problem is to analyze the performance of those super-


markets. We will make constant returns to scale-assumption, and analyze
performance (efficiency) by using a CCR-model. As the result of the
analysis, we obtain that super-markets A, B, and D are efficient and C
inefficient. Let's consider closer the performance analysis of the inefficient
unit C. The unit C is projected onto the efficient frontier. When the
orientation is chosen, the projection direction is fixed. The model
formulations and solutions obtained by the output-oriented (model 1), input-
oriented (model 2), and combined CCR-model (model 5) are given in Tables
3.a-c.

Table 3a: Efficiency Analysis of Unit C with the Output-Oriented CCR-Model (1)

A B e D Max (0) ex Ref.Unit


A-coefficients 0.0709 0.0924 0.0000 0.4981 1.0996
Sales 225 79 66 99 -66 72.58
Profit 5 0.2 1.2 1.9 -1.2 1.32
Work Hours 127 50 48 69 48.0 48.00
Size 8.1 2.5 2.3 3 2.3 2.30
Searching the Efficient Frontier in Data Envelopment Analysis 553

Table 3b: Efficiency Analysis of Unit C with the Input-Oriented CCR-Model (2) 4)

A.-coefficients A B C D Min (a) Cy Ref.Unit


0.0645 0.0841 0.0000 0.4530 0.9094
Sales 225 79 66 99 66.0 66.00
Profit 5 0.2 1.2 1.9 1.2 1.20
Work Hours 127 50 48 69 -48 43.65
Size 8.1 2.5 2.3 3 -2.3 2.09

Table 3c: Efficiency Analysis of Unit C with the Combined CCR-Model (5)
A B C D Max (a) C
A.-coefficients 0.0676 0.0881 0.0000 0.4745 0.0475 Ref.U"it
Sales 225 79 66 99 -66 66.0 69.13
Profit 5 0.2 1.2 1.9 -1.2 1.2 1.26
Work Hours 127 50 48 69 48 48.0 45.72
Size 8.1 2.5 2.3 3 2.3 2.3 2.19

Each model will find a different reference unit on the efficient frontier for
C. In the first model, the output values are projected radially onto the frontier
subject to the current levels of resources. In the second model the role of the
input- and output-variables have been changed. In the last model, the
projection is made by improving the values of the output- and input-
variables simultaneously.
The reference unit is like a target for the inefficient unit. Why to choose
this specific target? Perhaps, the DM would like to incorporate some
flexibility into the selection. (S)he has many possibilities to choose a target
unit. For instance, some other dominating unit for unit C might be more
desirable. To be able to make a choice, (s)he needs help in evaluating a
certain part of the efficient frontier. (S)he may take, for instance, the input
values given, and consider possible output values. The efficient frontier in
this problem is trivial - one line. The whole line can be obtained as a convex
combination of the output values in columns I and IV in Table 4. In column
ill, we have the solution on the line resulted by the output-oriented CCR-
model in Table 3a. The DM has thus several alternatives to choose the
reference unit (s)he prefers most. All solutions on the efficient line satisfy
the input constraints; they all consume the resources less or the same amount

4) The model is solved as a minimization problem as usually done in DEA


554 AIDING DECISIONS WITH MULTIPLE CRITERIA

as C. However, all solutions on the efficient line generated do not dominate


the input- and output-values of C. The value of "Profit" in column I is lower
than the corresponding value of C. If the OM is interested only in the values
dominating those of C, (s)he can consider the convex combination of
solutions in columns II and IV.
The OM has many options to emphasize various aspects in searching for
the most preferred unit to C. If "Sales" is important, (s)he can choose a
solution maximizing "Sales" (solution I in Table 4), but if (s)he cannot
accept a worse value on "Profit" than on C, the solution in column II might
be most preferable. In case, "Profit" is important, the OM might be willing
to use as a reference unit the solution in column ill.

Table 4: Characterizing AIJ Efficient Solutions of the Output-Oriented CCR-Model in Table 3a

I II III IV
Sales 73.61 72.74 72.58 72.40
Profit 0.55 1.20 1.32 1.45
Work Hours 48.00 48.000 48.00 48.00
Size 2.30 2.30 2.30 2.30
A 0.0599 0.0709 0.0826
B 0.6533 0.1799 0.0924
D 0.2222 0.4551 0.4981 0.5436

The above considerations are very easy to perform if we have only two
criteria. The efficient frontier is piecewise linear in two dimensions, and all
efficient solutions can always be displayed visually. (The efficient frontier of
our example was exceptionally simple.) The characterization of the efficient
frontier can also be carried out in a straightforward way e.g. by first
identifying all extreme points and then using those points for characterizing
all efficient edges.
Generally, the efficient frontier cannot be characterized by enumerating
all efficient facets by means of the efficient extreme points. Even in quite
small size problems, the number of efficient points is huge. Actually, it is not
necessary to approach the problem in this way. Even if we could characterize
all efficient facets, the OM would need help to evaluate solutions on
different facets. Therefore, we recommend a free search. Using Pareto Race
the OM can freely move on the efficient frontier by controlling the speed and
the direction of motion on the frontier as explained in Section 2.
Searching the Efficient Frontier in Data Envelopment Analysis 555

Assume that the DM is willing to consider the reference units for C


which do not necessarily fulfill the input-constraints. Then the problem
becomes four criterion problems, because the DM has preferences over input
values as well. Let's assume the DM will start from the solution in Table 3c,
but is not fully satisfied with the solution. (S)he may search the
neighborhood of the current solution by using Pareto Race and end up with
the solution displayed in Figure 1. Pareto Race enables the DM to search for
any part of the efficient frontier.

Pareto Race

Goal 1 (max): Sales ->


vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv 69.5465
Goal 2 (max): Profit >
\ 'V \' vvvvvv v\, \,vvvvvv\,vvvvvvvvvV\,\,\,VV\' V 1.39094
Goal 3 (min): Working H -->
vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv 46.0137
Goal 4 (min ): Size ->
vvvv\'vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv 2.21314

Bar:Accelerator Fl:Gears (B) F3:Fix num:Twn


F5:Brakes F2:Gears (F) F4:Relax FIO:Exit

Figure I: Searching/or the Most Pre/erred Values/or Inputs and Outputs

In Pareto Race the DM sees the objective function values on a display in


numeric form and as bar graphs, as (s)he travels along the efficient frontier.
The keyboard controls include an accelerator, gears, brakes, and a steering
mechanism. The search on the nondominated frontier is like driving a car.
The DM can, e.g., increase/decrease the speed, make a turn and brake at any
moment (s)he likes.
The DM can also use Pareto Race to find the most preferred solution for all
units. In the example, a good candidate for the most preferred solution might be
(Sales: 147.13, Profit: 3.02, Work HOUTS: 92.98, Size: 4.87). That solution is
reached, when the weights of units A: 0.321 and D: 0.757. The point can be
used as an ideal to the other units, or it can be used in value efficiency analysis
to introduce efficiency scores taking into account the preference information of
556 AIDING DECISIONS WITH MULTIPLE CRITERIA

the DM (see, Halme, Joro, Korhonen, Salo and Wallenius [1999]). In this
problem the value efficiency scores are exceptionally the same as technical
efficiency ones.

4. Conclusion
We have shown that the approaches developed to search the efficient
frontier in multiple objective linear programming (MOLP) are useful in
analyzing the efficiency in data envelopment analysis (DEA). To
characterize the efficient frontier of a MOLP-problem, a widely used
technique is to transform the problem into a single objective problem by
using an achievement scalarizing function as proposed by Wierzbicki
[1980]. This transformation leads to a so-called reference point model in
which the search on the efficient frontier is controlled by varying the
aspiration levels of the values of the objective functions. For each given
aspiration level point, the minimization of the achievement scalarizing
function produces a point on the efficient frontier. Because the reference
point model and models used in DEA are similar, the methods based on the
reference point approach can be used in DEA as well. One of those further
developments is Pareto Race, a dynamic and visual free search type of
interactive procedure for multiple objective linear programming proposed by
Korhonen and Wallenius [1988]. The theoretical foundations of Pareto Race
are based on the reference direction approach developed by Korhonen and
Laakso [1986]. The main idea in the reference direction approach was to
parameterize the achievement scalarizing function
The search of the efficient frontier in DEA-models is desirable for
instance, when the DM would like to have more flexibility in determining a
reference unit to an inefficient unit than a radial projection principle
provides. Sometimes a DM may be interested to make a search on the
efficient frontier just for finding the most preferred unit on the frontier. The
most preferred solution is needed for instance for value efficiency analysis
proposed by Halme, Joro, Korhonen, Salo and Wallenius [1999].

References
Arnold v., Bardhan, I., Cooper, w. W., Gallegos, A., "Primal and Dual Optimality in
Computer Codes Using Two-Stage Solution Procedures in DEA", (Forthcoming in
Aronson, J. and S. Zionts (Eds.): Operations Research: Models, Methods and
Applications, Kluwer, Norwell. (A Volume in honor ofG.L. Thompson)), 1997.
Banker, R.D., Chames, A. and Cooper, W.W., "Some Models for Estimating Technical and
Scale Inefficiencies in Data Envelopment Analysis", Management Science 30, 1078-1092,
1984.
Searching the Efficient Frontier in Data Envelopment Analysis 557

Bouyssou, D., "DEA as a Tool for MCDM: Some Remarks", Journal 0/ the Operational
Research Society 50,974-978,1999.
Chames, A., Cooper, W., Lewin, A.Y. and Seiford, L.M., Data Envelopment Analysis:
Theory, Methodology and Applications...Kluwer Academic Publishers, Norwell, 1994.
Chames, A., Cooper, W.W. and Rhodes, E., "Measuring Efficiency of Decision Making
Units", European Journal o/Operational Research 2, 429-444, 1978.
Chames, A., Cooper, W.W. and Rhodes, E. "Short Communication: Measuring Efficiency of
Decision Making Units", European Journal 0/ Operational Research 3, 339, 1979.
Chames, A., W.W. Cooper, Q.L. Wei, and Z.M. Huang, "Cone Ratio Data Envelopment
Analysis and Multi-Objective Programming", International Journal 0/ Systems Science
20,1099-1118,1989.
Chames, A., W.W. Cooper, Q.L. Wei, and Z.M. Huang, "Fundamental Theorems of
Nondominated Solutions Associated with Cones in Normed Linear Spaces", Journal 0/
Mathematical Analysis and Applications 150, 54-78, 1990.
Dyson, R.G. and E. Thanassoulis, "Reducing Weight Flexibility in Data Envelopment
Analysis", Journal o/Operational Research Society 6,563-576, 1988.
Geoffrion, A., "Proper Efficiency and the Theory of Vector Maximisation", Journal 0/
Mathematical Analysis and Applications 22, pp. 618-630, 1968.
Golany, B., "An Interactive MOLP Procedure for the Extension of DEA to Effectiveness
Analysis", Journal o/Operational Research Society 39, 725-734, 1988.
Halme, M., Joro, T., Korhonen, P., Salo, S., and Wallenius, J., "A Value Efficiency Approach
to Incorporating Preference Information in Data Envelopment Analysis", Management
Science 45, 103-115, 1999.
Joro, T., Korhonen, P. and Wallenius, l, "Structural Comparison of Data Envelopment
Analysis and Multiple Objective Linear Programming", Management Science 44,962-970,
1998.
Korhonen, P., "VIG - A Visual Interactive Support System for Multiple Criteria Decision
Making", Belgian Journal 0/ Operations Research, Statistics and Computer Science 27,
3-15, 1987.
Korhonen, P., and Laakso, J., "A Visual Interactive Method for Solving the Multiple Criteria
Problem", European Journal o/Operational Research 24, 277-287, 1986.
Korhonen, P. and Wallenius, J., "A Pareto Race", Naval Research Logistics 35, 615-623,
1988.
Steuer, R.E., Multiple Criteria Optimization: Theory, Computation, and Application, Wiley, New
York,1986.
Thanassoulis E. and Dyson R.G., "Estimating Preferred Target Input-Output Levels Using
Data Envelopment Analysis", European Journal o/Operational Research 56, 80-97, 1992.
Thompson R.G., Langemeier, L.M., Lee, C-T., Lee, E., and Thrall, R.M., "The Role of
Multiplier Bounds in Efficiency Analysis with Application to Kansas Farming", Journal 0/
Econometrics 46, 93- \08, 1990.
Thompson, R.G., Singleton, Jr., F.R., Thrall, R.M., and Smith, B.A., "Comparative Site
Evaluation for Locating a High-Energy Physics Lab in Texas", Interfaces 16,35-49, 1986.
Wierzbicki, A., "The Use of Reference Objectives in Multiobjective Optimization", in G.
Fandel and T. Gal (Eds.), Multiple Objective Decision Making, Theory and Application,
Springer-Verlag, New York, 1980.
Wierzbicki, A., "On the Completeness and Constructiveness of Parametric Characterizations
to Vector Optimization Problems", OR Spektrum 8, 73-87, 1986.
558 AIDING DECISIONS WITH MULTIPLE CRITERIA

Wong Y-H.B. and J.E. Beasley, "Restricting Weight Flexibility in Data Envelopment
Analysis", Journal o/Operational Research Society 41,829-835, 1990.
Zhu, J., "Data Envelopment Analysis with Preference Structure", Journal o/the Operational
Research Society 47, 136-150, 1996.

You might also like