
An Investigation of the Failures of Decision Support Systems: The Impact of

Paradigm and Interface Errors

Sumedha M. Makewita

School of Management
University of Southampton
Southampton, United Kingdom

Various errors incurred during development can contribute significantly to the failure of Decision
Support Systems (DSS) in organisations. Most of these errors are misinterpretations and wrong
assumptions made by the developers, which cause the DSS both to drift away from the decision problem and
to become difficult to use. Our investigation of the failure of a DSS tool in one organisation revealed two
further kinds of error that can negatively influence user acceptance. We denote them the
'paradigm error' and the 'interface error', and we highlight the need to eliminate these, particularly when
developing DSS tools for optional usage.

Keywords: DSS failure, paradigm error, interface error

Decision Support Systems (DSS) are developed at various levels in organisations, but a significant number
of these are failures in terms of both providing effective decision support and gaining user acceptance. Beyond
outright infeasibility, the failures of DSS can be ascribed broadly to various errors made in the development
process. Some of these errors are not easily recognised by the developers, who put the systems into use
without realising that their success is only illusory. Interestingly, however, organisations do not always
sense this illusion because the consequences of the errors sometimes remain camouflaged within
organisational complexity. The danger is therefore that organisations invest in DSS without realising that
some of these systems might hinder the decision-making function, be underutilised or even become redundant.
These failure outcomes seem to further depend on the rationale behind systems’ development. Organisations,
on the one hand, develop DSS as mandatory tools that are incorporated into their operational norms. A
primary cause of failure with such DSS is misrepresentation of the decision problem. This is often the
consequence of misinterpretations and wrong assumptions made by the developers at design stage that cause
the DSS to drift away from representing the decision problem. Misinterpretations can be ascribed to
communicational and observational errors while the assumptions, which are made to fill gaps in knowledge,
can be ascribed to time constraints and other political factors. On the other hand, organisations, particularly
larger ones, often take proactive approaches to finding DSS opportunities within their existing practices, which
usually results in the development of optional DSS tools. In contrast to mandatory tools, optional DSS tools
depend on user acceptance as a fundamental requirement; without it, the capabilities of such systems become
redundant. The failures would therefore manifest as underutilisation or even complete redundancy.
In this paper we present a case study of DSS failure which brings into perspective two fundamental issues
regarding institutionalising optional DSS tools. We present these as errors in the basic beliefs of the DSS
developer, which we denote the paradigm error and the interface error. The case shows that, unlike
misinterpretations and assumptions, which are incurred in the development process of the DSS, paradigm errors
are more fundamental in nature and are inherited from the DSS development traditions and practices followed
by the developers.
This paper begins by setting the context for understanding the paradigm and interface errors, which also sets
the context for the investigation and its findings. It then outlines the research methodology, which is followed
by a case study of DSS failure. The paper concludes with a discussion of the key findings.



2.1 The Theory of DSS

DSS fall into the category of computer-based tools that support people by transcending the limitations of the
human cognitive system in handling complexity and processing large amounts of information. The core of a
DSS consists of simulation and information processing, accessible through an interactive shell: the user
interface.
The basis of simulation is rationalisation of the decision problem into a mathematical model that represents its
variables, constants and their interrelationships. This aspect of DSS assumes primarily that human
intuition is limited in understanding complex phenomena at an analytical level. It means that we would not
know exactly how the variables and constants of such phenomena interrelate to produce the observed outcomes
(Diehl and Sterman, 1995). Our understanding of such phenomena is therefore limited to a combination of
norms and input-output relationships (Kerstholt and Raaijmakers, 1997). Norms are the given sets of rules that
can be used to relate inputs to outputs, whereas input-output relationships are the observed discrete links
between inputs and outputs, which can be interpolated and extrapolated. We accumulate input-output
relationships through experience such that, theoretically, over time, we would learn all aspects of our
domain of decision problems.
In reality, however, decisions are made within some social, political and economic context, which limits
people’s repertoires of input-output relationships, leading to inadequate understanding of certain decision
situations (Isaac and Senge, 1992). What the simulation offers therefore is an analytical model of such decision
problems which can be used for (1) manipulating the inputs to obtain the desired outcomes, and for (2)
obtaining a larger set of input-output relationships.
The information processing aspect of DSS assumes Simon’s theory of bounded rationality (Simon, 1973;
Simon, 1945), which suggests that people facing decision situations would not investigate their repertoires of
viable options, but instead use their prior knowledge to select a subset. Simon based this theory on the
assumption that human cognitive capacity is limited for processing large amounts of information in a short
period of time. People’s motive, as Simon suggests, is to reduce the need for processing information, and
therefore data-richness would have no positive impact on decision-making beyond a certain threshold. In
comparison with the human mind, the information-processing capacity of computer-based tools is constrained
only by the availability of computing power. Thus, DSS offer the decision-maker the opportunity to operate
over larger databases which would otherwise be ignored.
In reality, data exist in multiple forms and categories, and the 'right data' for a problem do not pre-exist.
The development of DSS therefore raises the questions 'what data should be used?' and 'how should that data
be used?', which the developer needs to resolve in the process.
The interface aspect of DSS assumes that the computer is a complex system that is beyond the comprehension of
the general user. Based on this assumption, Tucker (1975) suggested a criterion for interface design: the
interface ought to allow the user to concentrate on "what" is to be done (i.e. the task) in a language close to
his/her own discipline, rather than "how" it could be done (i.e. how to use the computer). The key challenge for
the developer is therefore to cater for the diversity in multi-user environments. Most commercial software
developers have met this challenge by developing customisable software that caters for a wide range of users.
Providing such sophisticated interfaces is not always warranted for organisational DSS; thus developers usually
provide only limited flexibility in the software itself, while the diversity of users is handled through training
programmes. What remains important for the developer is to understand the decision-making environment so
that the system remains effective under all circumstances faced by the user.

2.2 The Developers' Paradigm

One would notice that the development of DSS primarily depends on the developers’ interpretation of (1) the
decision problem, (2) the data requirements, (3) the skills and behaviours of the decision-maker and (4) the
decision-making environment. Hence, any misinterpretations or assumptions made regarding these would
reflect directly on the performance of the DSS. The consequences could go as far as producing ineffective
systems that are invalid in the context of the decision problem and difficult to use.
However, the effectiveness of DSS has two aspects to it. On the one hand, the effectiveness can be objectively
defined where it is a measure of how well the system represents the decision problem irrespective of what the
users perceive. But, on the other hand, effectiveness can be subjectively defined where it becomes a measure of
the user perception. A system would become objectively ineffective because of misinterpretations and wrong
assumptions made by the developers. In contrast, a system would become subjectively ineffective when the user

Decision Support in an Uncertain and Complex World: The IFIP TC8/WG8.3 International Conference 2004

perceives that it does not add value to their decision-making process. This is, however, a complex issue that
depends on three main factors: (1) the developers' interpretation of the decision problem, (2) the
communication of the developers' interpretations to the user and (3) the users' own interpretations of the
decision problem.
Naturally, a person’s interpretation of a particular phenomenon depends fundamentally on what he/she believes
about its reality. Accordingly, the developers’ interpretation of the decision problem depends on their own set
of beliefs about its reality, which we may call the developers' paradigm. In this case, the paradigm has two
contrasting alternatives such that, depending on the decision problem, the developers can become either
subjectivists or objectivists.
The subjectivist paradigm: The developers become subjectivists when they believe that the reality of the
decision problem is constructed by the decision-makers. In this case, the developers believe that the decision
problem is limited to the decision-makers' interpretation and does not transcend it. Therefore, to understand
the decision problem, the developers would emphasise the need for a dialectical engagement with the
decision-makers.
The objectivist paradigm: The developers become objectivists when they believe that the reality of the decision
problem is independent of, and transcends, the decision-makers' interpretation. Therefore, to understand the
decision problem, developers would perceive the need to think beyond the interpretations of individual users.
Like the DSS developer, the DSS user may also occupy the subjectivist or objectivist paradigm when
interpreting the decision problem. Users occupying the subjectivist paradigm would perceive that the decision
problem is exhaustively confined within their interpretation. In contrast, users occupying the objectivist
paradigm would perceive that the decision problem could be much broader than their own interpretation.
Hence, the paradigms occupied by the users and developers would give rise to four distinct situations as
described below:
Situation 1: Both users and developers occupy the subjectivist paradigm. In this case the developers would
perceive the need to interpret the decision problem as closely as possible to that of the users, who also have
faith in their own understanding. Potentially, this would lead to a system with a higher probability of gaining
user acceptance. But it will have limited scope and low stability, and might require frequent updates to cope
with changing user interpretations.
Situation 2: The users occupy the objectivist paradigm while the developers occupy the subjectivist
paradigm. In this case, the developers will perceive the need to interpret the decision problem as closely as
possible to that of the users, but the users might be searching for a broader interpretation. Potentially, this
would lead to a system that is unstable from its launch.
Situation 3: Both users and developers occupy the objectivist paradigm. This would be the optimum state,
where the users seek a broader interpretation of the decision problem while the developers try to offer the
same. Such systems would therefore carry a higher probability of gaining user acceptance.
Situation 4: The users occupy the subjectivist paradigm while the developers occupy the objectivist
paradigm. This represents what commonly occurs in practice, where the developers take a broader
perspective on the problem than individual users, who feel comfortable with their own interpretations.
Potentially, gaining user acceptance for such systems would be a challenge for the developers.
Hence, from a theoretical perspective, the optimal approach for the developer would be to occupy the
objectivist paradigm. But the resulting system might not gain user acceptance unless its rationale is made
explicit. This literally means making the system transparent to the user, who would otherwise perceive it as a
'black box' that can be understood and evaluated only through input-output relationships. Hence, transparency
is a mandatory condition for sustaining the objectivist paradigm in the development of DSS.

2.3 Transparency through User Interface

There are two fundamental approaches to making a system transparent to the user. The most effective is to
provide formal training, supplemented by documentation, and to set up support environments that stimulate
self-exploration. The significance of this approach is highlighted clearly in the research literature, which
identifies lack of training and support as one of the key determinants of poor DSS usage (e.g. Finlay and
Forghani, 1998; De Dombal, 1993). However, providing such training and support for every DSS an
organisation develops is not always feasible. Developers are therefore able to offer only limited support for
some medium- to lower-level systems, which are more or less left to self-exploration.


However, in the absence of resources to provide training and support to the user, developers can still
make a DSS transparent through the user interface. The rationale of the user interface is to facilitate
interaction with the analytical model, which usually covers only the operating aspects, where inputs are
manipulated to obtain desired outcomes. This could be extended with an exploratory aspect that allows the
user to access the analytical model itself, and perhaps manipulate it. Doing so would generally fall outside
DSS development traditions, but the case study we present in this paper provides some interesting insights
into why this aspect might become important.

2.4 Moving towards a User-Centred Approach

It is, however, interesting to note that the above issue of the paradigm has not been discussed explicitly in the
DSS literature. The literature does point out that, over the past decade, DSS development traditions
have shifted towards user-centred approaches as a means of gaining user acceptance and
satisfaction (e.g. Sisk, Miles and Moor, 2003; McCown, 2002; Parker and Sinclair, 2001; Liu and Lu, 1998;
Loucks, 1995). The theoretical basis of this approach is not entirely clear from the DSS literature, but it is
clear enough for us to state that it is not synonymous with occupying the subjectivist paradigm. Its
apparent success seems to rest on the following three factors:
• It captures the interpretations of many users, which allows the developers to see many aspects of
the decision problem, thus helping them build a comprehensive understanding of the decision
problem.
• It allows the developers to gain a clearer understanding of the constraints in the users' environment
(e.g. time, data, social and political factors, etc.), which assists them in deciding the levels of
flexibility required, particularly when defining the data requirements and interface design.
• It exposes users to information about the developers and the development process. Such meta-
information sometimes helps to raise the profile of the system.
Hence, the user-centred approach is an effective way of understanding both the decision problem and the
decision-makers' environment, but it can still be encompassed within the objectivist paradigm. It is therefore
reasonable for us to suggest that the subjectivist paradigm is not generally favoured by the DSS community,
perhaps due to its underlying instability. However, the case study that we present provides an interesting
example of how the subjectivist paradigm can sometimes become useful within a broader objectivist
framework.

The research was inspired by our observation of the failure of the internally developed TPI software at Air
Company. Because this software appeared conceptually stable, our investigation assumed that the reason for
the observed failure could be a complex phenomenon in the social, political and economic context of the
organisation. The investigation was therefore carried out using interpretive research methods.
The data were collected exclusively through in-depth interviews with selected personnel at Air Company. A
total of nine interviews were carried out with six people over a period of approximately three months.
The data were analysed using Grounded Theory techniques (Strauss and Corbin, 1998; Glaser and Strauss,
1967), and are presented in this paper in the form of a case study.


The software produced by the Operational Research department of Air Company to evaluate the Transfer
Passenger Index (TPI) was a typical example of DSS failures in organisations.

4.1 Background
Air Company is a major European airline. Like many other leading players in the region, Air Company's early
growth is attributable to strong domestic demand for international air travel and the highly regulated nature of
the air transport industry. The regulatory structure controlled competition in such a way that passengers
travelling between two countries had to be shared by the airlines registered within those countries. Thus,
population and economic prosperity were the fundamental factors that determined the size of air carriers in
each country. This early growth had, however, reached saturation by the eighties, and, by the end of that
decade, the industry entered a different phase, mainly because of deregulation and general globalisation
trends. In addition, the European Union brought further changes to the industry structure within the region,
which led to the declaration of 'open skies' in 1993. These changes brought opportunities and threats to
individual airlines.


The opportunities were such that the market became unrestricted, and therefore further growth became
possible. The threats were such that stronger companies were able to grow and survive at the expense of
weaker ones. The industry reacted to these changes: companies worldwide, including Air Company,
began to exploit new markets and form various strategic alliances.
In the mid-nineties, Air Company was trying to exploit a particular segment of the air travel market which had
indicated opportunities for further growth: the international transfer passenger market. Transfer
passengers are those who change aircraft at some intermediate port before arriving at their destination. For
example, a passenger travelling from Madrid to New York via Paris is a transfer passenger at Paris. They
differ from direct passengers, who travel without changing aircraft. What is important about the transfer
passenger market is that it gives airlines the opportunity to capture part of the air travel market between
countries other than their own. It is, however, important to note that the transfer passenger market had existed
for many decades, but the focus of major airlines had been on the direct travel market, and so it had received
little formal emphasis. Interestingly, however, the transfer passenger market had been growing naturally, such
that a survey carried out in 1993 showed transfer passengers accounting for 10-40% of all passengers
travelling through major European airports. Air Company was now trying to incorporate this segment formally
into its strategic planning.
The company studied the transfer passenger market from two different perspectives. Firstly, it wanted to
know what growth prospects existed in the long term. Secondly, it wanted to know how this market could be
manipulated in the short term to increase the seat factor (i.e. to fill empty seats). In fact, the latter was the
more complex task, which literally meant understanding people's behaviour in selecting amongst the different
alternatives available for making a journey. In other words, the company wanted to develop a decision
tool for evaluating its competitiveness against other airlines in attracting passengers to travel
via its own hub airport.

4.2 Transfer Passenger Index (TPI)

The task of developing the decision tool was assigned to a member of the operational research team (whom we
shall call David). He set his objectives through the question 'what makes a person select Air Company to fly
via its hub airport rather than selecting any other alternative route?' David decided to begin the investigation
by consulting people who had experience of dealing with transfer passengers. He noticed that passenger
behaviour depended on many factors, but, interestingly, different personnel within the company emphasised
only a subset of these. For example, one person from the strategy team emphasised only the price and the total
flying time, but another person from the revenue control department additionally emphasised the total number
of stops in the journey. David therefore came up with the idea of consolidating these different ideas that were
scattered around the company:
‘I interviewed all the guys who had some knowledge, or it is better to say who had some
interest in the transfer passenger market. There is no single way to judge, say the
behaviour of a passenger, and that is what I saw as the main problem here. People look at
transfer passengers from different angles, they have experience only in certain aspects. I
called them attributes in my model. What I did was combining all these scattered views into
a single formula…’.
David encompassed all the attributes of transfer passenger behaviour into a single index known as the Transfer
Passenger Index (TPI). It was calibrated using complex statistical techniques. The TPI was an absolute
indicator of competitiveness:

TPI = k0 + k1·TFT + k2·TNS + k3·CT + k4·TI + k5·HI + k6·PI

where:
TFT = Total Flying Time
TNS = Total Number of Stops
CT = Connection Time
TI = Type Index (sub-model)
HI = Hub Index (sub-model)
PI = Price Index (sub-model)
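To make the linear form of the index concrete, the following sketch evaluates a TPI of this shape. The coefficient values, attribute figures and sub-model indices are hypothetical placeholders: the paper reports neither the calibrated coefficients nor the sub-model formulas, only the linear combination above.

```python
# Hypothetical sketch of evaluating a linear index of the TPI form.
# The coefficients below are illustrative only; the actual calibrated
# values are not reported in the paper.

def tpi(tft, tns, ct, ti, hi, pi,
        k=(0.0, -0.002, -0.5, -0.003, 1.0, 1.0, 2.0)):
    """Weighted linear combination of route attributes.

    tft: total flying time (minutes), tns: total number of stops,
    ct: connection time (minutes), ti/hi/pi: sub-model indices.
    Negative weights penalise longer or less convenient itineraries.
    """
    k0, k1, k2, k3, k4, k5, k6 = k
    return k0 + k1 * tft + k2 * tns + k3 * ct + k4 * ti + k5 * hi + k6 * pi

# A one-stop itinerary: 540 min flying, 90 min connection.
score = tpi(tft=540, tns=1, ct=90, ti=0.8, hi=0.7, pi=0.6)
```

Whatever the true weights, the point of the linear form is that each attribute contributes independently to a single absolute score, which is what allowed routes from different airlines to be compared directly.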
Although the TPI depended on several variables, price was the only one that could be changed in the short
term; hence one could investigate how competitiveness changes with price. Subsequently, the TPI was built
into software that was available to everyone in the company. The software was linked to several databases. The


user only had to specify the originating port and the destination port, and the software obtained all the data
from online sources and calculated the TPI for all the different alternatives available to a passenger. The
decision-maker could then investigate price changes to bring the airline into a competitive position. For
example, say the decision-maker investigates the route from Madrid (Spain) to Boston (USA) and finds that a
competitor has a higher TPI than that of Air Company. What he/she could do then is change the price until the
TPI of Air Company rises above that of the competitor. The actual decision depends on the feasibility of the
adjusted price (and other factors).
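The price-adjustment procedure just described can be sketched as a simple search: lower the fare (which raises the Price Index) until Air Company's TPI overtakes the competitor's. The price-to-index mapping and all numeric values below are hypothetical, since the paper does not describe the Price Index sub-model.

```python
# Sketch of the decision-maker's workflow with the TPI software: adjust
# the fare until Air Company's TPI exceeds a competitor's. The Price
# Index sub-model here is a hypothetical stand-in (cheaper than a
# reference fare => higher index); all other route attributes are held
# fixed and folded into base_score.

def tpi_given_price(price, base_score, k_price, reference_fare=400.0):
    price_index = reference_fare / price  # hypothetical PI sub-model
    return base_score + k_price * price_index

def fare_to_beat(competitor_tpi, base_score, k_price,
                 start_fare=400.0, step=5.0, floor=50.0):
    """Lower the fare in fixed steps until our TPI exceeds the
    competitor's; return None if no fare above the floor achieves it."""
    fare = start_fare
    while fare >= floor:
        if tpi_given_price(fare, base_score, k_price) > competitor_tpi:
            return fare
        fare -= step
    return None

# e.g. non-price attributes give base_score = 1.0, price weight 2.0,
# and the competitor scores 3.5 on the same route.
fare = fare_to_beat(competitor_tpi=3.5, base_score=1.0, k_price=2.0)
```

The feasibility check mentioned in the text (whether the adjusted fare is commercially acceptable) would sit outside this loop, with the decision-maker judging the returned fare.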
The TPI did not, however, become the definitive tool for assessing transfer passenger behaviour. Despite
David's efforts, its usage was surprisingly low; for example, people from the revenue control department used
it only once over a nine-month period. David commented, 'we are going to improve it, there are other
attributes which we have not taken seriously'. Despite David's view, the problem was not the accuracy of the
index. Subsequent investigations revealed some interesting insights into the problem.

4.3 Problems of the TPI

The concept of the TPI was found to be opaque to most of its potential users. For example, the following
comment was made by an executive from the revenue control department:
‘I have to make crucial decisions about fares and discounts. Sometimes I have to discuss
with Marketing about promotions and special offers …. I am not very comfortable to base
my decisions on this TPI because I am not very sure about it. I mean, I was OK with my
own assessment, but it takes time, that is the problem, and not very accurate, but still it is
OK. The TPI is fast, I am happy with that, but I need to know what is going on, I can afford
to compromise on the accuracy’.
This person's problem was the insufficient insight given by the TPI. The system allowed only very limited
interaction with the user, so it was difficult to obtain good insight from the few numbers that appeared on the
computer screen. Also, the user did not have any control over the computational process, in particular over the
attributes used in the computation.
In reaction to these complaints, David decided to prepare a document explaining the entire model, including
the calibration process. This was well received by all the users, as it gave them enough knowledge to decide
either to continue using the TPI or to give it up and revert to their own evaluations.
Many users did not agree with the importance given to certain attributes in the formulation, which they
considered too numerous and confusing. For example, those in the strategy team believed that price and total
flying time were adequate for a pragmatic estimate. There were also certain other attributes that people
perceived as significant but which were not found in the formulation. The reason for such omissions had been
the difficulty of quantification. For example, trends in tourism are an important determinant of transfer
passenger behaviour, but difficult to quantify. These attributes had, however, been taken into account by
people in their own evaluations. One person commented:
‘It is difficult to modify the TPI with all this other information we have. TPI is a number,
and we have these other facts and opinions which are not numbers, they are just like apples
and bananas….so it is either you take the TPI or just leave it completely and go by your
own method….’
The TPI did not reflect changes in the data as perceived by people. The reason for this was hidden within the
complexity of the calibration. Each user had his/her own view of the effect of each attribute, but the
coefficients given by David's calibration were significantly different from what some users would have
expected. David commented:
‘Some people don’t agree with my coefficients, but this is what you get in the
calibration….one of the guys made a big argument that price changes should have more
effect, but I don’t agree with him. What we have done is very scientific….’
Thus, even after David published the details of the model, people were reluctant to use it. One person
commented, 'Earlier we did not know what it was, but now we know.' The concept of the TPI was not a
compelling alternative to what people had already been doing, which they believed had given them pragmatic
solutions. By developing the TPI, the organisation tried to institutionalise a practice without realising that
people had already institutionalised different practices which converged at a pragmatic level. A person from
the strategy team commented after reading David's document:
‘I found quite a few assumptions in the formulation. I felt [that] they have done this just to
complete the formulae and may be they wanted to produce a clean software. But, people


like us have lots of experience and we are very sensitive even to these little things. You
can’t just get away by showing us a complicated formula. Some people have this wrong
idea, they think what we need is posh-looking software. Wrong!! Completely wrong!! They
[operational research team] should have given us more control over the formula, left the
assumptions open so we can decide. That way they could have produced customisable
software which people would have preferred better’.
Making the TPI software fully customisable was not an easy task: it required designing a sophisticated
interface, and David did not receive adequate funds for this. However, the interface was refined to the point
where the user could manipulate the model to some limited extent. This new version of the software made a
better impact, and some people began to use it for convenience, as it had online links to important data
sources. The perspective of some of these users changed significantly over the next few months, because they
had been experimenting with changes to David's original formulation and discovered that the TPI could be
useful in some respects. But the majority of the targeted users did not bother with it at all.
David made the following comment about his effort:
‘Ideally, this customising thing should have been there from the beginning. But, that is a
little against our profession. Actually, we are here to make improvements, and that is why
we try to look at problems objectively. But, I have to admit, there is no point if it is only me
who see the improvement. Actually, it should be the people who use our products…’

4.4 Discussion
The TPI was a proactive effort by the company to help decision-makers figure out how the transfer passenger
market behaves in the short term. The system was developed for optional use, but the company expected that
it would become the norm. Instead, the TPI suffered seriously from underutilisation.
There were two aspects to the failure of the TPI software. The initial rejection of the TPI can be ascribed to
the lack of transparency in the interface design. The interface of the TPI software had been designed under the
assumption that user acceptance would be automatic. Thus, the interface facilitated only the operating aspects,
and this was the reason for the initial rejection of the system: people could not figure out how the numbers on
the screen were calculated. Had the developer provided adequate facilities in the interface to browse the
simulation model, the usage of the TPI could have been higher, because the users who agreed with the TPI
formulation would not have rejected it at the outset.
The failure of the TPI at the subsequent stage (i.e. after the model was published) can be ascribed primarily to
the objectivist perspective taken by its developer. The problem it addressed was a complex one, of which no
individual user had comprehensive knowledge, even though each of them had developed a pragmatic way of
dealing with it. By consolidating these different viewpoints, the TPI tried to offer a more general view of
passenger behaviour than any one user held. Naturally, this general view did not coincide with that of the
individual user. The models that users had developed were much simpler in terms of attributes, and were used
in an inspirational manner compared with the rationalistic approach suggested by the TPI. The key issue was
that the decision-makers were satisfied with those simple models, and they did not perceive any reason to
change to a complex alternative.
The TPI had other important advantages. Firstly, the simulation was based on comprehensive data sources. Secondly, the computations were highly efficient, because all the data were readily linked. However, these advantages were overridden by the rejection of the simulation itself.
Alternatively, the company could have made the TPI mandatory for assessing transfer passenger behaviour. This, however, would have been too risky, as the TPI was still only a theoretical construct developed through a rationalistic approach. The TPI rested on the fundamental assumption that passenger behaviour is rational; hence even the developers could not know how well it reflected competitiveness until it had been empirically validated. Such validation could be performed only by the decision-makers themselves, because decision-making in these autonomous roles is partly intuitive, and validation is in effect a judgement about how useful the system could become. The TPI could therefore be validated only by putting it into real use with several users representing the diversity of opinions.
Because of this validation issue, the TPI could be put only into optional use. For it to be better utilised, the system had to withdraw from its objective stance and offer the user control over the computational process, so that each person could customise the simulation according to his or her own beliefs. This clearly diluted the advantage of developing a comprehensive model, and, apart from convenience, the benefit of using the system often became limited to its information-processing aspect. Nevertheless, such scaling down was inevitable in order to gain the user acceptance that is essential for supplementary DSS tools such as the TPI. Following such initial acceptance, users become self-motivated, or may be persuaded, to exploit further a system that has become familiar to them. What may be important, therefore, is to institutionalise the DSS tool into decision-making practice by scaling down to the lowest benefit if required, in the expectation that the true potential will be exploited subsequently by the decision-makers.

The case study suggested that, besides misinterpretations and wrong assumptions, a DSS is vulnerable to two other fundamental errors. One is selecting the wrong paradigm, which we denote the paradigm error; the other is inadequate interface design, which we denote the interface error.
It might appear at first that the objectivist approach is more appropriate when developing complex DSS for implementation at the multi-divisional or corporate level, because such decision problems usually transcend the perspectives of individual users, and the objectivist approach can therefore offer a more general view. The case study, however, suggested that this is not the only basis for choosing the paradigm. The alternative is to focus on the rationale behind a system's development. On the one hand, organisations develop DSS as mandatory tools for decision-makers who perform fairly directed roles (i.e. with little autonomy). These people have little incentive to understand the decision problem in depth, as their focus is on manipulating the inputs to obtain the specified outputs. Taking the objectivist approach is thus more appropriate in the development of mandatory DSS tools, because it offers decision-makers a solution based on a broader view than they might achieve through their own understanding of the decision problem.
In contrast, many organisations, particularly the larger ones, have taken the initiative to improve existing decision-making practices by proactively seeking DSS opportunities. This has led to the development of supplementary DSS tools, available for optional use by decision-makers who perform autonomous roles. Unlike directed roles, autonomous roles give the decision-maker an incentive to understand the decision problem, and he or she could use a DSS to explore a sufficiently large set of input-output relationships. Naturally, however, such decision-makers will not use a DSS without believing that it represents the decision problem at a pragmatic level. Hence, choosing the paradigm for supplementary DSS tools is not as straightforward as it is for mandatory DSS tools.
The case study suggests, however, that supplementary DSS tools should adopt the subjectivist perspective in order to gain user acceptance. Adopting the perspective alone is not sufficient: it has to manifest itself within users' perceptions, and this defines an important aspect of the interface design. The case suggests that with supplementary DSS tools the interface should be sufficiently transparent and interactive for users to inspect, and perhaps manipulate, the simulation model. The absence of such a character in the interface design is what we denote the interface error. With mandatory DSS tools, by contrast, the interface design may be limited to the manipulation of inputs and outputs.
It is interesting to note how the paradigm error was incurred. The case study shows that the DSS developer never made an explicit decision about which perspective to adopt; the perspective was drawn implicitly from the DSS development practices the developer followed. In the idealised case where users share a unitary view of the decision problem, the developer would automatically have taken the subjectivist approach. It appears, therefore, that the choice of paradigm was circumstantial. Like the paradigm error, the interface error was incurred through the development practices adopted by the developer. Finally, the interface error always camouflages the paradigm error, in that an opaque interface can lead to rejection by the user even when the right paradigm has been used.

References
Diehl, E. and Sterman, J.D. (1995), Effects of feedback complexity on dynamic decision making, Organizational Behavior and Human Decision Processes, Vol. 62, pp. 198-215
Glaser, B.G. and Strauss, A. (1967), The Discovery of Grounded Theory, Aldine, Chicago
Isaacs, W. and Senge, P. (1992), Overcoming limits to learning in computer-based learning environments, European Journal of Operational Research, Vol. 59, pp. 183-196
Kerstholt, J.H. and Raaijmakers, J.G.W. (1997), Decision making in dynamic task environments, in Ranyard, R., Crozier, R. and Svenson, O. (eds), Decision Making: Cognitive Models and Explanations, Routledge & Kegan Paul, London
Liu, W. and Lu, Y.Z. (1998), A multiple criteria decision support system with considering user's preference, Proceedings of the '98 International Conference on Management Science & Engineering, pp. 45-49

Decision Support in an Uncertain and Complex World: The IFIP TC8/WG8.3 International Conference 2004

Loucks, D.P. (1995), Developing and Implementing Decision Support Systems - A Critique and a Challenge, Water Resources Bulletin, Vol. 31, No. 4, pp. 571-582
McCown, R.L. (2002), Changing systems for supporting farmers' decisions: problems, paradigms and prospects, Agricultural Systems, Vol. 74, pp. 179-220
Parker, C. and Sinclair, M. (2001), User-centred design does make a difference: the case of decision support systems in crop production, Behaviour & Information Technology, Vol. 20, No. 6, pp. 449-460
Simon, H.A. (1973), Applying Information Technology to Organization Design, Public Administration Review, Vol. 33, pp. 268-278
Simon, H.A. (1945), Administrative Behavior, Macmillan, New York
Sisk, G.M., Miles, J.C. and Moor, C.J. (2003), Designer Centred Development of GA-Based DSS for Conceptual Design of Buildings, Journal of Computing in Civil Engineering, July 2003, pp. 159-166
Strauss, A. and Corbin, J. (1998), Basics of Qualitative Research (2nd edition), Sage Publications, CA
Tucker, A. (1975), Very high-level language design - a viewpoint, Computer Languages, Vol. 1, pp. 3-16

Sumedha M. Makewita © 2004. The authors grant a non-exclusive licence to publish this document in full in
the DSS2004 Conference Proceedings. This document may be published on the World Wide Web, CD-ROM, in
printed form, and on mirror sites on the World Wide Web. The authors assign to educational institutions a non-
exclusive licence to use this document for personal use and in courses of instruction provided that the article is
used in full and this copyright statement is reproduced. Any other usage is prohibited without the express
permission of the authors.