
Decision Support Systems 42 (2006) 593 – 607

www.elsevier.com/locate/dsw

The impact of intelligent decision support systems on intellectual
task success: An empirical investigation
Éliane M.-F. Moreau*
Department of Management Sciences and Economic Sciences, Université du Québec à Trois-Rivières,
P.O. Box 500, Canada G9A 5H7
Available online 24 May 2005

Abstract
The purpose of this research was to analyze the impacts of intelligent decision support systems (IDSS) on intellectual task
success. Data were gathered from 180 users in three Canadian firms. A structural equation analysis was performed to test a
causal model. The main findings of this study are that intellectual workers who are satisfied with IDSS user-friendliness
perceive their tasks as being more enriching and the systems themselves as being more useful. In addition, when these users
perceive good job outcomes with the IDSS, this perception may lead to the successful performance of their tasks.
© 2005 Elsevier B.V. All rights reserved.
Keywords: Intelligent decision support systems; Intellectual task success; User satisfaction; Perceived job outcomes; Decision quality

1. Introduction
In recent decades, information technologies (IT),
including decision support systems, have transformed
work and organizations. Many researchers have
studied their impacts [1,15,18,30,31,39,43,44,46,50,
65,66,68,71]. Among other things, the impacts of IT
use on individuals and organizations, strategies,
structures, productivity, work design and individual
tasks have all been considered. However, the research
done so far is by no means exhaustive [3,49,51].
Indeed, there is still some controversy regarding the

* Tel.: +1 819 376 5011x3147; fax: +1 819 376 5079.
E-mail address: Eliane_Moreau@uqtr.ca.
0167-9236/$ - see front matter © 2005 Elsevier B.V. All rights reserved.
doi:10.1016/j.dss.2005.02.008

impacts of new technologies, especially with regard to
decision support systems [5]. For example, Morris [48]
and Molloy and Schwenk [47] found that the decisions
of decision support system users were superior in
quality to those of non-users. On the other hand, Coll,
Coll and Rein [17] found that the use of decision
support systems did not improve decision quality.
Moreover, despite the extent of the budgets at stake,
company managers and researchers still do not have
tools that would enable them to assess the impacts of
IT on user performance [19,21,31,56,62].
Expert systems have been in use for many years, and
the literature is full of examples and assessments
[24,69]. Yet, very little research has been done to
explain their actual impacts on user decisions [69].
Information technologies in general, and expert systems in particular, will never be fully accepted if we do
not know their actual and potential impacts on users.
The research carried out so far on expert systems is by
no means conclusive with regard to their impacts on
results or on the decision-making process [12,67,71].
Edwards [22] defined a decision support system as
a system that provides users with access to the data
and models they need to make better decisions.
Farreny [26], for his part, defined expert systems as
software able to replace or assist human beings in
fields where human expertise is recognized as being
insufficiently structured to constitute a precise, certain
working method that can be transposed in its entirety
onto a computerized medium and that is subject to
review or augmentation as additional experience is
acquired. Turban and Watkins [63] referred to
decision support systems with inbuilt expert systems
technology as intelligent decision support systems
(IDSS). Intelligent systems incorporate both forms of
technology and support users at every phase of the
decision-making process. The purpose of this research
is to analyze the impacts of IDSS on intellectual work.
Very few studies have been performed so far to
explain the impacts of these systems on task success.
This paper presents the results of a study aimed at
analyzing a model relating to the impacts of IDSS on
intellectual work. The explanation of the constructs
used for the study describes the proposed conceptual
framework and presents the research hypotheses. The
research variables and research method are then
described, and the structural model is presented. The
results of the analysis are discussed, and the conclusion
contains some suggested avenues for future research.

2. Theoretical considerations

A number of factors have been called into play when assessing decision support systems. The models used differ according to the unit of analysis applied. The research presented in this paper was based on the model of DeLone and McLean [19].

Because of the goal of this research, the organizational impact of IT use will not be considered, nor will system quality and information quality. Individual impact is probably the most difficult concept to define for both information systems and IDSS. DeLone and McLean [19] and Hamilton and Chervany [34,35] associated it with the concept of performance, finding that the positive impact of information systems was to improve user or departmental performance. However, the performance construct includes several dimensions, including system success [37,42,64]. Ein-Dor and Segev [23], Hamilton and Chervany [34,35] and Seddon [57] found that the success aspect was particularly appropriate for research into specific information systems.

Seddon's respected and slightly extended version of DeLone and McLean's model presents IT use as an event that necessarily precedes success [19,57]. Since this research considers the individual impacts of IDSS and the scientific literature treats intellectual task success as a performance concept, the research model represents individual impact by intellectual task success. Seddon [57] defined success as user satisfaction, individual and organizational impact, and the observed consequences of system use. DeLone and McLean [19], in an assessment of 180 papers, also presented user satisfaction as a significant factor influencing the impact of decision support systems. Drawing on the field of organizational behaviour, Millman and Hartwick [46] identified work design and job satisfaction among the observed consequences, which Yoon and Guimaraes [70] defined as the impact of expert systems on users' jobs. These constructs are considered to be extremely important with regard to individual or performance impacts [45].

The model for this study, based on the model of DeLone and McLean [19] and on research by Sanders and Courtney [55], Raymond [53], Yoon and Guimaraes [70], and Millman and Hartwick [46] covering the impacts of IDSS on intellectual workers, is presented in Fig. 1.

Fig. 1. Proposed research model: IDSS user satisfaction → Perceptions of job and work → Intellectual task success.

2.1. IDSS user satisfaction

With regard to decision support system effectiveness, Udo [64] surveyed the most frequently studied notions, namely system use and user satisfaction. Other authors have tried to establish links between
user satisfaction and IT use [4]. DeLone and McLean
[20] specified that use and user satisfaction are closely
interrelated.
In this research, user satisfaction is analyzed using
three components: management support, user-friendliness and output data or report quality. These three
same components have also been analyzed by Sanders
and Courtney [55], Raymond [53], Guimaraes et al.
[32] and Yoon and Guimaraes [70] among others.
Gelderman [29], examining the relationship
between user satisfaction, information system use
and performance, concluded that user satisfaction is
directly related to performance. Similarly, Chen et al.
[16] showed in their research that user satisfaction is a
valid construct in a data warehouse environment.
In the same way, in the IS and DSS literature, user
satisfaction has been identified as a factor having a
significant effect on work design and job satisfaction
[46,70]. The study described in this paper will test
user satisfaction for IDSS.

2.2. Perceptions of jobs and work

The introduction of IDSS, or even of IS and DSS, is likely to affect many different aspects of the manager's job. To determine the full range of potential changes, researchers have consulted the work design and job satisfaction literature in the field of organizational behaviour. Based on the work of Hackman and Oldham [33], Millman and Hartwick [46] identified two aspects of individual work modified by IT use, namely work design and job satisfaction. These two constructs have an impact on individual and organizational performance. Sviokla [60], in an analysis of the impacts of the XCON expert system, observed changes in user performance and job satisfaction, along with an improvement in organizational performance. Rahman and Abdul-Gader [52], in a survey, concluded that intellectual worker productivity was positively linked to worker satisfaction.

Hauser and Hebert [36] argued that the impact of expert systems on work must be revelatory, because the systems provide problem-solving expertise in a specific area and enhance user productivity. Byrd [14] stated that expert systems automate routine decision-making tasks, giving users more time for creative work. The impact of expert systems on work has thus been found to be a significant indicator of success. A review of the literature by Hauser and Hebert [36] revealed the advantages and disadvantages of using expert systems, including their contribution to reduced motivation and satisfaction, loss of power for the organization and an increase in stress levels. Worker satisfaction is, however, a multi-dimensional construct corresponding to the general attitude of individuals towards their work [41]. Yoon and Guimaraes [70] concluded that user satisfaction with expert systems has a desirable impact on users' jobs.

Although information technologies, decision support systems and expert systems influence perceptions of jobs and work, another construct, intellectual task success, is influenced by user satisfaction with the systems and by perceptions of jobs and work design. The latter construct in particular is deemed indicative, given that the nature of IDSS is not linked to intensity and frequency of use.

2.3. Intellectual task success

Intellectual task success is an aspect of performance or individual impact. Based on laboratory experiments using support systems for intellectual workers, Leitheiser [42] identified four categories of variables, including one of interest to many researchers, namely computer system performance. DeLone and McLean [19] analyzed individual impact through effectiveness, efficiency, estimated value of the information and the system, and changes of behaviour. The notions of effectiveness and efficiency have also been identified as useful in assessing decision support systems [40]. Technology efficiency is a concept corresponding to decision speed (or the time required to make the right decision) and cost. Seddon [57] explained system success by means of benefits, including better performance or more work done in the same time, or less time for more work of the same quality. Effectiveness is concerned more with decision quality, confidence in the decision, meeting deadlines, decision-maker satisfaction and perceptions, and so on [25,32,61].

Sprague and Carlson [59] examined the impacts of an interactive decision support system and identified four aspects, including productivity, i.e. the impacts of interactive decision support systems on decisions and the perceptions of or impacts on the decision-maker. Guimaraes et al. [32] assessed decision support system success with the decision itself, and the
perceived benefits. Ives et al. [37] identified success
as an acceptable substitute variable for assessing
system usefulness in decision-making. Molloy and
Schwenk [47] studied success on the basis of users’
beliefs and their perception that use of the system
improved decision quality.
Thus, the notion of perception, unlike objective
measures of success, has gradually become more
important for researchers [54]. In this study, the perceptions of intellectual task success are defined by two factors: perceived job outcomes, and decision quality, which reflects improvements in the decision due to the system.
McCue and Gianakis [45] considered these constructs to be extremely important with regard to
individual or performance impacts. Thus, all the
above research was used to form the proposed
research model presented in Fig. 2.

2.4. Proposed research hypotheses

In the IS and DSS literature too, user satisfaction has been identified as a factor that has a significant impact on work design and job satisfaction [46,71]. Considering the use of IDSS as a specific field of inquiry, this research aims to study the above-mentioned constructs and, more particularly, their interrelationship. On the basis of this model, the following set of hypotheses can be proposed:

Hypothesis 1a. The greater the satisfaction with management support, the greater the improvements to work design will be.

Hypothesis 1b. The greater the satisfaction with management support, the greater the job satisfaction will be.

Hypothesis 2a. The greater the satisfaction with IDSS user-friendliness, the greater the modifications to work design will be.

Hypothesis 2b. The greater the satisfaction with IDSS user-friendliness, the greater the job satisfaction will be.

Hypothesis 3a. The greater the satisfaction with report or output data quality, the greater the modifications to work design will be.

Hypothesis 3b. The greater the satisfaction with report or output data quality, the greater the job satisfaction will be.

Researchers such as Millman and Hartwick [46], Yoon and Guimaraes [70] and Rahman and Abdul-Gader [52] have found two constructs to be determinants of individual performance, namely work design and job satisfaction. Sviokla [60] also obtained a similar result for the XCON expert system. Given that the individual's task has been modified by the IDSS, its impact on individual performance, defined by perceived job outcomes and decision quality, can be posited as follows:

Hypothesis 4a. The more significant the improvements to work design, the better the perceived job outcomes will be.

Hypothesis 4b. The more significant the improvements to work design, the better the perceived decision quality will be.

Hypothesis 5a. The greater the job satisfaction, the better the perceived job outcomes will be.

Hypothesis 5b. The greater the job satisfaction, the better the perceived decision quality will be.

Fig. 2. Proposed research model with hypotheses: IDSS user satisfaction (satisfaction with management support, satisfaction with IDSS user-friendliness, satisfaction with report or output data quality) → perceptions of jobs and work (modifications to work design, job satisfaction) via H1a–H3b → intellectual task success (perceived job outcomes, decision quality) via H4a, H4b, H5a and H5b.

3. Research methodology
The methodological approach used in this research is a cross-sectional (transversal) analysis. Data were collected from three groups of professionals working in three large Canadian firms in the forestry and financial sectors.
The IDSS were therefore not the same, and had
different names or acronyms. The analysis unit was
the professional performing the intellectual task,
assisted in the decision-making process by an IDSS.
For this study, the intellectual task is a managerial task
such as deciding whether or not to grant credit to a
businessperson or agreeing to or amending a production plan. In the sample, the use of an IDSS was
mandatory for all three groups of professionals. The
demographic profile of the respondents is shown in
Table 1.

Table 1
Demographic information for sample

Variables          N    Class                      Frequency %   Mean    Standard deviation
Age                185  20–29 years                7.6
                        30–39 years                35.1
                        40–49 years                39.5
                        50–59 years                17.8
Experience         186  In the firm                              16.99   11.58
Experience         186  In the current position                  8.44    6.95
Qualification      184  College                    25.0
                        Bachelor's                 33.7
                        Master's                   12.5
Experience         184  Computer systems                         7.96    5.56
Experience         185  IDSS                                     2.01    1.43
Days of training   185  IDSS                                     4.65    1.51

The questionnaire was tested in several stages
by practitioners and academics. The final version was
verified with six users, and the author received
comments on it at a weekly meeting of professionals.
Following the validation process, it comprised 62
questions and was sent out to approximately 300
users, along with an explanatory letter to maximize
the response rate. An e-mail reminder was sent out
one week after the questionnaires were mailed. A
response rate of 58% (187 questionnaires) was
obtained, and an alpha significance threshold of 5%
or less was used for the analysis.
3.1. Measurement
Intellectual worker satisfaction with IDSS use is a
complex construct that is assessed using three
components, namely management support, user-friendliness and IDSS output data or report quality.
The measures envisaged for satisfaction with
management support are the material and human
resources available, the IDSS training received and
assistance or support from senior management. Guimaraes et al. [32] measured training by means of a
single question on a five-point scale. The question was
taken from the study by Sanders and Courtney [55].
Raymond [53] measured training according to whether
it was complete and sufficient. Management support
was measured by Guimaraes et al. [32] and by Sanders
and Courtney [55] using two questions. The mean
corresponded to overall management support, and the
Cronbach alpha coefficient was 0.85. Raymond [53]
also measured the constancy of senior managerial
support. Yoon and Guimaraes [70] measured managerial support using four questions and obtained a
reliability coefficient of 0.84 (Cronbach’s alpha).
Raymond [53] measured the precision and signification
of support for users with regard to hardware and
software, relationships with, competency of and
attitude of service staff, and communications with the
support service. In this study, there is therefore one
question concerning training and six to measure
managerial support.
User satisfaction with IDSS user-friendliness and
output data or report quality are measured by elements
drawn from studies by Yoon and Guimaraes [70],
which were themselves adapted from Raymond’s
questionnaire [53]. IDSS user-friendliness is measured
by means of six questions: the value ascribed to the


system, IDSS running time to obtain a result, ease of
use, ease of learning, usefulness of system documentation and freedom to browse. Report or output data
quality is measured by timing (data available at the
right time), and by the reliability, exactness and
exhaustiveness of the data. To assess the reliability of
these measurements, Yoon and Guimaraes [70] used a
seven-point Likert scale to obtain a Cronbach’s alpha of
0.87. In the study described here, the term "exhaustive" was replaced by "completeness". The following measures from Raymond [53] were added: relevance of the
information provided by the system and accuracy of the
information produced (rather than exactness). For this
study, user satisfaction with the conformity of the
system’s decisions was also measured. This measurement may not seem pertinent for the user satisfaction
construct, and may appear more relevant for measuring
intellectual task success. However, whether or not the
system’s recommendation is consistent with the user’s
decision is certainly a factor in user satisfaction. Thus,
respondents were asked to indicate their level of
satisfaction or dissatisfaction on a seven-point Likert
scale for each of the elements presented.
Perceived improvement to work design and job
satisfaction was measured based on the research of
Yoon and Guimaraes [70] and Millman and Hartwick
[46]. In previous research, perceived improvement to
work design included modifications to the significance of the task, i.e. the importance of the task
performed using an IDSS, to the volume of work
demanded and to the required level of precision.
Variety of skills is defined by the number of skills, the
skills required to perform the computerized task and
the intrinsic interest of the skills. Changes are also
measured in feedback concerning performance of the
work and in autonomy by the level of responsibility
and freedom in work planning and methods. Second,
satisfaction with the job as modified is measured by
the potential for promotion, job security, relations
with co-workers and peers, and satisfaction with the
type of work. Millman and Hartwick [46] added level
of supervision and level of responsibility. This latter
factor will be measured via work design autonomy.
Respondents were asked to say whether their perception of the improvement to work design and job
satisfaction was lower (−2, −1), higher (+1, +2) or
unchanged (0). A five-point scale was therefore used
for these measures. In the study by Yoon and

Guimaraes [70], the reliability coefficient was 0.89
(Cronbach’s alpha).
Perceptions of intellectual task success are measured among other things by two factors developed by
Sanders and Courtney [55] to measure decision
support system success. Using a factorial analysis
with varimax rotation, these authors identified two
factors whose eigenvalues accounted for 54.5% of the total variance. The first factor, referred to by
Sanders and Courtney [55] as the user’s perception of
job outcomes, accounts for 80% of the common
variation and includes six questions reflecting the
user’s general perception of job outcomes. In the
study by Will [67], this factor corresponds to the
user's perception of job outcomes in the decision-making process, and was identified as such. The
internal consistency or reliability of this aspect was
found by these authors to be 0.87.
Sanders and Courtney’s [55] second factor, perceived decision quality, explains 20% of the common
variance and reflects the improvement in the decision
due to the decision support system. This factor
contains seven questions, with a reliability level
(internal consistency) of 0.82. Guimaraes et al. [32]
used exactly the same measures in their research into
decision support systems. In Will’s study, they
correspond to improvement in task performance [67].
In the study described here, the perceived job
outcomes and perceived decision quality are used to
assess intellectual task success. The questions are
evaluated on a seven-point Likert scale, so that our
results can be compared to those of Sanders and
Courtney [55], Guimaraes et al. [32] and Will [67].
The variables are described in Appendix A with the
mean and standard deviation and factor loadings.
3.2. Test of model measurement
Structural equation modelling was used to test the
research hypotheses (Fig. 2), based on the approach of
Bentler and Weeks [11] as implemented in the EQS
application [6]. The model was tested by simultaneously assessing the measurement model and the
theoretical (or structural) model using the data
obtained from the 187 sample respondents, applying
the maximum likelihood method to estimate the
causality coefficients between the constructs, the
factor loadings of the variables on the constructs they


measure, along with correlations, residual variance
and goodness-of-fit (based on the χ² statistic).
The model produced at this stage was then used to
establish the structural model containing causal links
between the constructs of the proposed model. Since
the questionnaire was composed of statements evaluated on a seven-point Likert scale, the data obtained
from the survey tended to be skewed to the left, with responses concentrated at the upper end of the scale. To correct this problem, they were transformed as log10(8 − x), thus presenting a more normal distribution without modifying the underlying information. For the perceived improvement to work design and job satisfaction measures, the values extended from +2 (increase) to −2 (decrease), with a zero value for no change. The data
were analyzed using the EQS application [6].
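As an illustration only, the following Python sketch shows what such a transformation looks like, assuming that the garbled expression above stands for log10(8 − x) applied to the seven-point Likert scores; the sample values are invented and are not the study data.

import numpy as np
import pandas as pd

# Illustrative seven-point Likert responses (not the study data).
likert = pd.Series([7, 6, 6, 5, 7, 4, 6, 5])

# Assumed reading of the transformation described above: log10(8 - x).
# High scores pile up near 7; subtracting from 8 and taking the log spreads
# them out, giving a distribution closer to normal.
transformed = np.log10(8 - likert)

print(transformed.round(3).tolist())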
In causal analysis with latent variables (i.e. nonobservable or non-measurable variables), a measurement model describes the nature of the link between
two types of variables: first, the latent variables or
factors, and second, the manifest variables that
measure the latent variables. The model analyzed
was composed of seven latent variables corresponding
to seven constructs of the model under study:
satisfaction with management support, satisfaction
with user-friendliness, satisfaction with output data or
report quality, perceived improvement to work design
and job satisfaction, perceived job outcomes and
decision quality. Each of these seven latent variables
was measured by at least three manifest variables. The
model was estimated using the maximum likelihood
method and chi-square value. After some iterations, the model was accepted as the final measurement model for the study, with six factors. (The model without the job satisfaction factor, whose variance extracted value, CFI and NNFI were too low, was tested to obtain the measurement model; each time a variable was eliminated from the model, it was because of the high residuals obtained, i.e. over 3.5.)
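For readers who wish to reproduce this kind of analysis with current open-source tools rather than the EQS software used here, the following sketch shows one way a comparable measurement-plus-structural model could be specified. It assumes the Python package semopy and its lavaan-style model syntax; the item names follow Appendix A, and idss_survey.csv is a hypothetical file of (transformed) questionnaire responses, so this is an illustration of the approach rather than the study's actual code.

import pandas as pd
import semopy

# "=~" lines define how each latent construct is measured by its items;
# "~" lines define the structural (regression) paths of the research model.
MODEL_DESC = """
F1_mgmt_support   =~ V6 + V9 + V10 + V11 + V12
F2_userfriendly   =~ V13 + V15 + V16 + V17 + V18
F3_output_quality =~ V19 + V20 + V21 + V22 + V23 + V24
F4_work_design    =~ V43 + V44 + V45 + V46
F5_job_outcomes   =~ V26 + V27 + V28 + V29 + V30 + V31
F6_decision_qual  =~ V32 + V33 + V34 + V35 + V36 + V37 + V38

F4_work_design   ~ F1_mgmt_support + F2_userfriendly + F3_output_quality
F5_job_outcomes  ~ F4_work_design
F6_decision_qual ~ F4_work_design
"""

df = pd.read_csv("idss_survey.csv")   # hypothetical item-level data set
model = semopy.Model(MODEL_DESC)
model.fit(df)                          # maximum likelihood estimation by default
print(model.inspect())                 # loadings, path coefficients, p-values
print(semopy.calc_stats(model))        # chi-square, CFI, RMSEA and related fit indices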
3.3. Evaluation of the measurement model
The unidimensionality, reliability and convergent
validity of the constructs were assessed by examining
the fit of the submodels and the causality coefficients
linking the constructs to their component factors. The
basic condition for a construct to have meaning is that
its measure must be unidimensional [2]. The model’s
fit to the data is assessed by its chi-square; however, this particular statistic must be used with care in structural equation models where the sample size is small [27]. We used a normalized value obtained by dividing χ² by its degrees of freedom (df); fit is considered adequate where χ²/df is below 5 [38]. The model's χ²/df was 2.42, i.e. below 5.
Chi-square is usually complemented by various ad hoc fit indexes, which are more practical and more robust in showing the extent to which a model explains the data. In the EQS approach, the indicator of choice is Bentler's [8,9] comparative fit index (CFI), since it correctly illustrates the fit regardless of sample size. The CFI is calculated as follows: CFI = |[(χ²0 − df0) − (χ²k − dfk)] / (χ²0 − df0)|, where χ²0 is the chi-square of the null model (the model in which all inter-variable correlations are presumed to equal 0), χ²k is the chi-square of the proposed model, df0 and dfk are the degrees of freedom of the null and hypothesized models, and | | indicates that the resulting value is constrained to fall between 0 and 1. In addition, the model should have a value greater than 0.9 for the comparative fit index (CFI) and the non-normed fit index (NNFI). These two indices did not show an acceptable adjustment: CFI = 0.84 and NNFI = 0.82 [7,10]. However, two other widely used fit indices, the root mean squared residual (RMR) and the root mean squared error of approximation (RMSEA), attain acceptable threshold values of 0.01 and 0.08, respectively [13], and confirm the unidimensionality of the model constructs. Other tests were performed to assess the model's reliability and validity.
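The two fit checks described above can be reproduced in a few lines of code. The sketch below is a minimal Python illustration rather than the EQS computation; since the paper reports only χ² = 1137 with 470 degrees of freedom for the tested model, the null-model values used in the example call are hypothetical placeholders.

def normed_chi_square(chi2: float, df: int) -> float:
    """Chi-square divided by its degrees of freedom; below 5 is taken as adequate fit."""
    return chi2 / df

def cfi(chi2_model: float, df_model: int, chi2_null: float, df_null: int) -> float:
    """CFI = [(chi2_0 - df_0) - (chi2_k - df_k)] / (chi2_0 - df_0), bounded to [0, 1]."""
    d_null = max(chi2_null - df_null, 0.0)
    d_model = max(chi2_model - df_model, 0.0)
    if d_null == 0.0:
        return 1.0
    return min(1.0, max(0.0, (d_null - d_model) / d_null))

print(normed_chi_square(1137, 470))                    # ~2.42, below the threshold of 5
print(cfi(1137, 470, chi2_null=4700.0, df_null=528))   # null-model values are hypothetical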
As regards convergent validity, the square of the factor loading represents the percentage of variance of a variable explained by the construct it is meant to
measure. Two of the seven satisfaction with management support variables, one of the six satisfaction with
user-friendliness variables and five of the nine perceptions of work design variables had to be removed from
the sub-models because the square of the factor loading
was not significant. The variables in question were: the
purchase of IDSS models is wisely invested and top
management is strongly in favour of the concept of
IDSS for the satisfaction with management support;
IDSS running time to obtain a result for the satisfaction
with user-friendliness, and importance of the task
performed using an IDSS, volume of work demanded,
required level of precision and standardization for
perceptions of work design.


Table 2
Correlation, reliability and validity of the factors analyzed using structural equation modelling

                                                         F1     F2     F3     F4     F5     F6
F1  Satisfaction with management support                 0.68
F2  Satisfaction with IDSS user-friendliness             0.67   0.62
F3  Satisfaction with report or output data quality      0.51   0.88   0.77
F4  Improvements to work design                          0.37   0.53   0.40   0.63
F5  Perceived job outcomes                               0.24   0.34   0.25   0.63   0.67
F6  Decision quality                                     0.20   0.29   0.22   0.54   0.85   0.82

Diagonal: (average variance extracted estimate)^(1/2).
Sub-diagonal: correlation between constructs = (shared variance)^(1/2).

The reliability of a construct measure was assessed using the coefficient ρ, i.e. the ratio of the construct variance to the sum of that variance and the residual variance: ρ = (Σ|λi|)² / [(Σ|λi|)² + Σ(1 − λi²)], where λi is the standardized loading linking variable i to the construct. A value in excess of 0.70 shows that the construct variance captures at least 70% of the measurement variance [27], which was the case for the six constructs, with values varying from 0.72 to 0.93. The perception of improvement to work design construct had a value of 0.72, suggesting acceptable reliability.

The research model's discriminant validity also had to be assessed because of the presence of multiple constructs. This involves testing the extent to which the constructs are really separate from each other, using the correlation between each pair of constructs as a criterion. The shared variance between two constructs (i.e. the squared correlation) must be less than the mean variance extracted by each construct from the variables that measure it [27]; the results in Table 2 confirm this for all correlations except satisfaction with report or output data quality (F2 with F3) and perception of work design (F5 with F6). In addition, we tested that the correlation between any two constructs is significantly different from unity, i.e. that the confidence interval around the correlation does not include 1 [2]. However, it is not too surprising that the two sub-dimensions of the intellectual task success construct (F5, F6) and the three sub-dimensions of the IDSS user-satisfaction construct (F1, F2, F3) are more highly correlated among themselves, including correlations of 0.85 and 0.88.

These different measurements or values (CFI, NNFI, squared factor loadings, variance extracted estimates and correlations) support the reliability and validity of the constructs and variables.

Fig. 3. Structural equation model. Standardized paths: satisfaction with management support (F1, ρ=.80) → work design: 0.251 (n.s.); satisfaction with user-friendliness (F2, ρ=.75) → work design: 0.700*; satisfaction with report or output data quality (F3, ρ=.90) → work design: 0.055 (n.s.); work design (F4, R²=27%, ρ=.72) → perceived job outcomes (F5, R²=37%, ρ=.82): 0.607***; work design → decision quality (F6, ρ=.93): −0.222 (n.s.). Inter-correlations: F1–F2 = 0.67, F1–F3 = 0.51, F2–F3 = 0.87. Fit: χ² = 1137, d.f. = 470, χ²/d.f. = 2.42 (<5), CFI = .84 (<.90), NNFI = .82, RMSEA = .08 (≤.08). * p < 0.05, ** p < 0.01, *** p < 0.001.
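As a worked illustration of the reliability and validity checks described above, the following Python sketch computes the composite reliability ρ and the average variance extracted (AVE) from a set of standardized loadings. Applying it to the decision quality loadings listed in Appendix A reproduces, to rounding, the reported ρ of 0.93 and the Table 2 diagonal value of 0.82 for F6; the correlation of 0.85 with F5 is taken from Table 2, and the final check fails, consistent with the exception noted in the text.

import numpy as np

def composite_reliability(loadings):
    """rho = (sum|lambda|)^2 / [(sum|lambda|)^2 + sum(1 - lambda^2)]."""
    lam = np.abs(np.asarray(loadings, dtype=float))
    return lam.sum() ** 2 / (lam.sum() ** 2 + (1.0 - lam ** 2).sum())

def average_variance_extracted(loadings):
    """AVE = mean of the squared standardized loadings."""
    lam = np.asarray(loadings, dtype=float)
    return (lam ** 2).mean()

# Standardized loadings of the decision quality items (V32-V38, Appendix A).
f6_loadings = [0.93, 0.81, 0.88, 0.85, 0.80, 0.79, 0.64]

rho = composite_reliability(f6_loadings)        # about 0.93
ave = average_variance_extracted(f6_loadings)   # sqrt(ave) is about 0.82

# Fornell-Larcker criterion: the shared variance (squared correlation) between two
# constructs should stay below each construct's AVE.
r_f5_f6 = 0.85                                  # correlation between F5 and F6 (Table 2)
print(round(rho, 2), round(ave ** 0.5, 2), r_f5_f6 ** 2 < ave)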
3.4. Assessment of structural model
Fig. 3 presents the structural model. The most interesting result relates to the standardized path coefficient between the perceptions of improvement to work design and perceived job outcomes, which is significant (γ = 0.607). Indeed, the perceptions of improvement to work design contribute significantly to perceived job outcomes. Hypothesis H4a cannot be rejected. With regard to IDSS user satisfaction, only satisfaction with user-friendliness was linked significantly to perceptions of improvement to work design (γ = 0.700, p < 0.05). Thus, hypothesis H2a cannot be rejected either. However, hypotheses H1a and H3a cannot be accepted, nor can those concerning the job satisfaction construct, because the other two factors are not significant.

Improvement to work design directly explains 37% of the variance (R²) in perceived job outcomes, and satisfaction with user-friendliness directly explains 27% of the variance in improvements to work design. The distribution of standardized residuals for this revised model is symmetrical and zero-centred. No standardized residual has an absolute value of more than 2.0. Satisfaction with management support and satisfaction with report or output data quality are not significant in this part of the model, nor is the path between perceptions of improvement to work design and decision quality.

4. Analysis of results and discussion
Most respondents were aged 30 to 49 and had an average of 17 years of experience in the firm and 8.5 years in their current positions. Some 47% had
university qualifications. They had been using computer systems for more than seven years, and IDSS for
more than two years. For the IDSS, they had received
an average of 4.65 days of training. The three
respondent groups did not exhibit any significant
differences for the measures analyzed.
The model shown in Fig. 3 displays the structural equation analysis and presents the coefficients obtained for the links between the factors. Although the results show that the χ² value (d.f. = 470, N = 187) = 1137 is significant (p < 0.001), the CFI and NNFI indices are respectively 0.84 and 0.82, thus demonstrating acceptable adjustment [10]. In addition, the model explains 37% of the observed variance in perceived job outcomes. The hypotheses linked to these constructs are examined below.
The first set of hypotheses, concerning the link between satisfaction with IDSS use and perceptions of modifications to work design and job satisfaction, is partially invalidated. The job satisfaction factor was removed from
the model during the first stage of the confirmatory
factor analysis, partly because its correlation with the
other constructs was too weak and partly because only
two variables passed the reliability test. A factor with
only two variables could not be included in the
analysis. This also confirmed the results obtained by
McCue and Gianakis [45], to the effect that the direct
link between job satisfaction and performance is
weak. Thus, the structural model does not contain
this factor, and the hypotheses H1b, H2b, H3b, H5a
and H5b cannot therefore be retained.
The satisfaction with IDSS use construct comprises
three components: management support, user-friendliness and quality of output data or reports. The
management support component is closely linked to
the two other satisfaction factors (0.67 and 0.51).
However, it has no significant impact on the other
model factors, except indirectly through its link to and
improvement of the other satisfaction factors. Thus,
according to the sample and the structural equation
analysis, if the management support factor was not
part of the questionnaire, the model would be equally
well adjusted. Although satisfaction, according to
Attewell and Rule [3], reflects a "newness effect", the
effect is not applicable because the IDSS had been
used for an average of more than two years by the
respondents. However, other authors have also


reported different findings. Galletta and Lederer [28]
explained that satisfaction was an attitude and use was
a component of that attitude. Shirani et al. [58], for
their part, developed a user satisfaction model
showing that satisfaction was a consequence of
combining the characteristics of the user, the system
and the organization. In this research, it is possible
that the mandatory as opposed to voluntary use of
IDSS, or the number of years since introduction of the
IDSS (more than two years on average), had an
impact on the model and would thus explain this
situation. Hypothesis H1a is therefore invalidated.
According to our sample, the same applies to the
satisfaction with report or output data quality factor,
which had no significant causal link with the other
model constructs. Satisfaction with report or output
data quality is closely linked to the two other
satisfaction factors (0.87 and 0.51). Thus, hypothesis
H3a cannot be retained.
On the other hand, the results show the importance
of satisfaction with user-friendliness on perception of
modifications to work design. Perceived improvement
to work design is influenced positively and is
explained (27%) by satisfaction with user-friendliness.
Thus, the more satisfied the user is with user-friendliness, the better his or her perception of improvements to work design. A user who is satisfied with
IDSS user-friendliness perceives that, even with the
IDSS, his or her own skills are required to perform the
task. Although the user perceives the greater level of
skill required, due to the high performance speed of
the IDSS, he or she nevertheless appreciates the
feedback in performing the task provided by the IDSS
and the freedom it offers in work planning and
methods. These findings concur with researchers
who have suggested that IT leads to the enrichment
of jobs and work [51]. Studies finding that IT has a
negative impact on work are in the minority, and have
tended to focus mainly on routine tasks. In a similar
study relating to the impacts of expert systems on
tasks, Yoon and Guimaraes [70] concluded that user
satisfaction (with user-friendliness and output data
quality) was a good substitute measure of system
success and had a significant impact on users’ tasks.
Raymond [53], too, concluded that user satisfaction
(with user-friendliness and output data quality) had a
significant impact on the task, and this is partially
supported by the model. Their findings are therefore

confirmed by this study. Thus, where a user is
satisfied with IDSS user-friendliness, he or she also
has a better perception of improvements to work
design. Hypothesis H2a is validated: The greater the
satisfaction with IDSS user-friendliness, the more
significant the improvements to work design will be.
The hypothesis linking improvements to work
design with perceived job outcomes, H4a, is validated: The greater the improvements to work design,
the better the perceived job outcomes will be.
Perceived job outcomes is influenced positively and
is explained (37%) by improvements to work design.
For example, if the user perceives that his or her skills
are still required to perform the task, he or she
perceives that task to be more interesting and feels
dependent on IDSS. Moreover, a user who perceives that the IDSS enhances his or her skills, provides feedback on the work performed and leaves him or her free to do the work will perceive the job outcomes as defined
previously. Thus, intellectual workers who are satisfied with IDSS user-friendliness will be more
accepting of modifications to work design and will
have a better perception of job outcomes. On the other
hand, the results do not show the importance of
perceived improvements to work design for better quality decisions: the beta weight is non-significant. Thus, the hypothesis linking improvements to
work design with decision quality, H4b, is invalidated:
The greater the improvements to work design, the
better the perception of decision quality will be.
In this study, the perceived job outcomes factor
explains the perception of the users satisfied with the
system. Thus, if users perceive a good job outcome
with IDSS and are satisfied, the IDSS may lead to the successful performance of the user's job, although not in the context of decision making.
Guimaraes et al. [32], who used exactly the same
measures in their research into decision support
systems found that the systems improved decision
quality. Molloy and Schwenk [47], for their part,
found that IT improved decision quality. On the other
hand, it is important to remember that these two
factors, perceived job outcomes and decision quality,
were designed to be two components of the same
higher-order construct: intellectual task success.
Finally, these findings should be interpreted with
some caution. Possibly, the research instruments were
not designed to capture information about the specific


task being tackled with the IDSS. Hence, the possibility of the IDSS being applied to structured tasks in a context of semi-structured tasks has not been analysed.
In short, according to Fig. 3 of the model, the
results show the importance of satisfaction with
user-friendliness for improvements to work design
and perceived job outcomes. Thus, satisfaction with user-friendliness has a positive impact on perceived improvement to work design, and perceived improvement to work design has a positive impact on perceived job outcomes. Thus, a user who perceives the IDSS to be useful to the organization will also perceive more and better job outcomes. Similarly, the three factors relating to satisfaction with conditions of use are highly inter-correlated (0.67, 0.87 and 0.51). Thus, user satisfaction with user-friendliness has indirect impacts on perceived job outcomes and decision quality through perceived improvement to work design. According to our sample and analysis, these results differ from a comment made by Seddon [57] on his respected and slightly extended version of the DeLone and McLean model of IS success: "One should not waste time exploring causal path relationships from User Satisfaction to Individual Impacts. If IS Use and User Satisfaction are viewed as proxies for Net Benefits there is no reason why variance in either of them should have any causal influence on variance in D&M's other two net benefits Impacts measures in Fig. 1a and 4. In fact, as shown in Fig. 5, the direction of influence is probably the reverse." [57].
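As a small illustrative calculation (not reported in the paper), the indirect effect implied by a path model is the product of the standardized coefficients along the route, so the indirect effect of satisfaction with user-friendliness on perceived job outcomes suggested by Fig. 3 can be approximated as follows.

# Standardized path coefficients reported in Fig. 3.
userfriendliness_to_workdesign = 0.700
workdesign_to_joboutcomes = 0.607

# Product-of-paths approximation of the indirect effect (illustrative only).
indirect_effect = userfriendliness_to_workdesign * workdesign_to_joboutcomes
print(round(indirect_effect, 3))   # about 0.425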

5. Conclusion
The goal of this study was to examine the impact
of IDSS use on intellectual work. Data were collected
by means of a questionnaire from 187 users of
intelligent decision support systems. The research
has some significant consequences for information
systems researchers as well as for practitioners and
managers. The use of second-generation statistics in
the structural equation analysis allowed us to test an
entire model to determine intellectual task success.
Based on the model of DeLone and McLean [19], the
model used for this research identified links between
user satisfaction and modifications to work design,


and intellectual task success. It also showed that task
success is composed of two inter-correlated factors, as
Sanders and Courtney [55] pointed out, and that user
satisfaction can be assessed in relation to user-friendliness and output data or report quality. All the
variables and constructs adapt to a situation that
emphasizes the importance of the systems studied
(IDSS) and the intellectual tasks for which they are
used.
Part of DeLone and McLean’s model was
explained with regard to the direction of the links
between satisfaction and net individual benefit, unlike
Seddon’s model [19,57]. We in no way wish to claim
that these models are of lesser value. We are simply
stating that, in our own model, the structural equation
analysis clearly identifies the causal links. The study
of perceived improvement to work design shows one
of the consequences of IDSS use, as specified by
Seddon [57].
Overall, our findings have extended those
obtained by a number of researchers using the
studies of Sanders and Courtney [55] on information
system success factors. Our findings showed that
satisfaction with user-friendliness influences the
user’s perception of improvements to work design,
which in turn has a positive influence on perceived
job outcomes. This finding offers managers some
interesting avenues for IDSS use and development. It
appears vital for IDSS usefulness to be perceived
positively, and for the output data or reports and
user-friendliness to meet user needs and expectations. Although users perceive the task to be more
interesting and feel dependent on IDSS, they are also
able to work more accurately and more competently.
In short, IDSS users are more competent and work
more, and are satisfied with their work. Moreover,
they perceive the IDSS as being useful to the
organization, and as allowing them to improve the
quality of their own job. Thus, IDSS use allows
them to achieve task success, and the systems are
perceived as positive elements in the workplace.
IDSS designers must meet users’ needs to ensure
that users are satisfied with the product. If they do so,
users will perceive the IDSS as being useful, and will
obtain other benefits including better quality decisions. Managers must also provide users with the
required training and explain the work design changes
generated by the new system. Good training on IDSS

604

E´.M.-F. Moreau / Decision Support Systems 42 (2006) 593–607

operation and logical reasoning also helps demystify
the system and will give users more flexibility in
using it.
The model used for this study does, however, need
further work. For example, other measures should be
tested for the work design construct. The satisfaction
with use construct should also be tested in the model,
in situations where IDSS use is more recent. The
model needs to be tested again with a larger sample.
Ideally it would be preferable to be able to split the
sample into a test sample and holdback sample. At the
same time, the job satisfaction construct should be
examined using other measures, and perhaps from a
different angle, such as a moderator variable in the
relationship between satisfaction with IDSS use and
perceived task success. Other research into IT,
including Internet networks, is required to test
whether the model can apply to different contexts.
All these studies, taken together, will provide sound
empirical confirmation of the causal hypotheses
proposed in the intellectual task success model used
for this research.

Appendix A
Variables of structural model

Variables                                                               Mean    Standard    Factor
                                                                                deviation   loadings
F1 Satisfaction with management support
V6 = I have received sufficient training on IDSS                        5.47    1.40        0.53
V7 = I feel that the resources spent on the purchase of IDSS
     models is wisely invested (a)
V8 = Top management is strongly in favor of the concept of IDSS (a)
V9 = A person or group has been appointed to help me use IDSS           5.03    1.54        0.53
V10 = I obtained support and encouragement for use of IDSS
      in my work                                                        5.42    1.36        0.86
V11 = Management has given me the help and resources I need
      to use IDSS properly                                              5.20    1.32        0.77
V12 = Top management is strongly interested in my satisfaction
      when I use IDSS                                                   5.23    1.33        0.63

F2 Satisfaction with user-friendliness
V13 = Value ascribed to the IDSS                                        5.45    1.16        0.75
V14 = IDSS running time to obtain a result (a)
V15 = IDSS is easy to use                                               5.60    1.05        0.63
V16 = IDSS is easy to learn                                             5.46    1.04        0.63
V17 = Usefulness of system documentation                                4.54    1.46        0.51
V18 = Freedom to browse                                                 4.91    1.45        0.55

F3 Satisfaction with report or output data quality
V19 = Timing (data available at the right time)                         5.78    1.01        0.64
V20 = Reliability of output                                             5.66    1.01        0.80
V21 = Accuracy of output                                                5.70    0.93        0.80
V22 = Completeness of data                                              5.63    1.05        0.86
V23 = Relevance of data                                                 5.59    0.98        0.77
V24 = Conformity of the system's decisions                              5.40    1.06        0.75

F4 Perceptions of work design
V40 = Importance of the task performed using an IDSS (a)
V41 = Volume of work demanded (a)
V42 = Required level of precision (a)
V43 = Level of competency required to perform my duties                 0.76    0.80        0.53
V44 = Intrinsic interest of skills                                      0.51    0.80        0.82
V45 = Feedback concerning performance of the work                       0.42    0.70        0.63
V46 = Autonomy by the level of responsibility and freedom in
      work planning and methods                                         0.19    0.94        0.49

F5 Perceived job outcomes
V26 = I have become dependent on IDSS                                   5.00    1.45        0.57
V27 = I am now regarded as a more valuable member of the
      organization because I use IDSS                                   4.25    1.55        0.64
V28 = I personally benefited from the existence of IDSS in this
      organization                                                      4.54    1.55        0.70
V29 = In future, I will be relying on IDSS to perform my duties         5.09    1.31        0.79
V30 = All in all, I think IDSS is important to the organization         5.91    0.93        0.70
V31 = IDSS is extremely useful                                          5.95    0.94        0.54

F6 Decision quality
V32 = Utilization of IDSS has enabled me to make better decisions       5.42    1.22        0.93
V33 = As a result of IDSS, I am better able to set my priorities
      in decision making                                                5.03    1.26        0.81
V34 = Use of data generated by IDSS has enabled me to present my
      arguments more convincingly                                       5.39    1.17        0.88
V35 = IDSS has improved the quality of decisions I make in this
      organization                                                      5.17    1.31        0.85
V36 = As a result of IDSS, the speed at which I analyze decisions
      has increased                                                     5.17    1.37        0.80
V37 = As a result of IDSS, more relevant information has been
      available to me for decision making                               5.56    1.15        0.79
V38 = IDSS has led me to greater use of analytical aids in my
      decision making                                                   5.20    1.25        0.64

N.B.: The variables marked (a) (shown in italics in the original table) were non-significant and were removed from the model at the beginning of the analyses.

References
[1] R. Agarwal, M. Tanniru, Assessing the organization impacts of
information technology, International Journal of Technology
Management 7 (1992) 626 – 643.
[2] J.C. Anderson, D.W. Gerbing, Structural equation modeling in
practice: a review and recommended two-step approach,
Psychological Bulletin 103 (3) (1988) 411 – 423.
[3] P. Attewell, J. Rule, Computing and organizations: what we
know and what we don’t know, Communications of the ACM
27 (12) (1984) 1184 – 1192.
[4] H. Barki, S.L. Huff, Implementing decision support system:
correlates of user satisfaction and system usage, Information
Systems and Operational Research 28 (2) (1990) 89 – 101.
[5] I. Benbasat, B.R. Nault, An evaluation of empirical research in
managerial support systems, Decision Support Systems 6 (3)
(1990) 203 – 226.
[6] P.M. Bentler, Comparative fit indexes in structural models,
Psychological Bulletin 107 (1988) 238 – 246.
[7] P.M. Bentler, Fit indexes, Lagrange multipliers, constraint
changes, and incomplete data in structural models, Multivariate Behavioral Research 25 (1990) 163 – 172.
[8] P.M. Bentler, On the fit of models to covariances and
methodology to the bulletin, Psychological Bulletin 112
(1992) 400 – 404.
[9] P.M. Bentler, EQS Structural Equations Program Manual,
Multivariate Software Inc., Encino, California, 1995.
[10] P.M. Bentler, D.G. Bonett, Significance tests and goodness-of-fit in the analysis of covariance structures, Psychological
Bulletin 88 (1980) 588 – 606.


[11] P.M. Bentler, D.G. Weeks, Linear structural equations with
latent variables, Psychometrika 45 (1980) 289 – 308.
[12] D. Bourcier, La Décision Artificielle: Le Droit, La Machine et L'humain, Presses Universitaires de France, Paris, 1995.
[13] M.W. Browne, R. Cudeck, Alternative ways of assessing
model fit, in: K.A. Bollen, J.S. Long (Eds.), Testing Structural
Equation Models, Sage Publications, Newbury Park, California, 1993, pp. 136 – 162.
[14] T.A. Byrd, Implementation and use of expert systems in
organizations: perceptions of knowledge engineers, Journal of
Management Information Systems 8 (4) (1992) 97 – 116.
[15] Y.E. Chan, Business Strategy, Information Systems Strategy
and Strategic Fit: Measurement and Performance Impacts,
Thesis of doctorate (University of Western Ontario, 1992).
[16] L. Chen, K.S. Soliman, E. Mao, M.N. Frolick, Measuring user
satisfaction with data warehouses: an exploratory study,
Information and Management 37 (3) (2000) 103 – 110.
[17] R. Coll, J.H. Coll, D. Rein, The effect of computerized
decision aids on decision time and decision quality, Information and Management 20 (2) (1991) 75 – 81.
[18] R.V. Davis, Information technology and white collar productivity, Academy of Management Executive 5 (1) (1991)
55 – 67.
[19] W.H. DeLone, E.R. McLean, Information systems success: the
quest for the dependent variable, Information Systems
Research 3 (1) (1992) 60 – 95.
[20] W.H. DeLone, E.R. McLean, Information systems success
revisited, Proceedings of the 35th Hawaii International
Conference on System Sciences, 2002.
[21] D.H. Drury, A. Farhoomand, A hierarchical structural model
of information systems success, Information Systems and
Operational Research 36 (1/2) (1998) 25 – 40.
[22] J.S. Edwards, Experts systems in management and administration. Are they really different from decision support
systems?, European Journal of Operational Research 61
(1992) 114 – 121.
[23] P. Ein-Dor, E. Segev, Organizational context and the success
of management information systems, Management Science 24
(10) (1978) 1064 – 1077.
[24] S.B. Eom, A survey of operational expert systems in business:
1980–1993, Interfaces 26 (5) (1996) 50 – 70.
[25] G.E. Evans, J.R. Riha, Assessing DSS effectiveness using
evaluation research methods, Information and Management 16
(1989) 197 – 206.
[26] H. Farreny, Les systèmes experts, Principes et Exemples, Cépaduès-Éditions, Toulouse, France, 1985.
[27] C. Fornell, D.F. Larcker, Evaluating structural equation models
with unobservable variables and measurement error, Journal of
Marketing Research 18 (1981) 39 – 50.
[28] D.F. Galletta, A.L. Lederer, Some cautions on the measurement of user information satisfaction, Decision Sciences 20 (3)
(1989) 419 – 438.
[29] M. Gelderman, The relation between user satisfaction, usage
of information systems and performance, Information and
Management 34 (1) (1998) 11 – 18.
[30] T.G. Gill, Expert systems usage: task change and intrinsic
motivation, MIS Quarterly 20 (3) (1996) 301 – 329.


[31] D.L. Goodhue, R.L. Thompson, Task-technology fit and
individual performance, MIS Quarterly 19 (2) (1995) 213 – 236.
[32] T. Guimaraes, M. Igbaria, M. te Lu, The determinants of DSS
success: an integrated model, Decision Sciences 23 (2) (1992)
409 – 430.
[33] J.R. Hackman, G.R. Oldham, Work Redesign, AddisonWesley, Massachusetts, 1980.
[34] S. Hamilton, N.L. Chervany, Evaluating IS effectiveness—part
I: comparing evaluation approaches, MIS Quarterly 5 (3)
(1981) 55 – 69.
[35] S. Hamilton, N.L. Chervany, Evaluating IS effectiveness—part
II: comparing evaluation viewpoints, MIS Quarterly 5 (4)
(1981) 79 – 86.
[36] R.D. Hauser, F.J. Hebert, Managerial issues in expert system implementation, SAM Advanced Management Journal (1992) 10 – 15 (Winter).
[37] B. Ives, M.H. Olson, J.J. Baroudi, The measurement of user
information satisfaction, Communications of the ACM 26 (10)
(1983) 785 – 795.
[38] K.G. Jöreskog, D.G. Sörbom, LISREL 8: User's Reference Guide, Scientific Software, Chicago, 1993.
[39] A. Kambil, J.E. Short, Electronic integration and business
network redesign: a roles–linkages perspective, Journal of
Management Information Systems 10 (4) (1994) 59 – 83.
[40] P.G.W. Keen, M.S. Scott-Morton, Decision Support Systems:
An Organizational Perspective, Addison-Wesley Publishing,
Mass., 1978.
[41] E. Lawler, L. Porter, The Effect of Performance on Job Satisfaction, Industrial Relations: A Journal of Economy and Society 7 (1) (1967) 20 – 28.
[42] R.L. Leitheiser, Computer support for knowledge workers: a
review of laboratory experiments, Data Base 17 (3) (1986)
17 – 45.
[43] H.C. Lucas Jr., J.J. Baroudi, The role of information
technology in organization design, Journal of Management
Information Systems 10 (4) (1994) 9 – 23.
[44] M.A. Mahmood, G.J. Mann, Measuring the organizational
impact of information technology investment: an exploratory
study, Journal of Management Information Systems 10 (1)
(1993) 97 – 122.
[45] C. McCue, G. Gianakis, The relationship between job
satisfaction and performance: the case of local government
finance officer in Ohio, Public Productivity and Management
Review 21 (2) (1997) 170 – 187.
[46] Z. Millman, J. Hartwick, The impact of automated office
systems on middle managers and their work, MIS Quarterly 11
(4) (1987) 479 – 491.
[47] S. Molloy, C.R. Schwenk, The effects of information
technology on strategic decision making, Journal of Management Studies 32 (3) (1995) 283 – 311.
[48] R.J. Morris, Enhancing strategic vision in the strategic
management course: an empirical test of two computerized
case analysis tools, Academy of Management Best Papers
Proceedings (1990) 122 – 126.
[49] T. Mukhopadhyay, S. Kekre, S. Kalathur, Business value of
information technology: a study of electronic data interchange,
MIS Quarterly 19 (2) (1995) 137 – 156.

[50] W.J. Orlikowski, D. Robey, Information technology and the
structuring of organizations, Information Systems Research 2
(2) (1991) 143 – 169.
[51] A. Pinsonneault, A. Bourret, S. Rivard, L'impact des technologies de l'information sur les tâches des cadres intermédiaires: une étude empirique des bénéfices de l'informatisation, Technologies de l'Information et Société 5 (3) (1993) 301 – 328.
[52] M. Rahman, A. Abdul-Gader, Knowledge workers’ use of
support software in Saudi Arabia, Information and Management 25 (6) (1993) 303 – 311.
[53] L. Raymond, Organizational characteristics and MIS success in the context of small business, MIS Quarterly (1985)
37 – 53.
[54] D. Robey, S. Sahay, Transforming work through information
technology: a comparative case study of geographic information systems in county government, Information Systems
Research 7 (1) (1996) 93 – 110.
[55] L.G. Sanders, J.F. Courtney, A field study of organizational
factors influencing DSS success, MIS Quarterly 9 (1) (1985)
77 – 93.
[56] J.E. Scott, The measurement of information systems effectiveness: evaluating a measuring instrument, DATABASE Advances 26 (1) (1995) 43 – 59.
[57] P.B. Seddon, A respecification and extension of the Delone
and Mclean model of IS success, Information Systems
Research 8 (3) (1997) 240 – 253.
[58] A. Shirani, M. Aiken, B. Reithel, A model of user information
satisfaction, DATABASE 25 (4) (1994) 17 – 23.
[59] R.H. Sprague Jr., E.D. Carlson, Building Effective Decision
Support Systems, Prentice Hall, 1982.
[60] J.J. Sviokla, An examination of the impact of expert systems
on the firm: the case of XCON, MIS Quarterly 14 (2) (1990)
127 – 140.
[61] P. Todd, I. Benbasat, An experimental investigation of the
relationship between decision makers, decision aids and
decision making effort, Information Systems and Operational
Research 31 (2) (1993) 80 – 100.
[62] G. Torkzadeh, W.J. Doll, The development of a tool for
measuring the perceived impact of information technology on
work, Omega 27 (3) (1999) 327 – 339.
[63] E. Turban, P. Watkins, Integrating expert Systems and decision
support systems, MIS Quarterly (1986, June).
[64] G. Udo, Rethinking the effectiveness measures of decision
support systems, Information and Management 22 (1992)
123 – 135.
[65] R. Weber, Computer technology and jobs: an impact assessment
model, Communications of the ACM 31 (1) (1988) 68 – 77.
[66] K.E. Weick, Technology as equivoque: sensemaking in new
technologies, in: S. Goodman, S. Sproull (Eds.), Technology
and Organizations, Jossey-Bass, San Francisco, Ca, 1990,
pp. 1 – 44.
[67] R.P. Will, Individual differences in the performance and use of
an expert system, International Journal of Man Machine
Studies 37 (2) (1992) 173 – 190.
[68] S.J. Winter, S.L. Taylor, The role of IT in the transformation
of work: a comparison of post-industrial, industrial, and

proto-industrial organization, Information Systems Research
7 (1) (1996) 5 – 21.
[69] B. Wong, J. Monaco, Expert systems applications in business:
a review and analysis of the literature (1977–1993), Information and Management 29 (1995) 141 – 152.
[70] Y. Yoon, T. Guimaraes, Assessing expert systems impact on
users’ jobs, Journal of Management Information Systems 12
(1) (1995) 225 – 249.
[71] Y. Yoon, T. Guimaraes, A.B. Clevenson, Assessing determinants of desirable ES impact on end-user jobs, European
Journal of Information Systems 5 (4) (1996) 273 – 285.


Éliane Moreau is a professor of Information Systems at the Université du Québec à Trois-Rivières. She holds a PhD in management information systems from UQAM. Her current research interests include intelligent decision support systems, the impacts of the systems on individual and organizational performance, and electronic business in SMEs.