Expert systems are artificial intelligence (AI) applications that have shown great
promise as decision aids across several functional areas of management
(Feigenbaum, McCorduck & Nii, 1988; Ernst, 1988). An expert system is “a
computer program which attempts to embody the knowledge and decision-making
facilities of a human expert in order to carry out a task . . . requiring
. . . human expertise” (Beardon, 1989, p. 87). Although less extensively used than
in other areas of management, expert systems are now making their way into
the human resource management (HRM) field (Kirrane & Kirrane, 1990;
Ceriello, 1991; Lawler, 1992). Hannon, Milkovich and Sturman (1990)
identified over thirty published expert system applications in HRM and the
number has very likely grown considerably since then. Expert system
applications have been developed for many different HRM activities, including
compensation, benefits administration, staffing, training, and human resource
planning. Northcraft, Neale and Huber (1988), Besser and Frank (1989), Chu
(1990), and others have argued that expert systems could be more widely utilized
in HRM in order to improve the quality of decision making. However, empirical
evidence supporting this contention is limited (Lawler, 1992), so it is less than
a foregone conclusion that AI technology will have its intended impact on user
performance.
Direct all correspondence to: John J. Lawler, University of Illinois, Institute of Labor and Industrial
Relations, 504 East Armory Street, Champaign, IL 61820.
Expert Systems
Expert systems, designed to replicate certain abstract reasoning and
problem-solving capabilities of humans (Simon & Kaplan, 1989), are most
appropriate in helping users cope with semi-structured problems (Simon, 1977).
Semi-structured problems are those for which a considerable body of knowledge
exists as to the ways in which a given problem ought to be tackled. However,
the knowledge base is highly complex and not readily accessible to those without
specialized training. Consequently, organizations must rely on problem solvers
who have accumulated a track record of generating solutions that, while not
necessarily optimal, seem to work well. Expert problem solvers utilize heuristic,
rather than algorithmic, methods. In developing an expert system, the heuristic
methods of acknowledged experts in a specialized problem domain are
incorporated into the program (Buchanan & Smith, 1989).
Expert systems aid non-experts in solving semi-structured problems by
giving them, in effect, on-line access to expertise that may be difficult to develop
and in short supply. In typical programs, designers of expert systems utilize
various behavioral methods (e.g., verbal protocols) to identify the heuristics of
recognized experts. Although architectures vary, such heuristics are
encapsulated within the expert system, usually as a series of if-then rules. Expert
system “consultations” involve the program posing questions to the user at
various points. The answers to these questions, along with information stored
in various databases, are used to deduce solutions to problems consistent with
those that would be generated by an actual expert under similar circumstances.
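A minimal sketch may make this architecture concrete. The rules, questions, and fact names below are invented for illustration and do not come from any system discussed here; the point is simply that the heuristics live in data (if-then rules) and a consultation fires whichever rule's conditions the user's answers satisfy.

```python
# A toy consultation engine, invented for illustration: heuristics are
# stored as if-then rules over named facts, and the program poses a
# question whenever a rule needs a fact it does not yet have.
RULES = [
    # (conditions that must all hold, conclusion to return)
    ({"plan_type": "retirement", "over_50": True}, "recommend catch-up plan"),
    ({"plan_type": "retirement", "over_50": False}, "recommend standard plan"),
]

def get_fact(facts, ask, name):
    """Ask the user for a fact only the first time it is needed."""
    if name not in facts:
        facts[name] = ask(name)
    return facts[name]

def consult(ask):
    """Fire the first rule whose conditions all match the user's answers."""
    facts = {}
    for conditions, conclusion in RULES:
        if all(get_fact(facts, ask, n) == v for n, v in conditions.items()):
            return conclusion
    return "no recommendation"

# A scripted "user" standing in for interactive question-and-answer.
answers = {"plan_type": "retirement", "over_50": True}
print(consult(answers.get))  # -> recommend catch-up plan
```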
A good example of an expert system is a program called MYCIN,
developed at Stanford to help physicians diagnose certain relatively rare
infections (Buchanan & Shortliffe, 1984). The heuristic rules of expert
diagnosticians were generated through interviews and other knowledge
acquisition methods. The resulting program could then be used by physicians
with limited knowledge of the infections in question. In consultations with the
program, users would be asked a series of questions regarding the patient. The
program would respond by suggesting additional diagnostic steps. When all
relevant information had been provided, the program would provide a diagnosis
and suggest a course of treatment. MYCIN incorporated uncertainty handling
measures and would indicate a degree of confidence in the proposed diagnosis.
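MYCIN's uncertainty handling can be illustrated with its classic combining function for certainty factors. The sketch below covers only the simplest case of accumulating positive evidence; MYCIN's full calculus (negative evidence, thresholds for pursuing hypotheses) is considerably richer, and the numeric CF values here are invented.

```python
# MYCIN-style accumulation of positive evidence: each fired rule adds a
# certainty factor (CF), and the classic combining function ensures the
# total confidence grows toward, but never exceeds, 1. The CF values
# below are invented; MYCIN's full calculus also handles negative CFs.
def combine_cf(cf_so_far: float, cf_new: float) -> float:
    """Combine two positive certainty factors."""
    return cf_so_far + cf_new * (1.0 - cf_so_far)

confidence = 0.0
for cf in [0.4, 0.3, 0.5]:      # CFs contributed by three fired rules
    confidence = combine_cf(confidence, cf)
print(f"confidence in diagnosis: {confidence:.2f}")  # 0.79
```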
Theory and Hypotheses
The principal objective of this study is to discern the impact of expert
system utilization on problem-solving outcomes within an HRM context. As
others have noted, research dealing with the effects of expert systems in general
is both limited and often uninformed by behavioral decision theory (Milkovich,
Sturman & Hannon, 1993; Shanteau & Stewart, 1992). Behavioral decision
theory is concerned, among other things, with the impact of problem complexity
on information processing and problem solving. As uncertainty and/or
complexity increase, problems become less structured and the ability of problem
solvers to engage in rational decision-making processes becomes compromised
(Simon, 1960). On the other hand, various decision aids, such as expert systems,
may mitigate the effects of complexity and uncertainty. We incorporate elements
of behavioral decision theory into this study by examining the impact of both
problem-solving method (with or without the aid of an expert system) and
information-processing difficulty on choices made by experimental subjects.
Outcomes
Prior research related to the impact of computer-based decision aids has
examined both task performance and psychological outcomes. A subject’s task
performance when employing a particular problem-solving method is normally
evaluated in terms of the accuracy of solutions generated and the efficiency of
the process (Lamberti & Wallace, 1990; Coll, Coll & Rein, 1991; Sharda, Barr
& McDonnell, 1988). In the type of study undertaken here, accuracy can be
assessed by comparing a subject’s solution to a problem to that of an expert
problem solver. A standard measure of efficiency is the time required by the
subject to solve a problem (or reach an impasse).
Considerable research in the information systems field concerns the
propensity of users and potential users of an application to employ the program
effectively. Such studies are often rooted in cognitive process models of
motivation and behavior (Zmud, 1979; Melone, 1990; Doll & Torkzadeh, 1991).
Research focuses on a variety of constructs, including user beliefs, attitudes,
intended behavior, and actual behavior, as these relate to the application in
question (Thompson, Higgins & Howell, 1991). Davis, Bagozzi and Warshaw
(1989) evaluated different general models of user attitudes and behavior. One
of the models examined, termed the “technology acceptance model,” was
especially effective in explaining the voluntary use of a particular piece of
software in terms of two underlying user perceptions: the perceived usefulness
of the program for accomplishing its objectives and the perceived ease-of-use
of the program. These two constructs are increasingly employed by other
researchers in the information systems field (Adams, Nelson & Todd, 1992).
A subject’s overall task satisfaction is another psychological outcome
extensively analyzed in prior research (Aldag & Power, 1986). Task satisfaction
can be viewed as an intermediate outcome, a consequence of the perceived ease-
of-use and usefulness of a problem solving method, which, in turn, influences
a subject’s proclivity to employ the method in similar circumstances in the
future.
Problem-Solving Method
Huber (1990) provides a general theoretical analysis of the likely impact
of information technology on various aspects of organizational structure and
performance. He defines computer-based decision aids as applications designed
to facilitate an individual’s ability to accomplish a range of information
processing and decision making tasks (e.g., storing, retrieving, and reconfiguring
information). His typology includes expert systems, along with decision support
systems and information retrieval systems. Huber posited that, other things
equal, computer-based decision aids will increase both the quality (i.e., accuracy)
and efficiency of decision making. That is, the decision aid is assumed to mitigate
the impact of complexity and/ or uncertainty on the quality of problem-solving
activities. From a behavioral decision theory perspective, the extent of
improvement in accuracy and efficiency is likely to depend on the problem
context. The crucial issue, then, is the interaction of problem solving method
and problem complexity/uncertainty. We develop this argument more fully
below. However, we first consider issues relating to the main effect of problem-
solving method on accuracy and efficiency. Our first two hypotheses follow
directly from Huber’s arguments:
H1: Expert system use will increase the accuracy of HRM decisions
made by non-experts.
H2: Expert system use will decrease the time required by non-experts
to make HRM decisions.
[Figure 1. Hypothesized pattern: accuracy (vertical axis, roughly 0.4 to 0.8) plotted against complexity level for the expert system and paper-and-pencil methods.]
Our remaining hypotheses draw in part on a study by Lamberti and Wallace (1990)
that concerned the impact of uncertainty on problem-solving accuracy and efficiency:

H3: Expert system use will dampen the adverse impact of complexity on the
accuracy of HRM decisions in the mid-range of complexity, with the expert
system and paper-and-pencil methods converging at higher complexity levels
(Figure 1).

H4: The accuracy of HRM decisions made by non-experts will decline as
problem complexity increases.
Research Methods
This study involves the assessment of an expert system designed to facilitate
decision making in a classification-based job evaluation program. The program
was developed to aid those responsible for classifying clerical positions in a large,
public sector organization operating under a state civil service system. Data
on the program’s effectiveness, collected during training sessions for users, are
analyzed here.
Research Design
To test our hypotheses, an experimental study was conducted in connection
with a training program for the intended users of the expert system. All subjects
were departmental classifiers who were required to attend the training sessions.
None had been exposed to the expert system prior to training. However, all
of the subjects had at least some experience doing classification work via the
conventional paper-and-pencil method. This study utilized a 2x3 within-subjects
factorial design (two levels for problem-solving method (expert system-aided
versus paper-and-pencil) and three levels for complexity (low, medium, and
high)). Each of forty-eight subjects completed six job classification exercises
(one for each cell in the design), so there was a total of 288 observations. Subjects
were randomly assigned to one of twelve training groups. A Latin square
approach served to counterbalance the order in which the problem-solving
methods and complexity levels were presented to the subjects.
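The counterbalancing logic can be sketched as follows. The cyclic construction and the mapping of orders to the twelve groups are illustrative assumptions, not the study's documented assignment procedure.

```python
# Illustrative construction of counterbalanced presentation orders for
# the six cells (2 methods x 3 complexity levels) via a cyclic Latin
# square. The cyclic construction and the mapping of orders to the
# twelve groups are assumptions, not the study's documented procedure.
from itertools import product

cells = list(product(["expert_system", "paper_and_pencil"],
                     ["low", "medium", "high"]))      # six design cells

def latin_square_orders(items):
    """Row g presents the items rotated by g positions, so every cell
    occupies every ordinal position exactly once across the rows."""
    n = len(items)
    return [[items[(g + pos) % n] for pos in range(n)] for g in range(n)]

orders = latin_square_orders(cells)
group_orders = orders * 2                # twelve groups, six distinct orders
for g, order in enumerate(group_orders, start=1):
    print(f"group {g:2d}:", " -> ".join(f"{m}/{c}" for m, c in order))
```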
Training involved groups of four individuals who attended two training
sessions one week apart, with each session lasting approximately two hours. The
same trainers conducted all of the sessions. At various points during training,
subjects completed the job classification exercises. They were provided job
descriptions that contained information relating to the activities performed by a
job incumbent. These forms were similar to those normally used by departmental
classifiers to record information obtained while interviewing job incumbents in
the first stage of the job classification process. Subjects were asked to classify these
positions without guidance from the trainers, using either the expert system or
the conventional paper-and-pencil approach. Subjects had access to the same
written documentation they used under normal circumstances. At the end of each
classification exercise, subjects were asked to recommend both skill level and
occupational series classifications for the position. In addition, the subjects
completed the battery of questions used to construct perceived usefulness,
perceived ease-of-use, task satisfaction, and perceived complexity scales.
We used six different job descriptions in the experiment. All were of actual
positions that had already been classified by analysts in the personnel office.
The job descriptions were selected by the three skilled analysts in the personnel
department. The analysts reviewed a number of different job analyses that had
already been classified and that were drawn randomly from the department’s
files. They were asked first to select a subsample of cases that all three agreed
were correctly classified. They were then asked to choose two job descriptions
for each of the three complexity levels. The analysts were told to select cases
that reflected, in their subjective opinions, the range of task complexity typically
encountered in classification work. Only job descriptions for which the analysts
were in general agreement as to complexity level were used. In addition, experts
were able to use the program to reproduce the classification decisions for all
six job descriptions.
[Figure 2. Accuracy of Skill Level Decisions: accuracy (vertical axis, roughly 0.2 to 0.8) plotted against complexity level for the expert system and paper-and-pencil methods.]

[Figure 3. Accuracy of Occupational Series Decisions: accuracy plotted against complexity level (low, medium, high) for the two methods.]

[Figure 4. Task time (vertical axis, roughly 5 to 35) plotted against complexity level (low, medium, high) for the two methods.]

[Figure 5. Ease-of-Use: ease-of-use plotted against complexity level (low, medium, high) for the two methods.]
[Correlation matrix for the psychological outcome scales]

                Complexity  Usefulness  Ease-of-Use  Task Attitude
Complexity         1.00
Usefulness         -.48        1.00
Ease-of-Use        -.75         .58        1.00
Task Attitude      -.41         .47         .64         1.00
Statistical Analysis
The psychological outcome scales (usefulness, ease-of-use, task attitude,
and perceived complexity) are all continuous variables and were analyzed using
the appropriate ANOVA method for a two-factor within-subjects design
(Keppel & Zedeck, 1989). Within-subjects ANOVA was also used in the case
of the efficiency measure (task time). For each of these variables, statistical tests
were performed for the complexity and method main effects and for the
complexity-method interaction effect.
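As a rough illustration of this analysis, the sketch below runs a two-factor within-subjects ANOVA on simulated data (not the study's) using the AnovaRM routine in Python's statsmodels; the effect sizes in the simulation are arbitrary.

```python
# Two-factor within-subjects ANOVA on simulated data (not the study's),
# using statsmodels' AnovaRM. Effect sizes in the simulation are
# arbitrary; the layout (48 subjects x 2 methods x 3 levels) follows
# the design described above.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
rows = []
for subject in range(48):
    for method in ["expert_system", "paper_and_pencil"]:
        for i, complexity in enumerate(["low", "medium", "high"]):
            minutes = 8 + 6 * i + (10 if method == "expert_system" else 0) \
                      + rng.normal(0, 3)          # simulated task time
            rows.append((subject, method, complexity, minutes))
df = pd.DataFrame(rows, columns=["subject", "method", "complexity", "minutes"])

result = AnovaRM(df, depvar="minutes", subject="subject",
                 within=["method", "complexity"]).fit()
print(result)   # F tests for method, complexity, and their interaction
```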
As the two task performance variables (skill level accuracy and
occupational series accuracy) are dichotomous, logit analysis was used rather
than ANOVA. Logit is based on a function of the form:

P(y = 1) = exp(x'β) / [1 + exp(x'β)],

where y is the dichotomous accuracy measure, x is the vector of explanatory
variables (the method and complexity contrasts and their interactions), and β
is the vector of coefficients to be estimated.
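As an illustration of this specification, the following sketch fits such a logit model to simulated data (not the study's data) with statsmodels' formula interface; the variable names are invented for illustration.

```python
# The same specification fit to simulated data (not the study's) with
# statsmodels' formula interface; 'accurate' is the dichotomous outcome
# and the variable names are invented for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "method": rng.choice(["expert_system", "paper_and_pencil"], 288),
    "complexity": rng.choice(["low", "medium", "high"], 288),
})
# Simulated probability of a correct decision falls with complexity.
p = df["complexity"].map({"low": 0.80, "medium": 0.60, "high": 0.45})
df["accurate"] = rng.binomial(1, p)

model = smf.logit("accurate ~ method * complexity", data=df).fit(disp=0)
print(model.summary())
```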
Results
Manipulation Check
Within-subjects ANOVA was used to determine if there was a relationship
between the subject’s perceived complexity and the assigned complexity level.
We found a strong and significant complexity main effect (Table 2), with
perceived complexity increasing as a function of the assigned complexity level.
Hence the complexity manipulation appears to be valid. However, we also
found that perceived complexity was higher for problems solved using the expert
system rather than the paper-and-pencil method. The implications of this
finding are considered below. There was no statistically significant method x
complexity interaction effect.
Table 2. Means and Within-Subjects ANOVA Results

                        Usefulness  Ease-of-Use  Task Attitude  Perceived Complexity  Task Time
Method
  Paper-and-Pencil          5.45        5.06          4.67               3.22            10.08
  Expert System             4.94        4.96          4.98               3.65            21.56
  F-Ratio (df = 1,47)      18.31a       0.64          5.38b              9.95a           70.86a
Complexity
  Low                       5.69        5.62          5.07               2.28             8.81
  Medium                    5.21        5.04          4.97               3.58            16.65
  High                      4.68        4.38          4.44               4.44            22.00
  F-Ratio (df = 2,94)      16.63a      56.85a        13.16a            152.98a           91.42a
Method X Complexity
  F-Ratio (df = 2,94)       2.23        2.32c         2.37c               .79            10.82a

Notes:
a. Significant at the .01 level
b. Significant at the .05 level
c. Significant at the .10 level
Task Performance
Logit results for the skill level and occupational series decisions are reported
in Table 3 (supplementary logit analysis, described in the discussion section,
is reported in Table 4).
Skill level decisions. Expert system utilization exerted neither a
significant nor a positive effect on the accuracy of skill level decisions, thus
contradicting H1. On the other hand, the task complexity and the method x
complexity interaction effects are statistically significant at the .01 level.
Assessing these results in relation to H3 and H4 requires further interpretation
of the coding scheme.
The expert system variable contrasts expert system use (greater value) with
the paper-and-pencil approach (lesser value). CA contrasts moderate
complexity (greater value) to low complexity (lesser value), while CB contrasts
high complexity (greater value) to the average effect of moderate and low
complexity (lesser value). Thus the negative coefficients associated with CA and
CB (both significant) indicate a general decline in the accuracy of skill level
decisions with increasing complexity, which supports H4.
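For concreteness, the contrast scheme just described can be written out as a small table. The study does not report its exact numeric codes, so the values below are one conventional choice (a sketch) consistent with the stated contrasts.

```python
# One conventional set of numeric codes consistent with the contrasts
# described above (the study does not report its exact values, so these
# are assumptions): CA compares medium with low complexity; CB compares
# high complexity with the average of medium and low.
import pandas as pd

codes = pd.DataFrame({
    "complexity": ["low", "medium", "high"],
    "CA": [-1, 1, 0],
    "CB": [-1, -1, 2],
})
print(codes)
# Negative estimated coefficients on CA and CB then imply that accuracy
# falls as complexity rises, the pattern reported for skill level decisions.
```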
Hypothesis 3 posits that expert system use might dampen the adverse impact
of complexity on accuracy in the mid-range of complexity, with the expert
system and paper-and-pencil approaches converging at higher levels of
complexity (Figure 1). For this to be the case, the CA X ES coefficient should
be positive, while the CB X ES coefficient should be negative. Although the
ability of users to make accurate level classification decisions varied depending
upon whether or not they used the expert system, the coefficients for the
interaction terms are both negative, so that the interaction pattern for accuracy
of skill level decisions (Figure 2) is clearly different from the hypothetical
pattern.
A possible explanation for this is that complexity is measured in relative,
rather than absolute, terms. That is, both the analysts who initially judged task
complexity and the classifiers who responded to the complexity questions were
evaluating this factor in relation to their experiences. Hence, what they might
have viewed as classification tasks of low complexity may, in some objective
sense, be quite involved. In other words, “true” complexity may be truncated
in the lower range for skill level decisions. Consequently, the interaction effect
depicted in Figure 2 could be seen as partially consistent with H3 (in that Figure
2 corresponds somewhat to the right half of Figure 1).
Another anomalous finding depicted in Figure 2 is the relatively flat
relationship between complexity and accuracy for the paper-and-pencil solutions
(despite a negative and significant main effect). Perhaps subjects invoked personal
heuristics to cope with increasing complexity, heuristics that were not incorporated
into the expert system and that were warranted only at higher levels of complexity. While
our subjects were not experts in the sense that the analysts were, they were not
completely naive users either. All had some experience doing basic classification
work and had probably learned problem-solving techniques that might not have
been familiar to the personnel office analysts. This interpretation is consistent
with prior research on the effects of increasing complexity on decision-making
accuracy (Paquette & Kida, 1988). The results obtained for the efficiency measure
provide additional evidence as to what might have been occurring here.
Occupational series decisions. As with skill level decisions, the main effect
for expert system use is negative and insignificant for occupational series
decisions. Both the task complexity main effect and the method x complexity
interaction are significant (at the .01 and .10 levels, respectively).
An anomalous result here is that the main effect for complexity is positive
(Figure 3), though H4 anticipates a negative relationship. A possible
explanation may be that, at least for occupational series decisions, classifiers
have learned that relatively complex problems often fall into particular
categories and use this as a problem-solving heuristic. This heuristic, though
not obvious, could have been incorporated into the expert system, thus
explaining why complexity seems to impact accuracy in the same way under
both the expert system and paper-and-pencil conditions.
As Figure 3 indicates, accuracy under the expert system condition is lower
than under the paper-and-pencil condition at the low complexity level.
As complexity increases, accuracy under the two conditions converges, with
expert system-aided problem solving improving markedly in comparison to the
paper-and-pencil approach. This could be explained by a variant on the
argument presented above regarding the method x complexity interaction for
skill level decisions. Again, if we think of the complexity range as relative, then
job series decisions could be at the lower (rather than higher) end of some
absolute continuum of complexity, thus generating an interaction pattern that
corresponds to the left half of Figure 1.
Given this conjecture, the results are more consistent with H3. That is, at
low complexity levels, the expert system approach may be excessively difficult
to use compared to the task at hand, so it is outperformed by the paper-and-
pencil approach. As complexity increases, expert system accuracy gains in
relation to paper-and-pencil accuracy. H3 also suggests that the expert system
method overtakes the paper-and-pencil approach at some point. Although this
is not demonstrated to occur within the complexity range considered here, the
ES X CB interaction term is both significant and positive. Thus, the slope for
the expert system graph exceeds that of the paper-and-pencil graph in the higher
complexity region.
Discussion
problems, then perhaps there ought to have been no difference between choices
made with the expert system and the paper-and-pencil approach. Yet the
method x complexity interaction was significant for the accuracy and efficiency
measures. In fact, the subjects seemed wedded to the expert system’s
recommendations when the program was used, only rarely disagreeing with the
program. This aspect of subject behavior when utilizing an expert system
warrants further study.
Our discussion of the findings also suggests the possibility that the results
could be consistent with theoretical expectation if the skill level and
occupational series were on opposite sides of some absolute complexity scale.
We speculated that the results might suggest the skill level decisions are at the
upper end of such a distribution and the occupational series decisions are at
the lower end. If this were reversed, then our conjecture would not hold.
However, based on discussions with analysts from the personnel department,
we feel that our interpretation of the complexity dimension is reasonable.
Analysts suggested that occupational series decisions are usually easier to make
than skill level decisions. The reason for this is that much more information
must be processed and assimilated for the latter than the former type of decision.
Occupational series decisions generally rest on establishing whether the job
incumbent performs certain very specific tasks and usually only a few pieces
of information are required (see Appendix for a more extensive discussion of
the classification task). For example, if the job incumbent does any more than
a very limited amount of typing, the job will be classified as a secretary. If any
dictation work is at all required, then the job will be classified as a stenographer.
In contrast, skill level decisions require the acquisition and analysis of extensive
amounts of information relating to all aspects of the jobs. Skill level decisions
are more quantitative in character, comparing the skill levels of the various tasks
performed by an incumbent in order to ascertain the preponderant skill level
for the job as a whole. Occupational series decisions are more qualitative and
also less critical (as salary depends on skill level alone) and presumably less
stressful to make. It would, however, make sense in further research to utilize
a design that differentiated between the complexity levels of each facet of the
task at hand.
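Rendered as if-then rules, the occupational series heuristics just described might look like the sketch below; the typing threshold and the precedence of the dictation rule are not specified in the text and are assumptions.

```python
# The occupational series heuristics described above, rendered as
# if-then rules. The 10% typing threshold and the precedence of the
# dictation rule are not specified in the text; both are assumptions.
def occupational_series(takes_dictation: bool, typing_share: float) -> str:
    if takes_dictation:            # any dictation at all -> stenographer
        return "stenographer"
    if typing_share > 0.10:        # more than a very limited amount of typing
        return "secretary"
    return "other clerical series" # remaining series rules not shown

print(occupational_series(takes_dictation=False, typing_share=0.25))  # secretary
```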
What policy implications might we draw from this study? For one thing,
expert systems cannot be viewed as panaceas for managerial problems. We are
certainly a long way from being able to create automatons which can replace
humans as problem solvers in the HRM field. Managers may see expert systems
as a means of economizing on labor costs, much as robotic systems are used
in manufacturing. Expert system development costs may be substantial and the
resulting product may not be sufficiently accurate to justify the investment. Yet
it is clear that, under certain circumstances, an expert system can exceed, or
at least equal, the accuracy of a conventional problem-solving approach. Thus,
despite somewhat ambivalent results, this study does indicate that it is feasible
to develop expert systems that replicate some nontrivial problem-solving
competencies in the HRM field.
APPENDIX
Description of Task and Expert System
This appendix provides a more detailed description of the classification
task and the problem-solving methods used by the experimental subjects.
Organize Tasks
    Coordinate workflow
    - Plan workflow
Personnel Functions

[Use ARROW KEYS to move | Press <ESC> to exit | Press <F1> for HELP SCREEN]

Figure A1. Sample Expert System Screen: Tree Menu for a Major Job Task
with the job and maintains a running tally of the tasks performed and the
amount of time devoted to each task.
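A data structure along the following lines would suffice for such a tally; the program's actual internals are not shown, so this sketch is an assumption for illustration.

```python
# A sketch of the running tally described above: tasks recorded during
# the interview, with the time devoted to each. The data structure and
# field choices are assumptions; the program's internals are not shown.
from collections import defaultdict

tally = defaultdict(float)                 # task name -> hours per week

def record(task: str, hours: float) -> None:
    tally[task] += hours

record("Coordinate workflow", 10.0)
record("Plan workflow", 4.0)
record("Personnel functions", 2.5)

total = sum(tally.values())
for task, hours in tally.items():
    print(f"{task}: {hours:.1f} h/week ({hours / total:.0%} of recorded time)")
```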
The job description section of the program largely mechanizes the
information gathering process. However, unlike the paper-and-pencil approach,
the expert system provides position classification recommendations. The second
major part of the program does this. Once the job description phase is
References
Adams, D.A., Nelson, R.R. & Todd, P.A. (1992). Confidence, ease of use, and usage of information
technology: A replication. MIS Quarterly, 16: 227-247.
Aldag, R.J. & Power, D.J. (1986). An empirical assessment of computer-assisted decision analysis. Decision
Sciences, 17: 572-588.
Adelson, B. (1984). When novices surpass experts: The difficulty of a task may increase with expertise. Journal
of Experimental Psychology: Learning, Memory, and Cognition, 10: 483-495.
Beardon, C. (1989). Artificial intelligence terminology: A reference guide. New York: Halsted Press.
Besser, L.J. & Frank, G.B. (1989). Artificial intelligence and the flexible benefits decision: Can expert systems
help employees make rational choices? SAM Advanced Management Journal, 54 (Spring): 4-13.
Bozeman, W.C. & Olsen, C. (1987). Effectiveness of a decision aid for selection of statistical procedures.
Psychological Reports, 60: 835-838.
Buchanan, B.G. & Shortliffe, E.H. (1984). Rule-based expert systems: The MYCIN experiments of the
Stanford Heuristic Programming Project. Reading, MA: Addison-Wesley.
Buchanan, B.G. & Smith, R.G. (1989). Fundamentals of expert systems. Pp. 149-192 in A. Barr, P.R. Cohen
& E.A. Feigenbaum (Eds.), The handbook of artificial intelligence, Vol. 4. Reading, MA: Addison-
Wesley.
Carey, J.M. (1988). Understanding resistance to system change: An empirical study. Pp. 195-206 in J.M.
Carey (Ed.), Human factors in management information systems. Norwood, NJ: Ablex.
Cats-Baril, W.L. & Huber, G.P. (1987). Decision support systems for ill-structured problems: An empirical
study. Decision Sciences, 18: 350-372.
Ceriello, V. (1991). Human resource management systems: Strategies, tactics, and techniques. Lexington,
MA: Lexington Books.
Chamberlain, G. (1984). Panel data. Pp. 1247-1318 in Z. Griliches & M.D. Intriligator (Eds.), Handbook
of econometrics, Vol. 2. Amsterdam: North-Holland.
Chu, P. (1990). Developing expert systems for human resource planning and management. Human Resource
Planning, 13: 159-178.
Coll, R., Coll, J.H. & Rein, D. (1991). The effect of computerized decision aids on decision time and decision
quality. Information and Management, 20: 75-81.
Davis, F.D. (1986). A technology acceptance model for empirically testing new end-user information systems:
Theory and results. Doctoral dissertation, Sloan School of Management, MIT.
Davis, F.D., Bagozzi, R.P. & Warshaw, P.R. (1989). User acceptance of computer technology: A comparison
of two theoretical models. Management Science, 35: 982-1003.
Dickmeyer, N. (1983). Measuring the effects of a university planning decision aid. Management Science,
29: 673-685.
Doll, W. & Torkzadeh, G. (1991). The measurement of end-user satisfaction: Theoretical and methodological
issues. MIS Quarterly, 15: 5-10.
Ernst, C. (Ed.) (1988). Management expert systems. Reading, MA: Addison-Wesley.
Fedorowicz, J. (1992). A learning curve analysis of expert system use. Decision Sciences, 23: 797-818.
Feigenbaum, E., McCorduck, P. & Nii, H.P. (1988). The rise of the expert company: How visionary
companies are using artificial intelligence to achieve higher productivity and profits. New York: Times
Books.
Gardner, D.G. (1990). Task complexity effects on non-task-related movements: A test of activation theory.
Organizational Behavior and Human Decision Processes, 45: 209-231.
Hannon, J., Milkovich, G. & Sturman, M. (1990). Using expert systems in the management of human
resources. Working Paper No. 90-19. Center for Advanced Human Resource Studies, Cornell
University.
Hauser, R.D. & Herbert, F.J. (1992). Managerial issues in expert system implementation. SAM Advanced
Management Journal, 57(Winter): 10-15.
Hogarth, R. (1987). Judgment and choice, 2nd ed. New York: John Wiley.
Huber, G. (1990). A theory of the effects of advanced information technologies on organizational design,
intelligence, and decision making. Academy of Management Review, 15: 47-71.
Joyner, R. & Tunstall, K. (1970). Computer augmented organizational problem solving. Management
Science, 17: 212-225.
Kahneman, D., Slovic, P. & Tversky, A. (Eds.) (1982). Judgment under uncertainty: Heuristics and biases.
Cambridge, UK: Cambridge University Press.
Keppel, G. & Zedeck, S. (1989). Data analysis for research designs. New York: W.H. Freeman.
Kirrane, D.E. & Kirrane, P.R. (1990). Managing by expert systems: There’s no replacement for the HR
professional, but “big brain” systems can make the job high-tech and more efficient. HR Magazine
(March): 37-39.
Kottemann, J.E. & Davis, F.D. (1991). Decisional conflict and user acceptance of multicriteria decision-
making aids. Decision Sciences, 22: 918-926.
Lamberti, D.M. & Wallace, W.A. (1990). Intelligent interface design: An empirical assessment of knowledge
presentation in expert systems. MIS Quarterly, 14: 279-311.
Lawler, J.J. (1992). Computer-mediated information processing and decision making in human resource
management. Pp. 301-345 in G.R. Ferris & K.M. Rowland (Eds.), Research in personnel and human
resources management, Vol. 10. Greenwich, CT: JAI Press.
Lucas, H.C., Jr. & Nielsen, N.R. (1980). The impact of the mode of information presentation on performance.
Management Science, 26: 982-993.
Melone, N. (1990). A theoretical assessment of the user-satisfaction construct in information systems research.
Management Science, 36: 76-91.
Milkovich, G., Sturman, M. & Hannon, J. (1993). Effects of a flexible benefits expert system on employee
decisions and satisfaction. Working Paper No. 93-16. Center for Advanced Human Resource Studies,
Cornell University.
Motowidlo, S.J. (1986). Information processing in personnel decisions. Pp. 1-44 in G.R. Ferris & K.M.
Rowland (Eds.), Research in personnel and human resources management, Vol. 4. Greenwich, CT:
JAI Press.
Northcraft, G.B., Neale, M.A. & Huber, V.L. (1988). The effects of cognitive bias and social influence on
human resources management decisions. Pp. 157-189 in G.R. Ferris & K.M. Rowland (Eds.), Research
in personnel and human resources management, Vol. 6. Greenwich, CT: JAI Press.
Paquette, L. & Kida, T. (1988). The effect of decision strategy and task complexity on decision performance.
Organizational Behavior and Human Decision Processes, 41: 128-142.
Peterson, C.M. & Peterson, T.O. (1988). The dark side of office automation: How people resist the
introduction of office automation technology. Pp. 183-194 in J.M. Carey (Ed.), Human factors in
management information systems. Norwood, NJ: Ablex.
Shanteau, J. & Stewart, T.R. (1992). Why study expert decision making? Some historical perspectives and
comments. Organizational Behavior and Human Decision Processes, 53: 95-106.
Sharda, R., Barr, S.H. & McDonnell, J.C. (1988). Decision support system effectiveness: A review and
empirical test. Management Science, 34: 139-159.
Simon, H.A. (1960). The new science of management decision. New York: Harper and Row.
Simon, H.A. (1977). The new science of management decision, rev. ed. Englewood-Cliffs, NJ: Prentice-Hall.
Simon, H.A. & Kaplan, C.A. (1989). Foundations of cognitive science. Pp. 1-47 in M.I. Posner
(Ed.), Foundations of cognitive science. Cambridge, MA: MIT Press.
Sturman, M.C. & Milkovich, G.T. (1992). Validation of expert systems: PERSONAL CHOICE--A flexible
employee benefit system. Working Paper No. 92-33. Center for Advanced Human Resource Studies,
Cornell University.
Thompson, R.L., Higgins, C.A. & Howell, J.M. (1991). Personal computing: Toward a conceptual model.
MIS Quarterly, 15: 125-143.
Waterman, D.A. (1986). A guide to expert systems. Reading, MA: Addison-Wesley.
Wood, R.E. (1986). Task complexity: Definition of a construct. Organizational Behavior and Human
Decision Processes, 37: 60-82.
Zmud, R.W. (1979). Individual differences and MIS success: A review of the empirical literature. Management
Science, 25: 966-979.