
Journal of Management

1996, Vol. 22, No. 1, 85-111

Artificial Intelligence in HRM:
An Experimental Study of an Expert System
John J. Lawler
University of Illinois
Robin Elliot
San Francisco

This study investigates the impact of an expert system used as
a decision aid in a job evaluation system. Both performance outcomes
and psychological outcomes are analyzed in an experiment in which
the intended users of the expert system served as subjects. The study
draws largely from behavioral decision theory for its theoretical
support. Although this study examines an expert system within an
HRM context, the results are useful as one test of expert system
efficacy within the more general area of managerial decision making.

Expert systems are artificial intelligence (AI) applications that have shown great
promise as decision aids across several functional areas of management
(Feigenbaum, McCorduck & Nii, 1988; Ernst, 1988). An expert system is “a
computer program which attempts to embody the knowledge and decision-
making facilities of a human expert in order to carry out a task . . . requiring
. . . human expertise” (Beardon, 1989, p. 87). Although less extensively used than
in other areas of management, expert systems are now making their way into
the human resource management (HRM) field (Kirrane & Kirrane, 1990;
Ceriello, 1991; Lawler, 1992). Hannon, Milkovich and Sturman (1990)
identified over thirty published expert system applications in HRM and the
number has very likely grown considerably since then. Expert system
applications have been developed for many different HRM activities, including
compensation, benefits administration, staffing, training, and human resource
planning. Northcraft, Neale and Huber (1988), Besser and Frank (1989), Chu
(1990), and others have argued that expert systems could be more widely utilized
in HRM in order to improve the quality of decision making. However, empirical
evidence supporting this contention is limited (Lawler, 1992), so it is less than
a foregone conclusion that AI technology will have its intended impact on user
performance.

Direct all correspondence to: John J. Lawler, University of Illinois, Institute of Labor and Industrial
Relations, 504 East Armory Street, Champaign, IL 61820.


Expert Systems
Expert systems, designed to replicate certain abstract reasoning and
problem-solving capabilities of humans (Simon & Kaplan, 1989), are most
appropriate in helping users cope with semi-structured problems (Simon, 1977).
Semi-structured problems are those for which a considerable body of knowledge
exists as to the ways in which a given problem ought to be tackled. However,
the knowledge base is highly complex and not readily accessible to those without
specialized training. Consequently, organizations must rely on problem solvers
who have accumulated a track record of generating solutions that, while not
necessarily optimal, seem to work well. Expert problem solvers utilize heuristic,
rather than algorithmic, methods. In developing an expert system, the heuristic
methods of acknowledged experts in a specialized problem domain are
incorporated into the program (Buchanan & Smith, 1989).
Expert systems aid non-experts in solving semi-structured problems by
giving them, in effect, on-line access to expertise that may be difficult to develop
and in short supply. In typical programs, designers of expert systems utilize
various behavioral methods (e.g., verbal protocols) to identify the heuristics of
recognized experts. Although architectures vary, such heuristics are
encapsulated within the expert system, usually as a series of if-then rules. Expert
system “consultations” involve the program posing questions to the user at
various points. The answers to these questions, along with information stored
in various databases, are used to deduce solutions to problems consistent with
those that would be generated by an actual expert under similar circumstances.
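
To make this architecture concrete, the following sketch (in Python, purely for
illustration) shows a minimal forward-chaining consultation of the kind just
described. The rules, questions, and attribute names are invented examples,
not those of any actual expert system discussed in this paper.

    # Minimal sketch of an expert-system consultation: if-then rules fire
    # against facts gathered by questioning the user. All rules and names
    # here are hypothetical illustrations, not an actual deployed system.

    RULES = [
        # (conditions that must all hold, conclusion to draw)
        ({"types_documents": True, "takes_dictation": False}, "series: Secretary"),
        ({"takes_dictation": True}, "series: Stenographer"),
    ]

    QUESTIONS = {
        "types_documents": "Does the incumbent produce typed documents? (y/n) ",
        "takes_dictation": "Is any dictation work required? (y/n) ",
    }

    def consult():
        facts = {}
        for attribute, prompt in QUESTIONS.items():
            facts[attribute] = input(prompt).strip().lower().startswith("y")
        # Forward-chain: fire every rule whose conditions match the facts.
        conclusions = [c for conds, c in RULES
                       if all(facts.get(k) == v for k, v in conds.items())]
        return conclusions or ["no classification reached"]

    if __name__ == "__main__":
        print(consult())
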
A good example of an expert system is a program called MYCIN,
developed at Stanford to help physicians diagnose certain relatively rare
infections (Buchanan & Shortliffe, 1984). The heuristic rules of expert
diagnosticians were generated through interviews and other knowledge
acquisition methods. The resulting program could then be used by physicians
with limited knowledge of the infections in question. In consultations with the
program, users would be asked a series of questions regarding the patient. The
program would respond by suggesting additional diagnostic steps. When all
relevant information had been provided, the program would provide a diagnosis
and suggest a course of treatment. MYCIN incorporated uncertainty handling
measures and would indicate a degree of confidence in the proposed diagnosis.
Theory and Hypotheses
The principal objective of this study is to discern the impact of expert
system utilization on problem-solving outcomes within an HRM context. As
others have noted, research dealing in general with the effects of expert systems
is both limited and often uninformed by behavioral decision theory (Milkovich,
Sturman & Hannon, 1993; Shanteau & Stewart, 1992). Behavioral decision
theory is concerned, among other things, with the impact of problem complexity
on information processing and problem solving. As uncertainty and/or
complexity increase, problems become less structured and the ability of problem
solvers to engage in rational decision-making processes becomes compromised
(Simon, 1960). On the other hand, various decision aids, such as expert systems,
may mitigate the effects of complexity and uncertainty. We incorporate elements
of behavioral decision theory into this study by examining the impact of both
problem-solving method (with or without the aid of an expert system) and
information-processing difficulty on choices made by experimental subjects.

Outcomes
Prior research related to the impact of computer-based decision aids has
examined both task performance and psychological outcomes. A subject’s task
performance when employing a particular problem-solving method is normally
evaluated in terms of the accuracy of solutions generated and the efficiency of
the process (Lamberti & Wallace, 1990; Coll, Coll & Rein, 1991; Sharda, Barr
& McDonnell, 1988). In the type of study undertaken here, accuracy can be
assessed by comparing a subject’s solution to a problem to that of an expert
problem solver. A standard measure of efficiency is the time required by the
subject to solve a problem (or reach an impasse).
Considerable research in the information systems field concerns the
propensity of users and potential users of an application to employ the program
effectively. Such studies are often rooted in cognitive process models of
motivation and behavior (Zmud, 1979; Melone, 1990; Doll & Torkzadeh, 1991).
Research focuses on a variety of constructs, including user beliefs, attitudes,
intended behavior, and actual behavior, as these relate to the application in
question (Thompson, Higgins & Howell, 1991). Davis, Bagozzi and Warshaw
(1989) evaluated different general models of user attitudes and behavior. One
of the models examined, termed the “technology acceptance model,” was
especially effective in explaining the voluntary use of a particular piece of
software in terms of two underlying user perceptions: the perceived usefulness
of the program for accomplishing its objectives and the perceived ease-of-use
of the program.1 These two constructs are increasingly employed by other
researchers in the information systems field (Adams, Nelson & Todd, 1992).
A subject’s overall task satisfaction is another psychological outcome
extensively analyzed in prior research (Aldag & Power, 1986). Task satisfaction
can be viewed as an intermediate outcome, a consequence of the perceived ease-
of-use and usefulness of a problem-solving method, which, in turn, influences
a subject’s proclivity to employ the method in similar circumstances in the
future.
Problem-Solving Method
Huber (1990) provides a general theoretical analysis of the likely impact
of information technology on various aspects of organizational structure and
performance. He defines computer-based decision aids as applications designed
to facilitate an individual’s ability to accomplish a range of information
processing and decision making tasks (e.g., storing, retrieving, and reconfiguring
information). His typology includes expert systems, along with decision support
systems and information retrieval systems. Huber posited that, other things
equal, computer-based decision aids will increase both the quality (i.e., accuracy)
and efficiency of decision making. That is, the decision aid is assumed to mitigate
the impact of complexity and/ or uncertainty on the quality of problem-solving
activities. From a behavioral decision theory perspective, the extent of
improvement in accuracy and efficiency is likely to depend on the problem
context. The crucial issue, then, is the interaction of problem solving method
and problem complexity/uncertainty. We develop this argument more fully
below. However, we first consider issues relating to the main effect of problem-
solving method on accuracy and efficiency. Our first two hypotheses follow
directly from Huber’s arguments:

H1: Expert system use will increase the accuracy of HRM decisions
made by non-experts.
H2: Expert system use will decrease the time required by non-experts
to make HRM decisions.

There is an extensive body of empirical literature dealing with the impact of
computer-based decision aids on problem-solving performance.2 Some studies
concern expert systems, though most focus on other types of decision aids (such
as decision support systems).
In a validation study of an expert system designed to help employees
allocate credits in a flexible benefits program, Sturman and Milkovich (1992)
reported considerable success on the part of employees (i.e., non-experts) in
generating allocations consistent with what benefits counselors (i.e., experts)
would have recommended. In a related study, Milkovich et al. (1993) found
that users of this expert system were apt to change decisions that they had made
regarding benefits choices when they received negative feedback from the expert
system, thus presumably moving more in the direction of a “correct” decision.
The use of an expert system to help technicians diagnose problems in a computer
system resulted in both an increase in problem-solving accuracy and a reduction
in the amount of time required to solve problems (Lamberti & Wallace, 1990).
Federowicz (1992) reports an upward shift in the learning curves of novice
problem solvers when they are allowed to utilize an expert system.
There are, however, other studies that indicate expert systems may have
no impact on, or even decrease, problem-solving performance. Coll, Coll and
Rein (1991) studied the impact of an expert system designed to help managers
identify and clarify decision priorities. The program was applied to certain
personnel decisions (retain versus discharge a low performer). Use of the expert
system both decreased accuracy and substantially increased decision time.
Empirical studies of other types of computer-based decision aids have
generated somewhat mixed results regarding performance effects. Sharda et al.
(1988) cite several laboratory studies that directly assess the impact of use versus
non-use of computer-based decision aids on problem-solving performance.
Only about half of the studies demonstrated statistically significant
improvements in performance when such decision aids were employed.
Examples of studies demonstrating positive performance effects (accuracy or
efficiency measures) include those by Bozeman and Olsen (1987), Dickmeyer
(1983), and Lucas and Nielsen (1980). Studies failing to discern performance
effects include those by Joyner and Tunstall (1970), Cats-Baril and Huber
(1987), and Aldag and Power (1986).
In regard to the psychological outcomes under study, most prior research
has involved analyses of the impact of subjects’ perceptions and attitudes on
their subsequent use, or intention to use, the application in question. Perceptions
and attitudes are often treated as exogenous and the impact of application
utilization per se on these factors is not explicitly considered. Studies that
examine determinants of perceptions and attitudes mostly focus on contextual
factors and the personal characteristics of the subjects, rather than exposure
or nonexposure to an application (Zmud, 1979; Adams et al., 1992). We know
from prior research that people may respond rather negatively to the
introduction of a computer-based decision aid (Carey, 1988; Peterson &
Peterson, 1988). Computer applications may be viewed as a threat to employee
autonomy, status, even job security. This argument is made specifically in regard
to the introduction of expert systems by Hauser and Hebert (1992). Yet
Milkovich et al. (1993) found that an expert system increased users' satisfaction
with the quality of the choices they made.
Decision aids that improve task performance and pose no immediate threat
can still be perceived negatively by users. Prior to the introduction of the
decision aid, problems may have been solved by applying rather simple, though
inaccurate, heuristics. The decision aid in such circumstances may be disruptive.
It may also increase options and thus serve to confuse the user. Thus, we have
the paradox of a system that does, in some objective sense, simplify a problem,
but nonetheless serves to increase the user’s perceived complexity. This
phenomenon is well documented in the literature and research has often shown
that subjects will avoid otherwise effective decision aids in favor of what they
see to be a simpler, though less accurate, mode of problem solving (Kottermann
& Davis, 1991). Since the perceived ease-of-use of an application is argued to
be causally related to perceived usefulness (Davis et al., 1989), an adverse
reaction to the mechanics of working with an application may also result in
a diminished perception of its problem-solving usefulness. However, the reverse
argument seems just as reasonable. A decision aid that does not work well may
be positively perceived by users, which is precisely what Aldag and Power (1986)
found. The response of the user might, in part, depend on the consequences
of the decision. If consequences are minor, then he or she may not be as
motivated to use the decision aid as in circumstances in which the consequences
are great.
As the literature is somewhat ambiguous regarding the likely impact of
computer-based decision aids on user perceptions and attitudes, we do not
present any formal hypotheses concerning the relation of problem-solving
method to the psychological outcomes. We do, however, analyze perceptions
and attitudes in terms of problem-solving method and will relate our findings
to the issues raised above.


Interaction of Problem-Solving Method and Task Complexity


The complexity of a problem or task is generally seen as a major
determinant, other things equal, of the degree to which a problem is well-
structured. Problems become more complex as the information to be processed
increases and as the relationships among separate pieces of information become
more interconnected. As the complexity of a problem increases, the perceptual
and information-processing requirements for performing that task also increase
(Wood, 1986).
We anticipate that the impact of problem-solving method on performance
and psychological outcomes will be moderated by problem complexity
(Lamberti & Wallace, 1990). For tasks of low complexity, users with some
domain knowledge might be expected to perform at least as well without the
aid of the expert system as with it. In such cases, the solution to the problem
could be rather obvious. In fact, use of an expert system might even reduce
performance for straightforward tasks by making them excessively involved.
As task complexity increases, then the benefits of the expert system ought to
become more pronounced. Yet for extremely complex problems, the accuracy
of expert system-aided decisions could again converge with unaided decisions,
since an expert system’s limited problem-solving scope may well be exceeded;
under conditions of high complexity, non-experts should have very low
performance levels regardless of problem-solving method. This relationship for
accuracy outcomes is depicted in Figure 1.3

H3: As the complexity of a problem increases, the problem-solving
performance (both accuracy and efficiency measures) of subjects when
utilizing the expert system should improve relative to performance
without the expert system. At very high complexity levels,
performance with the expert system may deteriorate and the
performance levels under the two approaches may again converge.

As we lack a sufficient theoretical base to formulate definite hypotheses
regarding the impact of problem-solving method on the psychological outcomes
under study, we do not present hypotheses regarding method-complexity
interaction effects for the psychological outcomes. However, we examine these
relationships and report those results below.
Problem Complexity
We have included complexity as a factor in our study primarily because
we are interested in the extent to which it moderates the impact of problem-
solving method on both performance and psychological outcomes. However,
complexity is also expected to demonstrate main effects for these outcomes.
More complex problems should be more difficult to solve and thus more prone
to error, a view consistent with behavioral decision theory (Hogarth, 1987;
Kahneman, Slovic & Tversky, 1982; Paquette & Kida, 1988). Similarly, other
things equal, complexity should decrease problem-solving efficiency. Both of
the following hypotheses are also consistent with the results of an expert system

[Figure 1. Hypothetical Interaction Effect: hypothesized accuracy plotted
against complexity level for the expert system and paper-and-pencil conditions.]

study by Lamberti and Wallace (1990) that concerned the impact of uncertainty
on problem-solving accuracy and efficiency:

H4: The accuracy of HRM decisions will decrease as the complexity
of the problem increases.
H5: The time required to reach HRM decisions will increase as the
complexity of the problem increases.

Problem complexity ought to impact perceived usefulness and ease-of-use
in a similar way. As complexity increases, then, other things equal, the problem
should be perceived as more challenging. Subjects should also presumably be
less confident in the solutions that they generate.

H6: A subject's perception of the ease-of-use of a problem-solving
method will decrease as the complexity of the problem increases.
H7: A subject's perception of the usefulness of a problem-solving
method will decrease as the complexity of the problem increases.

Unfortunately, the likely impact of complexity on task satisfaction is not
so immediately clear. Intrinsically motivated individuals may find more
challenging tasks to be more satisfying, even though task performance declines.
For example, Gardner (1990), arguing from an activation theory perspective,
found task satisfaction increased with task complexity (although this study did
not involve computer-related tasks). Yet in some circumstances we might expect
that extrinsically motivated individuals, absent higher external rewards, are
apt to become less satisfied as task complexity increases. Consequently, the
complexity-satisfaction relation is not easily predicted.


Research Methods
This study involves the assessment of an expert system designed to facilitate
decision making in a classification-based job evaluation program. The program
was developed to aid those responsible for classifying clerical positions in a large,
public sector organization operating under a state civil service system. Data
on the program’s effectiveness, collected during training sessions for users, are
analyzed here.

The Job Classification Task

Clerical positions in this organization are classified according to both skill
level and occupational series (e.g., Clerk II, Secretary III, Clerk-Typist II). Skill
levels range from entry-level positions to those involving extensive experience
and responsibility. Occupational series are differentiated largely in terms of core
activities (e.g., filing, typing, dictation, data entry). Classifiers consequently
must make two relatively independent decisions. In order to make these
decisions, classifiers must analyze information on as many as eight different
job factors (e.g., document production, document processing, oral
communication, supervisory responsibilities), each of which is composed of
several specific tasks. The tasks are associated with both skill level and
occupational characteristics.
Job classification decisions were once exclusively the responsibility of
professional analysts in the organization’s personnel office. For budgetary
reasons, the personnel office extended limited authority to many of the
organization’s departments to classify certain clerical positions according to
established, though somewhat ambiguous, criteria. Non-HRM supervisors
within the departments (designated as “departmental classifiers”) were then
trained to make these decisions using a “paper-and-pencil” approach. The
paper-and-pencil approach requires the classifiers to work with a variety of
documents and instruction manuals, though the classification decision is
ultimately a judgment call.
Despite training and monitoring, administrators in the personnel office
were concerned about the quality of the decisions made by departmental
classifiers. An expert system was seen as a means of exercising better control
over actions taken by the classifiers. The expert system studied here was
developed over a two year period and involved extensive interaction between
the developers and the personnel analysts (who served as the experts upon whose
judgmental processes the program was based). The program (see Appendix)
was designed to capture both formal classification standards and the heuristic
rules of the personnel analysts. It aids classifiers in identifying job tasks and
associating those tasks with the appropriate job factor. In the end, the program
recommends both a skill level and occupational series, providing a rationale
for both. However, the recommendations are not binding and may be
overridden by the user.
Pilot studies with expert analysts from the personnel office demonstrated
high consistency between the recommendations of the experts and those
generated by the program. Such consistency serves to establish an expert
system’s construct validity (Sturman & Milkovich, 1992). Once the program
worked well in that setting, it was distributed to departmental classifiers.

Research Design
To test our hypotheses, an experimental study was conducted in connection
with a training program for the intended users of the expert system. All subjects
were departmental classifiers who were required to attend the training sessions.
None had been exposed to the expert system prior to training. However, all
of the subjects had at least some experience doing classification work via the
conventional paper-and-pencil method. This study utilized a 2x3 within-subjects
factorial design (two levels for problem-solving method (expert system-aided
versus paper-and-pencil) and three levels for complexity (low, medium, and
high)). Each of forty-eight subjects completed six job classification exercises
(one for each cell in the design), so there was a total of 288 observations. Subjects
were randomly assigned to one of twelve training groups. A Latin square
approach served to counterbalance the order in which the problem-solving
methods and complexity levels were presented to the subjects.
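
For illustration, the following sketch shows how a cyclic Latin square over the
six method x complexity conditions can be constructed; the condition labels
are ours, and the particular square used in the study is not reported here.

    # Illustrative cyclic Latin square for counterbalancing the order in which
    # the six method x complexity conditions are presented. The condition
    # labels are hypothetical; the study's actual square is not reported.

    from itertools import product

    conditions = [f"{m}/{c}" for m, c in product(
        ["expert-system", "paper-and-pencil"], ["low", "medium", "high"])]

    n = len(conditions)  # 6 conditions -> a 6 x 6 square
    latin_square = [[conditions[(row + col) % n] for col in range(n)]
                    for row in range(n)]

    # Each row is one presentation order; each condition appears exactly
    # once in every row and every column.
    for row in latin_square:
        print(row)
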
Training involved groups of four individuals who attended two training
sessions one week apart, with each session lasting approximately two hours. The
same trainers conducted all of the sessions. At various points during training,
subjects completed the job classification exercises. They were provided job
descriptions that contained information relating to the activities performed by a
job incumbent. These forms were similar to those normally used by departmental
classifiers to record information obtained while interviewing job incumbents in
the first stage of the job classification process. Subjects were asked to classify these
positions without guidance from the trainers, using either the expert system or
the conventional paper-and-pencil approach. Subjects had access to the same
written documentation they used under normal circumstances. At the end of each
classification exercise, subjects were asked to recommend both skill level and
occupational series classifications for the position. In addition, the subjects
completed the battery of questions used to construct perceived usefulness,
perceived ease-of-use, task satisfaction, and perceived complexity scales.
We used six different job descriptions in the experiment. All were of actual
positions that had already been classified by analysts in the personnel office.
The job descriptions were selected by the three skilled analysts in the personnel
department. The analysts reviewed a number of different job analyses that had
already been classified and that were drawn randomly from the department’s
files. They were asked first to select a subsample of cases that all three agreed
were correctly classified. They were then asked to choose two job descriptions
for each of the three complexity levels. The analysts were told to select cases
that reflected, in their subjective opinions, the range of task complexity typically
encountered in classification work. Only job descriptions for which the analysts
were in general agreement as to complexity level were used. In addition, experts
were able to use the program to reproduce the classification decisions for all
six job descriptions.

[Figure 2. Level Classification Decisions: accuracy of skill level decisions
plotted against complexity level for the expert system and paper-and-pencil
conditions.]

[Figure 3. Series Classification Decisions: accuracy of occupational series
decisions plotted against complexity level for the expert system and
paper-and-pencil conditions.]

Problem-solving performance was measured both in terms of accuracy and
efficiency. Separate measures were constructed for the skill level and
occupational series decisions and each of these two decisions was coded as a
dichotomous variable (1 if the subject had correctly classified the position and
0 if not). Although this was a controlled experiment, subjects were not constrained as
to the amount of time they had to perform a classification. For each problem,
a subject’s efficiency was measured as the number of minutes elapsing between
initiating the task and reaching a decision.

[Figure 4. Task Efficiency: mean task completion time (minutes) plotted
against complexity level for the expert system and paper-and-pencil conditions.]

[Figure 5. Ease-of-Use: perceived ease-of-use plotted against complexity
level for the expert system and paper-and-pencil conditions.]

Scales were constructed to measure the three psychological outcomes
described above. The subject's general perception of the ease of completing the
task with the problem-solving method utilized (i.e., expert system or paper-and-
pencil) was adapted from an ease-of-use scale developed by Davis (1986).
The items used were the same, save for wording changes to reflect differences
in the task. The alpha coefficient for this scale was .87.
We did not have a direct measure of usefulness in the study, so we have
used a three-item Likert scale to measure a subject’s confidence in the solution
reached for a given problem. Subjects were asked their degree of confidence
in each of the two aspects of the classification decisions and their overall

[Figure 6. Task Attitude: task attitude plotted against complexity level for
the expert system and paper-and-pencil conditions.]

confidence in the solution. This confidence scale is similar to those used by
Lamberti and Wallace (1990) and Aldag and Power (1986). The alpha coefficient
for the scale was .95. This confidence scale served as an indicator of a subject’s
perceived usefulness of the method employed for solving a given classification
problem. That is, the more confident a subject is that she or he has reached
a correct classification decision, then, other things equal, the more likely it is
that the subject will perceive the problem-solving method employed as useful.
Task satisfaction was measured by means of a semantic differential scale
described by Gardner (1990).4 Four task satisfaction items were used from the
Gardner scale. The anchors for these items were: good versus bad, unpleasant
versus pleasant, frustrating versus gratifying, and boring versus interesting. The
alpha coefficient for the scale was .81. A fourth scale, perceived task complexity,
was constructed as a manipulation check. Perceived complexity was measured
with a Likert scale derived from a task stimulation scale reported by Gardner
(1990).5 Three items from the Gardner scale measure task complexity. The
anchors for these items range from very simple to somewhat complex to very
complex and refer to the complexity of the classification decisions, the
complexity of thought processes required to perform the task, and the
complexity of various subtasks required to complete the classification. The
alpha coefficient for this scale was .92. The scale measured perceived complexity
for the overall task, not each of the two subtasks.
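
For readers wishing to verify such reliability estimates, Cronbach's alpha can
be computed directly from item responses using the standard formula; the
sketch below assumes a hypothetical respondents-by-items array, not the
study's actual data.

    # Cronbach's alpha for a scale: rows are respondents, columns are items.
    # Standard formula: alpha = k/(k-1) * (1 - sum(item variances)/var(total)).

    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        k = items.shape[1]                         # number of items in the scale
        item_vars = items.var(axis=0, ddof=1)      # variance of each item
        total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # Hypothetical responses: 5 subjects answering a 3-item Likert scale.
    responses = np.array([[5, 4, 5],
                          [3, 3, 4],
                          [4, 4, 4],
                          [2, 3, 2],
                          [5, 5, 4]])
    print(round(cronbach_alpha(responses), 2))
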
As a test of construct validity, we intercorrelated the four scales (Table
1). All correlation coefficients are statistically significant at the .001 level. As
would be anticipated, the ease-of-use and usefulness scales were negatively
correlated with perceived complexity. Consistent with the work of Davis et al.
(1989), the perceived ease-of-use and confidence scales are positively correlated.
The task attitude variable also behaves in a theoretically reasonable manner


Table 1. Scale Intercorrelations

                Complexity   Usefulness   Ease-of-Use   Task Attitude
Complexity         1.00
Usefulness         -.48         1.00
Ease-of-Use        -.75          .58         1.00
Task Attitude      -.41          .47          .64          1.00

in that it is positively correlated with both ease-of-use and usefulness scales.
Task attitude is negatively related to complexity. Though inconsistent with
Gardner’s (1990) work, we noted earlier that the task attitude-complexity
relationship is conceptually ambiguous. Taken as a whole, then, the pattern
of correlations among these variables is suggestive of the construct validity of
this set of scales.

Statistical Analysis
The psychological outcome scales (usefulness, ease-of-use, task attitude,
and perceived complexity) are all continuous variables and were analyzed using
the appropriate ANOVA method for a two-factor within-subjects design
(Keppel & Zedick, 1989). Within-subjects ANOVA was also used in the case
of the efficiency measure (task time). For each of these variables, statistical tests
were performed for the complexity and method main effects and for the
complexity-method interaction effect.
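
As an illustration of this analysis, a two-factor repeated-measures ANOVA can
be estimated as sketched below; the data file and column names are
hypothetical stand-ins for the study's actual data.

    # Sketch of a two-factor within-subjects ANOVA, as used for the continuous
    # outcomes. The file and column names are hypothetical placeholders.

    import pandas as pd
    from statsmodels.stats.anova import AnovaRM

    # long-format data: one row per subject x method x complexity cell
    df = pd.read_csv("classification_outcomes.csv")  # hypothetical file

    model = AnovaRM(df, depvar="task_time", subject="subject_id",
                    within=["method", "complexity"])
    print(model.fit())  # F-ratios for method, complexity, and the interaction
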
As the two task performance variables (skill level accuracy and
occupational series accuracy) are dichotomous, logit analysis was used rather
than ANOVA. Logit is based on a function of the form:

Prob(DV = 1) = 1 / (1 + e^(-Z))                                    (1.1)

where Prob(DV = 1) is the probability that the dependent variable equals 1
and e is the natural base. Z is a linear combination of the independent
variables in the analysis, which here include dummy variables representing the
main effects, along with interaction terms. The dummy variables for the main
effects were scored to assure orthogonal contrasts. Hence, the analysis is quite
similar to ANOVA via multiple regression. Z can be viewed as the propensity
of subjects to classify a position correctly and is written as:

Z = a + b(ES) + c1(CA) + c2(CB) + d1(ES x CA) + d2(ES x CB)        (1.2)

where a, b, c1, c2, d1, and d2 are parameters; ES = dummy variable indicating
expert system use; CA = dummy variable for moderate versus low complexity
contrast; CB = dummy variable for high complexity versus moderate/low
complexity contrast; ES x CA, ES x CB = interaction terms.


A maximum likelihood approach was utilized to estimate the parameters
for Equation 1.2. Statistical tests of significance for individual parameters and
groups of parameters are conducted very much as in regression analysis.6 One
problem with conventional logit estimates in repeated measures designs is that
unobserved subject-specific effects may be correlated with the independent
variables in the analysis, thus biasing parameter estimates (Chamberlain, 1984).
While there are estimation techniques to handle this complication, random
assignment and counterbalancing eliminated the problem in this study. Hence,
conventional logit estimates are reported.7
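
For illustration, Equation 1.2 can be estimated with any maximum-likelihood
logit routine; the sketch below uses orthogonal contrast codes of the kind
described above, with the same hypothetical file and variable names as the
ANOVA sketch earlier.

    # Maximum-likelihood logit estimation of Equation 1.2. The contrast
    # codes and column names are illustrative reconstructions.

    import pandas as pd
    import statsmodels.api as sm

    df = pd.read_csv("classification_outcomes.csv")  # hypothetical file

    # Orthogonal contrast codes for the main effects.
    df["ES"] = df["method"].map({"paper-and-pencil": -1, "expert-system": 1})
    df["CA"] = df["complexity"].map({"low": -1, "medium": 1, "high": 0})
    df["CB"] = df["complexity"].map({"low": -1, "medium": -1, "high": 2})
    df["ESxCA"] = df["ES"] * df["CA"]
    df["ESxCB"] = df["ES"] * df["CB"]

    X = sm.add_constant(df[["ES", "CA", "CB", "ESxCA", "ESxCB"]])
    fit = sm.Logit(df["level_correct"], X).fit()
    print(fit.summary())  # coefficients, standard errors, log-likelihood
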

Results

Manipulation Check
Within-subjects ANOVA was used to determine if there was a relationship
between the subject’s perceived complexity and the assigned complexity level.
We found a strong and significant complexity main effect (Table 2), with
perceived complexity increasing as a function of the assigned complexity level.
Hence the complexity manipulation appears to be valid. However, we also
found that perceived complexity was higher for problems solved using the expert
system rather than the paper-and-pencil method. The implications of this
finding are considered below. There was no statistically significant method x
complexity interaction effect.

Table 2. Analysis of Variance Results
(Within-Group Averages and F-Ratios)

                          Perceived    Perceived     Task       Perceived    Task
Factors                   Usefulness   Ease-of-Use   Attitude   Complexity   Completion Time

Method
  Paper-and-Pencil          5.45         5.06         4.67        3.22        10.08
  Expert System             4.94         4.96         4.98        3.65        21.56
  F-Ratio (df = 1,47)      18.31 a       0.64         5.38 b      9.95 a      70.86 a

Complexity
  Low                       5.69         5.62         5.07        2.28         8.81
  Medium                    5.21         5.04         4.97        3.58        16.65
  High                      4.68         4.38         4.44        4.44        22.00
  F-Ratio (df = 2,94)      16.63 a      56.85 a      13.16 a    152.98 a      91.42 a

Method X Complexity
  F-Ratio (df = 2,94)       2.23         2.32 c       2.37 c       .79        10.82 a

Notes: a. Significant at .01 level
       b. Significant at .05 level
       c. Significant at .10 level


Table 3. Results of Logit Analyses for Accuracy of Classification Decisions

                                             Job Level           Job Series
Independent                                  Coefficient*        Coefficient*
Variables                                    (Standard Error)    (Standard Error)

Expert System Used in Classification (ES)      -.02                -.31
                                               (.25)               (.27)

Task Complexity:                             [18.72] a
  Moderate vs. Low (CA)                        -.54
                                               (.32) c             (.29)
  High vs. Moderate and Low (CB)              -1.11                1.38
                                               (.27) a             (.31) a

Complexity x Method:                         [13.93] a           [5.29] c
  ES x CA                                     -1.45                 .85
                                               (.64) b             (.59)
  ES x CB                                     -1.71                1.12
                                               (.54) a             (.62) c

Constant                                       -.33               -2.70
                                               (.13) a             (.63) a

2x Change in Log-likelihood Ratio             33.65 a             32.94 a

Notes: * The numbers in brackets [ ] for task complexity and the interaction
         terms are Wald statistics for that set of variables.
       a. Significant at the .01 level
       b. Significant at the .05 level
       c. Significant at the .10 level

Task Performance
Logit results for the skill level and occupational series decisions are reported
in Table 3 (supplementary logit analysis, described in the discussion section,
is reported in Table 4).
Skill level decisions. Expert system utilization exerted neither a
significant nor a positive effect on the accuracy of skill level decisions, thus
contradicting H1. On the other hand, the task complexity and the method x
complexity interaction effects are statistically significant at the .01 level.
Assessing these results in relation to H3 and H4 requires further interpretation
of the coding scheme.
The expert system variable contrasts expert system use (greater value) with
the paper-and-pencil approach (lesser value). CA contrasts moderate
complexity (greater value) to low complexity (lesser value), while CB contrasts
high complexity (greater value) to the average effect of moderate and low
complexity (lesser value). Thus the negative coefficients associated with CA and
CB (both significant) indicate a general decline in the accuracy of skill level
decisions with increasing complexity, which supports H4.
Hypothesis 3 posits expert system use might dampen the adverse impact
of complexity on accuracy in the mid-range of complexity, with the expert
system and paper-and-pencil approaches converging at higher levels of


Table 4. Results of Lo& Analyses for Accuracy of C~~s~cation Decisions


(Controlling for Classifier Experience)
Job Level Job Series
Independent Coefficient* Coefficient*
Variables (Standard Error) (Standard Error)

Expert System Used in Classification (ES) -.31 -.51


c.36) t.371
Task Complexity: [ 19.781a 119:;1 a
Moderate vs. Low (CA) -.46
c.32) c.29)
High vs. Moderate and Low (CB) -1.19 1.35
(.27)’ (.31)”

Complexity x Method: f14.291a [5.46] ’


ESxCA -. I .47 .88
(.64) b (59)
ESxCB -1.76 1.14
(.5qa (.62)’

Prior Classifications .003 .Ol


(.Ol) (.Ol)
Prior Classifications x ES .03 .02
t.021 (.02)
Constant .27 .56
(.18)’ (.18)b

2x Change in Log-likelihood Ratio 33.65” 32.94”


Notes: * The numbers in brackets [ ] for task complexity and the interaction terms are Wald statistics
for that set of variables.
a. Significant at the .Ol level
b. Significant at the .05 level
c. Significant at the . 10 level

complexity (Figure 1). For this to be the case, the CA X ES coefficient should
be positive, while the CB X ES coefficient should be negative. Although the
ability of users to make accurate level classification decisions varied depending
upon whether or not they used the expert system, the coefficients for the
interaction terms are both negative, so that the interaction pattern for accuracy
of skill level decisions (Figure 2) is clearly different from the hypothetical
pattern.
A possible explanation for this is that complexity is measured in relative,
rather than absolute, terms. That is, both the analysts who initially judged task
complexity and the classifiers who responded to the complexity questions were
evaluating this factor in relation to their experiences. Hence, what they might
have viewed as classification tasks of low complexity may, in some objective
sense, be quite involved. In other words, “true” complexity may be truncated
in the lower range for skill level decisions. Consequently, the interaction effect


depicted in Figure 2 could be seen as partially consistent with H3 (in that Figure
2 corresponds somewhat to the right half of Figure 1).
Another anomalous finding depicted in Figure 2 is the relatively flat
relationship between complexity and accuracy for the paper-and-pencil solutions
(despite a negative and significant main effect). Perhaps subjects invoked
personal heuristics to cope with increasing complexity, heuristics that were not
incorporated into the expert system and were warranted only at higher levels
of complexity. While
our subjects were not experts in the sense that the analysts were, they were not
completely naive users either. All had some experience doing basic classification
work and had probably learned problem-solving techniques that might not have
been familiar to the personnel office analysts. This interpretation is consistent
with prior research on the effects of increasing complexity on decision-making
accuracy (Paquette & Kida, 1988). The results obtained for the efficiency measure
provide additional evidence as to what might have been occurring here.
Occupational series decisions. As with skill level decisions, the main effect
for expert system use is negative and insignificant for occupational series
decisions. Both the task complexity main effect and the method x complexity
interaction are significant (at the .01 and .10 levels, respectively).
An anomalous result here is that the main effect for complexity is positive
(Figure 3), though H4 anticipates a negative relationship. A possible
explanation may be that, at least for occupational series decisions, classifiers
have learned that relatively complex problems often fall into particular
categories and use this as a problem-solving heuristic. This heuristic, though
not obvious, could have been incorporated into the expert system, thus
explaining why complexity seems to impact accuracy in the same way under
both the expert system and paper-and-pencil conditions.
As Figure 3 indicates, accuracy under the expert system condition is less
than under the paper-and-pencil condition for the low complexity condition.
As complexity increases, accuracy under the two conditions converges, with
expert system-aided problem solving improving markedly in comparison to the
paper-and-pencil approach. This could be explained by a variant on the
argument presented above regarding the method x complexity interaction for
skill level decisions. Again, if we think of the complexity range as relative, then
job series decisions could be at the lower (rather than higher) end of some
absolute continuum of complexity, thus generating an interaction pattern that
corresponds to the left half of Figure 1.
Given this conjecture, the results are more consistent with H3. That is, at
low complexity levels, the expert system approach may be excessively difficult
to use compared to the task at hand, so it is outperformed by the paper-and-
pencil approach. As complexity increases, expert system accuracy gains in
relation to paper-and-pencil accuracy. H3 also suggests that the expert system
method overtakes the paper-and-pencil approach at some point. Although this
is not demonstrated to occur within the complexity range considered here, the
ES X CB interaction term is both significant and positive. Thus, the slope for
the expert system graph exceeds that of the paper-and-pencil graph in the higher
complexity region.


Efficiency. Task completion time was used to measure the efficiency
dimension of task performance. Both of the main effects were found to be
statistically significant (Table 2). As expected, task completion time increased
with complexity (H5). Completion time was greater when the expert system
was used, which runs counter to the anticipated relationship (H2). Though the
complexity x method interaction effect is statistically significant, the pattern
of the interaction effect (Figure 4) was not as anticipated (H3). For those
problems solved using the expert system, completion time is almost a linear
function of complexity level. However, for the paper-and-pencil approach, the
complexity-completion time relationship is kinked. Completion time for tasks
of moderate complexity is around twice what it is for low complexity tasks,
yet completion time rises only slightly between the moderate and high
complexity conditions.
The results obtained for the efficiency measure, though at odds with
theoretical expectation, lend support to our explanation of the anomalous
findings for accuracy measures, particularly the job level decisions. The expert
system utilizes the same, relatively rigorous analytical method across all
complexity levels. Suppose classifiers had learned that a rigorous approach does
not work as well as some short-cut technique for more complex classification
problems. That is, the more rigorous approach might become unwieldy as the
information required increases (as when problem complexity increases). Simpler
methods may then be substituted, which achieve about the same degree of
accuracy for more complex decisions, but require relatively less time. Such a
process would be consistent with the relationships depicted in Figure 2 and
Figure 4.
Psychological Outcomes
The results for the three principal psychological outcomes considered in
this study (confidence, perceived ease-of-use, and task attitude) are somewhat
similar (Table 2). All three decrease in value as complexity increases. These
findings support H6 and H7. Competing theoretical arguments meant that it
was not possible to specify the direction of the complexity effect in the case
of task attitude. Our results indicate that subjects became less satisfied with
the task as complexity increased.
Conflicting findings in prior research made it difficult to specify
unequivocal hypotheses regarding the impact of problem-solving method on
psychological outcomes. A significant (and negative) effect is observed only in
the case of user confidence. Thus the subjects seemed to sense that the expert
system approach was generally less accurate than the paper-and-pencil
approach. However, the absence of a significant method x complexity
interaction effect for user confidence also indicates that subjects did not sense
that the accuracy of the expert system method was contingent on problem
complexity.
Subjects did not seem to find the expert system more difficult to use than
the paper-and-pencil approach, even though expert system use did increase
perceived complexity. And their attitude toward the problem-solving task


tended to be more favorable in the expert system condition, though the
relationship is not significant.
The method x complexity interaction effect was insignificant in the case
of user confidence, but was found to be weakly significant (p < .10) for perceived
ease-of-use and task attitude. In the case of perceived ease-of-use (Figure 5),
the paper-and-pencil method apparently begins to diverge from the expert
system method at higher problem complexity levels. Although the relationship
is weak, it is still consistent with the argument presented above concerning
subjects possibly compensating for the greater difficulty of more complex
problems by substituting personal heuristics that would tend to ease decision
making under the paper-and-pencil approach. Likewise, subjects had a
somewhat more favorable attitude to the overall task under the expert system
condition for problems of low and medium complexity, but this difference is
eliminated for problems of high complexity (Figure 6).

Discussion

We anticipate that the human resource management field will increasingly
depend upon sophisticated information technology applications. Expert
systems and other artificial intelligence programs, still relatively rare in HRM,
are apt to become more commonplace. However, we require further research
on the efficacy of such computer-based decision aids in addressing typical HRM
problems.
The study undertaken here involved a setting in which experimental
subjects had some prior knowledge of the issue. Even though many expert
systems are designed to be used by complete novices, this is not always the case.
Expert systems may be intended for users with some understanding of a
particular area, though not at the level of an acknowledged expert. Some of
the first successful implementations of expert systems were of this type.
Examples include programs to help physicians diagnose certain diseases (e.g.,
MYCIN) and to help engineers uncover petroleum deposits (Waterman, 1986).
We have posited relationships between various performance and
psychological outcomes of an HRM problem-solving exercise and both the
complexity of the problem and the use of an expert system as a decision aid.
The hypothesized main effects of the complexity factor were, for the most part,
supported. However, hypotheses relating to the main effects associated with
expert system use were not confirmed. The main effects for expert system use
were not significant for any of the psychological outcomes nor for the accuracy
outcomes. The main effect for the efficiency outcome, while significant, was
opposite the predicted direction.
These findings, though somewhat divergent from our theoretical position,
are not inconsistent with prior research on computer-based decision aids in other
contexts. But unlike many of those studies, our work examines the interaction
of expert system use and complexity, which is an important feature of the
problem context. The interaction effects observed suggest a more complicated


explanation, at least in the case of the performance outcomes. We have noted
how these interaction effects are, in certain ways, consistent with H3.
There are some alternative interpretations of our findings that need to be
addressed. First, it might be that the expert system developed for this study
is defective, thus raising questions regarding the study’s internal validity. Yet
the program clearly generated consistent results when used by expert personnel
analysts. It had first gone through several revisions prior to implementation
and had also been pretested by some departmental classifiers (who did not serve
as experimental subjects). As noted above, this process is strongly suggestive
of the program’s construct validity. Consequently, we can rule that out as an
explanation for our findings.
A second issue concerns the departmental classifiers, who served as
experimental subjects. Our results may be the consequence of the confounding
impact of their prior training and experience with the paper-and-pencil
approach. H1, for example, anticipated greater performance under the expert
system than the paper-and-pencil condition. This was not found to be the case,
so perhaps the subjects were just used to the paper-and-pencil approach and
had not yet gained sufficient proficiency with the expert system.
We are inclined to reject such an explanation for a couple of reasons. The
subjects did not find the paper-and-pencil approach easier to use than the expert
system approach. Moreover, we conducted supplementary analysis in which
we included a variable in the logit analysis indicating the classifying experience
each subject had prior to the experiment. This varied considerably across
subjects: nearly half of the subjects had previously conducted five or fewer
classifications, although several classifiers had completed over thirty
classifications. By including the number of prior classifications and the
interaction of prior classifications with problem-solving method, we were able
to control for this effect. That is, we would anticipate a significant experience
x method interaction effect such that expert system and paper-and-pencil
differences in accuracy would be more consistent with H1 for the less
experienced classifiers (who were mostly near novices). The results of this
analysis are reported in Table 4. This interaction effect was not found to be
significant for either accuracy measure. Consequently, the experience
explanation for our findings must be rejected.
We have discussed the possibility that the subjects may have utilized
judgmental techniques not known by the personnel analysts and not
incorporated into the expert system. Indeed, there are certain types of problem
contexts in which somewhat informed novice problem solvers may outperform
experts (Adelson, 1984). The reason for this is that an expert (and perhaps an
expert system) may be burdened by excessive and extraneous information, thus
not able to “see the forest for the trees.” As most expert systems in HRM are
likely to be used by subjects like ours rather than total neophytes, research using
subjects somewhere between experts and neophytes consequently seems quite
appropriate.
If subjects were not required to accept the program’s recommendations
and if they were using personal heuristic rules in solving the classification


problems, then perhaps there ought to have been no difference between choices
made with the expert system and the paper-and-pencil approach. Yet the
method x complexity interaction was significant for the accuracy and efficiency
measures. In fact, the subjects seemed wedded to the expert system’s
recommendations when the program was used, only rarely disagreeing with the
program. This aspect of subject behavior when utilizing an expert system
warrants further study.
Our discussion of the findings also suggests the possibility that the results
could be consistent with theoretical expectation if the skill level and
occupational series were on opposite sides of some absolute complexity scale.
We speculated that the results might suggest the skill level decisions are at the
upper end of such a distribution and the occupational series decisions are at
the lower end. If this were reversed, then our conjecture would not hold.
However, based on discussions with analysts from the personnel department,
we feel that our interpretation of the complexity dimension is reasonable.
Analysts suggested that occupational series decisions are usually easier to make
than skill level decisions. The reason for this is that much more information
must be processed and assimilated for the latter than the former type of decision.
Occupational series decisions generally rest on establishing whether the job
incumbent performs certain very specific tasks and usually only a few pieces
of information are required (see Appendix for a more extensive discussion of
the classification task). For example, if the job incumbent does any more than
a very limited amount of typing, the job will be classified as a secretary. If any
dictation work is at all required, then the job will be classified as a stenographer.
In contrast, skill level decisions require the acquisition and analysis of extensive
amounts of information relating to all aspects of the jobs. Skill level decisions
are more quantitative in character, comparing the skill levels of the various tasks
performed by an incumbent in order to ascertain the preponderant skill level
for the job as a whole. Occupational series decisions are more qualitative and
also less critical (as salary depends on skill level alone) and presumably less
stressful to make. It would, however, make sense in further research to utilize
a design that differentiated between the complexity levels of each facet of the
task at hand.
What policy implications might we draw from this study? For one thing,
expert systems cannot be viewed as panaceas for managerial problems. We are
certainly a long way from being able to create automatons which can replace
humans as problem solvers in the HRM field. Managers may see expert systems
as a means of economizing on labor costs, much as robotic systems are used
in manufacturing. Expert system development costs may be substantial and the
resulting product may not be sufficiently accurate to justify the investment. Yet
it is clear that, under certain circumstances, an expert system can exceed, or
at least equal, the accuracy of a conventional problem-solving approach. Thus,
despite somewhat ambivalent results, this study does indicate that it is feasible
to develop expert systems that replicate some nontrivial problem-solving
competencies in the HRM field.


Clearly, more research is needed in HRM, both on the impact of expert
system use on outcomes and on the efficacy of various features of expert system
applications (e.g., graphical versus textual displays). Future research should
explore other dimensions of the problem context and the interaction of context
and expert system use. Another significant area of inquiry is the impact of the
personal characteristics of users on performance and psychological outcomes.
The technology of expert system development also is a potentially fruitful area.
There is now considerable experimentation with computer learning systems, so
we might envision a generation of artificial intelligence applications that both
apply heuristic rules and learn from the consequences of decisions. Finally, the
experimental method used here might be enhanced by the use of verbal
protocols, which could be used to assess the impact of different problem-solving
techniques on the manner in which subjects approach and analyze a problem.
For research employing just an expert system, it would be possible to build
routines into the program that would capture the manner in which each subject
worked through each problem (e.g., the number of steps used to reach a decision,
the order in which elements of the program were accessed). Such data could
be utilized to develop and analyze a typology of problem-solving styles
(including analysis of how such styles vary as a function of, say, task
complexity).
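By way of illustration, such capture routines could be as simple as asserting a time-stamped fact for each user action. The following is a minimal sketch in PROLOG, under the assumption of a modern implementation such as SWI-Prolog (which provides the get_time/1 built-in); the predicate names are ours, not the study program's:

    :- dynamic event/3.    % event(Subject, Action, TimeStamp)

    % Log each user action (e.g., a menu selection) as it occurs.
    log_event(Subject, Action) :-
        get_time(T),                 % current wall-clock time
        assertz(event(Subject, Action, T)).

    % Number of steps a subject took, recoverable after the session.
    step_count(Subject, N) :-
        findall(A, event(Subject, A, _), Actions),
        length(Actions, N).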

Acknowledgment: The authors acknowledge the helpful comments of the
reviewers. In addition, suggestions relating to this study were provided by Harry
Triandis, Jerry Ferris, and Joe Martocchio. Support for this project came, in
part, from grants by IBM Corporation and the University of Illinois Campus
Research Board.

APPENDIX
Description of Task and Expert System
This appendix provides a more detailed description of the classification
task and the problem-solving methods used by the experimental subjects.

Paper-and-Pencil Approach
As described in the body of the paper, the personnel department of the
organization studied had recruited a number of administrative staff to serve
as “departmental classifiers,” who were given the authority to make job
classification decisions for certain clerical job categories. Classifiers undergo
periodic training in which they are given examples of job classification problems
and shown strategies for making those decisions. They also have documentation
to which they can refer in order to aid in the classification task. There are eight
principal task categories that are analyzed in the process of completing a
classification problem: document production, written communication, oral
communication, scheduling/coordination/liaison work, filing and records
management, calculations and fiscal record keeping, document processing, and
supervision of others. These general categories are defined in the guidebooks
given to the departmental classifiers. In addition, each general task is divided
into several illustrative subtasks. For example, document production has such
subgroup categories as transcription, dictation, text entry, proofreading,
copying, and so forth. Such subtasks might be further subdivided into very
specific activities. In the case of text entry, the employee may have to follow
detailed instructions regarding formatting of the document, may have
considerable discretion in choosing from among standard formats, or may be able
to devise his or her own formats for the document. These different approaches
to a subtask represent varying (increasing) levels of responsibility and, therefore,
skill level.
The departmental classifier must, based on a detailed written job
description, ascertain the skill level and occupational series for the job in
question. The strategy is to compare the tasks listed in the job description to
sample activities in each of the major categories. Example tasks are associated
with skill level designations. The job description also contains information on
the proportion of work time the employee typically spends on each major work
activity. Skill level determination involves ascertaining the level in which the
preponderance of job activities falls. Thus, it is necessary to have a comprehensive
listing of activities pursued in connection with the job. Occupational series
decisions require discerning the extent to which certain key tasks are pursued,
which differentiate, for example, clerks from secretaries. If the classifier is unsure
of the appropriate job classification, or if the occupational series or skill level
is outside the limits within which classifiers are allowed to make classification
decisions, then the job is referred to a personnel analyst.
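The preponderance rule lends itself to a simple mechanical statement. The following PROLOG sketch is our own illustration, with invented task data and predicate names (sum_list/2 and max_member/2 are list-library predicates in implementations such as SWI-Prolog); it sums the reported time percentages at each skill level and selects the level accounting for the largest share:

    % Invented example data: task(Task, SkillLevel, PercentOfWorkTime).
    task(text_entry,   3, 40).
    task(scheduling,   3, 25).
    task(proofreading, 2, 15).
    task(filing,       2, 10).
    task(copying,      1, 10).

    level(1).  level(2).  level(3).  level(4).

    % Total proportion of work time spent at a given skill level.
    level_share(Level, Share) :-
        level(Level),
        findall(P, task(_, Level, P), Ps),
        sum_list(Ps, Share).

    % The preponderant level is the one with the largest time share.
    preponderant_level(Best) :-
        findall(Share-Level, level_share(Level, Share), Pairs),
        max_member(_-Best, Pairs).

For these data the query ?- preponderant_level(L). yields L = 3, since 65% of work time falls at that level. A classifier following the paper-and-pencil method performs essentially this computation by hand.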
Expert System Approach
The expert system used in this study was designed to simplify the process
by which classifiers assemble and evaluate job information. The paper-and-
pencil approach is complicated because there are more than a hundred different
subtasks that a classifier might have to consider in order to make a proper job
classification decision. The program was written in PROLOG, an artificial intelligence
programming language, and runs on DOS-based personal computers. It has
two principal sections. The first part of the program helps the classifier develop
a detailed job description that can then be analyzed by the system's inference
engine.
In the job description section of the program, the user is first presented
with a menu listing the eight major task categories described above. Having
chosen a category, the user is then presented with a tree-like diagram listing
the major subtasks under that major task. Figure A.1 shows such a screen for
the document production section. The user moves the cursor to different boxes
and can display information associated with the boxes (including a detailed
description of the task). When an item is chosen as a task identical or similar
to one performed by the job incumbent, questions are asked regarding the
amount of time the incumbent spends on that task (additional questions help
the user frame the time question in terms of the incumbent’s typical workday,
workweek, or other time period). The program indicates the skill level associated
with the job and maintains a running tally of the tasks performed and the
amount of time devoted to each task.
[Screen not reproduced: the display shows the major task "Supervising Others" expanded into subtask boxes (Organize Tasks, Coordinate Workflow, Plan Workflow, Provide Training, Review Work, Personnel Functions), with arrow keys to move, <ESC> to exit, and <F1> for a help screen.]
Figure A.1. Sample Expert System Screen: Tree Menu for a Major Job Task

[Screen not reproduced: for position CS36638 (file CS36638.JOB), the display reports the proportion of total work time typically spent at each level (I: 0.0%; II: 18.6%; III: 58.0%; IV: 12.3%; unknown: 11.1%), recommends Level III as the possible level for the position, and explains that because only slightly more than 50% of work is done at that level, additional questions will be asked to determine whether this is the appropriate classification level.]
Figure A.2. Sample Expert System Screen: Job Classification Recommendation

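The published description does not show the program's internals, but the running tally described above maps naturally onto PROLOG dynamic facts. A minimal sketch under that assumption, with hypothetical predicate names:

    :- dynamic tally/3.    % tally(Task, SkillLevel, PercentOfWorkTime)

    % Record a task the user selects, the skill level the program
    % associates with it, and the time estimate elicited from the user.
    record_task(Task, Level, Percent) :-
        assertz(tally(Task, Level, Percent)).

    % Running total of work time accounted for so far, so the program
    % can judge when the job description is reasonably complete.
    time_accounted(Total) :-
        findall(P, tally(_, _, P), Ps),
        sum_list(Ps, Total).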
The job description section of the program largely mechanizes the
information gathering process. However, unlike the paper-and-pencil approach,
the expert system provides position classification recommendations. The second
major part of the program does this. Once the job description phase is
completed, the user requests a classification recommendation. Figure A.2
illustrates the screen displays provided in this section of the program. Note that
both skill level and occupational series recommendations are given.
The user is also provided an explanation of the decision and a breakdown of
job activities related to each decision.
Users are free to return to the job description section and modify their
responses. In addition to making recommendations, the program also indicates
when a decision is not possible (because of insufficient information or because
the job classification is outside of the program’s domain) and then refers the
user to a personnel analyst. Finally, the program contains a context-sensitive
help system.
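Figure A.2 suggests the general shape of the inference logic in this section. The following sketch is our reconstruction, not the published rules: the time shares reproduce the figure, the 50% boundary comes from the screen text, and the 60% cutoff separating confirmed from borderline recommendations is invented for illustration:

    % Time shares by level, as displayed in Figure A.2, given here
    % directly as facts (in the program they would be computed from
    % the job description tally).
    level_share(1, 0.0).
    level_share(2, 18.6).
    level_share(3, 58.0).
    level_share(4, 12.3).
    unknown_share(11.1).

    % A clear majority yields a confirmed recommendation; a bare
    % majority (as in the figure) triggers follow-up questions; no
    % majority at all means referral to a personnel analyst.
    recommendation(level(L), confirmed) :-
        level_share(L, S), S > 60.
    recommendation(level(L), ask_followup_questions) :-
        level_share(L, S), S > 50, S =< 60.
    recommendation(refer_to_analyst, insufficient_information) :-
        \+ (level_share(_, S), S > 50).

With the figure's data, ?- recommendation(R, Status). yields R = level(3) with Status = ask_followup_questions, matching the screen's explanation that only slightly more than 50% of work is done at the recommended level.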
Notes
1. The usefulness of an application refers to its instrumentality as a problem-solving method, while ease-
of-use relates to the extent to which a user felt comfortable working with the application. Ease-of-use
and usefulness are not unrelated constructs, as Davis et al. (1989) posit that increased perceived ease-
of-use leads to increased perceived usefulness.
2. Decision aids, of course, consist of a range of techniques, including many that are not necessarily computer
based (e.g., diagrams, check lists). Although there is an extensive literature dealing with decision aids
that are not computer based, research dealing with computer-based decision aids is more relevant to
the question at hand.
3. Since efficiency is measured by the time required to complete the problem-solving task, the relationships
in Figure 1 would be inverted for that variable.
4. See Appendix B of Gardner’s (1990) paper.
5. See Appendix A of Gardner's (1990) paper.
6. A t-test is used to assess the significance of individual parameters. The overall significance of the equation
and significance tests for groups of variables are based on the logarithm of the likelihood ratio (see the
formula following these notes) and are analogous to F-ratio tests in analysis of variance and regression.
7. Both conventional and conditional logit (Chamberlain, 1984) estimates were obtained in the initial
analysis. However, the two sets of parameter estimates were identical.
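For reference, the likelihood ratio statistic mentioned in Note 6 takes the standard form (our notation, not reproduced from the paper)

    $LR = -2(\ln L_R - \ln L_U) \sim \chi^2_q$

where $L_R$ and $L_U$ are the maximized likelihoods of the restricted and unrestricted models and $q$ is the number of restrictions tested.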

References
Adams, D.A., Nelson, R.R. & Todd, P.A. (1992). Perceived usefulness, ease of use, and usage of information
technology: A replication. MIS Quarterly, 16: 227-247.
Aldag, R.J. & Power, D.J. (1986). An empirical assessment of computer-assisted decision analysis. Decision
Sciences, 17: 572-588.
Adelson, B. (1984). When novices surpass experts: The difficulty of a task may increase with expertise. Journal
of Experimental Psychology: Learning, Memory, and Cognition, 10: 483-495.
Beardon, C. (1989). Artificial intelligence terminology: A reference guide. New York: Halsted Press.
Besser, L.J. & Frank, G.B. (1989). Artificial intelligence and the flexible benefits decision: Can expert systems
help employees make rational choices? SAM Advanced Management Journal, 54 (Spring): 4-13.
Bozeman, W.C. & Olsen, C. (1987). Effectiveness of a decision aid for selection of statistical procedures.
Psychological Reports, 60: 835-838.
Buchanan, B.G. & Shortliffe, E.H. (1984). Rule-based expert systems: The MYCIN experiments of the
Stanford Heuristic Programming Project. Reading, MA: Addison-Wesley.
Buchanan, B.G. & Smith, R.G. (1989). Fundamentals of expert systems. Pp. 149-192 in A. Barr, P.R. Cohen
& E.G. Feigenbaum (Eds.), The handbook of artificial intelligence, Vol. 4. Reading, MA: Addison-
Wesley.
Carey, J.M. (1988). Understanding resistance to system change: An empirical study. Pp. 195-206 in J.M.
Carey (Ed.), Human facrors in management information systems. Norwood, NJ: Ablex.
Cats-Baril, W.L. & Huber, G.P. (1987). Decision support systems for ill-structured problems: An empirical
study. Decision Sciences, 18: 350-372.
Ceriello, V. (1991). Human resource management systems: Strategies, tactics, and techniques. Lexington,
MA: Lexington Books.

Chamberlain, G. (1984). Panel data. Pp. 1247-1318 in Z. Griliches & M.D. Intriligator (Eds.), Handbook
of econometrics, Vol. 2. Amsterdam: North-Holland.
Chu, P. (1990). Developing expert systems for human resource planning and management. Human Resource
Planning, 13: 159-178.
Coll, R., Coll, J.H. & Rein, D. (1991). The effect of computerized decision aids on decision time and decision
quality. Information and Management, 20: 75-81.
Davis, F.D. (1986). A technology acceptance model for empirically testing new end-user information systems:
Theory and results. Doctoral dissertation, Sloan School of Management, MIT.
Davis, F.D., Bagozzi, R.P. & Warshaw, P.R. (1989). User acceptance of computer technology: A comparison
of two theoretical models. Management Science, 35: 982-1003.
Dickmeyer, N. (1983). Measuring the effects of a university planning decision aid. Management Science,
29: 673-685.
Doll, W. & Torkzadeh, G. (1991). The measurement of end-user satisfaction: Theoretical and methodological
issues. MIS Quarterly, 15: 5-10.
Ernst, C. (Ed.) (1988). Management expert systems. Reading, MA: Addison-Wesley.
Fedorowicz, J. (1992). A learning curve analysis of expert system use. Decision Sciences, 23: 797-818.
Feigenbaum, E., McCorduck, P. & Nii, H.P. (1988). The rise of the expert company: How visionary
companies are using artificial intelligence to achieve higher productivity and profits. New York: Times
Books.
Gardner, D.G. (1990). Task complexity effects on non-task-related movements: A test of activation theory.
Organizational Behavior and Human Decision Processes, 45: 209-231.
Hannon, J., Milkovich, G. & Sturman, M. (1990). Using expert systems in the management of human
resources. Working Paper No. 90-19. Center for Advanced Human Resource Studies, Cornell
University.
Hauser, R.D. & Herbert, F.J. (1992). Managerial issues in expert system implementation. SAM Advanced
Management Journal, 57(Winter): 10-15.
Hogarth, R. (1987). Judgment and choice, 2nd ed. New York: John Wiley.
Huber, G. (1990). A theory of the effects of advanced information technologies on organizational design,
intelligence, and decision making. Academy of Management Review, 15: 47-71.
Joyner, R. & Tunstall, K. (1970). Computer augmented organizational problem solving. Management
Science, 17: 212-225.
Kahneman, D., Slovic, P. & Tversky, A. (Eds.) (1982). Judgment under uncertainty: Heuristics and biases.
Cambridge, UK: Cambridge University Press.
Keppel, G. & Zedeck, S. (1989). Data analysis for research designs. New York: W.H. Freeman.
Kirrane, D.E. & Kirrane, P.R. (1990). Managing by expert systems: There’s no replacement for the HR
professional, but “big brain” systems can make the job high-tech and more efficient. HR Magazine
(March): 37-39.
Kottemann, J.E. & Davis, F.D. (1991). Decisional conflict and user acceptance of multicriteria decision-
making aids. Decision Sciences, 22: 918-926.
Lamberti, D.M. & Wallace, W.A. (1990). Intelligent interface design: An empirical assessment of knowledge
presentation in expert systems. MIS Quarterly, 14: 279-311.
Lawler, J.J. (1992). Computer-mediated information processing and decision making in human resource
management. Pp. 301-345 in G.R. Ferris & K.M. Rowland (Eds.), Research in personnel and human
resources management, Vol. 10. Greenwich, CT: JAI Press.
Lucas, H.C., Jr. & Nielsen, N.R. (1980). The impact of the mode of information presentation on performance.
Management Science, 26: 982-993.
Melone, N. (1990). A theoretical assessment of the user-satisfaction construct in information systems research.
Management Science, 36: 76-91.
Milkovich, G., Sturman, M. & Hannon, J. (1993). Effects of a flexible benefits expert system on employee
decisions and satisfaction. Working Paper No. 93-16. Center for Advanced Human Resource Studies,
Cornell University.
Motowidlo, S.J. (1986). Information processing in personnel decisions. Pp. 1-44 in G.R. Ferris & K.M.
Rowland (Eds.), Research in personnel and human resources management, Vol. 4. Greenwich, CT:
JAI Press.
Northcraft, G.B., Neale, M.A. & Huber, V.L. (1988). The effects of cognitive bias and social influence on
human resources management decisions. Pp. 157-189 in G.R. Ferris & K.M. Rowland (Eds.), Research
in personnel and human resources management, Vol. 6. Greenwich, CT: JAI Press.
Paquette, L. & Kida, T. (1988). The effect of decision strategy and task complexity on decision performance.
Organizational Behavior and Human Decision Processes, 41: 128-142.

Peterson, C.M. & Peterson, T.O. (1988). The dark side of office automation: How people resist the
introduction of office automation technology. Pp. 183-194 in J.M. Carey (Ed.), Human factors in
management information systems. Norwood, NJ: Ablex.
Shanteau, J. & Stewart, T.R. (1992). Why study expert decision making? Some historical perspectives and
comments. Organizational Behavior and Human Decision Processes, 53: 95-106.
Sharda, R., Barr, S.H. & McDonnell, J.C. (1988). Decision support system effectiveness: A review and
empirical test. Management Science, 34: 139-159.
Simon, H.A. (1960). The new science of management decision. New York: Harper and Row.
Simon, H.A. (1977). The new science of management decision, rev. ed. Englewood Cliffs, NJ: Prentice-Hall.
Simon, H.A. & Kaplan, C.A. (1989). Foundations of cognitive science. Pp. 1-47 in M.I. Posner (Ed.),
Foundations of cognitive science. Cambridge, MA: MIT Press.
Sturman, M.C. & Milkovich, G.T. (1992). Validation of expert systems: PERSONAL CHOICE--A flexible
employee benefit system. Working Paper No. 92-33. Center for Advanced Human Resource Studies,
Cornell University.
Thompson, R.L., Higgins, C.A. & Howell, J.M. (1991). Personal computing: Toward a conceptual model.
MIS Quarterly, 15: 125-143.
Waterman, D.A. (1986). A guide to expert systems. Reading, MA: Addison-Wesley.
Wood, R.E. (1986). Task complexity: Definition of a construct. Organizational Behavior and Human
Decision Processes, 37: 60-82.
Zmud, R.W. (1979). Individual differences and MIS success: A review of the empirical literature. Management
Science, 25: 966-979.
