Job Analysis for Knowledge, Skills, Abilities, and Other
Characteristics, Predictor Measures, and Performance
Outcomes  
Michael T. Brannick, Adrienne Cadle, and Edward L. Levine
The Oxford Handbook of Personnel Assessment and Selection
Edited by Neal Schmitt

Print Publication Date: Mar 2012


Subject: Psychology, Organizational Psychology, Psychological Methods and Measurement
Online Publication Date: Nov 2012
DOI: 10.1093/oxfordhb/9780199732579.013.0007

Abstract and Keywords

Job analysis is the process of discovering the nature of a job. It typically results in an
understanding of the work content, such as tasks and duties, understanding what people
need to accomplish the job (the knowledge, skills, abilities, and other characteristics),
and some formal product such as a job description or a test blueprint. Because it forms
the foundation of test and criterion development, job analysis is important for personnel
selection. The chapter is divided into four main sections. The first section defines terms
and addresses issues that commonly arise in job analysis. The second section describes
common work-oriented methods of job analysis. The third section presents a taxonomy of
knowledge, skills, abilities, and other characteristics along with worker-oriented methods
of job analysis. The fourth section describes test validation strategies including
conventional test validation, synthetic validation, and judgment-based methods (content
validation and setting minimum qualifications), emphasizing the role of job analysis in
each. The last section is a chapter summary.

Keywords: job analysis, work analysis, content validity, synthetic validity, minimum qualifications

Purpose and Definitions


Job analysis refers to a broad array of activities designed to discover and document the
essential nature of work; it is a process of systematic inquiry (Brannick, Levine, &
Morgeson, 2007; Guion, 1998). Although job analysis is used for many activities such as
training, compensation, and job design, in this chapter we will be concerned with
personnel selection. In personnel selection, we want to choose from among a pool of
applicants those people best suited to the work. Job analysis provides the foundation for
such efforts by illuminating the nature of the job, and thus provides a platform for
examining both the products of work and the individual differences thought to separate
those well suited to the work from those poorly suited to the work. In other words, job
analysis tells us what to look for to select the best people. It also helps us document the
reasons for our choices and marshal empirical support for subsequent decisions by
setting the stage for validation studies. Should selection procedures be challenged under
equal employment laws, job analysis is a cornerstone of the legal defensibility of the
procedures (Brannick et al., 2007; Gatewood & Field, 2001).

In what follows, we first provide definitions of some essential terms used in job analysis.
We then describe some of the most consequential decisions that must be confronted when
completing a job analysis for selection. Next, we describe some of the most useful
conventional methods of work- and worker-oriented job analysis, noting the strengths and
weaknesses of each. Finally, we consider in more detail various test validation strategies
and how job analysis relates to each. In our treatment of job analysis, we have covered
the logic, purpose, and practice of the discovery of knowledge, skills, abilities, and other
characteristics at work. We also have (p. 120) provided links between job analysis and test
use that are organized in a way we believe to be useful to readers from diverse
backgrounds and interests.

Two Branches of Descriptors

There are many ways of organizing the business of job analysis (e.g., Brannick, Levine, &
Morgeson, 2007, use four sets of building blocks: descriptors, methods of data collection,
sources of data, and units of analysis). For this chapter, it will be useful to focus mainly on
two sets of descriptors: work activities and worker attributes. Work activities concern
what the worker does on the job. For example, an auto mechanic replaces a worn tire
with a new one, a professor creates a PowerPoint slideshow for a lecture, a salesperson
demonstrates the operation of a vacuum cleaner, and a doctor examines a patient. Worker
attributes are characteristics possessed by the worker that are useful in completing work
activities. For example, our auto mechanic must be physically strong enough to remove
and remount the tire, the professor needs knowledge of computer software to create the
slideshow, the salesperson should be sociable, and the doctor must possess hearing
sufficiently acute to use the stethoscope. For each of these jobs, of course, the worker
needs more than the characteristic just listed. The important distinction here is that work
activities describe what the worker does to accomplish the work, whereas the worker
attributes describe capacities and traits of the worker.

Work activities.
The most central of the work activities from the standpoint of job analysis is the task. The
task is a unit of work with a clear beginning and end that is directed toward the
accomplishment of a goal (e.g., McCormick, 1979). Example tasks for an auto mechanic
might include adjusting brakes or inflating tires; for the professor, a task might involve
writing a multiple choice examination. Tasks are often grouped into meaningful
collections called duties when the tasks serve a common goal. To continue the auto
mechanic example, a duty might be to tune an engine, which would be composed of a
number of tasks, such as changing spark plugs. Some methods of job analysis focus on
specific tasks to build a detailed picture of the job such that a job may be described by
100 or more tasks (e.g., the task inventory method; Christal & Weissmuller, 1988). Others
use broader task descriptions that are unique to the job but fewer in number, so that
perhaps a dozen or fewer tasks can describe a job (e.g., functional job analysis; Fine &
Cronshaw, 1999). Some conventional job analysis systems use broad work activities as
descriptors to describe essentially all jobs. For example, O*NET (short for Occupational
Information Network; Peterson, Mumford, Borman, Jeanneret, & Fleishman, 1999) uses a
set of 42 generalized work activities such as “documenting and recording information”
and “teaching others” as descriptors.

Worker attributes.
Worker attributes are conventionally described as KSAOs, for knowledge, skills, abilities,
and other characteristics. The definition of these is typically somewhat vague, but we
shall sketch concepts and list examples of each. Knowledge concerns factual, conceptual,
and procedural material, what might be termed declarative and procedural knowledge in
cognitive psychology. Examples include knowledge of what software will accomplish what
function on the computer (e.g., which program will help create a manuscript, analyze
data, or create a movie), historical facts (e.g., Washington was the first President of the
United States), and knowledge of algebra (e.g., what is the distributive property?). Skill is
closely related to procedural knowledge, in that actions are taken of a kind and in
sequences coded in the knowledge bases. Skill is thus often closely allied with
psychomotor functions. Examples of skill include competence in driving a forklift or
playing a flute. Abilities refer to capacities or propensities that can be applied to many
different sorts of knowledge and skill. Examples include verbal, mathematical, and
musical aptitudes. Other characteristics refer to personal dispositions conventionally
thought of as personality or more specialized qualities related to a specific job. Examples
of other characteristics include resistance to monotony, willingness to work in dangerous
or uncomfortable environments, and extroversion.

Job specification.
Some authors reserve the term job analysis for work activities, and use the term job
specification to refer to inferred worker personal characteristics that are required for job
success (e.g., Harvey, 1991; Harvey & Wilson, 2000). Cascio (1991) split job analysis into
job descriptions and job specifications. Here we acknowledge the important distinction
between work and worker-oriented approaches, but prefer to label the process of
discovery of both using the term “job analysis.” The essential difference between the two
types of descriptors is that work behaviors tend to be more observable (but recognize
that some behaviors, such as making up one's mind, cannot be readily observed—only the
result can be observed).

(p. 121) Position and job.


Each person at work holds a position, which is defined by the formal tasks and duties
assigned to that person. A position is objectively defined and can be directly observed by
a job analyst. A job is an abstraction illustrated by a collection of positions sufficiently
similar to one another to be considered the same for some organizational purpose, such
as personnel selection. Although a job is typically described in some detail at the end of
the job analysis, the job is an abstraction based on multiple positions and cannot be
directly observed.

The KSAOs are critical for personnel selection. The logic of the psychology of personnel
selection is (1) to identify those KSAOs that are important for the performance of a job,
(2) to select those KSAOs that are needed when the new hire begins work, and which are
practical and cost effective to measure, (3) to measure applicants on the KSAOs, and (4)
to use the measurements thus gathered in a systematic way to select the best people.
From a business standpoint, there must be an applicant pool, and those selected must
come to work after being selected. Such practicalities point to important processes
involved in recruiting, hiring, and retaining people in organizations. Those aspects are
not covered in this chapter, which is focused mainly on identifying the KSAOs.
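
To make steps (3) and (4) concrete, the following is a minimal sketch in Python; the applicant scores, KSAO names, and weights are entirely hypothetical, and in practice the weights and the combination rule would rest on the job analysis and on validation evidence.

```python
# Minimal sketch (hypothetical scores, KSAOs, and weights): combine KSAO
# measures into a composite and select applicants top-down.

applicants = {
    "Applicant A": {"job_knowledge": 82, "mechanical_ability": 74, "conscientiousness": 60},
    "Applicant B": {"job_knowledge": 70, "mechanical_ability": 88, "conscientiousness": 75},
    "Applicant C": {"job_knowledge": 91, "mechanical_ability": 65, "conscientiousness": 70},
}

# Illustrative weights; scores are assumed to be on comparable scales.
weights = {"job_knowledge": 0.5, "mechanical_ability": 0.3, "conscientiousness": 0.2}

def composite(scores):
    """Weighted sum of an applicant's KSAO scores."""
    return sum(weights[k] * v for k, v in scores.items())

# Top-down selection: rank applicants on the composite, highest first.
ranked = sorted(applicants, key=lambda name: composite(applicants[name]), reverse=True)
print(ranked)
```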

Decisions
Completing a job analysis usually requires many decisions. Unlike buying a book, where
the contents of the book are pretty much standard no matter where you buy it, the job
analysis is constructed based on what you are trying to accomplish. In a sense, job
analysis is more like writing a book than reading one. In addition to discovering the
KSAOs, you will want to document what you did in order to support the choice of KSAOs
and their measures. Some decisions are made rather early in the process and others can
be delayed. In this section, we sketch some of the decisions that need to be confronted.

Whole versus Part; Which Part

The only way to know for certain how successful a job applicant will be is to hire that
person, get him or her working, and carefully measure performance against some
standard over a sufficient period of time. Such a practice is impractical (if we want the
best, we must hire them all, evaluate them all, and only then select one), and may be
dangerous (consider such a practice for dentists or airline pilots). Even if we could hire
everyone and judge their subsequent performance (not an easy process by any means),
short-term success does not always mean longer-term success. Therefore, we settle for
safer, more practical, but less sure methods of deciding which person to hire.

Although it is desirable to select for the entire job based on the full set of KSAOs required
for success (Equal Employment Opportunity Commission, 1978, p. 38304), we typically
select for only part of the job, and only some of the KSAOs. For example, a test of
knowledge such as is used for certification in nursing will indicate whether a person
knows a resuscitation algorithm. Passing that test does not mean that the person will be
able to perform a resuscitation properly, however. People may be unable to apply what
they know. However, they cannot be expected to apply knowledge that they do not have,
so it is reasonable to test for the requisite knowledge. The knowledge is necessary but not
sufficient in this case.

There may be a large number of other characteristics that are believed to be important
for success on a job. In addition to subject matter expertise, success in teaching or
coaching may require a number of interpersonal qualities that are difficult to define
clearly (e.g., patience, the ability to explain things in multiple ways, empathy). There may
not be well-developed measures of such constructs readily available for use.

Some attributes are clearly relevant to the job, and measures are available, but their use
is questionable because of the selection context. Characteristics such as on-the-job
motivation and attitudes toward other people are difficult to measure during the job
application process because applicants typically attempt to present a favorable
impression. So although we would like to know whether a faculty member will spend time
writing a manuscript rather than surfing the internet, asking the person about what they
plan to do in this respect during the interview is not likely to provide much useful
information. Similarly, asking a person who will be working closely with others whether
they are a “team player” is likely to result in an affirmative answer during the application
regardless of their subsequent behavior.

For all these reasons, the list of KSAOs that are measured and systematically combined to
make selection decisions is typically smaller than the set that would be used if we were
not burdened with practical constraints. This is one reason that we (p. 122) desire
validation studies for selection. We want to be able to show that the subset of KSAOs for
which we have chosen or developed measures is of value for predicting job performance.
If we do a decent job of KSAO measurement, we should expect good results unless (1) the
process of selection is more expensive than the payoff in terms of job performance, (2)
the KSAOs we chose are the trivial ones rather than the important ones, or (3) the subset
of KSAOs we chose is negatively related to those we omitted (here we are assuming that
aspects beyond the focus of this chapter are taken care of, e.g., there are people who
want the job in question).

Signs and Samples, Contents and Constructs

Because they are attributes and not behaviors, KSAOs are not directly observed. Rather,
they are inferred from behavior. Harvey (1991) described such an inference as a “leap”
and questioned whether KSAOs could be compellingly justified based solely on a job
analysis. For this reason alone, it is tempting to rely on job or task simulations for
selection (see Tenopyr, 1977; Wernimont & Campbell, 1968). For example, suppose that
for the job “welder” we use a welding test. We believe that a welding test will tap
whatever KSAOs are necessary for welding, so that we need not identify the KSAOs,
measure each, and then combine them systematically to select the best candidate. If we
score the test based on the outcome of the task, then we have circumvented the problem
of the inferential leap, at least for the task. Some work samples (assessment centers, for
example) are scored based on underlying KSAOs instead of the task itself, and so do not
avoid the inferential leap. On the other hand, we still have to determine how to give and
to score the welding test. How many welds and of what kinds? What materials? How will
the quality of the welds be evaluated? How will we use the information to select the best
applicants? In other words, the measurement of applicant attributes and the systematic
use of such information are still required. Regardless of whether the leap is avoided,
choosing work samples as tests appears fair and reasonable to both applicants and hiring
managers.

Necessity of Task Information

When the goal of job analysis is selection, understanding the human requirements of the
job (i.e., the KSAOs) is essential. Regardless of whether the KSAOs are isolated and
measured separately (e.g., with a paper-and-pencil personality test) or implicitly
measured by a work sample (e.g., using a medical simulation to assess physician
competence in the diagnosis of heart diseases), the analysis should result in a description
of the main tasks and/or duties of the job. That is, the main work activities should be
documented even if the goal is to identify worker attributes. The reason for such a
prescription is practical: to defend the use of selection procedures, you must be able to
point to the requirements of the job rather than to generally desirable traits. As the
Supreme Court ruled in Griggs v. Duke Power, “What Congress has commanded is that
any test used must measure the person for the job and not the person in the
abstract” (Griggs v. Duke Power, 1971).

Job Context

Although a work sample or task simulation may appear to contain whatever KSAOs are
necessary for success on the job, the context of the job often requires additional KSAOs
that the task itself does not embody. For example, we have known of several jobs
including welder, distribution center picker, and electrician in which fear of heights
prevented people from doing the job. Some welders work on bridges, ships, boilers, or
other objects where they are essentially suspended several stories up with minimal
safeguards and a mistake could result in a fatal fall (a welder who had been working on a
bridge talked about watching his protective helmet fall toward the water for what seemed
like several minutes before it hit; as he saw the splash, he decided to quit). Many
technical jobs (e.g., computer technician) have heavy interpersonal requirements that
might not be tapped in a work sample test that required debugging a program or
troubleshooting a network connection. Of course, work samples can be designed to
include the crucial contextual components. To do so, however, someone must decide to
include the contextual components, and such a decision would likely be based on the idea
that important KSAOs were tapped by doing so. The insight about the importance of the
KSAO would come from a job analysis.

Larger Context

In many cases, jobs are connected. Information, products, or services from one position
are crucial for the performance of another position. In the process of describing a job,
such connections are often neglected unless they are the central function of the job of
interest. However, to the extent that jobs are (p. 123) interconnected for the achievement
of organizational goals, the selection of the best people for the job may depend upon
KSAOs that come into play at the jobs’ intersection. Additionally, as we move from a
manufacturing economy to a service economy, jobs with apparently similar tasks may be
performed in importantly different ways. For example, sales jobs may emphasize different
sorts of behaviors depending upon the host organization (e.g., methods used in
automobile sales can vary quite a bit depending upon the type of car). People are
sensitive to subtle nuances in interpersonal communication, so that apparently minor
differences in behavior may be quite important for job performance when the job involves
providing individual services to clients (e.g., in medicine, law, or hair care).

Choice of Scales

Many systems of job analysis require the analyst or an incumbent to provide evaluative
ratings of aspects of the job. For example, the job elements method (Primoff & Eyde,
1988) requires incumbents to make ratings such as whether trouble is likely if a new
employee lacks a particular characteristic upon arrival. In the task inventory (e.g.,
Christal & Weissmuller, 1988), the incumbent rates each task on one or more scales such
as frequency of performing, difficulty to learn, consequence of error, and importance to
the job. Although many have argued eloquently that the choice of scale should follow the
intended purpose of the use of job analysis information (Christal & Weissmuller, 1988;
McCormick, 1976; Brannick et al., 2007), legal considerations suggest that some measure
of overall importance should be gathered to bolster arguments that the selection
procedures are based on attributes that are important or essential for job performance.

Christal has argued against directly asking for importance ratings because it is not clear
to the incumbent what aspects of the tasks should be used to respond appropriately. A
task might be important because it is done frequently, or because a mistake on an
infrequent task could have dire consequences, or because the incumbent views the task
as most closely related to the purpose of the job. Christal has argued that it is better to
ask directly for the attribute of interest: if you are interested in the consequences of
error, for example, you should ask “what is the consequence if this task is not performed
correctly?” Others have argued for combining multiple attributes into an index of
importance (e.g., Levine, 1983; Primoff & Eyde, 1988). Sanchez and Levine (1989)
recommended that a composite of task criticality and difficulty to learn should serve as an
index of importance. However, Sanchez and Fraser (1994) found that direct judgments of
overall importance were as reliable as composite indices. It is not entirely clear that a
composite yields a more valid index of importance than directly asking the incumbents for
their opinion. On the other hand, in some cases, composites could reduce the number of
questions to be asked, and thus improve efficiency. You might wish to select people only
for some tasks, and train for other tasks, for example.
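
As an illustration of the composite approach, the sketch below computes a composite importance index in the spirit of Sanchez and Levine (1989); the tasks and ratings are hypothetical, and the equal weighting of criticality and difficulty to learn is only one of several defensible choices.

```python
# Hypothetical ratings on 1-5 scales for a few auto mechanic tasks.
tasks = {
    "Adjust brakes":      {"criticality": 4.6, "difficulty_to_learn": 3.2},
    "Inflate tires":      {"criticality": 2.1, "difficulty_to_learn": 1.3},
    "Change spark plugs": {"criticality": 3.8, "difficulty_to_learn": 2.7},
}

for task, r in tasks.items():
    # Equal weighting of the two attributes is illustrative only.
    importance = (r["criticality"] + r["difficulty_to_learn"]) / 2
    print(f"{task}: composite importance = {importance:.2f}")
```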

Under the Americans with Disabilities Act, a person cannot be rejected for a job based on
job functions that are not essential, and KSAOs that are chosen for selection must be
shown to be related to the job should their use result in adverse impact to a protected
class of job applicants. Therefore, some means of documenting the importance of the
tasks and KSAOs ultimately used in selection is highly recommended.

Time

An often neglected aspect of a job's or task's performance is the dimension of time. Tasks
may require speedy performance (as in sports), close attention to detail, or vigilant
attention over long periods in which significant events occur rarely and the rest is
downtime. Jobs may call for rotating or extended shifts, working especially early or late at
certain points, being on call, and working on weekends or holidays. Attributes such as
energy, tolerance for boredom, conscientiousness, and willingness to work requisite
schedules may be essential for success when elements linked to time are critical in a job.
Many of these attributes fall under the Other Characteristics heading, and deserve
careful consideration in the choice of KSAOs to include in the mix used for selection.

Task Detail

For selection, the description of the task content usually need not be as detailed as it
would be for training. If the task content is to be used to infer standard abilities such as
near vision or arm strength, then the tasks need to be specified in sufficient detail only to
support the necessity of the KSAO. Content is still necessary to sort out differences in
KSAOs, though. It matters if someone digs ditches using a shovel or a backhoe because
the KSAOs are different. On the other hand, if the job analysis is (p. 124) to support a
knowledge test such as might be found in certification, then much greater task detail will
be necessary. In developing a “content valid” test, rather than supporting the inference
that the KSAO is necessary (e.g., the job requires skill in operating a backhoe), it is
necessary to map the knowledge domain onto a test (e.g., exactly what must you know to
operate a backhoe safely and efficiently?).

Abilities and Setting Standards

The statistical model describing functional relations between abilities or capacities and
job performance is rarely specified by theory before the job analysis is begun. When data
are subsequently analyzed for test validation, however, there is usually the implicit
assumption of linear relations between one or more tests and a single measure of
performance. The way in which people describe job analysis and selection, however,
suggests that rather different implicit assumptions are being made about the relations
between ability and performance. Furthermore, such implicit assumptions often appear to
be nonlinear. Here we sketch some common practices and implicit assumptions that are
congruent with them. We do not claim that people who follow a given practice necessarily
make the assumption, but if they disagree with the assumption, it would be difficult to
justify the practice as best from a selection standpoint.

An implicit assumption that is consistent with setting minimum qualifications for selection
is that some KSAOs are necessary up to a point, but additional benefit does not accrue
from higher standing on the KSAO. For example, a task might require copying words or
numbers from a source to a computer program. The task should require the function
“copying” in the functional job analysis typology, and would probably require cognitive
skill in reading. People lacking the appropriate language, perceptual, and motor skills
would struggle with the task or simply fail to do it. At a certain level (typically attained by
elementary school children), people can master the task. Higher skills such as writing
would be of essentially no benefit in completing the task—being smarter, for example, is
of little help in a task requiring copying sequences of random numbers. The implicit
assumption is that the relation between the ability and performance is essentially a step
function at a low level. Something is needed to carry out the task at all; if someone lacks
that necessary something, they cannot do the work. Otherwise, however, more of the
something is not helpful. Sufficient vision is needed to read, for another example, but
after a certain point, better vision does not yield better reading.

Competencies are often described in a manner consistent with an implicit step function at
a high level. Boyatzis (1982, p. 21) defined a competency as “an underlying characteristic
of a person, which results in an effective and/or superior performance of a job.” In the job
element method, one of the ratings is for “superior.” This is used to identify elements that
distinguish superior workers from other workers. Primoff and Eyde (1988) noted that
breathing might be needed for a job, but it would not distinguish the superior worker, so
it would not be marked using the “superior” scale. Unlike breathing, competencies are
expected to discriminate among workers at the high end rather than the low end of
performance.

Other KSAOs might be expected to be related to performance in a more linear fashion.
Conscientiousness, for example, should be related to the number of errors in typing,
errors in routine calculations in accounting, or counts of items in inventory. In other
words, we might expect fewer mistakes as conscientiousness increases across the scale.
As a side note, many personality traits could be described as having ideal point functions
relating trait standing to job performance, so that a person might have too much or too
little of the trait—consider the trait Agreeableness for a police officer, for example. If
such a hypothesis were taken seriously, it would require two cutoff scores, one for too
little agreeableness and another for too much.
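
The three implicit assumptions just described can be made concrete with simple illustrative functions; the thresholds and functional forms in the sketch below are arbitrary and serve only to show the shapes of the assumed relations between a KSAO and performance.

```python
# Illustrative only: three hypothetical forms relating standing on a KSAO
# (x, scored 0-100) to expected performance.

def step_at_low_level(x, threshold=30.0):
    """Minimum-qualification logic: below the threshold the work cannot be
    done at all; above it, more of the attribute adds nothing."""
    return 0.0 if x < threshold else 1.0

def linear(x, slope=0.01):
    """More of the attribute yields steadily better performance
    (e.g., fewer errors as conscientiousness increases)."""
    return slope * x

def ideal_point(x, optimum=60.0, width=25.0):
    """Performance peaks at an intermediate level of the trait and falls
    off with too little or too much (e.g., agreeableness for a police officer)."""
    return max(0.0, 1.0 - abs(x - optimum) / width)

for x in (10, 40, 60, 90):
    print(x, step_at_low_level(x), round(linear(x), 2), round(ideal_point(x), 2))
```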

The relation between KSAOs and performance matters in selection for two different
reasons. If a step function is correct, then setting a standard for the KSAO for selection is
critical. Setting the standard too low for minimum qualifications, for example, would
result in hiring people who could not perform the job. The level at which the cutoff should
be set is a matter of judgment, and thus is an additional inferential leap that may be
attacked as discriminatory without an additional claim of job relatedness. If a linear
relation is correct, then selecting people from the top down can be defended anywhere on
the scale as improving the workforce. Although selecting people on the predictor will
result in range restriction for a validation study, the effect is fairly well understood and
can be corrected using statistical formulas. On the other hand, if a step function is
correct, the range restriction is potentially more serious because if a study (p. 125)
includes only people above the step, then there will be no observable relation between
the KSAO and performance, and no statistical correction will be applicable.
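
For the linear case, the correction mentioned above can be illustrated with the familiar formula for direct range restriction on the predictor (commonly known as Thorndike's Case II); the sample values in the sketch are hypothetical.

```python
import math

def corrected_validity(r_restricted, sd_unrestricted, sd_restricted):
    """Correct an observed validity coefficient for direct range restriction
    on the predictor (the classic Thorndike Case II formula)."""
    u = sd_unrestricted / sd_restricted
    return (r_restricted * u) / math.sqrt(
        1 - r_restricted ** 2 + (r_restricted ** 2) * (u ** 2)
    )

# Hypothetical example: an observed r of .20 in a selected sample whose
# predictor standard deviation is half that of the applicant pool.
print(round(corrected_validity(0.20, sd_unrestricted=10.0, sd_restricted=5.0), 3))
```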

Another way of thinking about abilities and standards is to consider setting standards
from a decision-making perspective. Essentially, we might ask what the employer is trying
to accomplish by setting standards (other than a mean gain in job performance that could
be predicted using a regression equation). At the low end (minimum qualifications), the
employer may set a relatively low bar in order to cast as wide a net as possible so as to
have as many applicants as possible (in case of a labor shortage), or to lower the cost of
labor (less skilled labor tends to receive lower wages), or to minimize the adverse impact
caused by testing. On the other hand, an employer might set the bar as high as possible
to minimize large losses caused by spectacular mistakes, to achieve a competitive
advantage through star employees’ development of exceptional products and services, or
perhaps to gain a reputation for hiring only the best.

At this point it should be clear that there is considerable judgment required for the
establishment of standards for selection, and there could be many different reasons for
choosing a cutoff point for applicants, not all of which depend directly upon the
applicant's ability to succeed on the job. Reasons external to job performance are not
supported by conventional job analysis procedures.

Choice of Procedures

The discovery of KSAOs may proceed in many ways; it is up to the analyst to select an
approach that best suits the needs and resources of the organization faced with a
selection problem. Brannick et al. (2007) provide more detail about selecting an approach
than we can here. However, in the following two sections, we describe some of the job
analysis procedures that are most widely used for personnel selection. The first section
describes those procedures that are primarily task based (work oriented). The second
section describes those that are primarily worker oriented. The first section describes the
methods individually because they are most easily understood when presented in this
manner. The worker-oriented methods are described in less detail and are organized by
type of attribute. This was done for efficiency—many of the worker-oriented methods
cover the same traits.


Common Methods of Job Analysis


Conventional Task-Oriented Procedures

Although there are many conventional job analysis procedures, only four methods will be
discussed here. Those methods include the critical incident technique, functional job
analysis, the task inventory, and DACUM or Developing a Curriculum. These four
methods were chosen because they are particularly useful for personnel selection (Knapp
& Knapp, 1995; Raymond, 2001, 2002).

Critical Incident Technique.


The Critical Incident Technique (CIT) is a job analysis method popularized by Flanagan
(1954). The CIT procedure involves observing and interviewing incumbent workers and
developing a task list based on the observations and interviews. Flanagan described the
CIT as consisting of “a set of procedures for collecting direct observations of human
behavior in such a way as to facilitate their potential usefulness in solving practical
problems and developing broad psychological principles” (Flanagan, 1954, p. 327). The
goal of CIT is to identify specific incidents of worker behaviors that were particularly
effective or ineffective. A collection of critical incidents can be used to determine the
most important behaviors and direct attention to the underlying worker characteristics
implicit in such behaviors. CIT can be used for measuring typical performance, measuring
proficiency, training, selection and classification, job design and purification, operating
procedures, equipment design, motivation and leadership, and counseling and
psychotherapy (Flanagan, 1954).

The process of performing the CIT is less formal than other job analysis methods and
should be thought of as a set of guidelines rather than a specific structure. The CIT is
performed either by a job analyst interviewing job incumbents and supervisors, or by job
incumbents and supervisors filling out questionnaires developed by job analysts. The
incidents that are obtained during the process should include an overall description of the
event, the effective or ineffective behavior that was displayed during the event, and the
consequences associated with the individual's behavior. The job analyst performing the
CIT interview should be familiar with the CIT process. The interviewer begins by
explaining the purpose of the CIT interview. The job analyst should be careful in his or
her explanation of the process, and should choose terms carefully. For example, it is
sometimes helpful to describe the incidents in terms of “worker behaviors” rather than
“critical incidents,” as there can be (p. 126) a negative connotation with the term “critical
incidents.” The analyst directs the incumbent workers and supervisors to describe the
incidents in terms of the following:

1. the context or setting in which the incident occurred, including the behavior that
led up to the incident,
2. the specific behavior exhibited by the incumbent worker, and
3. the positive or negative consequences that occurred as a result of the behavior.

Often the job analysis participants will focus their attention on incidents or worker
behaviors that are ineffective rather than those that are effective, as it is often easier to
think of ineffective behaviors. Although this is acceptable, it is important for the job analyst to
ask the participants to describe what the effective behavior would be, had the individual
being described performed the job effectively.

Because a typical CIT interview will generate hundreds of critical incidents (Brannick et
al., 2007; Knapp & Knapp, 1995), the next step in the process is to analyze the incidents
and organize them in terms of the worker behaviors described during the process. The
analyst performs a content analysis of the incidents, identifying all of the general
behavioral dimensions discussed during the job analysis. On average, the incidents can be
broken down into 5 to 12 general behavioral dimensions. Once the behavioral dimensions
have been identified, a small group of subject matter experts (SMEs) sorts the incidents
into the general behavioral dimensions.

The CIT is especially useful when the focus is on describing or defining a job in terms of
the most “critical” job elements, rather than describing a job in its entirety. As SMEs tend
to describe jobs in terms of the job tasks that are most frequently performed instead of
focusing on job tasks that are most critical, CIT is useful in obtaining critical job tasks
and the associated worker behaviors that may be missed by other, more holistic job
analysis methods. The list of behavioral dimensions and job tasks derived from the CIT
may not be a complete picture of the job as most jobs require many worker behaviors for
job tasks that are routinely performed, but not considered “critical.” However, as
previously mentioned, we typically select people for some, not all, KSAOs. CIT is designed
to choose the most important behaviors (and thus, in theory at least, the most important
KSAOs) for selection.

A potential downside to CIT is that it may be highly labor intensive. It may take many
observations and interviews to produce enough incidents to fully describe all of the
“critical” tasks. It is possible to miss mundane tasks using critical incidents. However, it
is useful to get quickly to important aspects of performance that may not be observed
very often, so it has advantages over a simple listing of tasks. Focusing on the critical
aspects of work is desirable from the standpoint of selection.

Functional Job Analysis.


Although Functional Job Analysis (FJA; Fine & Cronshaw, 1999) identifies both work
activities and worker attributes, the main focus is on tasks. FJA was first introduced by
the United States Employment Service and Department of Labor. It was used by these
government agencies to classify jobs into categories using a standardized format,
resulting in the Dictionary of Occupational Titles. The process of conducting an FJA that
is outlined in this chapter is based on Fine's description of FJA, rather than the
Department of Labor's description.

FJA begins with the job analyst gathering information about the job in order to determine
the purpose and goal of the job. The job analyst should use multiple sources to gain
information about the job in order to obtain a clear understanding of the job prior to
beginning the process of interviews. The job analyst must have a very clear
understanding of the job because he or she will be creating the task statements, rather
than the SMEs creating the task statements themselves.

Next, the job analyst collects data about the job from the job incumbents. Typically, data
are collected by seating a panel of SMEs or job incumbents and asking them to describe
the tasks that they perform on the job. Although Fine and Cronshaw (1999) argued that
data should be collected during these focus group meetings, data can also be obtained
through observations and interviews of job incumbents in addition to or in place of a
focus group meeting. The job analyst's role is to turn the descriptions provided by the
SMEs into task statements. FJA requires a very specific structure for formulating task
statements. Each task statement should contain the following elements: the action
performed, the object or person on which the action is performed, the purpose or product
of the action, the tools and equipment required to complete the action, and whether the
task is prescribed or is at the discretion of the worker (Raymond, 2001). Once the job
analyst has created the task statements to (p. 127) describe the job, the SMEs then review
and rate the task statements.
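
The required elements of an FJA task statement can be summarized in a simple data structure; the field names and the example content below are ours, offered only as an illustration and not as part of Fine's formal notation.

```python
from dataclasses import dataclass

@dataclass
class FJATaskStatement:
    """Elements an FJA task statement must capture (field names are ours)."""
    action: str               # what the worker does
    object_of_action: str     # the object or person acted upon
    purpose: str              # the purpose or product of the action
    tools_and_equipment: str  # tools and equipment required
    prescribed: bool          # True if prescribed, False if at the worker's discretion

example = FJATaskStatement(
    action="Inflates",
    object_of_action="vehicle tires",
    purpose="to bring pressure to the manufacturer's specification",
    tools_and_equipment="air hose and pressure gauge",
    prescribed=True,
)
print(example)
```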

The task statements created by the job analyst are subsequently evaluated for level of
complexity in terms of functioning with three entities: people, data, and things. In the FJA
definitions, people are exactly what we would normally think of as people, but the
category also includes animals. Data are numbers, symbols, and other narrative information. Finally,
things refer to tangible objects with which one interacts on the job. In addition to levels of
complexity for data, people, and things, FJA provides worker-oriented descriptors as well.
Other characteristics include language development, mathematics development, and
reasoning development (Brannick et al., 2007; Raymond, 2001). The physical strength
associated with each task may also be evaluated.

Like all job analysis methods, FJA has its strengths and weaknesses. A significant
strength and weakness of FJA is the specific way in which task statements are structured.
The structure provides an extremely clear and concise description of a task—what the
worker does, how it is done, and for what purpose. However, it is not easy to write proper
task statements according to the FJA structure (Fine speculated that as much as 6
months of supervised experience is needed for proficiency). Also, the cost associated with
hiring a job analyst who has an extensive background in FJA may be a deterrent for some
organizations. Another weakness of FJA is that it may be overly complex and detailed for
the purpose of selection (Knapp & Knapp, 1995; Raymond, 2001). FJA does provide task
information at an appropriate level of detail for selection, and it also provides some
information about worker attributes as well.

Task Inventory Analysis.


The United States Air Force (USAF) and other branches of the military formalized the
task inventory analysis methodology in the 1950s and 1960s (Christal & Weissmuller,
1988). The method is useful for many purposes, including selection and training. Task
inventories have also been used extensively for the development of licensure and
certification examinations (Gael, 1983; Raymond, 2002; Raymond & Neustel, 2006). Task
inventories can be thought of as a four-step process: (1) identifying the tasks performed
on a job, (2) preparing a questionnaire including scales selected for the purpose of the
analysis, (3) obtaining tasks ratings through a survey or questionnaire, and (4) analyzing
and interpreting survey data.

Like functional job analysis, task inventory analysis begins with a job analyst developing a
list of tasks based on multiple sources of information. Sources of information include
observations and interviews of job incumbents and supervisors (SMEs), small focus
groups with job incumbents and supervisors (SMEs), and any written descriptions of the
job. Also, like FJA, the task statements used in task inventories follow a specific format.
The format for writing a task statement begins with a verb or action, followed by the
object on which the action is being performed. Task statements often include a qualifier
to describe extra information essential to the task; however task inventories do not
require the use of a qualifier. Compared to FJA, the task statements in task inventory
analysis are shorter, more succinct, and narrower in scope. Often a task inventory will be
used to gather information about several related jobs
and to make decisions about whether jobs are sufficiently similar to be grouped together.
For these reasons, there tend to be many more tasks in the task inventory approach than
in functional job analysis. A typical task inventory process will produce between 100 and
250 tasks (Brannick et al., 2007; Raymond, 2002).

The level of specificity with which task statements are developed can be hard to define.
General, overarching task statements should be avoided. Only those tasks with a defined
beginning, middle, and end should be included. An example of a task statement that is too
broad and overarching for a nurse would be Provide Patient Care. Although nurses do
provide patient care, the task statement is too general, and does not have a defined
beginning, middle, and end. On the other hand, task statements that describe discrete
physical movements are overly specific. Thinking again about our nurse, a sample task
may be Review the Physician's Order. The task may further be broken down into picking
up the patient's chart and looking at what the physician has ordered, but these steps are
too specific as they start to describe the physical movement of the nurse. If the resulting
task list is a lot shorter than about 100 tasks then it is probably too general. If, however,
the resulting task list has many more than 250 tasks, then it may be too detailed.

As part of the task inventory process, a survey or questionnaire is developed based on the
tasks identified during the analysis. The survey can be broken into two parts. The first
part of the survey asks the respondents to rate each of the tasks based on one (p. 128) or
more scales. As described earlier, there are many types of scales that could potentially be
used in this analysis, but the typical scales include frequency, importance, difficulty,
criticality, and time spent (Brannick et al., 2007; Nelson, Jacobs, & Breer, 1975; Raymond,
2001). The second part of the survey is the demographic section. It is important that the
people who respond to the survey or questionnaire are representative of those who
currently perform the job or those who would like to perform the job. Ideally, the survey
should include all job incumbents, as the more people that respond to the survey, the
more confident you can be in the results. For practical reasons, a sample drawn from the
population of interest may be required. Also, electronic administration and use of the Web
can facilitate the process. Of course, piloting the questionnaire is a sine qua non.

The last step in the task inventory analysis process is to analyze the survey data. The job
analyst should verify that a representative sample of job incumbents was obtained. If a
subgroup of job incumbents is missing, then the survey should be relaunched with extra
effort to include those people in the survey process. Once a representative sample of job
incumbents has responded to the survey, the task ratings should be analyzed. Typically,
means and standard deviations are calculated. Those tasks that received low ratings on
one or more of the scales should be reviewed further by the job analyst and a group of
SMEs. It is possible that those tasks that received low ratings do not belong on the final
job analysis. In addition to reviewing those tasks that received low ratings, tasks that had
a high standard deviation should be reviewed. It is possible that job incumbents with
specific demographics perform tasks differently than those with other demographics. For
example, job incumbents who have been performing a job for 20 years may skip over
some tasks that new job incumbents perform. Or those who are new to the job may not
have a good grasp of which tasks are more or less important than others and so there
may be a lot of variability in their responses. Or the task statement may be interpreted in
different ways, particularly if it is worded poorly. For these reasons, all tasks that have
high standard deviations should be further reviewed by a group of SMEs.
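
The screening just described can be sketched as follows; the tasks, ratings, and review cutoffs are hypothetical, and in practice the cutoffs would be set in consultation with SMEs.

```python
import statistics

# Hypothetical importance ratings (1-5) from surveyed incumbents, keyed by task.
ratings = {
    "Review the physician's order": [5, 5, 4, 5, 4],
    "Calibrate infusion pump":      [4, 2, 5, 1, 3],
    "Restock supply cart":          [2, 1, 2, 2, 1],
}

LOW_MEAN_CUTOFF = 2.5  # illustrative; set in practice with SME input
HIGH_SD_CUTOFF = 1.0

for task, values in ratings.items():
    m = statistics.mean(values)
    sd = statistics.stdev(values)
    flag = "refer to SMEs" if (m < LOW_MEAN_CUTOFF or sd > HIGH_SD_CUTOFF) else "retain"
    print(f"{task}: mean = {m:.2f}, sd = {sd:.2f} -> {flag}")
```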

There are two main limitations of task inventories. First, the KSAOs required to perform
each task are not identified. Job analysts trying to describe jobs that are highly analytical
and less vocational will be at a disadvantage when using task inventory analysis. For
example, it may be very difficult to ask a poet to describe his or her job in terms of the
specific, observable tasks that are performed. The second limitation to using task
inventories is that the rating scales used to evaluate the task statements may be
misinterpreted or ambiguous. If survey participants do not have a clear understanding of
the rating scales then the resulting survey data analysis will be problematic.

There are two main benefits to using task inventories over other job analysis methods.
First, task inventories can be much more efficient in terms of time and cost than other job
analysis methods if there are large numbers of incumbents, particularly when the
incumbents are geographically dispersed. The job analyst can create the initial list of
tasks in a reasonably short period of time, especially considering the simplicity with
which the task statements are structured. Then, the time and cost associated with
administering and analyzing a survey are relatively small. The entire job analysis process
can be completed in a shorter period of time than it might take the same job analyst to
perform the CIT interviews.

The second benefit to using a task inventory analysis over other job analysis methods is
that the results lend themselves to the development of an examination blueprint for
selection. The quantitative task ratings may be easily converted to test weights. Those
tasks that are rated the highest (performed most frequently, are identified as most
important, or are most critical to the job) may receive the highest overall weighting on
the examination blueprint, whereas those tasks that received low ratings or high standard
deviations may receive little or no weighting on an examination.
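
One simple way to convert mean importance ratings into blueprint weights is to normalize them to proportions of the examination, as in the sketch below; the tasks and ratings are hypothetical, and other weighting rules are possible.

```python
# Hypothetical mean importance ratings for tasks retained after review.
mean_importance = {
    "Assess patient condition": 4.8,
    "Administer medications":   4.5,
    "Document care provided":   3.9,
    "Maintain equipment":       2.6,
}

# Normalize the ratings so the blueprint weights sum to 100 percent.
total = sum(mean_importance.values())
blueprint = {task: rating / total for task, rating in mean_importance.items()}

for task, weight in blueprint.items():
    print(f"{task}: {weight:.0%} of examination items")
```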

DACUM.
DACUM is a systematic, group consensus method used to generate task lists associated
with a job. DACUM is an acronym for Developing A Curriculum. Although this method is
widely used in education, it is not as well known in psychology. We describe it here for
two reasons: (1) it is often used for developing certification tests, and thus can be quite
helpful in developing “content valid” tests, and (2) it directly involves SMEs in linking
tasks to KSAOs.

DACUM is based on three principles. The first is that incumbents know their own jobs
best. Many job analysis methods use both job incumbents and supervisors (e.g.,
functional job analysis, critical incident technique), but the DACUM process (p. 129) uses
only job incumbents. Second, the best way to define a job is by describing the specific
tasks that are performed on the job. Third, all tasks performed on a job require the use of
knowledge, skills, abilities, and other characteristics that enable successful performance
of the tasks. Unlike other job analysis methods, DACUM clearly documents the
relationship between each task and the underlying KSAOs.

In its most basic form, the DACUM process consists of a workshop or focus group in which a trained DACUM facilitator leads 5 to 12 incumbents, also known as subject matter experts or SMEs, followed by some form of review of the job analysis product.
The primary outcome of the workshop is a DACUM chart, which is a detailed graphic
representation of the job. The DACUM chart divides the whole job into duties and divides
duties into tasks. Each task is associated with one or more KSAOs.

The DACUM process begins with the selection of the focus group panel. A working
definition of the job or occupation to be analyzed is created, and that definition is used to
aid in choosing panel members. The panel members should be full-time employees
representative of those who work in the job or occupation. Whenever possible, SMEs
selected to participate in the DACUM process should be effective communicators, team
players, open-minded, demographically representative, and willing to devote their full
commitment to the process (Norton, 1985). SMEs who are not able to participate in
the entire process from start to finish should not be included in the DACUM panel, as
building consensus among all of the panel members is a critical element to the DACUM
process.

Following selection of the DACUM panel, the actual workshop is typically a 2-day group
meeting. The workshop begins with an orientation to the DACUM process and an
icebreaker activity. The facilitator then provides a description of the rest of the process.
Upon completion of the orientation, the facilitator leads the group in the development of
the DACUM chart. The SMEs are asked to describe the overall job during an initial
brainstorming activity, followed by the development of the overall job duties. Duties are
general statements of work, representing a cluster of related job tasks. Duties can usually
stand alone—they are meaningful without reference to the job itself. The reader should be
able to understand the duty clearly without additional reference. For example, Prepare
Family Meals may be a duty for the job of a homemaker.

Once all of the job duties have been identified, each duty is further divided into tasks.
Tasks represent the smallest unit of activity with a meaningful outcome. They are
assignable units of work, and can be observed or measured by another person. Job tasks
have a defined beginning and end and can be performed during a short period of time.
They often result in a product, service, or decision. All tasks have two or more steps
associated with them, so in defining job tasks, if the SMEs are not able to identify at least
two steps for each task, then it is likely that the task in question is not really a task, but
rather a step in another task. Lastly, job tasks are usually meaningful by themselves—they
are not dependent on the duty or on other tasks. Returning to the previous example, Bake Dessert, Cook Breakfast, and Make Lunch may all be tasks that fall within the duty of Prepare Family Meals. Each of these tasks has two or more steps (Bake Dessert may require Preheat the Oven, Obtain the Ingredients, Mix the Ingredients, Grease Baking Sheet, and Set Oven Timer), and each of the tasks listed can be performed independently of the other tasks in the overall duty area. Note that the DACUM
definitions appear consistent with those we offered at the beginning of the chapter.

Finally, the associated KSAOs are described for each task. In addition to the knowledge,
skills, abilities, and worker behaviors required for successful performance of the task, a
list of tools, equipment, supplies, and materials is also created for each of the tasks. The
facilitator proceeds through each of the tasks individually, asking the panel what enablers
are required for the successful performance of the task. There should be a direct
relationship between the task and the enablers so that each task has an associated set of
enablers. Such a procedure is intended to document KSAOs that are required for each
task rather than those that are “nice to have” but are not required.
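
The resulting DACUM chart is essentially a three-level hierarchy: duties contain tasks, and each task carries its enablers (KSAOs, tools, equipment, supplies, and materials). A minimal sketch of that structure follows, reusing the homemaker example from the text; the specific enablers listed are ours, supplied only for illustration.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Task:
    name: str
    steps: List[str]                                  # every task should have two or more steps
    ksaos: List[str] = field(default_factory=list)    # enablers required to perform the task
    tools: List[str] = field(default_factory=list)

@dataclass
class Duty:
    name: str
    tasks: List[Task]

# Duty and task names follow the chapter's example; the enablers are illustrative only.
prepare_family_meals = Duty(
    name="Prepare Family Meals",
    tasks=[
        Task(
            name="Bake Dessert",
            steps=["Preheat the Oven", "Obtain the Ingredients", "Mix the Ingredients",
                   "Grease Baking Sheet", "Set Oven Timer"],
            ksaos=["Knowledge of baking procedures", "Ability to follow a written recipe"],
            tools=["Oven", "Mixing bowl", "Baking sheet"],
        ),
        Task(name="Cook Breakfast", steps=["Plan the menu", "Cook the food"]),
        Task(name="Make Lunch", steps=["Assemble ingredients", "Pack the lunch"]),
    ],
)

# A statement with fewer than two steps is probably a step, not a task.
for task in prepare_family_meals.tasks:
    assert len(task.steps) >= 2, f"'{task.name}' may be a step, not a task"
```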

Upon completion of the workshop, the facilitator drafts a DACUM chart and distributes
the draft to a group of stakeholders for additional feedback. Following any corrections to
the draft, the chart is circulated to additional subject matter experts to obtain
quantitative data on importance, time spent, and so forth, that can be used to prepare a
test blueprint (or for other administrative purposes).

Unlike CIT, the DACUM method strives to define all of the duties, tasks, and KSAOs associated with a specific job. Like FJA (Fine & Cronshaw, 1999), DACUM relies upon a trained facilitator to (p. 130) draw task content from a group of subject matter experts. Like the task inventory, the tasks tend to be rather specific. Similar to the job element method (Primoff & Eyde, 1988), but unlike the other methods in this section, DACUM relies on job incumbents to identify the KSAOs underlying task performance.

One criticism of DACUM, therefore, is that it relies on the ability of job incumbents to
identify KSAOs. In our experience, supervisors tend to spend more time than incumbents
in thinking about what traits are associated with successful performance.

The other weakness of the DACUM method for selection is that time is spent defining
duties, tasks, and KSAOs that would never be used in the selection context. For example,
to be a licensed hair stylist, it is necessary to obtain continuing education credits
throughout one's career. Because completing continuing education is a required
component of the job, the task of Obtaining Continuing Education Credit would be
identified along with the KSAOs required to perform the task successfully. The task and
the KSAOs associated with it would be included in the job analysis because it is part of
the job, and again, the DACUM process describes all of the job. However, it seems
unlikely we would select hair stylists based on their ability to obtain continuing education
credits as opposed to more immediately applicable KSAOs.

Standard Lists of Traits


There are a very large number of potential individual differences that might be used for
selecting people for jobs. Typically, only a few of these will actually be measured and used
systematically in the selection process. Fortunately, several energetic groups of
individuals have gone about organizing and listing individual differences. In this section,
we will briefly describe a taxonomy and some of the sources of detailed lists. The
taxonomy is designed to provide a quick structure to help the reader think about the
broad array of individual differences. The standardized lists provide additional
information intended to be of practical use; although we do not describe the job analysis methods in detail, references that do so are given for the interested reader. We recommend that the analyst be armed with one or more of these lists before beginning the job analysis, so as to be sure to consider the broad spectrum of possibilities. The job analysis for selection can be thought of as a process of developing and checking a set of hypotheses about the human requirements for the job.

Taxonomy

Although there are many different ways of organizing human requirements at work, a
relatively simple, high-level scheme is SMIRP, for Sensory, Motor, Intellectual, Rewards,
and Personality. Because this is a high-level taxonomy, each of these categories can be
further subdivided, and different authors prefer different ways of organizing things, but
the majority of human attributes in most conventional job analysis methods can be fit into
one of these dimensions. The taxonomy is presented only as an aid to memory, not as a
theory of human ability.

Sensory.
The most straightforward of the sets is human sensory ability, which is typically thought
to contain vision, hearing, touch, taste, and smell. Proprioception, i.e., sensing body
position or movement, may be considered part of this category. Each of these may be
further refined according to the needs of the job. For example, within vision, the ability to
discriminate color may or may not be important for the job. For hearing, it could be
important to notice or discriminate among particularly soft or high-pitched sounds. As we
mentioned earlier, job analysis may be thought of as developing and checking hypotheses
about required abilities. We analyzed the job of photofinisher and discovered that color
vision was not a job requirement for operating the photograph printing machine. All that
was necessary was checking the calibration of the machine, which did not require color
vision.

Motor.
Motor requirements involve using the body to achieve the job's goals. Human body
movement varies from relatively skilled to relatively unskilled. Dancing, for example,
requires a great deal of skill, as does playing a guitar. Operating a motor vehicle requires
some skill; operating a mouse to control computer software typically takes little skill. Jobs
may require heavy lifting, standing for long periods, balancing oneself or objects, or
crawling into attics or other tight spaces. Most jobs require the use of the hands (but the
ability to use hands is rarely a criterion for selection).

Provisions of the Americans with Disabilities Act may render these aspects suspect if they
exclude qualified individuals with sensory or motor disabilities. The sensory and motor
specifications used for selection should be associated with essential job tasks, and should not be requirements that could easily be obviated by (p. 131) alterations in equipment, staffing, or scheduling (Brannick, Brannick, & Levine, 1992).

Intellectual/Cognitive.
Individual differences in this category have a rich history in psychology. Intellectual
abilities concern information processing, including perception, thinking, and memory.
This category is rather broad, and is further subdivided in different ways by different
authors. One way to organize intellectual traits is to consider whether they refer mainly
to functions or capacities or to contents and specific job knowledge.

Several systems (or at least parts of them) can be thought of as targeting more functional
aspects of the intellect. For example, the Position Analysis Questionnaire (PAQ;
McCormick, Jeanneret, & Mecham, 1972) considers information input and information
transformation. For information input, the PAQ asks whether the job provides numbers,
graphs, dials, printed words, or sounds as information. For information transformation,
the PAQ asks whether the job requires reasoning and problem solving. Fine's functional
job analysis considers a hierarchy of functions using data to describe the level of
intellectual challenge presented by a job. At the lower levels, a job might require
comparing two numbers to see whether they are the same. At a high level, the job might
require the incumbent to create a theory that explains empirical results or to design
research that will answer a question that cannot be answered except by original data
collection and analysis.

The content or job-specific side of intellectual requirements is also included in many
conventional job analysis systems. The PAQ, for example, asks whether the job requires
knowledge of mathematics. The O*NET lists knowledge of many different disciplines,
including art, chemistry, and psychology. Job analysis for certification tests is often
designed to provide a blueprint or map of the knowledge domain required for a job so
that the test can be shown to map onto the required knowledge for the job.

Cognitive processes cannot be directly observed, and for higher level cognitive functions,
the observable tasks and behaviors may not be very illuminating. For example, a research
scientist may spend time reading books and journal articles. Although an observer may
infer that the scientist is acquiring information, it is not at all clear what the scientist is
doing with the information so acquired. Methods of cognitive task analysis may be used
to better understand the way in which information is acquired, represented, stored, and
used (see, e.g., Seamster, Redding, & Kaempf, 1997). Cognitive task analysis may be used
to distinguish differences between the novice and the expert in approaching a specific
task. However, cognitive task analysis is designed to discover mental activity at a more
molecular level than the trait approaches described here, and does not possess a
standard list of traits to consider at the outset. Therefore, it is not discussed in greater
detail.

Rewards.
This category refers to the human side of job rewards. That is, it describes the interests,
values, and related aspects of people that make work motivating or intrinsically
satisfying. Here reward means a personal attribute that might be considered a need,
interest, or personal value that a job might satisfy. Several job analysis methods contain
lists of such rewards. The Multimethod Job Design Questionnaire (Campion & Thayer,
1985) contains a 16-item “motivational scale” that includes items such as autonomy,
feedback from the job, and task variety. Borgen (1988) described the Occupational
Reinforcer Pattern, which contains a list of job attributes such as social status and
autonomy. The O*NET descriptors for occupational interests and values include items
such as achievement, creativity, and security. Although descriptors we have labeled as
rewards are generally used for vocational guidance, they may be incorporated into the
selection process through recruiting and through measuring individual differences in an
attempt to assess person–job fit. For example, a job that offers low pay but high job
security may be of special interest to some people.

Personality.
Personality refers to traits that are used to summarize dispositions and typical behaviors,
such as conscientiousness, neuroticism, and extroversion. Beyond the traits described in theories of personality such as the Big Five (Digman, 1990; Goldberg, 1993) and measured by conventional tests of personality (e.g., the 16PF; Cattell, 1946), by personality we mean a broad spectrum of noncognitive attributes, including self-esteem, willingness to work odd hours and shifts, and the remaining attributes needed for specific jobs, that is, the O in KSAO. At
least one job analysis method was designed specifically for personality (the Personality-
Related Position Requirements Form; Raymark, Schmit, & Guion, 1997; see also the
Hogan Assessment Systems, 2000, as described in Hogan, Davies, & Hogan, 2007). Other
job analysis methods approach the evaluation of the Other requirements in various ways.
The PAQ contains sections devoted to interpersonal activities, (p. 132) work situation and
job context, and miscellaneous aspects. The latter category contains items such as
irregular work hours and externally controlled work pace. The O*NET descriptors for
Work Styles include personality characteristics such as Concern for Others, Cooperation,
Self-Control, and Persistence.

O*NET

The O*NET is remarkable for its comprehensiveness. The development of the O*NET is
described in Peterson et al. (1999). The O*NET is an excellent source of lists of human
abilities. Its content model is composed of six different sets of descriptors: (1) worker
requirements, (2) experience requirements, (3) worker characteristics, (4) occupational
requirements, (5) occupation-specific requirements, and (6) occupation characteristics.
The first three of these, which are further subdivided into standard lists that may be of
use when conducting job analysis for selection, are described next.

Worker requirements refer to learned individual differences that are applicable to multiple tasks. These are arranged in O*NET into three categories: (1) basic and cross-
functional skills, (2) knowledge, and (3) education. Examples of basic and cross-functional
skills include reading comprehension and time management. Examples of knowledge
include art, psychology, and transportation. The term education refers to general
educational level, meaning high school, college, and so forth. The O*NET contains 46
descriptors for basic and cross-functional skills and 49 descriptors for knowledge. In our high-level taxonomy, each of these categories would fall into the intellectual category on the content side, but notice that the knowledge descriptors fall primarily in the declarative knowledge domain of cognitive psychology, whereas the basic and cross-functional skills tend to fall in the procedural knowledge domain.

Experience requirements refer to specific types of training and licensure. In the previous category, education refers to broader study that is not intended for a specific occupation. The
O*NET contains six descriptors in this category, including subject area education and
licenses required. In our high-level taxonomy, this category would also fall under the
intellectual category. However, experience and licenses imply competence in particular
tasks, meaning mastery of whatever declarative and procedural skills are needed for task
completion.

Worker characteristics are further subdivided into (1) abilities, (2) occupational values
and interests, and (3) work styles. Examples of abilities in the O*NET include oral
expression, mathematical reasoning, manual dexterity, and night vision. Note that the
O*NET organizes the abilities as capacities, and lists sensory, motor, and intellectual
abilities in the same category. Examples of occupational values and interests include
achievement, responsibility, and security. Occupational values would be considered
rewards in our taxonomy. Examples of work styles include cooperation, dependability, and
persistence. These would fall under personality in our taxonomy. O*NET contains 52
descriptors for abilities, 21 for occupational values, and 17 for work styles.

The O*NET content model is described online at http://www.onetcenter.org/content.html#cm1. There are also technical reports dealing with the quality of the data about occupations in the United States and the maintenance of such data (e.g., http://www.onetcenter.org/reports/AOSkills_10.html). At this time, O*NET is best used as a starting point to organize and facilitate a job analysis.

Ability Requirements Scales

Fleishman and Reilly (1992) created a small book that lists a large number of human
abilities along with definitions of each. The abilities are grouped into cognitive (e.g.,
fluency of ideas, number facility), psychomotor (e.g., control precision, multilimb
coordination), physical (e.g., trunk strength, stamina), and sensory/perceptual (e.g., near
vision, sound localization). Note that Fleishman and Reilly (1992) have subdivided our
motor category into psychomotor and physical aspects, so their list may be particularly
useful for jobs with significant physical requirements. Additionally, the listed abilities are
linked to existing measures and test vendors, which is very helpful for the analyst who
has selection in mind.

Threshold Traits Analysis

Lopez (1988) provided a short but comprehensive list of human abilities that can provide
a basis for selection. The 33 listed traits are organized into five areas: physical, mental,
learned, motivational, and social. The first three correspond roughly to our sensory,
motor, and intellectual categories. Examples include strength and vision (physical),
memory and creativity (mental), and numerical computation and craft skill (learned). The
last two categories correspond roughly to our personality characteristics. Examples are
adaptability to (p. 133) change and to repetition (motivational) and personal appearance
and influence (social).

Management Competencies

Because leadership and management are so important to business, the KSAOs required
for success in such jobs are of abiding interest and have a long history of study in psychology. Many
proprietary systems targeting management competencies are currently available. One
system with some empirical support was described by Bartram (2005) as the “Great
Eight,” for the eight high-level dimensions of managerial functioning. Some of the
competencies included in the Great Eight are leading and deciding, supporting and
cooperating, analyzing and interpreting, and adapting and coping. Some of the attributes
are more intellectual (deciding, analyzing, interpreting) and some have a more social and
personality flavor (supporting and cooperating, adapting and coping). The ability to
handle stress and to cope with failure are noteworthy characteristics that may be more
important in management than in many other jobs (although sales, sports, and various
other occupations would also involve such capacities to a significant degree). The Great
Eight list may prove especially helpful as a source of traits during a managerial job
analysis.

We have not distinguished between competency and KSAO to this point, and, in fact,
competency has been defined in many different ways (Shippmann et al., 2000). It is
interesting that some view competencies as behaviors, but others view them as
capacities. For example, “a competency is not the behavior or performance itself, but the
repertoire of capabilities, activities, processes and responses available that enable a
range of work demands to be met more effectively by some people than by others” (Kurz
& Bartram, 2002, p. 230). On the other hand, “a competency is a future-evaluated work
behavior” (Tett, Guterman, Bleier, & Murphy, 2000, p. 215). A related issue is whether the
competencies refer to capacities of people or to standards of job performance (Voskuijl &
Evers, 2008). Bartram (2005) considered the managerial competencies to be criteria to be
predicted from test scores, but others have regarded competencies as predictors of
performance (Barrett & Depinet, 1991). Of course, behavioral measures may be used as
either predictors or criteria. There is precedent for making little distinction between
ability and performance. Using the job element method (Primoff & Eyde, 1988), for
example, we might speak of the “ability to drive a car.” Such an element might be defined
in terms of a performance test rather than in terms of perceptual and psychomotor skills
along with knowledge of the rules of the road. Doing so has practical application when
work samples are used in selection. However, failing to distinguish between the
performance of a task and the underlying capacities or processes responsible for task
performance is unsatisfying from a theoretical standpoint. Defining the ability in terms of
the performance is circular; an ability so defined cannot serve to explain the
performance. Furthermore, it is a stretch to use an ability defined by a specific task
performance to explain more distal behaviors. Motor skills might serve to explain the
quality of operating many different kinds of vehicles, but the ability to drive a car would
not be expected to explain the quality of operating other vehicles.

Job Analysis and Test Validation


In this section, we consider job analysis as a basis for selection in greater detail. Levine,
Ash, Hall, and Sistrunk (1983) surveyed job analysts regarding the application of well-
established job analysis methods for multiple purposes. For personnel requirements/
specification the preferred job analysis methods (mean ratings greater than 3.5 on a 5-
point scale) included Threshold Traits Analysis, Ability Requirements Scales, Functional
Job Analysis, and Job Elements. For legal/quasilegal requirements, the only method with a
mean greater than 3.5 was the task inventory (we have discussed all of these approaches in varying levels of detail earlier in this chapter). As we noted
earlier, some purposes require additional judgments, are consistent with different
hypotheses about the relations between human capabilities and job performance, or
correspond to different decision problems.

First we consider conventional test validation in which a criterion-related validation study is conducted. Such studies are not free of human judgments, but they provide direct
empirical data regarding the relations between test scores and job performance scores
for the job of interest. In many cases, such a study is not feasible. Therefore, we then turn
our attention to alternative validation strategies from the perspective of job analysis.
Alternative strategies either involve judgments rather than empirical data regarding the
relations between test scores and job performance, or they involve borrowing strength
from validation data gathered in contexts other than those of immediate interest.

Regardless of the validation strategy adopted, we recommend that when job (p. 134) analysis is initiated to support personnel selection, attention be paid to both the work
performed (the tasks and duties) and the worker attributes (worker requirements)
necessary for success on the job. The immediate product of the analysis will be an
understanding of what the worker does and what characteristics are necessary or
desirable in job applicants. The process of the analysis and the resulting understanding
should be detailed and documented in writing as part of the practical and legal
foundation of the subsequent process of personnel selection (Thompson & Thompson,
1982).

Conventional Validation Strategy


One way of showing that the selection (testing) process is job related is to complete a test
validation study. In such a study, the test scores of workers are compared to scores
indicating the level of job performance for those same workers. If there is a statistically
significant relation between test scores and job performance scores, then the test is said
to be a valid predictor of job performance (e.g., Guion, 1998; Ployhart, Schneider, &
Schmitt, 2006).

The logic of test validation as indicated through a series of steps is to (1) discover the
KSAOs needed for successful job performance through job analysis, (2) find tests of the
KSAOs, (3) measure the workers’ KSAOs using the tests, (4) find measures of the
workers’ job performance, (5) measure the workers’ performance on the job, and (6)
compare the test scores to the job performance scores. On the face of it, we would expect
to see a relation between test scores and job performance scores, provided that the
KSAOs identified in the job analysis are in fact the major determinants of individual
differences in both performance on the tests and performance on the job. Experience has
shown that there is reason to believe that a well-executed validation study will provide
support for the job relatedness of a test. However, experience has also shown that there
are many ways in which the study may fail to support the job relatedness of the test;
empirical support for testing is not always easy to obtain.
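
The last step of this logic typically reduces to estimating a correlation between the two sets of scores and testing it against zero. A minimal sketch with made-up data follows; scipy's pearsonr is one convenient way to obtain the coefficient and its p value.

```python
from scipy.stats import pearsonr

# Hypothetical data for a small concurrent validation study.
test_scores = [52, 61, 45, 70, 66, 58, 49, 73, 55, 68]            # KSAO test scores
performance = [3.1, 3.8, 2.9, 4.5, 4.0, 3.5, 3.0, 4.4, 3.2, 4.1]  # supervisory ratings

r, p = pearsonr(test_scores, performance)
print(f"validity coefficient r = {r:.2f}, p = {p:.3f}")
# A statistically significant, positive r is the conventional evidence that
# the test is a valid predictor of job performance.
```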

The Predictor Side

The logic of test validation suggests that we first discover the KSAOs and then find tests
of them. In practice, we often have prior knowledge of similar jobs, available tests, and
prior test validation studies. One decision that must be confronted is whether to use tests
of KSAOs that have a proven track record or to attempt to develop or buy tests that
appear appropriate but do not have such a record. For example, general cognitive ability
and conscientiousness have shown associations with job performance for a large number
of different jobs (e.g., Schmidt & Hunter, 1998; Barrick & Mount, 1991).

A second decision that must be confronted is whether to buy an existing test or to create
one specifically for the job. There are advantages and disadvantages to buying a test. Test development is time-consuming and technically challenging. Many test vendors now provide online testing capabilities, an additional factor to consider when deciding whether to build or buy. There are several major test publishers
that sell tests often used in selection. Many of these can be found online at the Pan
Testing organization, http://www.panpowered.com/index.asp. On the other hand, a
properly trained psychologist should be able to develop a sound test given proper
resources, including time, materials, and participants. In the long run, it may be cheaper
to create a test than to continue to pay to use a commercially available product.

In test validation, we must find measures of the important KSAOs, regardless of whether
we build or buy them. As we mentioned earlier, the business of isolating and labeling the
KSAOs is considerably simplified if we use work samples as tests because we can
essentially dispense with anything other than a summary score, at least for the work
sample. On the other hand, if we decide that some trait such as agreeableness is
important for job performance, then we need to build or buy a test of agreeableness.
Unfortunately, it cannot be assumed that a test can be judged by its label. Different tests
that purport to measure agreeableness may yield rather different scores on the same
individuals (Pace & Brannick, 2010). Thus, careful research about the meaning of test
scores for existing tests must be conducted in order to have much confidence about the
congruence of the KSAO in question and what is being measured by the test. When a test
is being built, the content of the test is more fully under the control of the developer, but the meaning of the resulting scores may not be as clear, particularly if the test concerns an abstract trait such as agreeableness, because until validation evidence accumulates the meaning of the new test's scores will not be readily discernible.

(p. 135) The Criterion Side

In theory, the same logic applies to the criterion (the measure of job performance) as to
the predictor (the test). We find one or more measures of job performance that tap the
required KSAOs and measure people on these. At first, our insistence upon considering
the KSAOs may seem silly. If we have a measure of job performance, does it not by
definition embody the KSAOs required for the job? If it has systematic variance that is
also related to the goals of the job, then of course it stands to reason that it must reflect
at least some of the KSAOs required for success on the job to at least some degree.
However, not all measures of job performance are equal. Choosing a criterion measure should involve consideration of (1) the relevance and comprehensiveness of the criterion constructs captured in the job performance measures we anticipate using, relative to what is known as the ultimate criterion (e.g., Cascio & Aguinis, 2011), and (2) the degree to which the predictor KSAOs map onto those criterion constructs. It would not
enhance our chances of detecting a relationship between predictors and criteria if, for
example, job performance called for mathematical reasoning (for which we tested) but
our criterion measure sampled only simple arithmetic because that measure was readily
available.

Of course in implementing actual criterion measures we must take into account factors
that are irrelevant to the constructs we wish to index or beyond the job holder's control.
For example, the dollar value of goods sold over a given time period in a sales job is
clearly a criterion of interest for a validation study of a sales aptitude test. However, the
variance in the measure, that is, the individual differences in the dollar value of goods
sold for different sales people, may be due primarily to factors such as geographic
location (sales territory), the timing of a company-wide sale, and the shift (time of day) in
which the employee typically works. This is one reason to consider percentage of sales
goal reached or some other measure of sales performance rather than raw dollars that
takes into account some or all of the extraneous factors. Although dollar outcome is
obviously relevant to job performance for sales people, obtaining reliable sales data may
take a surprisingly long period of time because so much of the variance in dollar
outcomes tends to be due to factors outside the sales person's control.

Supervisory ratings of job performance are the most commonly used criterion for test
validation studies. We recommend avoiding existing performance appraisal records as
criteria for test validation studies. Such records are problematic because they typically
document supervisory efforts to keep and promote their people rather than illuminate
individual differences in job performance. It is inevitable that supervisors form personal
ties with their subordinates, and understandable when such relationships influence the
annual evaluations. There are also organizational issues such as the size of a manager's
budget and the manager's ability to move a poorly performing person from one unit to
another that may impact annual evaluations. Finally, our experience has shown that
annual evaluations from performance appraisals rarely show a statistical relation to
applicant test scores.

If supervisory ratings of job performance are to be used in a validation study, then we recommend the following steps for developing the rating scales. First, the job analysis
should document the job's essential duties and tasks and indicate what KSAOs these
require. Second, ratings should be collected solely for the purpose of test validation, and
the raters (supervisors) should be made aware that the ratings will be confidential and
used solely for the purpose of establishing the job relatedness of the test. Some
consultants like to hold a meeting with the supervisors and plead with them for honesty
in responding about their subordinates.

A rating form should be created. In the form, the duties should be enumerated, with the
tasks listed beneath them, as one might find in a task inventory. For each task, the
supervisor should be asked to rate the subordinate on task performance. A number of
different rating scales may be appropriate, such as behaviorally anchored rating scales
(BARS; Smith & Kendall, 1963) or behavioral observation scales (BOS; Latham & Wexley,
1977). After rating the tasks, the supervisor is asked to rate the subordinate overall on
the duty. The raters proceed one duty at a time until all the duties for the job are covered,
and then the rater is asked to provide a final rating for the ratee on overall job
performance. Such a method will link the performance rating (outcome measure of job
performance) clearly to the tasks or job content. It also has the benefit of drawing the
supervisor's attention to the task content of the job before asking about overall job
performance. By doing so, we hope to reduce the amount of extraneous variance in the
measure.
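
A skeleton of the recommended rating procedure might look like the following sketch (placeholder duties and tasks of our own invention); the essential point is the ordering, in which task ratings precede the duty rating and duty ratings precede the overall rating.

```python
# Hypothetical duty/task structure for a validation rating form.
form = {
    "Maintain equipment": ["Inspect equipment daily", "Replace worn parts"],
    "Serve customers":    ["Answer inquiries", "Resolve complaints"],
}

def collect_ratings(ask):
    """Walk the form in the recommended order: tasks, then duty, then overall job."""
    ratings = {}
    for duty, tasks in form.items():
        for task in tasks:
            ratings[(duty, task)] = ask(f"Rate performance on task: {task}")
        ratings[(duty, "OVERALL DUTY")] = ask(f"Rate overall performance on duty: {duty}")
    ratings[("JOB", "OVERALL")] = ask("Rate overall job performance")
    return ratings

# Simulated rater that always answers 4 on a 1-5 scale.
print(collect_ratings(lambda prompt: 4))
```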

If other kinds of performance measures are used, then careful consideration should be
given to their reliability and whether they are likely to be strongly (p. 136) related to the
KSAOs in question. Criterion measures should probably be avoided unless they are
clearly related to the KSAOs captured in the predictors. Both criterion measures and
tests should reflect the KSAOs that determine excellence in job performance. Unless the
criteria and tests are well matched, test validation is a particularly risky business.
Available criteria are usually unreliable and contaminated by factors extraneous to what
the employee actually does. In such a case, even if we were to have a population of
employees with which to work, the association between tests and job performance would
not be strong. When we couple the underlying effect size with the typical sample size in a
validation study, the power to detect an association tends to be low. If the validation study
shows null results, then we have produced evidence that the test is not job related, which
is good ammunition for anyone wishing to attack the use of the test for selection.
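
To illustrate the power problem, the sketch below uses the standard Fisher z approximation to estimate the sample size needed to detect a correlation of a given size; the effect sizes shown are hypothetical and are not taken from the chapter.

```python
import math
from scipy.stats import norm

def n_for_correlation(r, alpha=0.05, power=0.80):
    """Approximate N needed to detect correlation r (two-tailed) via Fisher's z."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    fisher_z = math.atanh(r)
    return math.ceil(((z_alpha + z_beta) / fisher_z) ** 2 + 3)

# An observed validity of .30 may shrink toward .15 once criterion unreliability
# and range restriction take their toll.
for r in (0.30, 0.20, 0.15):
    print(f"r = {r:.2f}: need roughly N = {n_for_correlation(r)}")
```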

Alternative Validation Strategies


Test validation is conventionally approached one job at a time, and typically within a
single organization and location. Careful job analysis leads to the judicious choice of tests
and criteria, and during an empirical test validation study, incumbents and/or applicants
are measured to see how well tests and criteria correspond empirically. However, there
are numerous difficulties in conducting conventional test validation studies, including insufficient numbers of incumbents and/or applicants (i.e., small sample sizes), time and resource constraints imposed by management, legal requirements, and relations between labor
and management (McPhail, 2007). Professional standards have changed since the
Uniform Guidelines on Employee Selection Procedures (EEOC, 1978) were issued, and
authors have argued that establishing the validity of tests for personnel selection ought to
involve multiple lines of evidence (e.g., Landy, 1986; Binning & Barrett, 1989). For all
these reasons, alternatives to the conventional test validation study have been developed.
Several of these are briefly described next, with emphasis on the role of job analysis. First
we describe methods that rely, as does the conventional strategy, upon empirically
derived relationships between tests and criteria. These are labeled synthetic validity, and
encompass both individual and job-level studies. Following these, we turn to methods that
rely on the extension of traditional validation results to new contexts, so-called
transportability and validity generalization studies.

Then we turn to methods that rely on judgments of similarity of test content and job
content. This kind of evidence supporting validity is accepted by three of the primary
authorities on test use, the American Educational Research Association, the American
Psychological Association, and the National Council on Measurement in Education as
promulgated in their Standards for Educational and Psychological Testing (1999).
Obviously, job analysis is fundamental to the comparison of job and test content. Where
this approach to validation is commonly employed, and may represent the only feasible
option, is in the domain of (1) developing certification and licensure assessments, the
passing of which is considered to be part of the selection of personnel in numerous
professional and technical jobs, and (2) the setting of minimum qualification
requirements.

Alternative Strategies—Synthetic Validity, Transportability, and Validity Generalization
Synthetic Validity and Job Component Validity

Synthetic validity studies (Guion, 1965) can be arranged into two major groups based on
the study design (Johnson, 2007). In the ordinary design, the individual worker is the unit
of analysis. In the second design, the job is the unit of analysis. The second, job-level
design is commonly labeled “job component validity” or JCV (Hoffman et al., 2007;
Johnson, 2007). The logic of applying synthetic validity is to borrow test validation data
from other jobs and apply them to the target job.

Individual-level study design.


In the usual synthetic validity study, tasks or duties rather than jobs become the focus of
outcome measurement. Individuals are measured on task performance for each task, and
subsequently grouped by task across jobs when test scores are compared to task
performance scores. For example, different jobs might involve driving a car, loading a
pallet, balancing an account, and so forth. All the individuals in jobs that involve driving a
car would be examined for one correlation between a test of eye–hand coordination and a
measure of driving performance such as a safety record. All individuals in jobs that
involved balancing an account would likewise be considered at once for computing a
correlation between a test and a measure of performance on account balancing. Outcome
measures for different positions are used in only some of the correlations because
different jobs involve different (p. 137) tasks. Overall job performance is synthesized from
the composite of tasks for each job, and test battery performance is composed of the
applicable tests for each job. The overall correlation between the test battery and job
performance can be computed for each job with a little algebra (there is a history of the
development of the J-coefficient that is of scholarly interest; see, e.g., Primoff, 1959;
Primoff & Eyde, 1988). The advantage of such a study is that much larger sample sizes
may be gained by measuring all employees who perform a task rather than just those who
hold a specific job. Also (at least in theory) as data accrue, a scientific advance is possible
because overall performance measures on new jobs might be synthesized from known
tasks.
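
A minimal sketch of the pooling step follows, with fabricated records; individuals are grouped by task across job titles, and test scores are correlated with task performance within each task. Synthesizing a job-level validity from these task-level results (e.g., via the J-coefficient) would be the subsequent step, which we do not attempt here.

```python
import pandas as pd

# Fabricated records: each row is one employee, with job title, a task the
# employee performs, a relevant test score, and performance on that task.
# In practice, the test correlated with each task would be the one hypothesized
# to underlie it (e.g., eye-hand coordination for driving).
records = pd.DataFrame({
    "job":  ["courier", "sales rep", "courier", "teller", "sales rep", "teller"],
    "task": ["drive car", "drive car", "drive car",
             "balance account", "balance account", "balance account"],
    "test": [42, 55, 38, 61, 47, 58],
    "task_perf": [3.9, 4.4, 3.1, 4.8, 3.6, 4.5],
})

# Pool everyone who performs a given task, regardless of job title, and
# correlate test scores with task performance within each task.
for task, group in records.groupby("task"):
    r = group["test"].corr(group["task_perf"])
    print(f"{task}: r = {r:.2f} (n = {len(group)})")
```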

Individual-level research findings.


Synthetic validity on the individual level has not been used often (Johnson, 2007).
However, Peterson, Wise, Arabian, and Hoffman (2001) described a successful application
of the approach to several military occupational specialties. Johnson, Carter, and Tippins
(2001, described in Johnson, 2007) applied synthetic validity to a large civilian
organization. They created a job analysis questionnaire that contained 483 task
statements. The results of the survey were used to create 12 job families composed of
jobs in which responses to task statements were similar. The task statements were
reduced to 26 job components that formed the basis of criterion data collection. Different
criterion measures were collected for different job families. Ultimately, test battery data
and supervisory ratings of performance were obtained for nearly 2000 employees and
composites were computed for job-family combinations of tests and criterion measures.
Apparently the result was successful, although the actual correlations are proprietary
(Johnson, 2007). However, the job components varied quite a bit in the degree to which they were concrete and clearly related to tasks. Some of the more clearly defined
components include “Handle bills, payments, adjustments or credit research” and
“Handle telephone calls.” Some rather vague items include “Computer usage” and “Work
with others.” We were tempted to write that enterprising employees can use computers
profitably to squash insects, but we will not belabor the point beyond describing
“computer usage” as a vague task description. Some components with no clear task
reference include “Handle work stress” and “Organizational commitment.”

Job-level studies.
Job component validity studies relate aspects of jobs [typically KSAs; much of this work has been based on the PAQ, e.g., McCormick, DeNisi, & Shaw (1979)] to a job-level outcome. The two outcomes of interest are typically
either (1) mean test scores of job incumbents or (2) criterion-related validity coefficients
from test validation studies. The justification for using mean test scores as an outcome is
the “gravitational hypothesis” (Wilk, Desmarais, & Sackett, 1995), which states that
workers tend to gravitate to jobs that are appropriate to their level of KSAOs. Therefore,
we should expect to see, for example, brighter people on average in more cognitively
demanding jobs and stronger people on average in more physically demanding jobs. Note
that such a between-jobs finding does not directly show that better standing on the test
results in superior performance within jobs (Scherbaum, 2005). When test validation
correlations are the dependent variable, however, a positive finding does provide
evidence for the job relatedness of a test within jobs. For example, suppose cognitive
ability test scores are more highly correlated with job performance when the job analysis
shows a higher requirement for cognitive ability. In such a case, the synthetic validity
study indirectly supports the use of a cognitive ability test in a target job for which job
analysis provides a finding of a high requirement for cognitive ability.
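
A stylized sketch of the job-level logic follows, using fabricated numbers: observed validity coefficients from previously studied jobs are regressed on a job-analysis rating of cognitive demand, and the fitted relation is then used to project a validity estimate for a target job that has only job analysis data.

```python
import numpy as np

# Fabricated data from prior validation studies: for each job, a job-analysis
# rating of cognitive demand (e.g., a PAQ-style dimension score) and the observed
# criterion-related validity of a cognitive ability test.
cognitive_demand = np.array([1.2, 2.0, 2.8, 3.5, 4.1, 4.8])
observed_validity = np.array([0.12, 0.18, 0.22, 0.28, 0.31, 0.38])

# Fit a simple linear relation between job demands and validity coefficients.
slope, intercept = np.polyfit(cognitive_demand, observed_validity, 1)

# Job component validity inference for a new, unstudied job whose job analysis
# shows a cognitive demand rating of 4.4.
target_demand = 4.4
predicted_r = slope * target_demand + intercept
print(f"predicted validity for target job: r ≈ {predicted_r:.2f}")
```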

JCV research findings.


Several different JCV studies have been completed using the PAQ. Gutenberg, Arvey,
Osburn, and Jeanneret (1983) showed that the PAQ dimension dealing with decision
making and information processing was associated with the magnitude of criterion-
related validity, that is, more cognitively demanding jobs tend to show stronger
correlations between cognitive test scores and job performance scores. Jeanneret (1992)
noted that mean test scores tend to be more predictable from PAQ data than are validity
coefficients. Hoffman and colleagues (Hoffman, Holden, & Gale, 2000; Hoffman &
McPhail, 1998) have shown that PAQ dimensions are related to validity coefficients for
the General Aptitude Test Battery (see also Steel & Kammeyer-Mueller, 2009). There is
more limited support for JCV in noncognitive domains. Hoffman (1999) was able to show
differences in means of physical ability test scores across jobs as a function of PAQ
ratings, but Gutenberg et al. (1983) did not show similar results for the prediction of
correlations. The PAQ has also shown a modest ability to predict means and correlations
for personality tests (Rashkovsky & Hoffman, 2005). Additionally, (p. 138) some JCV
research has used the O*NET rather than the PAQ (e.g., LaPolice, Carter,
& Johnson, 2008). Cognitive test score means were predictable from O*NET skill level
ratings and generalized work activity level ratings. Cognitive test score correlations were
not predicted as well as were the means. Furthermore, personality test means and
correlations were poorly predicted from the O*NET ratings.

Validity Transport

The Uniform Guidelines on Employee Selection Procedures (EEOC, 1978) allows for a
validity study originally conducted in one setting to be applied in another target setting
provided that four general conditions are met: (1) the original study must show that the
test is valid, (2) the original job and the target job must involve “substantially the same
major work behaviors,” (3) test fairness must be considered, and (4) important contextual
factors affecting the validity of the test must not differ between the original and target settings (Gibson & Caplinger, 2007).

The key question becomes one of whether the current job is similar enough to the
previously studied jobs so that the available evidence is applicable (the jobs share
“substantially the same major work behaviors”). Unfortunately, there is no professional
standard that indicates the required degree of similarity, nor is there an established procedure, agreed upon throughout the profession, that yields an unequivocal answer to whether evidence from another context is applicable. However, it is still possible to do
such a study; here we focus on the job analysis, which must form the basis for
determining whether the jobs are substantially the same.

Regarding the job analysis, first, the Guidelines require that a job analysis be completed
for both the original and target jobs. Therefore, a transportability study is not feasible
unless the original study included an analysis of the job. Second, it seems very likely that
work-oriented descriptors are needed to describe both jobs in order to establish that the
major work behaviors are similar. Although is it possible to argue that the jobs are
substantially the same based on worker attributes (see, e.g., Gibson & Caplinger, 2007, p.
34), this argument appears risky, particularly in the absence of work descriptions. Third,
some rule or decision criterion must be adopted for establishing whether the jobs are
sufficiently similar.

Gibson and Caplinger (2007) provided an example transportability study that includes
practical suggestions regarding the job analysis. They recommended the development of
a task inventory that contains rating scales for time spent and importance, which they
combined to create a numerical scale of criticality. The task inventory was completed for
both the original and target jobs, and criticality was measured for both jobs. A common
cutoff for criticality was set and applied to both jobs, so that for each job, each task was
counted either as critical or not. The decision rule was a similarity index value of 0.75,
where the similarity index is defined by

where NC is the number of critical tasks common to both jobs, NO is the number of
critical tasks in the original, and NT is the number of critical tasks in the target job (Hoffman, Rashkovsky, & D’Egidio, 2007, p. 96, also report a criterion of 75% overlap in
tasks for transportability). When the numbers of tasks in the original and target jobs are
the same, then the similarity index is the ratio of common to total critical tasks. There are
many other ways of assessing job similarity, of course (see, e.g., Gibson & Caplinger,
2007; Lee & Mendoza, 1981).
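
For concreteness, the check can be scripted. The sketch below assumes the Dice-style form of the index given above (2NC / (NO + NT)); the criticality cutoff, task names, and ratings are invented for illustration, and the exact formula and decision rule should be taken from Gibson and Caplinger (2007).

```python
# Illustrative transportability check. Assumptions (not from the chapter):
# criticality is a numeric composite of time spent and importance, 4.0 is the
# cutoff that flags a task as critical, and the index is 2*NC / (NO + NT).

def critical_tasks(criticality, cutoff=4.0):
    """Return the set of tasks whose criticality meets or exceeds the cutoff."""
    return {task for task, rating in criticality.items() if rating >= cutoff}

def similarity_index(original, target, cutoff=4.0):
    """Overlap of critical tasks between the original-study job and the target job."""
    orig_crit = critical_tasks(original, cutoff)
    targ_crit = critical_tasks(target, cutoff)
    n_common = len(orig_crit & targ_crit)
    denominator = len(orig_crit) + len(targ_crit)
    return 0.0 if denominator == 0 else 2 * n_common / denominator

# Hypothetical criticality composites for the two jobs.
original_job = {"operate pump": 5.2, "log readings": 4.8, "file reports": 2.1}
target_job = {"operate pump": 5.5, "log readings": 3.2, "calibrate gauge": 4.4}

index = similarity_index(original_job, target_job)
print(f"Similarity index = {index:.2f}")       # 0.50 for these toy ratings
print("Meets 0.75 decision rule:", index >= 0.75)
```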

Gibson and Caplinger (2007) also noted three additional important considerations for the
job analysis. First, the task inventory for the target job should allow for the job experts to
add new tasks that are not part of the original job. Second, the degree of specificity of the
task statement will affect the ease with which it is endorsed, which may in turn affect the
apparent similarity of jobs. Third, where differences between the original and target jobs
are discovered, it is important to consider the KSAOs required and whether the choice of
tests would be affected. In short, diligence is required throughout the transportability
study to avoid misleading results.

Validity Generalization

It has been argued that if a meta-analysis has been conducted for a certain job or job
family comparing a test with overall job performance (and assuming a positive result of
the meta-analysis), then it should be sufficient to complete a job analysis that shows that
the job of interest belongs to the job family in which the meta-analysis was completed
(e.g., (p. 139) Pearlman, Schmidt, & Hunter, 1980; Schmidt & Hunter, 1998). The
argument appears to have been that if the current job belongs to the family where the
validity generalization study has shown the test to be related to overall job performance
(the criterion of interest), then no further evidence of job relatedness is required.

Others have taken issue with the job family argument on various grounds. Some have
argued that the shape of the distribution of true effect sizes could result in erroneous
inferences, particularly if random-effects variation remains large (e.g., Kemery,
Mossholder, & Dunlap, 1989; Kisamore, 2008). Others have worried that unless the tests
and performance measures in the current job can be shown to be in some sense
equivalent to those in the meta-analysis, then the applicability of the meta-analysis to the
local study is doubtful and the argument is not very compelling (Brannick & Hall, 2003).

However, here we are concerned with the job analysis that might be used to support
validity generalization and how the information might be used. The idea is to determine
whether the current job is sufficiently similar to a job (or job family) that has been the
subject of a validity generalization study. One approach would be to match the target job
to the meta-analysis using a classification scheme such as the DOT (Pearlman et al., 1980)
or O*NET. In theory, the classification could be based on a variety of descriptors, both
work and worker oriented. The evaluation of the rule could be based at least in part on
classification accuracy. For example, what is the probability that a job with the title
“school psychologist” and tasks including counseling families and consulting teachers on
instruction of difficult students is in fact a job that fits the O*NET designation 19-3031.01
—School Psychologists? Would a probability of 95% be a reasonable standard?
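
One crude way such a classification might be operationalized, offered purely as an illustration rather than as the procedure of Pearlman et al. (1980) or of O*NET itself, is to compare the target job's descriptor profile with mean profiles of candidate job families and assign the job to the most similar family. The descriptor names, ratings, and cosine measure below are assumptions for this sketch; showing that such a rule reaches any particular probability of correct classification would require checking it against jobs whose family membership is already known.

```python
# Illustrative job-to-family matching by descriptor-profile similarity.
# Profiles are hypothetical mean ratings on three descriptors
# (e.g., counseling, instructing, analyzing data).
import math

def cosine(a, b):
    """Cosine similarity between two rating profiles."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

families = {
    "School Psychologists": [4.8, 4.2, 3.5],
    "Clerical Occupations": [1.2, 1.5, 2.8],
}
target_job = [4.6, 4.0, 3.2]   # profile gathered for the job to be classified

scores = {name: cosine(profile, target_job) for name, profile in families.items()}
best_family = max(scores, key=scores.get)
print(scores)
print("Most similar family:", best_family)
```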

McDaniel (2007) noted that in addition to the comparability of the work activities of the
target job and the job(s) in the meta-analysis, it is also necessary to consider the
comparability of the tests and criteria used in the meta-analysis. Clearly it is a greater
stretch when the tests and criteria contemplated for the target job diverge from those
represented in the studies summarized in the meta-analysis. Additional issues in applying
a meta-analysis to a target job mentioned by McDaniel (2007) deal with technical aspects
of the meta-analysis, namely the artifact corrections used (range restriction, reliability),
the representativeness of the studies included in the meta-analysis, and the general
competence in completing the meta-analysis (see, e.g., Sackett, 2003; Borenstein,
Hedges, Higgins, & Rothstein, 2009).


Alternative Strategies—Judgment Based


Content Validity and Certification Testing

Sometimes an employment test essentially samples job content and presents such content
to the applicant in the form of a test. Such a test is said to be content valid to the degree
that the stimulus materials in the test faithfully represent the task contents of the job. It
seems to us, therefore, that the job analysis in support of claims of content validity would
need to carefully define the work, such as duties and tasks. Otherwise, it will be difficult
to show that the test components mirror (or are sampled from) the content of the job.

The notion of content validity has been somewhat controversial because, unlike other
methods of test validation, content validity does not include an empirical check of the link
between test scores and job performance measures. Tests based on job content offer
some advantages, such as bypassing the naming of KSAOs and appearing fair to
applicants. Some jobs lend themselves well to such a practice. Translating spoken
material from one language to another might make a good test for an interpreter, for
example. For other jobs, such as a brain surgeon, a content valid test is probably a poor
choice. In any event, when the job involves risk to the public, careful tests of competence
of many sorts are used to ensure that the person is able to accomplish certain tasks, or at
least that he or she has the requisite knowledge to do so. In such instances, traditional
validation studies are not feasible (we do not want to hire just anyone who wants to be a
brain surgeon to evaluate a test).

In this section, we describe the development of tests for certification purposes because
they represent an approach to developing a test (employment or not) that can be
defended as representing the job domain of interest. Even though the material is covered
under certification testing, such an approach can be used to create content valid tests
because the job's KSAOs are carefully linked to the content of the test. In our view, a
simple sampling of the content of the job is possible, but the method described here is
likely to be easier to defend and to result in a good test.

Tests used in licensing and certification are almost invariably built and defended based on
their content. The logic is that the content of the test can (p. 140) be linked to the content
of the occupation of interest. Most often, the argument rests upon showing that the
knowledge required by the occupation is being tested in the examination used for
selection. Tests that are designed for certification follow the same logic. For such
instances, there is no empirical link between test scores and job performance scores.
Instead the test developer must provide other data and arguments that support the
choice of KSAOs and the evidence for the pass/fail decision that was made.

Although the distinction between licensure and certification testing has become blurred,
there are differences between the two (Downing, Haladyna, & Thomas, 2006). Generally
speaking, licensure is required to perform a job, whereas certification is often voluntary.
Licensure implies minimal competence, whereas certification implies something higher
than minimal competence. Licensing is mandated by regulatory bodies or government
agencies, whereas certifications are offered by credentialing bodies or professional
organizations and are typically voluntary. For example, a dentist must have a license to
practice dentistry. The same dentist may then want to become a board-certified
orthodontist, which would indicate to the public that he or she may practice a specialty
within dentistry at a high level of proficiency.

In both cases, the regulatory body that licenses the dentist and the orthodontic board that
certifies the dentist must provide evidence that the decision to license or certify the
dentist is appropriate. Credentialing organizations can provide evidence of content
validity in a number of ways. One way is to document the relationship between the
assessment used to license or certify the individual and the job in which the individual is
licensed or certified (Kuehn, Stallings, & Holland, 1990). The first step in illustrating the
relationship between the job and the selection instrument to be used for the job is to
conduct a job analysis. Any job analysis method can be used, but the methods most used
by credentialing organizations are task inventories, DACUM, critical incident technique,
functional job analysis, position analysis questionnaire, and the professional practices
model (Knapp & Knapp, 1995; Nelson, Jacobs, & Breer, 1975; Raymond, 2001; Wang,
Schnipke, & Witt, 2005). These methods are preferred over other job analysis methods
because they provide the sort of detail needed for developing a test blueprint for
assessing job knowledge.

The second step in illustrating the relationship between the job and the selection
instrument is to conduct a verification study of the job analysis (in the literature on
certification testing, this step is often referred to as a “validation study,” but we use the
term “verification study” here so as to avoid confusion with traditional labels used in test
validation). The purpose of the verification study is twofold. First, the study is used to
verify that all of the components of the job were described in the job analysis and that no
aspects of the job were missed (Colton, Kane, Kingsbury, & Estes, 1991). This is critical,
as the selection instrument will be based on a test blueprint, and the test blueprint is
based on the job analysis. Second, the study is used to verify that all of the components of
the job analysis are actually required for the job (i.e., the tasks described in the job
analysis are all performed on the job, and the KSAOs required to perform those tasks are
in fact necessary). This is evaluated by asking participants in the verification study to rate
the components of the job analysis using one or more of the rating scales. Note the
similarity to criterion deficiency and contamination.
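
For illustration only, the screening logic of a verification study might look like the following sketch; the "performed by at least half the panel" rule, the importance floor, and the task names are assumptions, not prescriptions from the sources cited.

```python
# Hypothetical verification-study screen: retain a job-analysis component only
# if enough SMEs confirm it is actually part of the job and its mean importance
# meets a floor. The 50% and 3.0 thresholds are illustrative values.
from statistics import mean

def retain_component(performed_flags, importance_ratings,
                     min_performed=0.50, min_importance=3.0):
    """Return True if the component survives the verification screen."""
    pct_performed = sum(performed_flags) / len(performed_flags)
    return pct_performed >= min_performed and mean(importance_ratings) >= min_importance

# Each SME indicates whether the task is performed (1/0) and rates importance (1-5).
verification_data = {
    "interpret lab results": ([1, 1, 1, 0, 1], [4, 5, 4, 3, 4]),
    "order office supplies": ([0, 0, 1, 0, 0], [2, 1, 2, 2, 3]),
}

for task, (performed, importance) in verification_data.items():
    decision = "retain" if retain_component(performed, importance) else "drop"
    print(f"{task}: {decision}")
```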

The third step is to create a test blueprint based on the job analysis. There are a number
of ways to combine ratings of tasks or KSAOs to arrive at an overall test blueprint. Kane,
Kingsbury, Colton, and Estes (1989) recommend using a multiplicative model to combine
task ratings of frequency and importance to determine the weights of the tasks on a test
blueprint (in their example, the tasks identified in the job analysis became the content
areas on the subsequent selection test). Lavely, Berger, Blackman, Bullock, Follman,
Kromrey, and Shibutani (1990) used a factor analysis of importance ratings to determine
the weighting on a test blueprint for a test used to select teachers for certification.
Raymond (2005) recommended using an additive model to combine task ratings of
criticality and frequency, provided that the ratings are on a Likert scale. Spray and Huang
(2000) recommend using the Rasch Rating Scale Model to transform ordinal task ratings
into equal interval ratings and subsequently using the equal interval ratings to obtain the
relative weights for a test blueprint. All of these methods attempt to ensure that the
overall weighting of the content areas on a test blueprint is directly related to the job, as
the blueprint weights come from task ratings and the job tasks are derived from the job
analysis.
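
To make the multiplicative approach concrete, the sketch below (in the spirit of Kane et al., 1989, but with invented rating scales, tasks, and content areas) weights each task by the product of its frequency and importance ratings and sums the normalized products within content areas to obtain blueprint weights.

```python
# Hypothetical blueprint weighting: frequency x importance per task
# (multiplicative model), normalized and rolled up to content areas.
from collections import defaultdict

tasks = [
    # (content area, frequency 1-5, importance 1-5) -- illustrative ratings
    ("patient assessment", 5, 5),
    ("patient assessment", 4, 3),
    ("record keeping",     3, 2),
    ("equipment safety",   2, 5),
]

products = [(area, freq * imp) for area, freq, imp in tasks]
total = sum(weight for _, weight in products)

blueprint = defaultdict(float)
for area, weight in products:
    blueprint[area] += weight / total

for area, share in sorted(blueprint.items(), key=lambda item: -item[1]):
    print(f"{area}: {share:.0%} of the examination")
```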

In addition to the link provided by the test blueprint, arguments in support of test (p. 141)
contents are bolstered by the inclusion of SMEs throughout the job analysis and test
development process. SMEs, who include both job incumbents and supervisors, should
be involved in the initial job analysis, the verification study, the development of the
examination blueprint, and the development of the final selection instrument or
assessment. SMEs can affirm that the job analysis resulted in a complete depiction of the
job; that the most important, critical, and frequent tasks receive greater emphasis than
those that are less important, less critical, or performed less frequently; that the
examination blueprint is based on the verification study; and that the content of the test
items is congruent with the examination blueprint.

Minimum Qualifications

We often turn to work samples as a means to avoid making the inferential leap needed for
the precise specification of the required KSAOs. Precise statements about KSAOs can also
be avoided by directly specifying experience and education as a basis for predictions of
job success. Stated amounts and kinds of education and/or experience are almost
universally used in formulating what are referred to as the minimum qualifications (MQs)
for a job.

MQs are usually defined by the specified amounts and kinds of education and experience
deemed necessary for a person to perform a job adequately. The global nature and
complexity of such indicators in terms of the KSAOs they presumably index render them a
real challenge when attempting to develop and justify them by means of conventional job
analysis [see Tesluk & Jacobs (1998) for an elaborate discussion of what work experience
encompasses, and Levine, Ash, & Levine (2004) for both experience and education]. Thus,
it is not surprising that MQs have been set and continue to be set by intuition, tradition,
trial and error, and expectations of yield in terms of numbers of qualified applicants,
including minorities and females. Such unsystematic approaches to developing MQs may
account in large part for the relatively poor levels of validity found in research on the use
of education and experience in selection (Schmidt & Hunter, 1998).

Perhaps because MQs are rarely challenged under Equal Employment Opportunity laws,
and perhaps because managers may have an exaggerated sense of their capacity to set
MQs, little research has been devoted to the development of job analysis methods that
could facilitate the formulation and validation of MQs (Levine, Maye, Ulm, & Gordon,
1997). Prompted by court cases brought under Title VII of the Civil Rights Act, two
notable attempts have appeared in the literature that partially fill the gap (Buster, Roth,
& Bobko, 2005; Levine et al., 1997).

The approach developed by Levine et al. (1997) relies on the use of evidence from job
content as the basis for setting MQs. The method begins with a job analysis that first
seeks to establish a full inventory of tasks and KSAs. The resulting lists are evaluated and
edited by subject matter experts who rate them using scales that enable the winnowing of
these full lists to only those tasks and KSAs that can be expected to be performed
adequately (tasks) or possessed (KSAs) by barely acceptable employees upon hire. From
this restricted list, human resource specialists conduct careful research, consult with
SMEs, and use their own occupational knowledge to estimate what kinds and amounts of
training, education, and work experience indicate that the applicant possesses the task
proficiencies and/or the KSA levels required for barely acceptable performance. This
process leads to the development of so-called profiles of education and experience, any
one of which would suggest that the applicant may be expected to perform adequately.

In practice, screening of MQs often involves arbitrary substitution of experience for
education, and vice versa, which is not explicitly publicized. The profiles aim to make
these substitutions explicit and to avoid outcomes such as the full substitution of one for
the other when research suggests that certain KSAs can be acquired in only one of these
domains. The profiles are then reviewed by SMEs who rate each of them for clarity and
for whether the amounts and kinds of training, education, and work experience are too
much, too little, or adequate to expect of at least barely acceptable performers. Those
profiles meeting preset criteria on these scales are then rated against each task and KSA,
and those rated as matching sufficient numbers of tasks and/or KSAs are retained.
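
As a hypothetical sketch of this final screen (in the spirit of Levine et al., 1997, but with adequacy coding, thresholds, and profile labels invented for illustration), a profile survives only if enough SMEs judge it adequate for a barely acceptable hire and it is rated as matching enough tasks and KSAs:

```python
# Illustrative MQ profile screen. The 70% adequacy rule and the five-link
# minimum are invented thresholds; real criteria would be set in advance
# by the analysts conducting the study.

def retain_profile(adequacy_votes, n_linked_items, min_adequate=0.70, min_links=5):
    """adequacy_votes: SME judgments of 'adequate', 'too little', or 'too much'."""
    pct_adequate = adequacy_votes.count("adequate") / len(adequacy_votes)
    return pct_adequate >= min_adequate and n_linked_items >= min_links

profiles = {
    "18 months of nonhospital pharmacy experience": (["adequate"] * 8 + ["too little"] * 2, 9),
    "high school diploma only": (["too little"] * 7 + ["adequate"] * 3, 2),
}

for label, (votes, links) in profiles.items():
    decision = "retain" if retain_profile(votes, links) else "reject"
    print(f"{label}: {decision}")
```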

To illustrate the contrast between the old MQs and those developed using the new method,
we cite here the outcomes for one of the jobs analyzed by Levine et al. (1997), Pharmacy
Technician. The original MQs stated the need for “Two years of experience in assisting a
registered pharmacist in the compounding and dispensing of prescriptions.” At the end of
the process six profiles were deemed (p. 142) acceptable. Two of these were: (1) “Eighteen
months of experience assisting a pharmacist in a nonhospital setting. Such duties must
include maintaining patient records, setting up, packaging, and labeling medication
doses, and maintaining inventories of drugs and supplies”; (2) “Completion of a Hospital
Pharmacy Technician program accredited by the American Society of Hospital
Pharmacists.”

Buster et al. (2005) developed a method that shares some elements with the approach of
Levine et al., but differs in significant ways. The approach of Buster et al. (2005) focuses
first on the MQs themselves. Analysts meet with SMEs who are given a list of KSAs (but
not tasks) and are then asked individually to generate potential MQs. Subsequently, they
discuss as a group the MQs offered. A form is provided that specifies various options of
experience and education for SMEs. The selected options are bracketed in an MQ
questionnaire, meaning that options a bit more demanding and a bit less demanding are
included for rating by SMEs. The questionnaire generally includes 10–20 MQ statements, which are
rated on a scale modified from one used by Levine et al. (1997) asking whether an MQ is
suitable for identifying a barely acceptable applicant with options “Not at all, Not enough,
Appropriate, and More than should be expected.” The MQ statements are also rated on
whether each KSA can be acquired by achieving the minimum qualification. The
questionnaire may ask for supplemental information such as whether licensing is
required. The last step calls for the selection of the MQs by the I-O psychologists, who use
a variety of criteria, including the rating data, adverse impact observed for previously used
MQs, and the supplemental information.
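
A minimal sketch of how the rating data might be summarized for that last step follows; the coding of the four response options and the 60 percent decision rule are assumptions made for illustration, not part of the published method.

```python
# Illustrative tally of SME ratings of a bracketed MQ statement on the
# four-option scale quoted in the text. The 60% "Appropriate" rule is invented.
from collections import Counter

def summarize_mq(ratings, appropriate_share=0.60):
    counts = Counter(ratings)
    share = counts["Appropriate"] / len(ratings)
    return counts, share >= appropriate_share

mq_ratings = ["Appropriate", "Appropriate", "Not enough", "Appropriate",
              "More than should be expected", "Appropriate"]

counts, viable = summarize_mq(mq_ratings)
print(dict(counts))
print("Flag as a viable option for I-O review:", viable)
```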

Outcomes included a smaller number of MQ alternatives in the set accepted for use in
selection compared to options found by Levine et al. (1997), reduced adverse impact
shown for one job, and a successful defense of the method in a court hearing. No
reliability data were provided.

Perhaps the most important difference between these approaches is the extent of reliance
on SMEs. Levine et al. (1997) found across numerous jobs that SMEs were unable to
provide useful MQ options. Instead they seemed to rely on traditional statements using
amounts of education and experience as the primary input, and there was relatively little
agreement across SMEs in the quality and content of the MQ statements. In reviewing
the MQs resulting from the method employed by Buster et al., our judgment is that they
reflect a stereotypical conception, raising the question of whether the elaborate method
resulted in a qualitatively different set of MQs than just asking a few SMEs.

For example, an MQ for a Civil Engineer Administrator asked for a high school diploma/
GED plus 16 years of engineering experience. Such an MQ is problematic in several
respects. First, the use of a high school diploma had been rejected as unlawful in the
landmark Griggs v. Duke Power case testing Title VII of the Civil Rights Act. Second, it is
unclear what a high school diploma actually measures in terms of KSAs (contrast this
with completing a hospital tech pharmacy program). Finally, as Levine, et al. (2004)
stated, “Length of experience beyond five years is unlikely to offer substantial
incremental validity” (p. 293). Human resource analysts known to us, based on their
experience using work experience measures for selection, are also against very lengthy
experience requirements, as are we, based on our own extensive use of MQs, because such
requirements may result in indefensible adverse impact, especially against women.

Reliance on SMEs for developing the MQs may also be counterproductive for various
other reasons. First, SMEs who are not psychometrically sophisticated may recommend
overly simplistic and unjustified substitutions of education for experience or vice versa.
For example, substituting experience for education could result in meeting MQs without
any relevant education, conceivably yielding employees deficient in some KSA that may
be acquired only in a formal educational setting. Second, SMEs may at times attempt to
manipulate the process to achieve a hidden agenda, such as “professionalizing” a job by
requiring more demanding MQs, or seeking to raise the pay level for the job by raising
the MQs, regardless of whether KSAs are being measured validly. Third, SMEs are often
unfamiliar with new programs such as internship programs that could provide an
alternate way to qualify, or with the current content of educational programs offered by
schools. Fourth, SMEs are often unfamiliar with job requirements and tasks in similar
jobs within other organizations, especially organizations in different industries or
geographic locales.

The reliability and validity of MQs are difficult to assess. We are unaware of research
indicating the degree to which independent efforts to establish MQs for the same job
result in the same MQs. Levine et al. (1997) showed that judges could reliably determine
whether applicants met given MQs, which might seem obvious, but the nature of (p. 143)
experience is actually sometimes difficult to determine. Both methods suffer from a lack of
evidence on the construct validity of key scales, including the scale used to establish the
linkage between MQs and job analysis items.

As yet there is little empirical research on the results of using these methods for setting
MQs. For example, we do not know whether the use of the MQs developed via one of
these methods results in better cohorts of hires than MQs set in the traditional,
superficial fashion. The adverse impact of using MQs as tests also needs attention (Lange,
2006). Clearly this domain is an important application of job analysis, and the gaps in our
knowledge beg for additional research.

Conclusions
The topic of this chapter is discovering a job's nature, including the tasks and the
knowledge, skills, abilities, and other characteristics believed to provide the underlying
link between employment tests and job performance. The process of discovery is called
job analysis, which may take many different forms, some of which were described as
conventional methods in this chapter (e.g., the task inventory, Functional Job Analysis).
Completing a job analysis study involves making a number of decisions, including which
descriptors to use, which sources of information to use, the amount of detail and context
to include, and whether to identify underlying KSAOs or to assume that tests based on
sampling job content will cover whatever KSAOs are necessary. The purpose or goal of
testing must also be considered (i.e., is this intended to select the best? To screen out
those totally unqualified?).

After defining terms and describing some of the decisions required in a job analysis, we
described some conventional methods of job analysis and provided a taxonomy of KSAOs
along with supporting references for further detail. Because the conventional methods
are described in many different texts, we did not provide a great deal of procedural detail
about them. Instead we emphasized applying the information on KSAOs to the choice or
development of tests and criteria. We stressed that in the conventional criterion-related
validity design, it is important to choose tests and criteria that are saturated with the same
KSAOs on both the predictor and criterion sides in order to have a good chance of finding a
positive result. Finally we turned to alternative validation
strategies that might be used when the conventional criterion-related validity study is not
feasible or is likely to produce information that is inferior to larger scale studies that have
already been completed. One alternative we summarized is labeled synthetic validity,
which encompasses (1) individual level studies based on test score/task performance
relationships for those performing common tasks across jobs; and (2) job-level studies in
which jobs’ KSAO requirements are correlated with mean test scores of incumbents or
validity coefficients for tests of the KSAOs found for each of the jobs. We then briefly
described the role of job analysis for transportability and validity generalization. Another
approach involves judgments of similarity between test content and job specifications.
Two exemplars of this approach, which involve KSAO specification through job analysis
and judgments of linkages between test content and a job's KSAOs, were described—the
development and validation of assessments for licensure and certification and the
formulation and validation of minimum qualifications. An important research topic in
selection surrounds the degree to which MQs assess KSAOs.

PAQ synthetic validity studies suggest that trained job analysts can provide information
about KSAOs that is related to correlations between test scores and job performance,
particularly for cognitive ability tests. There is also some empirical support for
noncognitive tests. However, legal considerations as well as professional opinion (e.g.,
Harvey, 1991) suggest that work activities (tasks, duties) continue to be an important part
of job analysis used to support personnel selection decisions.

This chapter emphasizes supporting test use through discovery and documentation of the
important underlying KSAOs responsible for job success. The chapter is unique in that it
stresses the decisions and judgments required throughout the job analysis and their
relation to test development and use. It also contributes by considering in one place not
only the conventional test validation design, but also the relations between job analysis
and tests set by judgment (minimum qualifications, certification/content testing) and
alternative validation strategies.

References
American Educational Research Association, American Psychological Association, &
National Council on Measurement in Education. (1999). Standards for educational and
psychological testing. Washington, DC: American Educational Research Association.

Barrett, G., & Depinet, R. (1991). Reconsideration of testing for competence (p. 144) rather
than intelligence. American Psychologist, 46, 1012–1023.

Barrick, M. R., & Mount, M. D. (1991). The Big Five personality dimensions and job
performance: A meta-analysis. Personnel Psychology, 44, 1–26.

Bartram, D. (2005). The great eight competencies: A criterion-centric approach to
validation. Journal of Applied Psychology, 90, 1185–1203.

Binning, J. F., & Barrett, G. V. (1989). Validity of personnel decisions: A conceptual
analysis of the inferential and evidential bases. Journal of Applied Psychology, 74, 478–
494.

Borenstein, M., Hedges, L. V., Higgins, J. P. T., & Rothstein, H. R. (2009). Introduction to
meta-analysis. Chichester, UK: John Wiley & Sons.

Borgen, F. H. (1988). Occupational reinforcer patterns. In S. Gael (Ed.), The job analysis
handbook for business, industry, and government (Vol. II, pp. 902–916). New York: John
Wiley & Sons.

Boyatzis, R. E. (1982). The competent manager. New York: John Wiley & Sons.

Brannick, M. T., Brannick, J. P., & Levine, E. L. (1992). Job analysis, personnel selection,
and the ADA. Human Resource Management Review, 2, 171–182.

Brannick, M. T., & Hall. S. M. (2003). Validity generalization from a Bayesian perspective.
In K. Murphy (Ed.), Validity generalization: A critical review (pp. 339–364). Mahwah, NJ:
Lawrence Erlbaum Associates.

Brannick, M. T., Levine, E. L., & Morgeson, F. P. (2007). Job and work analysis: Methods,
research and applications for human resource management. Thousand Oaks, CA: Sage.

Buster, M. A., Roth, P. L., & Bobko, P. (2005). A process for content validation of education
and experienced-based minimum qualifications: An approach resulting in Federal court
approval. Personnel Psychology, 58, 771–799.

Campion, M. A., & Thayer, P. W. (1985). Development and field evaluation of an
interdisciplinary measure of job design. Journal of Applied Psychology, 70, 29–43.

Cascio, W. F. (1991). Applied psychology in personnel management. London: Prentice-Hall.

Cascio, W. F., & Aguinis, H. (2011). Applied Psychology in Human Resource Management
(7th ed.). Boston: Prentice Hall.

Cattell, R. B. (1946). The description and measurement of personality. New York: Harcourt,
Brace & World.

Christal, R. E., & Weissmuller, J. J. (1988). Job-task inventory analysis. In S. Gael (Ed.),
The job analysis handbook for business, industry, and government (Vol. II, pp. 1036–
1050). New York: John Wiley & Sons.

Colton, A., Kane, M. T., Kingsbury, C., & Estes, C. A. (1991). A strategy for examining the
validity of job analysis data. Journal of Educational Measurement 28(4), 283–294.

Digman, J. M. (1990). Personality structure: Emergence of the five-factor model. Annual
Review of Psychology, 41, 417–440.

Downing, S. M., Haladyna, T. M., & Thomas, M. (Eds.) (2006). Handbook of test
development. Mahwah, NJ: Lawrence Erlbaum Associates.

Equal Employment Opportunity Commission. (1978). Uniform guidelines on employee
selection procedures. Federal Register, 43, 38290–38315.

Fine, S. A., & Cronshaw, S. F. (1999). Functional job analysis: A foundation for human
resources management. Mahwah, NJ: Lawrence Erlbaum Associates

Flanagan, J. C. (1954). The critical incident technique. Psychological Bulletin, 51, 327–
358.

Fleishman, E. A., & Reilly, M. E. (1992). Handbook of human abilities: Definitions,
measurements, and job task requirements. Palo Alto, CA: Consulting Psychologists Press.

Gael, S. (1983). Job analysis: A guide to assessing work activities. San Francisco: Jossey-
Bass.

Gatewood, R. D., & Field, H. S. (2001). Human resource selection (5th ed.). Orlando, FL:
Harcourt.

Gibson, W. M., & Caplinger, J. A. (2007). Transportation of validation results. In S. M.
McPhail (Ed.), Alternative validation strategies: Developing new and leveraging existing
validity evidence (pp. 29–81). San Francisco: John Wiley & Sons.

Goldberg, L. R. (1993). The structure of phenotypic personality traits. American
Psychologist, 48, 26–34.

Griggs v. Duke Power Co. (1971). 401 U.S. 424 (1971) 91 S.Ct. 849 Certiorari to the
United States Court of Appeals for the Fourth Circuit No. 124.

Guion, R. M. (1965). Synthetic validity in a small company: A demonstration. Personnel
Psychology, 18, 49–63.

Guion, R. M. (1998). Assessment, measurement, and prediction for personnel decisions.
Mahwah, NJ: Lawrence Erlbaum Associates.

Gutenberg, R. L., Arvey, R. D., Osburn, H. G., & Jeanneret, P. R. (1983). Moderating effects
of decision-making/information processing job dimensions on test validities. Journal of
Applied Psychology, 36, 237–247.

Harvey, R. J. (1991). Job analysis. In M. D. Dunnette & L. M. Hough (Eds.), Handbook of
industrial and organizational psychology (Vol. 2, pp. 71–163). Palo Alto, CA: Consulting
Psychologists Press.

Harvey, R. J., & Wilson, M. A. (2000). Yes Virginia, there is an objective reality in job
analysis. Journal of Organizational Behavior, 21, 829–854.

Hoffman, C. C. (1999). Generalizing physical ability test validity: A case study using test
transportability, validity generalization, and construct-related evidence. Personnel
Psychology, 52, 1019–1041.

Hoffman, C. C., Holden, L. M., & Gale, E. K. (2000). So many jobs, so little “N”: Applying
expanded validation models to support generalization of cognitive test validity. Personnel
Psychology, 53, 955–991.

Hoffman, C. C., & McPhail, S. M. (1998). Exploring options for supporting test use in
situations precluding local validation. Personnel Psychology, 51, 987–1003.

Hoffman, C. C., Rashkovsky, B., & D’Egidio, E. (2007). Job component validity:
Background, current research, and applications. In S. M. McPhail (Ed.), Alternative
validation strategies: Developing new and leveraging existing validity evidence (pp. 82–
121). San Francisco: John Wiley & Sons.

Hogan Assessment Systems. (2000). Job Evaluation Tool manual. Tulsa, OK: Hogan
Assessment Systems.

Hogan, J., Davies, S., & Hogan, R. (2007). Generalizing personality-based validity
evidence. In S. M. McPhail (Ed.), Alternative validation strategies: Developing new and
leveraging existing validity evidence (pp. 181–229). San Francisco: John Wiley & Sons.

Jeanneret, P. R. (1992). Applications of job component/synthetic validity to construct
validity. Human Performance, 5, 81–96.

Johnson, J. W. (2007). Synthetic validity: A technique of use (finally). In S. M. McPhail
(Ed.), Alternative validation (p. 145) strategies: Developing new and leveraging existing
validity evidence (pp. 122–158). San Francisco: John Wiley & Sons.

Johnson, J. W., Carter, G. W., & Tippins, N. T. (2001, April). A synthetic validation approach
to the development of a selection system for multiple job families. In J. W. Johnson & G. W.
Carter (Chairs), Advances in the application of synthetic validity. Symposium conducted
at the 16th Annual Conference of the Society for Industrial and Organizational
Psychology, San Diego, CA.

Kane, M. T., Kingsbury, C., Colton, D., & Estes, C. (1989). Combining data on criticality
and frequency in developing test plans for licensure and certification examinations.
Journal of Educational Measurement, 26(1), 17–27.

Kemery, E. R., Mossholder, K. W., & Dunlap, W. P. (1989). Meta-analysis and moderator
variables: A cautionary note on transportability. Journal of Applied Psychology, 74, 168–
170.

Kisamore, J. L. (2008). Distributional shapes and validity transport: A comparison of lower
bounds. International Journal of Selection and Assessment, 16, 27–29.

Knapp, J., & Knapp, L. (1995). Practice analysis: Building the foundation for validity. In J.
C. Impara (Ed.), Licensure testing: Purposes, procedures, and practices (pp. 93–116).
Lincoln, NE: Buros Institute of Mental Measurements.

Kuehn, P. A., Stallings, W. C., & Holland, C. L. (1990). Court-defined job analysis
requirements for validation of teacher certification tests. Educational Measurement:
Issues and Practice 9(4), 21–24.

Kurz, R., & Bartram, D. (2002). Competency and individual performance: Modeling the
world of work. In I. T. Robertson, M. Callinan, & D. Bartram (Eds.), Organizational
effectiveness: The role of psychology (pp. 227–255). New York: John Wiley & Sons.

Landy, F. L. (1986). Stamp collecting versus science: Validation as hypothesis testing.
American Psychologist, 41, 1183–1192.

Lange, S. (2006). Content validity of minimum qualifications: Does it reduce adverse
impact? Dissertation Abstracts International: Section B: The Sciences and Engineering,
66(11-B), 6322.

LaPolice, C. C., Carter, G. W., & Johnson, J. W. (2008). Linking O*NET descriptors to
occupational literacy requirements using job component validation. Personnel Psychology,
61, 405–441.

Latham, G. P., & Wexley, K. N. (1977). Behavioral observation scales for performance
appraisal purposes. Personnel Psychology, 30, 355–368.

Lavely, C., Berger, N., Blackman, J., Bullock, D., Follman, J., Kromrey, J., & Shibutani, H.
(1990). Factor analysis of importance of teacher initial certification test competency
ratings by practicing Florida teachers. Educational and Psychological Measurement, 50,
161–165.

Lee, J. A., & Mendoza, J. L (1981). A comparison of techniques which test for job
differences. Personnel Psychology, 34, 731–748.

Levine, E. L. (1983). Everything you always wanted to know about job analysis. Tampa,
FL: Mariner.

Levine, E. L., Ash, R. A., Hall, H., & Sistrunk, F. (1983). Evaluation of job analysis
methods by experienced job analysts. Academy of Management Journal, 26, 339–348.

Levine, E. L., Ash, R. A., & Levine, J. D. (2004). Judgmental evaluation of job-related
experience, training, and education for use in human resource staffing. In J. C. Thomas
(Ed.), Comprehensive handbook of psychological assessment, Vol. 4, Industrial and
organizational assessment (pp. 269–296). Hoboken, NJ: John Wiley & Sons.

Levine, E. L., Maye, D. M., Ulm, R. A., & Gordon, T. R. (1997). A methodology for
developing and validating minimum qualifications (MQs). Personnel Psychology, 50, 1009–
1023.

Lopez, F. M. (1988). Threshold traits analysis system. In S. Gael (Ed.), The job analysis
handbook for business, industry, and government (Vol. II, pp. 880–901). New York: John
Wiley & Sons.

McCormick, E. J. (1976). Job and task analysis. In M. D. Dunnette (Ed.), Handbook of
industrial and organizational psychology (pp. 651–697). Chicago: Rand McNally.

McCormick, E. J. (1979). Job analysis: Methods and applications. New York: AMACOM.

McCormick, E. J., DeNisi, A. S., & Shaw, J. B. (1979). Use of the Position Analysis
Questionnaire for establishing the job component validity of tests. Journal of Applied
Psychology, 64, 51–56.

McCormick, E. J., Jeanneret, P. R., & Mecham, R. C. (1972). A study of job characteristics
and job dimensions as based on the position analysis questionnaire (PAQ). Journal of
Applied Psychology, 56, 347–368.

McDaniel, M. A. (2007). Validity generalization as a test validation approach. In S. M.
McPhail (Ed.), Alternative validation strategies: Developing new and leveraging existing
validity evidence (pp. 159–180). San Francisco: John Wiley & Sons.

McPhail, S. M. (2007). Development of validation evidence. In S. M. McPhail (Ed.),
Alternative validation strategies: Developing new and leveraging existing validity
evidence (pp. 1–25). San Francisco: John Wiley & Sons.

Nelson, E. C., Jacobs, A. R., & Breer, P. E. (1975). A study of the validity of the task
inventory method of job analysis. Medical Care, 13(2), 104–113.

Norton, R. E. (1985). DACUM handbook. Columbus, OH: Ohio State University National
Center for Research in Vocational Education.

Pace, V. L., & Brannick, M. T. (2010). How similar are personality scales of the ‘same’
construct? A meta-analytic investigation. Personality and Individual Differences, 49, 669–
676.

Pearlman, K., Schmidt, F. L., & Hunter, J. E. (1980). Validity generalization results for
tests used to predict job proficiency and training criteria in clerical occupations. Journal
of Applied Psychology, 65, 373–406.

Peterson, N. G., Mumford, M. D., Borman, W. C., Jeanneret, P. R., & Fleishman, E. A.
(Eds.). (1999). An occupational information system for the 21st century: The development
of O*NET. Washington, DC: American Psychological Association.

Peterson, N. J., Wise, L. L., Arabian, J., & Hoffman, G. (2001). Synthetic validation and
validity generalization: When empirical validation is not possible. In J. P. Campbell & D. J.
Knapp (Eds.), Exploring the limits of personnel selection and classification (pp. 411–451).
Mahwah, NJ: Lawrence Erlbaum Associates

Ployhart, R. E., Schneider, B., & Schmitt, N. (2006). Staffing organizations: Contemporary
practice and theory (3rd ed.). Mahwah, NJ: Lawrence Erlbaum Associates.

Primoff, E. S. (1959). Empirical validation of the J-coefficient. Personnel Psychology, 12,
413–418.

Primoff, E. S., & Eyde, L. D. (1988). Job element analysis. In S. Gael (Ed.), The job analysis
handbook for business, industry, and government (Vol. II, pp. 807–824). New York: John
Wiley & Sons.

Rashkovsky, B., & Hoffman, C. C. (2005, April). Examining a potential extension of (p. 146)
the JCV model to include personality predictors. Paper presented at the annual meeting of
the Society for Industrial and Organizational Psychology, Los Angeles.

Raymark, P. H., Schmit, M. J., & Guion, R. M. (1997). Identifying potentially useful
personality constructs for employee selection. Personnel Psychology, 50, 723–736.

Raymond, M. R. (2001). Job analysis and the specification of content for licensure and
certification examinations. Applied Measurement in Education 14(4), 369–415.

Raymond, M. R. (2002). A practical guide to practice analysis for credentialing
examinations. Educational Measurement: Issues and Practice, 21, 25–37.

Raymond, M. R. (2005). An NCME instructional module on developing and administering
practice analysis questionnaires. Educational Measurement: Issues and Practice, 24, 29–42.

Raymond, M. R., & Neustel, S. (2006). Determining the content of credentialing
examinations. In S. M. Downing & T. M. Haladyna (Eds.), Handbook of test development
(pp. 181–223). Mahwah, NJ: Lawrence Erlbaum Associates.

Sackett, P. R. (2003). The status of validity generalization research: Key issues in drawing
inferences from cumulative research findings. In K. R. Murphy (Ed.), Validity
generalization: A critical review (pp. 91–114). Mahwah, NJ: Lawrence Erlbaum
Associates.

Sanchez, J. I., & Fraser, S. L. (1994). An empirical procedure to identify job duty-skill
linkages in managerial jobs: A case example. Journal of Business and Psychology, 8, 309–
326.

Sanchez, J. I., & Levine, E. L. (1989). Determining important tasks within jobs: A policy-
capturing approach. Journal of Applied Psychology, 74, 336–342.

Scherbaum, C. A. (2005). Synthetic validity: Past, present and future. Personnel
Psychology, 58, 481–515.

Schmidt, F. L., & Hunter, J. E. (1998). The validity and utility of selection models in
personnel psychology: Practical and theoretical implications of 85 years of research
findings. Psychological Bulletin, 124, 262–274.

Seamster, T. L., Redding, R. E., & Kaempf, G. L. (1997). Applied cognitive task analysis in
aviation. Brookfield, VT: Ashgate.

Shippmann, J. S., Ash, R. A., Battista, M., Carr, L., Eyde, L. D., Hesketh, B., Kehoe, J.,
Pearlman, K., Prien, E. P., & Sanchez, J. I. (2000). The practice of competency modeling.
Personnel Psychology, 53, 703–740.

Smith, P. C., & Kendall, L. M. (1963). Retranslation of expectations: An approach to the
construction of unambiguous anchors for rating scales. Journal of Applied Psychology, 47,
149–155.

Spray, J. A., & Huang, C. (2000). Obtaining test blueprint weights from job analysis
surveys. Journal of Educational Measurement, 27(3), 187–201.

Steel, P., & Kammeyer-Mueller, J. (2009). Using a meta-analytic perspective to enhance
job component validation. Personnel Psychology, 62, 533–552.

Tenopyr, M. L. (1977). Content-construct confusion. Personnel Psychology, 30, 47–54.

Tesluk, P. E., & Jacobs, R. R. (1998). Toward an integrated model of work experience.
Personnel Psychology, 51, 321–355.

Tett, R. P., Guterman, H. A., Bleier, A., & Murphy, P. J. (2000). Development and content
validation of a “hyperdimensional” taxonomy of managerial competence. Human
Performance, 13, 205–251.

Thompson, D. E., & Thompson, T. A. (1982). Court standards for job analysis in test
validation. Personnel Psychology, 35, 865–874.

Voskuijl, O. F., & Evers, A. (2008). Job analysis and competency modeling. In S.
Cartwright & C. L. Cooper (Eds.), The Oxford handbook of personnel psychology (pp. 139–
162). New York: Oxford University Press.

Wang, N., Schnipke, D., & Witt, E. A. (2005). Use of knowledge, skill, and ability
statements in developing licensure and certification examinations. Educational
Measurement: Issues and Practice, 24(1), 15–22.

Wernimont, P. F., & Campbell, J. P. (1968). Signs, samples, and criteria. Journal of Applied
Psychology, 52, 372–376.

Wilk, S. L., Desmarais, L., & Sackett, P. R. (1995). Gravitation to jobs commensurate with
ability: Longitudinal and cross-sectional tests. Journal of Applied Psychology, 80, 79–85.

Michael T. Brannick

Michael T. Brannick, Department of Psychology, University of South Florida, Tampa, FL

Adrienne Cadle

Adrienne Cadle, Department of Educational Measurement and Research, University of South Florida, Tampa, FL

Edward L. Levine

Edward L. Levine, Department of Psychology, University of South Florida, Tampa, FL
