Job analysis is the process of discovering the nature of a job. It typically results in an
understanding of the work content, such as tasks and duties, understanding what people
need to accomplish the job (the knowledge, skills, abilities, and other characteristics),
and some formal product such as a job description or a test blueprint. Because it forms
the foundation of test and criterion development, job analysis is important for personnel
selection. The chapter is divided into four main sections. The first section defines terms
and addresses issues that commonly arise in job analysis. The second section describes
common work-oriented methods of job analysis. The third section presents a taxonomy of
knowledge, skills, abilities, and other characteristics along with worker-oriented methods
of job analysis. The fourth section describes test validation strategies including
conventional test validation, synthetic validation, and judgment-based methods (content
validation and setting minimum qualifications), emphasizing the role of job analysis in
each. The last section is a chapter summary.
Keywords: job analysis, work analysis, content validity, synthetic validity, minimum qualifications
Page 1 of 47
PRINTED FROM OXFORD HANDBOOKS ONLINE (www.oxfordhandbooks.com). © Oxford University Press, 2018. All Rights
Reserved. Under the terms of the licence agreement, an individual user may print out a PDF of a single chapter of a title in
Oxford Handbooks Online for personal use (for details see Privacy Policy and Legal Notice).
In what follows, we first provide definitions of some essential terms used in job analysis.
We then describe some of the most consequential decisions that must be confronted when
completing a job analysis for selection. Next, we describe some of the most useful
conventional methods of work- and worker-oriented job analysis, noting the strengths and
weaknesses of each. Finally, we consider in more detail various test validation strategies
and how job analysis relates to each. In our treatment of job analysis, we have covered
the logic, purpose, and practice of the discovery of knowledge, skills, abilities, and other
characteristics at work. We also have (p. 120) provided links between job analysis and test
use that are organized in a way we believe to be useful to readers from diverse
backgrounds and interests.
There are many ways of organizing the business of job analysis (e.g., Brannick, Levine, &
Morgeson, 2007, use four sets of building blocks: descriptors, methods of data collection,
sources of data, and units of analysis). For this chapter, it will be useful to focus mainly on
two sets of descriptors: work activities and worker attributes. Work activities concern
what the worker does on the job. For example, an auto mechanic replaces a worn tire
with a new one, a professor creates a PowerPoint slideshow for a lecture, a salesperson
demonstrates the operation of a vacuum cleaner, and a doctor examines a patient. Worker
attributes are characteristics possessed by workers that are useful in completing work
activities. For example, our auto mechanic must be physically strong enough to remove
and remount the tire, the professor needs knowledge of computer software to create the
slideshow, the salesperson should be sociable, and the doctor must possess hearing
sufficiently acute to use the stethoscope. For each of these jobs, of course, the worker
needs more than the characteristic just listed. The important distinction here is that work
activities describe what the worker does to accomplish the work, whereas the worker
attributes describe capacities and traits of the worker.
Work activities.
The most central of the work activities from the standpoint of job analysis is the task. The
task is a unit of work with a clear beginning and end that is directed toward the
accomplishment of a goal (e.g., McCormick, 1979). Example tasks for an auto mechanic
might include adjusting brakes or inflating tires; for the professor, a task might involve
writing a multiple-choice examination. Tasks are often grouped into meaningful
collections called duties when the tasks serve a common goal. To continue the auto
mechanic example, a duty might be to tune an engine, which would be composed of a
Worker attributes.
Worker attributes are conventionally described as KSAOs, for knowledge, skills, abilities,
and other characteristics. The definition of these is typically somewhat vague, but we
shall sketch concepts and list examples of each. Knowledge concerns factual, conceptual,
and procedural material, what might be termed declarative and procedural knowledge in
cognitive psychology. Examples include knowledge of what software will accomplish what
function on the computer (e.g., which program will help create a manuscript, analyze
data, or create a movie), historical facts (e.g., Washington was the first President of the
United States), and knowledge of algebra (e.g., what is the distributive property?). Skill is
closely related to procedural knowledge, in that actions are taken of a kind and in
sequences coded in the knowledge bases. Skill is thus often closely allied with
psychomotor functions. Examples of skill include competence in driving a forklift or
playing a flute. Abilities refer to capacities or propensities that can be applied to many
different sorts of knowledge and skill. Examples include verbal, mathematical, and
musical aptitudes. Other characteristics refer to personal dispositions conventionally
thought of as personality or more specialized qualities related to a specific job. Examples
of other characteristics include resistance to monotony, willingness to work in dangerous
or uncomfortable environments, and extroversion.
Job specification.
Some authors reserve the term job analysis for work activities, and use the term job
specification to refer to inferred worker personal characteristics that are required for job
success (e.g., Harvey, 1991; Harvey & Wilson, 2000). Cascio (1991) split job analysis into
job descriptions and job specifications. Here we acknowledge the important distinction
between work- and worker-oriented approaches, but prefer to label the process of
discovering both with the term "job analysis." The essential difference between the two
types of descriptors is that work behaviors tend to be more observable (although some
behaviors, such as making up one's mind, cannot be readily observed; only the result can
be observed).
The KSAOs are critical for personnel selection. The logic of the psychology of personnel
selection is (1) to identify those KSAOs that are important for the performance of a job,
(2) to select those KSAOs that are needed when the new hire begins work, and which are
practical and cost effective to measure, (3) to measure applicants on the KSAOs, and (4)
to use the measurements thus gathered in a systematic way to select the best people.
From a business standpoint, there must be an applicant pool, and those selected must
come to work after being selected. Such practicalities point to important processes
involved in recruiting, hiring, and retaining people in organizations. Those aspects are
not covered in this chapter, which is focused mainly on identifying the KSAOs.
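The four-step logic above can be sketched as a simple scoring pipeline. Everything in the sketch below (the KSAO names, the importance weights, and the applicant scores) is a hypothetical illustration, not a recommended operational system.

```python
# Hypothetical sketch of the four-step selection logic.
# KSAO names, importance weights, and applicant scores are invented.

# Steps 1-2: KSAOs identified by job analysis as important, needed at hire,
# and practical to measure, with judged importance weights summing to 1.
ksao_weights = {
    "mechanical_knowledge": 0.5,
    "manual_dexterity": 0.3,
    "conscientiousness": 0.2,
}

# Step 3: applicants measured on each KSAO (scores on a common 0-100 scale).
applicants = {
    "A": {"mechanical_knowledge": 80, "manual_dexterity": 70, "conscientiousness": 90},
    "B": {"mechanical_knowledge": 95, "manual_dexterity": 60, "conscientiousness": 70},
    "C": {"mechanical_knowledge": 60, "manual_dexterity": 90, "conscientiousness": 85},
}

def composite_score(scores, weights):
    """Step 4: combine the measurements systematically (here, a weighted sum)."""
    return sum(weights[k] * scores[k] for k in weights)

ranked = sorted(applicants,
                key=lambda a: composite_score(applicants[a], ksao_weights),
                reverse=True)
print(ranked)
```

A weighted linear composite is only one way to combine predictors; minimum cutoffs and nonlinear rules embody different assumptions about the ability-performance relation.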
Decisions
Completing a job analysis usually requires many decisions. Unlike buying a book, whose
contents are the same no matter where you buy it, a job analysis is constructed to fit
what you are trying to accomplish. In a sense, job
analysis is more like writing a book than reading one. In addition to discovering the
KSAOs, you will want to document what you did in order to support the choice of KSAOs
and their measures. Some decisions are made rather early in the process and others can
be delayed. In this section, we sketch some of the decisions that need to be confronted.
The only way to know for certain how successful a job applicant will be is to hire that
person, get him or her working, and carefully measure performance against some
standard over a sufficient period of time. Such a practice is impractical (if we want the
best, we must hire them all, evaluate them all, and only then select one), and may be
dangerous (consider such a practice for dentists or airline pilots). Even if we could hire
everyone and judge their subsequent performance (not an easy process by any means),
short-term success does not always mean longer-term success. Therefore, we settle for
safer, more practical, but less sure methods of deciding which person to hire.
Although it is desirable to select for the entire job based on the full set of KSAOs required
for success (Equal Employment Opportunity Commission, 1978, p. 38304), we typically
select for only part of the job, and only some of the KSAOs. For example, a test of
knowledge such as is used for certification in nursing will indicate whether a person
knows a resuscitation algorithm. Passing that test does not mean that the person will be
able to perform a resuscitation properly, however. People may be unable to apply what
they know. However, they cannot be expected to apply knowledge that they do not have,
There may be a large number of other characteristics that are believed to be important
for success on a job. In addition to subject matter expertise, success in teaching or
coaching may require a number of interpersonal qualities that are difficult to define
clearly (e.g., patience, the ability to explain things in multiple ways, empathy). There may
not be well-developed measures of such constructs readily available for use.
Some attributes are clearly relevant to the job, and measures are available, but their use
is questionable because of the selection context. Characteristics such as on-the-job
motivation and attitudes toward other people are difficult to measure during the job
application process because applicants typically attempt to present a favorable
impression. So although we would like to know whether a faculty member will spend time
writing a manuscript rather than surfing the internet, asking the person about what they
plan to do in this respect during the interview is not likely to provide much useful
information. Similarly, asking a person who will be working closely with others whether
they are a “team player” is likely to result in an affirmative answer during the application
regardless of their subsequent behavior.
For all these reasons, the list of KSAOs that are measured and systematically combined to
make selection decisions is typically smaller than the set that would be used if we were
not burdened with practical constraints. This is one reason that we (p. 122) desire
validation studies for selection. We want to be able to show that the subset of KSAOs for
which we have chosen or developed measures is of value for predicting job performance.
If we do a decent job of KSAO measurement, we should expect good results unless (1) the
process of selection is more expensive than the payoff in terms of job performance, (2)
the KSAOs we chose are the trivial ones rather than the important ones, or (3) the subset
of KSAOs we chose is negatively related to those we omitted (here we are assuming that
aspects beyond the focus of this chapter are taken care of, e.g., there are people who
want the job in question).
Because they are attributes and not behaviors, KSAOs are not directly observed. Rather,
they are inferred from behavior. Harvey (1991) described such an inference as a “leap”
and questioned whether KSAOs could be compellingly justified based solely on a job
analysis. For this reason alone, it is tempting to rely on job or task simulations for
selection (see Tenopyr, 1977; Wernimont & Campbell, 1968). For example, suppose that
for the job “welder” we use a welding test. We believe that a welding test will tap
whatever KSAOs are necessary for welding, so that we need not identify the KSAOs,
measure each, and then combine them systematically to select the best candidate. If we
score the test based on the outcome of the task, then we have circumvented the problem
of the inferential leap, at least for the task. Some work samples (assessment centers, for
example) are scored based on underlying KSAOs instead of the task itself, and so do not
When the goal of job analysis is selection, understanding the human requirements of the
job (i.e., the KSAOs) is essential. Regardless of whether the KSAOs are isolated and
measured separately (e.g., with a paper-and-pencil personality test) or implicitly
measured by a work sample (e.g., using a medical simulation to assess physician
competence in the diagnosis of heart diseases), the analysis should result in a description
of the main tasks and/or duties of the job. That is, the main work activities should be
documented even if the goal is to identify worker attributes. The reason for such a
prescription is practical: to defend the use of selection procedures, you must be able to
point to the requirements of the job rather than to generally desirable traits. As the
Supreme Court ruled in Griggs v. Duke Power, “What Congress has commanded is that
any test used must measure the person for the job and not the person in the
abstract” (Griggs v. Duke Power, 1971).
Job Context
Although a work sample or task simulation may appear to contain whatever KSAOs are
necessary for success on the job, the context of the job often requires additional KSAOs
that the task itself does not embody. For example, we have known of several jobs
including welder, distribution center picker, and electrician in which fear of heights
prevented people from doing the job. Some welders work on bridges, ships, boilers, or
other objects where they are essentially suspended several stories up with minimal
safeguards and a mistake could result in a fatal fall (a welder who had been working on a
bridge talked about watching his protective helmet fall toward the water for what seemed
like several minutes before it hit; as he saw the splash, he decided to quit). Many
technical jobs (e.g., computer technician) have heavy interpersonal requirements that
might not be tapped in a work sample test that required debugging a program or
troubleshooting a network connection. Of course, work samples can be designed to
include the crucial contextual components. To do so, however, someone must decide to
include the contextual components, and such a decision would likely be based on the idea
that important KSAOs were tapped by doing so. The insight about the importance of the
KSAO would come from a job analysis.
In many cases, jobs are connected. Information, products, or services from one position
are crucial for the performance of another position. In the process of describing a job,
such connections are often neglected unless they are the central function of the job of
interest. However, to the extent that jobs are (p. 123) interconnected for the achievement
of organizational goals, the selection of the best people for the job may depend upon
KSAOs that come into play at the jobs’ intersection. Additionally, as we move from a
manufacturing economy to a service economy, jobs with apparently similar tasks may be
performed in importantly different ways. For example, sales jobs may emphasize different
sorts of behaviors depending upon the host organization (e.g., methods used in
automobile sales can vary quite a bit depending upon the type of car). People are
sensitive to subtle nuances in interpersonal communication, so that apparently minor
differences in behavior may be quite important for job performance when the job involves
providing individual services to clients (e.g., in medicine, law, or hair care).
Choice of Scales
Many systems of job analysis require the analyst or an incumbent to provide evaluative
ratings of aspects of the job. For example, the job elements method (Primoff & Eyde,
1988) requires incumbents to make ratings such as whether trouble is likely if a new
employee lacks a particular characteristic upon arrival. In the task inventory (e.g.,
Christal & Weissmuller, 1988), the incumbent rates each task on one or more scales such
as frequency of performing, difficulty to learn, consequence of error, and importance to
the job. Although many have argued eloquently that the choice of scale should follow the
intended purpose of the use of job analysis information (Christal & Weissmuller, 1988;
McCormick, 1976; Brannick et al., 2007), legal considerations suggest that some measure
of overall importance should be gathered to bolster arguments that the selection
procedures are based on attributes that are important or essential for job performance.
Christal has argued against directly asking for importance ratings because it is not clear
to the incumbent what aspects of the tasks should be used to respond appropriately. A
task might be important because it is done frequently, or because a mistake on an
infrequent task could have dire consequences, or because the incumbent views the task
as most closely related to the purpose of the job. Christal has argued that it is better to
ask directly for the attribute of interest: if you are interested in the consequences of
error, for example, you should ask “what is the consequence if this task is not performed
correctly?” Others have argued for combining multiple attributes into an index of
importance (e.g., Levine, 1983; Primoff & Eyde, 1988). Sanchez and Levine (1989)
recommended that a composite of task criticality and difficulty to learn should serve as an
index of importance. However, Sanchez and Fraser (1994) found that direct judgments of
overall importance were as reliable as composite indices. It is not entirely clear that a
composite yields a more valid index of importance than directly asking the incumbents for
their opinion. On the other hand, in some cases, composites could reduce the number of
Under the Americans with Disabilities Act, a person cannot be rejected for a job based on
job functions that are not essential, and KSAOs that are chosen for selection must be
shown to be related to the job should their use result in adverse impact to a protected
class of job applicants. Therefore, some means of documenting the importance of the
tasks and KSAOs ultimately used in selection is highly recommended.
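A composite importance index of the kind discussed above (e.g., Sanchez & Levine, 1989) could be computed as follows. The tasks, the 1-5 ratings, and the simple averaging rule are illustrative assumptions, not the published procedure.

```python
# Illustrative composite importance index for tasks: combine criticality
# and difficulty-to-learn ratings (both on hypothetical 1-5 scales).
tasks = {
    "adjust brakes": {"criticality": 5, "difficulty": 3},
    "inflate tires": {"criticality": 2, "difficulty": 1},
    "tune engine":   {"criticality": 4, "difficulty": 4},
}

def composite_importance(ratings):
    # Unweighted mean of the two scales; a real index might weight them.
    return (ratings["criticality"] + ratings["difficulty"]) / 2

# Rank tasks by the composite so the most important can anchor selection
# and the documentation supporting it.
for task in sorted(tasks, key=lambda t: composite_importance(tasks[t]), reverse=True):
    print(task, composite_importance(tasks[task]))
```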
Time
An often neglected aspect of a job's or task's performance is the dimension of time. Tasks
may require speedy performance (as in sports), close attention to detail, or vigilant
attention over long periods in which significant events occur rarely and the rest is
downtime. Jobs may call for rotating or extended shifts, working especially early or late at
certain points, being on call, and working on weekends or holidays. Attributes such as
energy, tolerance for boredom, conscientiousness, and willingness to work requisite
schedules may be critical for success when elements linked to time are critical in a job.
Many of these attributes fall under the Other Characteristics heading, and deserve
careful consideration in the choice of KSAOs to include in the mix used for selection.
Task Detail
For selection, the description of the task content usually need not be as detailed as it
would be for training. If the task content is to be used to infer standard abilities such as
near vision or arm strength, then the tasks need to be specified in sufficient detail only to
support the necessity of the KSAO. Content is still necessary to sort out differences in
KSAOs, though. It matters if someone digs ditches using a shovel or a backhoe because
the KSAOs are different. On the other hand, if the job analysis is (p. 124) to support a
knowledge test such as might be found in certification, then much greater task detail will
be necessary. In developing a “content valid” test, rather than supporting the inference
that the KSAO is necessary (e.g., the job requires skill in operating a backhoe), it is
necessary to map the knowledge domain onto a test (e.g., exactly what must you know to
operate a backhoe safely and efficiently?).
The statistical model describing functional relations between abilities or capacities and
job performance is rarely specified by theory before the job analysis is begun. When data
are subsequently analyzed for test validation, however, there is usually the implicit
assumption of linear relations between one or more tests and a single measure of
performance. Yet the way in which people describe job analysis and selection
suggests that rather different implicit assumptions are being made about the relations
between ability and performance. Furthermore, such implicit assumptions often appear to
be nonlinear. Here we sketch some common practices and implicit assumptions that are
An implicit assumption that is consistent with setting minimum qualifications for selection
is that some KSAOs are necessary up to a point, but additional benefit does not accrue
from higher standing on the KSAO. For example, a task might require copying words or
numbers from a source to a computer program. The task should require the function
“copying” in the functional job analysis typology, and would probably require cognitive
skill in reading. People lacking the appropriate language, perceptual, and motor skills
would struggle with the task or simply fail to do it. At a certain level (typically attained by
elementary school children), people can master the task. Higher skills such as writing
would be of essentially no benefit in completing the task—being smarter, for example, is
of little help in a task requiring copying sequences of random numbers. The implicit
assumption is that the relation between the ability and performance is essentially a step
function at a low level. Something is needed to carry out the task at all; if someone lacks
that necessary something, they cannot do the work. Otherwise, however, more of the
something is not helpful. Sufficient vision is needed to read, for another example, but
after a certain point, better vision does not yield better reading.
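The contrast between the usual linear assumption and the implicit step-function assumption can be made concrete. The threshold and the scales below are arbitrary choices for illustration.

```python
# Two implicit models of the ability-performance relation (arbitrary units).

def linear_performance(ability, slope=1.0):
    # The assumption behind most validation analyses: more ability,
    # proportionally better predicted performance.
    return slope * ability

def step_performance(ability, threshold=30):
    # The assumption behind minimum qualifications: below the threshold
    # the task cannot be done at all; above it, extra ability adds nothing.
    return 0 if ability < threshold else 100

for ability in (10, 30, 90):
    print(ability, linear_performance(ability), step_performance(ability))
```

Under the step model, everything hinges on locating the threshold correctly; under the linear model, every increment of ability is assumed to pay off.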
Competencies are often described in a manner consistent with an implicit step function at
a high level. Boyatzis (1982, p. 21) defined a competency as “an underlying characteristic
of a person, which results in an effective and/or superior performance of a job.” In the job
element method, one of the ratings is for “superior.” This is used to identify elements that
distinguish superior workers from other workers. Primoff and Eyde (1988) noted that
breathing might be needed for a job, but it would not distinguish the superior worker, so
it would not be marked using the “superior” scale. Unlike breathing, competencies are
expected to discriminate among workers at the high end rather than the low end of
performance.
The relation between KSAOs and performance matters in selection for two different
reasons. If a step function is correct, then setting a standard for the KSAO for selection is
critical. Setting the standard too low for minimum qualifications, for example, would
result in hiring people who could not perform the job. The level at which the cutoff should
be set is a matter of judgment, and thus is an additional inferential leap that may be
Another way of thinking about abilities and standards is to consider setting standards
from a decision-making perspective. Essentially, we might ask what the employer is trying
to accomplish by setting standards (other than a mean gain in job performance that could
be predicted using a regression equation). At the low end (minimum qualifications), the
employer may set a relatively low bar in order to cast as wide a net as possible so as to
have as many applicants as possible (in case of a labor shortage), or to lower the cost of
labor (less skilled labor tends to receive lower wages), or to minimize the adverse impact
caused by testing. On the other hand, an employer might set the bar as high as possible
to minimize large losses caused by spectacular mistakes, to achieve a competitive
advantage through star employees’ development of exceptional products and services, or
perhaps to gain a reputation for hiring only the best.
At this point it should be clear that there is considerable judgment required for the
establishment of standards for selection, and there could be many different reasons for
choosing a cutoff point for applicants, not all of which depend directly upon the
applicant's ability to succeed on the job. Reasons external to job performance are not
supported by conventional job analysis procedures.
Choice of Procedures
The discovery of KSAOs may proceed in many ways; it is up to the analyst to select an
approach that best suits the needs and resources of the organization faced with a
selection problem. Brannick et al. (2007) provide more detail about selecting an approach
than we can here. However, in the following two sections, we describe some of the job
analysis procedures that are most widely used for personnel selection. The first section
describes those procedures that are primarily task based (work oriented). The second
section describes those that are primarily worker oriented. The first section describes the
methods individually because they are most easily understood when presented in this
manner. The worker-oriented methods are described in less detail and are organized by
type of attribute. This was done for efficiency—many of the worker-oriented methods
cover the same traits.
Although there are many conventional job analysis procedures, only four methods will be
discussed here: the critical incident technique (CIT), functional job analysis (FJA), the
task inventory, and DACUM (Developing a Curriculum). These four methods were chosen
because they are particularly useful for personnel selection (Knapp & Knapp, 1995;
Raymond, 2001, 2002).
The process of performing the CIT is less formal than that of other job analysis methods
and should be thought of as a set of guidelines rather than a specific structure. The CIT is
performed either by a job analyst interviewing job incumbents and supervisors, or by job
incumbents and supervisors filling out questionnaires developed by job analysts. The
incidents that are obtained during the process should include an overall description of the
event, the effective or ineffective behavior that was displayed during the event, and the
consequences associated with the individual's behavior. The job analyst performing the
CIT interview should be familiar with the CIT process. The interviewer begins by
explaining the purpose of the CIT interview. The job analyst should be careful in his or
her explanation of the process, and should choose terms carefully. For example, it is
sometimes helpful to describe the incidents in terms of “worker behaviors” rather than
“critical incidents,” as there can be (p. 126) a negative connotation with the term “critical
incidents.” The analyst directs the incumbent workers and supervisors to describe the
incidents in terms of the following:
1. the context or setting in which the incident occurred, including the behavior that
led up to the incident,
2. the specific behavior exhibited by the incumbent worker, and
3. the positive or negative consequences that occurred as a result of the behavior.
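The three elements above map naturally onto a simple record for each collected incident. The field names and the example incident are hypothetical, not part of any standard CIT form.

```python
from dataclasses import dataclass

@dataclass
class CriticalIncident:
    # The three elements the analyst asks incumbents and supervisors to describe.
    context: str      # setting in which the incident occurred, including lead-up behavior
    behavior: str     # specific behavior exhibited by the incumbent worker
    consequence: str  # positive or negative result of the behavior
    effective: bool   # whether the behavior was judged effective

incident = CriticalIncident(
    context="Customer disputed a repair bill near closing time",
    behavior="Mechanic calmly walked the customer through each charge",
    consequence="Customer paid the bill and returned for later service",
    effective=True,
)
print(incident.effective)
```

Collected records of this form can then be content analyzed and sorted into general behavioral dimensions.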
Because a typical CIT interview will generate hundreds of critical incidents (Brannick et
al., 2007; Knapp & Knapp, 1995), the next step in the process is to analyze the incidents
and organize them in terms of the worker behaviors described during the process. The
analyst performs a content analysis of the incidents, identifying all of the general
behavioral dimensions discussed during the job analysis. Typically, the incidents can be
grouped into 5 to 12 general behavioral dimensions. Once the behavioral dimensions
have been identified, a small group of subject matter experts (SMEs) sorts the incidents
into those dimensions.
The CIT is especially useful when the focus is on describing or defining a job in terms of
the most “critical” job elements, rather than describing a job in its entirety. As SMEs tend
to describe jobs in terms of the job tasks that are most frequently performed instead of
focusing on job tasks that are most critical, CIT is useful in obtaining critical job tasks
and the associated worker behaviors that may be missed by other, more holistic job
analysis methods. The list of behavioral dimensions and job tasks derived from the CIT
may not provide a complete picture of the job, because most jobs require many worker
behaviors for tasks that are routinely performed but not considered "critical." However, as
previously mentioned, we typically select people for some, not all, KSAOs. CIT is designed
to choose the most important behaviors (and thus, in theory at least, the most important
KSAOs) for selection.
A potential downside to CIT is that it may be highly labor intensive. It may take many
observations and interviews to produce enough incidents to fully describe all of the
“critical” tasks. It is also possible to miss mundane tasks using critical incidents. However, the method quickly surfaces important aspects of performance that may not be observed very often, and so has advantages over a simple listing of tasks. Focusing on the critical
aspects of work is desirable from the standpoint of selection.
FJA begins with the job analyst gathering information about the job in order to determine
the purpose and goal of the job. The job analyst should use multiple sources to gain
Next, the job analyst collects data about the job from the job incumbents. Typically, data
are collected by seating a panel of SMEs or job incumbents and asking them to describe
the tasks that they perform on the job. Although Fine and Cronshaw (1999) argued that
data should be collected during these focus group meetings, data can also be obtained
through observations and interviews of job incumbents in addition to or in place of a
focus group meeting. The job analyst's role is to turn the descriptions provided by the
SMEs into task statements. FJA requires a very specific structure for formulating task
statements. Each task statement should contain the following elements: the action
performed, the object or person on which the action is performed, the purpose or product
of the action, the tools and equipment required to complete the action, and whether the
task is prescribed or is at the discretion of the worker (Raymond, 2001). Once the job
analyst has created the task statements to (p. 127) describe the job, the SMEs then review
and rate the task statements.
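The five required elements of an FJA task statement can be made concrete with a short sketch. The Python below (the class and field names are our own, not part of FJA) stores the elements separately and assembles them into a statement:

```python
from dataclasses import dataclass

@dataclass
class FJATaskStatement:
    """The five elements FJA requires in every task statement (Raymond, 2001)."""
    action: str        # the action performed
    target: str        # the object or person on which the action is performed
    purpose: str       # the purpose or product of the action
    tools: str         # the tools and equipment required
    prescribed: bool   # True if prescribed, False if at the worker's discretion

    def render(self) -> str:
        mode = "as prescribed" if self.prescribed else "at the worker's discretion"
        return (f"{self.action} {self.target} in order to {self.purpose}, "
                f"using {self.tools}, {mode}.")

stmt = FJATaskStatement(
    action="Records", target="the patient's vital signs",
    purpose="document changes in condition",
    tools="a thermometer, blood-pressure cuff, and chart",
    prescribed=True)
print(stmt.render())
```

Keeping the elements in separate fields makes it easy to check that no element of the required structure has been omitted before the SMEs review the statements.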
The task statements created by the job analyst are subsequently evaluated for level of
complexity in terms of functioning with three entities: people, data, and things. In the FJA definitions, people are exactly what we would normally think of as people, but the category also includes animals. Data are numbers, symbols, and other narrative information. Finally,
things refer to tangible objects with which one interacts on the job. In addition to levels of
complexity for data, people, and things, FJA provides worker-oriented descriptors as well.
Other characteristics include language development, mathematics development, and
reasoning development (Brannick et al., 2007; Raymond, 2001). The physical strength
associated with each task may also be evaluated.
Like all job analysis methods, FJA has its strengths and weaknesses. A significant
strength and weakness of FJA is the specific way in which task statements are structured.
The structure provides an extremely clear and concise description of a task—what the
worker does, how it is done, and for what purpose. However, it is not easy to write proper
task statements according to the FJA structure (Fine speculated that as much as 6
months of supervised experience is needed for proficiency). Also, the cost associated with
hiring a job analyst who has an extensive background in FJA may be a deterrent for some
organizations. Another weakness of FJA is that it may be overly complex and detailed for
the purpose of selection (Knapp & Knapp, 1995; Raymond, 2001). FJA does provide task
information at an appropriate level of detail for selection, and it also provides some
information about worker attributes as well.
Like functional job analysis, task inventory analysis begins with a job analyst developing a
list of tasks based on multiple sources of information. Sources of information include
observations and interviews of job incumbents and supervisors (SMEs), small focus
groups with job incumbents and supervisors (SMEs), and any written descriptions of the
job. Also, like FJA, the task statements used in task inventories follow a specific format.
The format for writing a task statement begins with a verb or action, followed by the
object on which the action is being performed. Task statements often include a qualifier that adds information essential to the task; however, task inventories do not require one. Compared to FJA, the task statements in task inventory
analysis are shorter and more succinct. Such tasks tend to be narrower in scope than in
FJA. Often a task inventory will be used to gather information about several related jobs
and to make decisions about whether jobs are sufficiently similar to be grouped together.
For these reasons, there tend to be many more tasks in the task inventory approach than
in functional job analysis. A typical task inventory process will produce between 100 and
250 tasks (Brannick et al., 2007; Raymond, 2002).
The level of specificity with which task statements are developed can be hard to define.
General, overarching task statements should be avoided. Only those tasks with a defined
beginning, middle, and end should be included. An example of a task statement that is too
broad and overarching for a nurse would be Provide Patient Care. Although nurses do
provide patient care, the task statement is too general, and does not have a defined
beginning, middle, and end. On the other hand, task statements that describe discrete
physical movements are overly specific. Thinking again about our nurse, a sample task
may be Review the Physician's Order. The task may further be broken down into picking
up the patient's chart and looking at what the physician has ordered, but these steps are
too specific because they start to describe the physical movements of the nurse. If the resulting task list is much shorter than about 100 tasks, it is probably too general. If, however, it contains many more than 250 tasks, it may be too detailed.
As part of the task inventory process, a survey or questionnaire is developed based on the
tasks identified during the analysis. The survey can be broken into two parts. The first
part of the survey asks the respondents to rate each of the tasks based on one (p. 128) or
more scales. As described earlier, there are many types of scales that could potentially be
used in this analysis, but the typical scales include frequency, importance, difficulty,
criticality, and time spent (Brannick et al., 2007; Nelson, Jacobs, & Breer, 1975; Raymond,
2001). The second part of the survey is the demographic section. It is important that the
people who respond to the survey or questionnaire are representative of those who
currently perform the job or those who would like to perform the job. Ideally, the survey
The last step in the task inventory analysis process is to analyze the survey data. The job
analyst should verify that a representative sample of job incumbents was obtained. If a
subgroup of job incumbents is missing, then the survey should be relaunched with extra
effort to include those people in the survey process. Once a representative sample of job
incumbents has responded to the survey, the task ratings should be analyzed. Typically,
means and standard deviations are calculated. Those tasks that received low ratings on
one or more of the scales should be reviewed further by the job analyst and a group of
SMEs. It is possible that tasks receiving low ratings do not belong in the final job analysis. In addition, tasks with a high standard deviation across raters should also be reviewed. It is possible that job incumbents with
specific demographics perform tasks differently than those with other demographics. For
example, job incumbents who have been performing a job for 20 years may skip over
some tasks that new job incumbents perform. Or those who are new to the job may not
have a good grasp of which tasks are more or less important than others and so there
may be a lot of variability in their responses. Or the task statement may be interpreted in
different ways, particularly if it is worded poorly. For these reasons, all tasks that have
high standard deviations should be further reviewed by a group of SMEs.
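The analysis described above can be sketched in a few lines of Python. The tasks, ratings, scale, and cutoff values below are hypothetical; an analyst would choose thresholds appropriate to the scales actually used:

```python
from statistics import mean, stdev

# Hypothetical importance ratings (1-5 scale) from six incumbents per task
ratings = {
    "Review the physician's order": [5, 5, 4, 5, 5, 4],
    "Restock the supply cart":      [2, 1, 2, 2, 1, 2],  # low mean -> review
    "Calibrate infusion pumps":     [5, 1, 5, 2, 5, 1],  # high SD -> review
}

LOW_MEAN, HIGH_SD = 2.5, 1.5  # illustrative cutoffs only
flagged = {}
for task, vals in ratings.items():
    m, s = mean(vals), stdev(vals)
    if m < LOW_MEAN:
        flagged[task] = "low mean rating"
    elif s > HIGH_SD:
        flagged[task] = "high rating variability"

for task, reason in flagged.items():
    print(f"Refer to SMEs: {task} ({reason})")
```

The high-variability flag corresponds to the situations discussed above: subgroups performing the task differently, inexperienced raters, or a poorly worded statement.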
There are two main limitations of task inventories. First, the KSAOs required to perform
each task are not identified. Job analysts trying to describe jobs that are highly analytical
and less vocational will be at a disadvantage when using task inventory analysis. For
example, it may be very difficult to ask a poet to describe his or her job in terms of the
specific, observable tasks that are performed. The second limitation to using task
inventories is that the rating scales used to evaluate the task statements may be
misinterpreted or ambiguous. If survey participants do not have a clear understanding of
the rating scales then the resulting survey data analysis will be problematic.
There are two main benefits to using task inventories over other job analysis methods.
First, task inventories can be much more efficient in terms of time and cost than other job
analysis methods if there are large numbers of incumbents, particularly when the
incumbents are geographically dispersed. The job analyst can create the initial list of
tasks in a reasonably short period of time, especially considering the simplicity with
which the task statements are structured. Then, the time and cost associated with
administering and analyzing a survey are relatively small. The entire job analysis process
can be completed in a shorter period of time than it might take the same job analyst to
perform the CIT interviews.
The second benefit to using a task inventory analysis over other job analysis methods is
that the results lend themselves to the development of an examination blueprint for
selection. The quantitative task ratings may be easily converted to test weights. Those
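As a simple illustration of converting ratings to weights, the following Python sketch (with invented tasks and ratings) allocates items on a fixed-length examination in proportion to mean importance:

```python
# Hypothetical mean importance ratings from the task survey
mean_importance = {
    "Patient assessment":        4.6,
    "Medication administration": 4.2,
    "Documentation":             3.2,
}

total = sum(mean_importance.values())
test_length = 60  # items on the planned examination

# Each content area receives items in proportion to its mean importance
blueprint = {task: round(test_length * r / total)
             for task, r in mean_importance.items()}
print(blueprint)
```

Because rounding can make the allocated items sum to slightly more or less than the planned test length, a real blueprint would also need a rule for distributing any remainder.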
DACUM.
DACUM is a systematic, group consensus method used to generate task lists associated
with a job. DACUM is an acronym for Developing A Curriculum. Although this method is
widely used in education, it is not as well known in psychology. We describe it here for
two reasons: (1) it is often used for developing certification tests, and thus can be quite
helpful in developing “content valid” tests, and (2) it incorporates SMEs directly into
linking tasks to KSAOs.
DACUM is based on three principles. The first is that incumbents know their own jobs
best. Many job analysis methods use both job incumbents and supervisors (e.g.,
functional job analysis, critical incident technique), but the DACUM process (p. 129) uses
only job incumbents. Second, the best way to define a job is by describing the specific
tasks that are performed on the job. Third, all tasks performed on a job require the use of
knowledge, skills, abilities, and other characteristics that enable successful performance
of the tasks. Unlike other job analysis methods, DACUM clearly documents the
relationship between each task and the underlying KSAOs.
In its most basic form, the DACUM process consists of a workshop or focus group meeting in which a trained facilitator leads 5 to 12 incumbents, also known as subject matter experts (SMEs), followed by some form of review of the job analysis product.
The primary outcome of the workshop is a DACUM chart, which is a detailed graphic
representation of the job. The DACUM chart divides the whole job into duties and divides
duties into tasks. Each task is associated with one or more KSAOs.
The DACUM process begins with the selection of the focus group panel. A working
definition of the job or occupation to be analyzed is created, and that definition is used to
aid in choosing panel members. The panel members should be full-time employees
representative of those who work in the job or occupation. Whenever possible, SMEs
selected to participate in the DACUM process should be effective communicators, team
players, open-minded, demographically representative, and willing to devote their full
commitment to the process (Norton, 1985). SMEs who are not able to participate in
the entire process from start to finish should not be included in the DACUM panel, as
building consensus among all of the panel members is a critical element to the DACUM
process.
Following selection of the DACUM panel, the actual workshop is typically a 2-day group
meeting. The workshop begins with an orientation to the DACUM process and an
icebreaker activity. The facilitator then provides a description of the rest of the process.
Upon completion of the orientation, the facilitator leads the group in the development of
the DACUM chart. The SMEs are asked to describe the overall job during an initial
Once all of the job duties have been identified, each duty is further divided into tasks.
Tasks represent the smallest unit of activity with a meaningful outcome. They are
assignable units of work, and can be observed or measured by another person. Job tasks
have a defined beginning and end and can be performed during a short period of time.
They often result in a product, service, or decision. All tasks have two or more steps associated with them; if the SMEs cannot identify at least two steps for a candidate task, it is probably not really a task but rather a step in another task. Lastly, job tasks are usually meaningful by themselves—they
are not dependent on the duty or on other tasks. Thinking about the previous example,
Bake Dessert, Cook Breakfast, and Make Lunch may all be tasks that fall within the duty
of Preparing Family Meals. Each of these tasks has two or more steps in them (Bake
Dessert may require Preheat the Oven, Obtain the Ingredients, Mix the Ingredients,
Grease Baking Sheet, and Set Oven Timer). And each of the tasks listed can be performed
independently of the other tasks in the overall duty area. Note that the DACUM
definitions appear consistent with those we offered at the beginning of the chapter.
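The duty/task/step hierarchy of a DACUM chart can be represented as nested data. The sketch below uses the chapter's Preparing Family Meals example (the steps for Cook Breakfast and Make Lunch are invented) and checks the rule that every real task must have at least two steps:

```python
# A fragment of a DACUM chart: duties map to tasks, tasks to their steps.
dacum_chart = {
    "Preparing Family Meals": {                      # duty
        "Bake Dessert": ["Preheat the Oven", "Obtain the Ingredients",
                         "Mix the Ingredients", "Grease Baking Sheet",
                         "Set Oven Timer"],
        "Cook Breakfast": ["Plan the menu", "Cook and plate the food"],   # steps invented
        "Make Lunch": ["Assemble sandwiches", "Pack fruit and a drink"],  # steps invented
    }
}

# Rule from the text: a real task has at least two steps; anything with
# fewer is probably a step belonging to some other task.
for duty, tasks in dacum_chart.items():
    for task, steps in tasks.items():
        assert len(steps) >= 2, f"'{task}' looks like a step, not a task"
print("Every task has two or more steps.")
```

A full chart would attach a list of KSAOs (and tools, equipment, supplies, and materials) to each task in the same structure.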
Finally, the associated KSAOs are described for each task. In addition to the knowledge,
skills, abilities, and worker behaviors required for successful performance of the task, a
list of tools, equipment, supplies, and materials is also created for each of the tasks. The
facilitator proceeds through each of the tasks individually, asking the panel what enablers
are required for the successful performance of the task. There should be a direct
relationship between the task and the enablers so that each task has an associated set of
enablers. Such a procedure is intended to document KSAOs that are required for each
task rather than those that are “nice to have” but are not required.
Upon completion of the workshop, the facilitator drafts a DACUM chart and distributes
the draft to a group of stakeholders for additional feedback. Following any corrections to
the draft, the chart is circulated to additional subject matter experts to obtain
quantitative data on importance, time spent, and so forth, that can be used to prepare a
test blueprint (or for other administrative purposes).
Unlike CIT, the DACUM method strives to define all of the duties, tasks, and KSAOs associated with a specific job. Like FJA (Fine & Cronshaw, 1999), DACUM relies upon a
trained facilitator to (p. 130) draw task content from a group of subject matter experts.
Like the task inventory, the tasks tend to be rather specific. Similar to the job element
method (Primoff & Eyde, 1988), but unlike the other methods in this section, DACUM relies on job incumbents to identify the KSAOs underlying task performance.
The other weakness of the DACUM method for selection is that time is spent defining
duties, tasks, and KSAOs that would never be used in the selection context. For example,
to remain a licensed hair stylist, it is necessary to obtain continuing education credits throughout one's career. Because completing continuing education is a required
component of the job, the task of Obtaining Continuing Education Credit would be
identified along with the KSAOs required to perform the task successfully. The task and
the KSAOs associated with it would be included in the job analysis because it is part of
the job, and again, the DACUM process describes all of the job. However, it seems
unlikely we would select hair stylists based on their ability to obtain continuing education
credits as opposed to more immediately applicable KSAOs.
Taxonomy
Although there are many different ways of organizing human requirements at work, a
relatively simple, high-level scheme is SMIRP, for Sensory, Motor, Intellectual, Rewards,
and Personality. Because this is a high-level taxonomy, each of these categories can be
further subdivided, and different authors prefer different ways of organizing things, but
the majority of human attributes in most conventional job analysis methods can be fit into
one of these dimensions. The taxonomy is presented only as an aid to memory, not as a
theory of human ability.
Sensory.
The most straightforward of the sets is human sensory ability, which is typically thought
to contain vision, hearing, touch, taste, and smell. Proprioception, i.e., sensing body
Motor.
Motor requirements involve using the body to achieve the job's goals. Human body
movement varies from relatively skilled to relatively unskilled. Dancing, for example,
requires a great deal of skill, as does playing a guitar. Operating a motor vehicle requires
some skill; operating a mouse to control computer software typically takes little skill. Jobs
may require heavy lifting, standing for long periods, balancing oneself or objects, or
crawling into attics or other tight spaces. Most jobs require the use of the hands (but the
ability to use hands is rarely a criterion for selection).
Provisions of the Americans with Disabilities Act may render these aspects suspect if they
exclude qualified individuals with sensory or motor disabilities. The sensory and motor
specifications used for selection should be associated with essential job tasks, and should
not be easily substituted via (p. 131) alterations in equipment, staffing, or scheduling
(Brannick, Brannick, & Levine, 1992).
Intellectual/Cognitive.
Individual differences in this category have a rich history in psychology. Intellectual
abilities concern information processing, including perception, thinking, and memory.
This category is rather broad, and is further subdivided in different ways by different
authors. One way to organize intellectual traits is to consider whether they refer mainly
to functions or capacities or to contents and specific job knowledge.
Several systems (or at least parts of them) can be thought of as targeting more functional
aspects of the intellect. For example, the Position Analysis Questionnaire (PAQ;
McCormick, Jeanneret, & Mecham, 1972) considers information input and information
transformation. For information input, the PAQ asks whether the job provides numbers,
graphs, dials, printed words, or sounds as information. For information transformation,
the PAQ asks whether the job requires reasoning and problem solving. Fine's functional job analysis considers a hierarchy of data functions to describe the level of intellectual challenge presented by a job. At the lower levels, a job might require
comparing two numbers to see whether they are the same. At a high level, the job might
require the incumbent to create a theory that explains empirical results or to design
research that will answer a question that cannot be answered except by original data
collection and analysis.
Cognitive processes cannot be directly observed, and for higher level cognitive functions,
the observable tasks and behaviors may not be very illuminating. For example, a research
scientist may spend time reading books and journal articles. Although an observer may
infer that the scientist is acquiring information, it is not at all clear what the scientist is
doing with the information so acquired. Methods of cognitive task analysis may be used
to better understand the way in which information is acquired, represented, stored, and
used (see, e.g., Seamster, Redding, & Kaempf, 1997). Cognitive task analysis may be used
to distinguish differences between the novice and the expert in approaching a specific
task. However, cognitive task analysis is designed to discover mental activity at a more
molecular level than the trait approaches described here, and does not possess a
standard list of traits to consider at the outset. Therefore, it is not discussed in greater
detail.
Rewards.
This category refers to the human side of job rewards. That is, it describes the interests,
values, and related aspects of people that make work motivating or intrinsically
satisfying. Here reward means a personal attribute that might be considered a need,
interest, or personal value that a job might satisfy. Several job analysis methods contain
lists of such rewards. The Multimethod Job Design Questionnaire (Campion & Thayer,
1985) contains a 16-item “motivational scale” that includes items such as autonomy,
feedback from the job, and task variety. Borgen (1988) described the Occupational
Reinforcer Pattern, which contains a list of job attributes such as social status and
autonomy. The O*NET descriptors for occupational interests and values include items
such as achievement, creativity, and security. Although descriptors we have labeled as
rewards are generally used for vocational guidance, they may be incorporated into the
selection process through recruiting and through measuring individual differences in an
attempt to assess person–job fit. For example, a job that offers low pay but high job
security may be of special interest to some people.
Personality.
Personality refers to traits that are used to summarize dispositions and typical behaviors,
such as conscientiousness, neuroticism, and extroversion. In addition to theories of
personality such as the Big-Five (Digman, 1990; Goldberg, 1993) and to conventional
tests of personality (e.g., the 16PF; Cattell, 1946), by personality we mean a broad
spectrum of noncognitive attributes including self-esteem, willingness to work odd hours
and shifts, and remaining attributes needed for specific jobs, that is, the O in KSAO. At
least one job analysis method was designed specifically for personality (the Personality-
O*NET
The O*NET is remarkable for its comprehensiveness. The development of the O*NET is
described in Peterson et al. (1999). The O*NET is an excellent source of lists of human
abilities. Its content model is composed of six different sets of descriptors: (1) worker
requirements, (2) experience requirements, (3) worker characteristics, (4) occupational
requirements, (5) occupation-specific requirements, and (6) occupation characteristics.
The first three of these, which are further subdivided into standard lists that may be of
use when conducting job analysis for selection, are described next.
Experience requirements refer to specific types of training and licensure; education, by contrast, refers to broader study that is not intended for a specific occupation. The
O*NET contains six descriptors in this category, including subject area education and
licenses required. In our high-level taxonomy, this category would also fall under intellectual. However, experience and licenses imply competence in particular
tasks, meaning mastery of whatever declarative and procedural skills are needed for task
completion.
Worker characteristics are further subdivided into (1) abilities, (2) occupational values
and interests, and (3) work styles. Examples of abilities in the O*NET include oral
expression, mathematical reasoning, manual dexterity, and night vision. Note that the
O*NET organizes the abilities as capacities, and lists sensory, motor, and intellectual
abilities in the same category. Examples of occupational values and interests include
achievement, responsibility, and security. Occupational values would be considered
Fleishman and Reilly (1992) created a small book that lists a large number of human
abilities along with definitions of each. The abilities are grouped into cognitive (e.g.,
fluency of ideas, number facility), psychomotor (e.g., control precision, multilimb
coordination), physical (e.g., trunk strength, stamina), and sensory/perceptual (e.g., near
vision, sound localization). Note that Fleishman and Reilly (1992) have subdivided our
motor category into psychomotor and physical aspects, so their list may be particularly
useful for jobs with significant physical requirements. Additionally, the listed abilities are
linked to existing measures and test vendors, which is very helpful for the analyst who
has selection in mind.
Lopez (1988) provided a short but comprehensive list of human abilities that can provide
a basis for selection. The 33 listed traits are organized into five areas: physical, mental,
learned, motivational, and social. The first three correspond roughly to our sensory,
motor, and intellectual categories. Examples include strength and vision (physical),
memory and creativity (mental), and numerical computation and craft skill (learned). The
last two categories correspond roughly to our personality characteristics. Examples are
adaptability to (p. 133) change and to repetition (motivational) and personal appearance
and influence (social).
Management Competencies
Because leadership and management are so important to business, the KSAOs required for success in such jobs are of abiding interest and have a long history in psychology. Many
proprietary systems targeting management competencies are currently available. One
system with some empirical support was described by Bartram (2005) as the “Great
Eight,” for the eight high-level dimensions of managerial functioning. Some of the
competencies included in the Great Eight are leading and deciding, supporting and
cooperating, analyzing and interpreting, and adapting and coping. Some of the attributes
are more intellectual (deciding, analyzing, interpreting) and some have a more social and
personality flavor (supporting and cooperating, adapting and coping). The ability to
handle stress and to cope with failure are noteworthy characteristics that may be more
We have not distinguished between competency and KSAO to this point, and, in fact,
competency has been defined in many different ways (Shippmann et al., 2000). It is
interesting that some view competencies as behaviors, but others view them as
capacities. For example, “a competency is not the behavior or performance itself, but the
repertoire of capabilities, activities, processes and responses available that enable a
range of work demands to be met more effectively by some people than by others” (Kurz
& Bartram, 2002, p. 230). On the other hand, “a competency is a future-evaluated work
behavior” (Tett, Guterman, Bleier, & Murphy, 2000, p. 215). A related issue is whether the
competencies refer to capacities of people or to standards of job performance (Voskuijl &
Evers, 2008). Bartram (2005) considered the managerial competencies to be criteria to be
predicted from test scores, but others have regarded competencies as predictors of
performance (Barrett & Depinet, 1991). Of course, behavioral measures may be used as
either predictors or criteria. There is precedent for making little distinction between
ability and performance. Using the job element method (Primoff & Eyde, 1988), for
example, we might speak of the “ability to drive a car.” Such an element might be defined
in terms of a performance test rather than in terms of perceptual and psychomotor skills
along with knowledge of the rules of the road. Doing so has practical application when
work samples are used in selection. However, failing to distinguish between the
performance of a task and the underlying capacities or processes responsible for task
performance is unsatisfying from a theoretical standpoint. Defining the ability in terms of
the performance is circular; an ability so defined cannot serve to explain the
performance. Furthermore, it is a stretch to use an ability defined by a specific task
performance to explain more distal behaviors. Motor skills might serve to explain the
quality of operating many different kinds of vehicles, but the ability to drive a car would
not be expected to explain the quality of operating other vehicles.
We recommend that whenever a job analysis is initiated to support personnel selection, attention be paid to both the work
performed (the tasks and duties) and the worker attributes (worker requirements)
necessary for success on the job. The immediate product of the analysis will be an
understanding of what the worker does and what characteristics are necessary or
desirable in job applicants. The process of the analysis and the resulting understanding
should be detailed and documented in writing as part of the practical and legal
foundation of the subsequent process of personnel selection (Thompson & Thompson,
1982).
The logic of test validation can be expressed as a series of steps: (1) discover the
KSAOs needed for successful job performance through job analysis, (2) find tests of the
KSAOs, (3) measure the workers’ KSAOs using the tests, (4) find measures of the
workers’ job performance, (5) measure the workers’ performance on the job, and (6)
compare the test scores to the job performance scores. On the face of it, we would expect
to see a relation between test scores and job performance scores, provided that the
KSAOs identified in the job analysis are in fact the major determinants of individual
differences in both performance on the tests and performance on the job. Experience has
shown that there is reason to believe that a well-executed validation study will provide
support for the job relatedness of a test. However, experience has also shown that there
are many ways in which the study may fail to support the job relatedness of the test;
empirical support for testing is not always easy to obtain.
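The six steps above can be sketched numerically. The following is a minimal illustration, not part of the chapter: the data and function names are hypothetical, and the comparison in step (6) is made with a simple Pearson correlation (the usual validity coefficient).

```python
# Hypothetical sketch of the criterion-related validation logic:
# test scores (step 3) are compared to job performance scores (steps 5 and 6)
# via a correlation coefficient (the "validity coefficient").
from statistics import mean

def validity_coefficient(test_scores, performance_scores):
    """Pearson correlation between test scores and criterion scores."""
    mx, my = mean(test_scores), mean(performance_scores)
    sxy = sum((x - mx) * (y - my) for x, y in zip(test_scores, performance_scores))
    sxx = sum((x - mx) ** 2 for x in test_scores)
    syy = sum((y - my) ** 2 for y in performance_scores)
    return sxy / (sxx * syy) ** 0.5

# Ten hypothetical incumbents: test scores and supervisor ratings.
tests = [52, 48, 61, 55, 43, 67, 58, 50, 62, 45]
ratings = [3.1, 2.8, 4.0, 3.4, 2.5, 4.2, 3.6, 3.0, 3.9, 2.6]
print(round(validity_coefficient(tests, ratings), 2))
```

In a real study the sample would be far larger, and the correlation would be corrected for range restriction and criterion unreliability before interpretation.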
The logic of test validation suggests that we first discover the KSAOs and then find tests
of them. In practice, we often have prior knowledge of similar jobs, available tests, and
prior test validation studies. One decision that must be confronted is whether to use tests
of KSAOs that have a proven track record or to attempt to develop or buy tests that
appear appropriate but do not have such a record. For example, general cognitive ability
and conscientiousness have shown associations with job performance for a large number
of different jobs (e.g., Barrick & Mount, 1991; Schmidt & Hunter, 1998).
A second decision that must be confronted is whether to buy an existing test or to create
one specifically for the job. There are advantages and disadvantages to each. Test development is time consuming and technically challenging, which favors buying; many test vendors now also provide online test administration, a further consideration when weighing whether to build or buy. There are several major test publishers
that sell tests often used in selection. Many of these can be found online at the Pan
Testing organization, http://www.panpowered.com/index.asp. On the other hand, a
properly trained psychologist should be able to develop a sound test given proper
resources, including time, materials, and participants. In the long run, it may be cheaper
to create a test than to continue to pay to use a commercially available product.
In test validation, we must find measures of the important KSAOs, regardless of whether
we build or buy them. As we mentioned earlier, the business of isolating and labeling the
KSAOs is considerably simplified if we use work samples as tests because we can
essentially dispense with anything other than a summary score, at least for the work
sample. On the other hand, if we decide that some trait such as agreeableness is
important for job performance, then we need to build or buy a test of agreeableness.
Unfortunately, it cannot be assumed that a test can be judged by its label. Different tests
that purport to measure agreeableness may yield rather different scores on the same
individuals (Pace & Brannick, 2010). Thus, careful research about the meaning of test
scores for existing tests must be conducted in order to have much confidence about the
congruence of the KSAO in question and what is being measured by the test. When a test
is being built, the content of the test is more fully under the control of the developer, but
the meaning of the resulting scores may be less clear, particularly for an abstract trait such as agreeableness, because a newly built test lacks the accumulated validation evidence that clarifies what its scores mean.
In theory, the same logic applies to the criterion (the measure of job performance) as to
the predictor (the test). We find one or more measures of job performance that tap the
required KSAOs and measure people on these. At first, our insistence upon considering
the KSAOs may seem silly. If we have a measure of job performance, does it not by
definition embody the KSAOs required for the job? If it has systematic variance that is
also related to the goals of the job, then of course it stands to reason that it must reflect the required KSAOs to some degree.
Of course, in implementing actual criterion measures, we must take into account factors that are irrelevant to the constructs we wish to index or that lie beyond the job holder's control.
For example, the dollar value of goods sold over a given time period in a sales job is
clearly a criterion of interest for a validation study of a sales aptitude test. However, the
variance in the measure, that is, the individual differences in the dollar value of goods
sold for different sales people, may be due primarily to factors such as geographic
location (sales territory), the timing of a company-wide sale, and the shift (time of day) in
which the employee typically works. This is one reason to consider percentage of sales goal reached, or some other measure of sales performance that takes some or all of the extraneous factors into account, rather than raw dollars. Although dollar outcome is
obviously relevant to job performance for sales people, obtaining reliable sales data may
take a surprisingly long period of time because so much of the variance in dollar
outcomes tends to be due to factors outside the sales person's control.
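The adjustment described above amounts to simple arithmetic. A sketch with hypothetical numbers (the function name and figures are ours, not the chapter's):

```python
# Sketch: percentage of sales goal reached as a criterion that removes some
# territory-driven variance from raw dollar sales (all numbers hypothetical).

def pct_of_goal(dollars_sold, territory_goal):
    """Express sales as a percentage of the territory's goal."""
    return 100 * dollars_sold / territory_goal

# Two salespeople with very different territories:
print(round(pct_of_goal(480_000, 500_000)))  # large territory, under goal
print(round(pct_of_goal(110_000, 100_000)))  # small territory, over goal
```

On raw dollars the first person looks far stronger; relative to goal, the second outperforms, which is closer to the individual-difference signal a validation study needs.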
Supervisory ratings of job performance are the most commonly used criterion for test
validation studies. We recommend avoiding existing performance appraisal records as
criteria for test validation studies. Such records are problematic because they typically
document supervisory efforts to keep and promote their people rather than illuminate
individual differences in job performance. It is inevitable that supervisors form personal
ties with their subordinates, and understandable when such relationships influence the
annual evaluations. There are also organizational issues such as the size of a manager's
budget and the manager's ability to move a poorly performing person from one unit to
another that may impact annual evaluations. Finally, our experience has shown that
annual evaluations from performance appraisals rarely show a statistical relation to
applicant test scores.
A rating form should be created. In the form, the duties should be enumerated, with the
tasks listed beneath them, as one might find in a task inventory. For each task, the
supervisor should be asked to rate the subordinate on task performance. A number of
different rating scales may be appropriate, such as behaviorally anchored rating scales
(BARS; Smith & Kendall, 1963) or behavioral observation scales (BOS; Latham & Wexley,
1977). After rating the tasks, the supervisor is asked to rate the subordinate overall on
the duty. The rater proceeds one duty at a time until all the duties for the job are covered, and is then asked to provide a final rating for the ratee on overall job
performance. Such a method will link the performance rating (outcome measure of job
performance) clearly to the tasks or job content. It also has the benefit of drawing the
supervisor's attention to the task content of the job before asking about overall job
performance. By doing so, we hope to reduce the amount of extraneous variance in the
measure.
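One way to represent such a rating form in software is sketched below. The structure and the 1-to-5 scale are our assumptions for illustration; the chapter does not prescribe a format.

```python
# Hypothetical sketch of the duty/task rating form described above:
# tasks are rated first, then an overall rating per duty, then a final
# overall job performance rating. Scale (1-5) is assumed.
from dataclasses import dataclass, field

@dataclass
class DutyRating:
    duty: str
    task_ratings: dict   # task statement -> 1-5 task performance rating
    duty_overall: int    # supervisor's overall rating on the duty

@dataclass
class PerformanceForm:
    ratee: str
    duties: list = field(default_factory=list)
    overall: int = 0     # final overall job performance rating, asked last

form = PerformanceForm(ratee="Incumbent A")
form.duties.append(DutyRating(
    duty="Dispense medications",
    task_ratings={"Label medication doses": 4, "Maintain patient records": 3},
    duty_overall=4,
))
form.overall = 4  # collected only after all duties have been rated

print(len(form.duties), form.overall)
```

Ordering the fields this way mirrors the intended rating sequence: task content first, global judgments last, to anchor the supervisor's attention in the job content.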
If other kinds of performance measures are used, then careful consideration should be
given to their reliability and whether they are likely to be strongly related to the
KSAOs in question. Criterion measures should probably be avoided unless they are
clearly related to the KSAOs captured in the predictors. Both criterion measures and
tests should reflect the KSAOs that determine excellence in job performance. Unless the
criteria and tests are well matched, test validation is a particularly risky business.
Available criteria are usually unreliable and contaminated by factors extraneous to what
the employee actually does. In such a case, even if we were to have a population of
employees with which to work, the association between tests and job performance would
not be strong. When we couple the underlying effect size with the typical sample size in a
validation study, the power to detect an association tends to be low. If the validation study
shows null results, then we have produced evidence that the test is not job related, which
is good ammunition for anyone wishing to attack the use of the test for selection.
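The power problem can be made concrete. The sketch below approximates the power of a correlation test via the Fisher z transformation; the choice of method and the illustrative numbers (true validity .20, n = 60) are our assumptions, not the chapter's.

```python
# Approximate power of a two-tailed test of a correlation coefficient,
# using the Fisher z transformation and a normal approximation.
import math

def power_for_r(rho, n, alpha=0.05):
    """Approximate power to detect a true correlation rho with sample size n."""
    z_rho = 0.5 * math.log((1 + rho) / (1 - rho))  # Fisher z of the true effect
    se = 1 / math.sqrt(n - 3)                      # standard error of sample z
    z_crit = 1.959963984540054                     # two-tailed z for alpha = .05

    def phi(x):  # standard normal cumulative distribution function
        return 0.5 * (1 + math.erf(x / math.sqrt(2)))

    # Probability that the sample z falls beyond either critical value.
    return 1 - phi(z_crit - z_rho / se) + phi(-z_crit - z_rho / se)

# A modest true validity (.20) with a typical local sample (n = 60):
print(round(power_for_r(0.20, 60), 2))  # power well below the conventional .80
```

With numbers like these, a null result is more likely than not even when the test is genuinely job related, which is exactly the risk the paragraph above describes.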
Then we turn to methods that rely on judgments of similarity of test content and job
content. This kind of evidence supporting validity is accepted by three of the primary
authorities on test use, the American Educational Research Association, the American
Psychological Association, and the National Council on Measurement in Education as
promulgated in their Standards for Educational and Psychological Testing (1999).
Obviously, job analysis is fundamental to the comparison of job and test content. This approach to validation is commonly employed, and may represent the only feasible option, in two domains: (1) the development of certification and licensure assessments, the passing of which is considered part of the selection process in numerous professional and technical jobs, and (2) the setting of minimum qualification requirements.
Synthetic validity studies (Guion, 1965) can be arranged into two major groups based on
the study design (Johnson, 2007). In the ordinary design, the individual worker is the unit
of analysis. In the second design, the job is the unit of analysis. The second, job-level
design is commonly labeled “job component validity” or JCV (Hoffman et al., 2007;
Johnson, 2007). The logic of applying synthetic validity is to borrow test validation data
from other jobs and apply them to the target job.
Job-level studies.
Job component validity studies relate aspects of jobs (typically KSAs; much of this work has been based on the PAQ or methods derived from it, e.g., McCormick, DeNisi, & Shaw, 1979) to a job-level outcome. The two outcomes of interest are typically
either (1) mean test scores of job incumbents or (2) criterion-related validity coefficients
from test validation studies. The justification for using mean test scores as an outcome is
the “gravitational hypothesis” (Wilk, Desmarais, & Sackett, 1995), which states that
workers tend to gravitate to jobs that are appropriate to their level of KSAOs. Therefore,
we should expect to see, for example, brighter people on average in more cognitively
demanding jobs and stronger people on average in more physically demanding jobs. Note
that such a between-jobs finding does not directly show that better standing on the test is associated with better performance within any single job.
Validity Transport
The Uniform Guidelines on Employee Selection Procedures (EEOC, 1978) allow for a validity study originally conducted in one setting to be applied in another target setting
provided that four general conditions are met: (1) the original study must show that the
test is valid, (2) the original job and the target job must involve “substantially the same
major work behaviors,” (3) test fairness must be considered, and (4) important contextual
factors affecting the validity of the test must not differ between the original and target
settings (Gibson & Caplinger, 2007).
The key question becomes one of whether the current job is similar enough to the
previously studied jobs so that the available evidence is applicable (the jobs share
“substantially the same major work behaviors”). Unfortunately, there is no professional
standard that indicates the required degree of similarity, nor is there a procedure agreed upon throughout the profession that yields an unequivocal answer to whether evidence from another context is applicable. However, it is still possible to conduct a careful, defensible transportability study.
Regarding the job analysis, first, the Guidelines require that a job analysis be completed
for both the original and target jobs. Therefore, a transportability study is not feasible
unless the original study included an analysis of the job. Second, it seems very likely that
work-oriented descriptors are needed to describe both jobs in order to establish that the
major work behaviors are similar. Although it is possible to argue that the jobs are
substantially the same based on worker attributes (see, e.g., Gibson & Caplinger, 2007, p.
34), this argument appears risky, particularly in the absence of work descriptions. Third,
some rule or decision criterion must be adopted for establishing whether the jobs are
sufficiently similar.
Gibson and Caplinger (2007) provided an example transportability study that includes
practical suggestions regarding the job analysis. They recommended the development of
a task inventory that contains rating scales for time spent and importance, which they
combined to create a numerical scale of criticality. The task inventory was completed for
both the original and target jobs, and criticality was measured for both jobs. A common
cutoff for criticality was set and applied to both jobs, so that for each job, each task was
counted either as critical or not. The decision rule was a similarity index value of 0.75,
where the similarity index is defined by

Similarity = 2 × NC / (NO + NT),

where NC is the number of critical tasks common to both jobs, NO is the number of critical tasks in the original job, and NT is the number of critical tasks in the target job
(Hoffman, Rashkovsky, & D’Egidio, 2007, p. 96 also report a criterion of 75% overlap in
tasks for transportability). When the numbers of tasks in the original and target jobs are
the same, then the similarity index is the ratio of common to total critical tasks. There are
many other ways of assessing job similarity, of course (see, e.g., Gibson & Caplinger,
2007; Lee & Mendoza, 1981).
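The decision rule above can be sketched in code. We assume the similarity index takes the form 2 × NC / (NO + NT), consistent with the text's observation that it reduces to the ratio of common to total critical tasks when the two jobs have equal numbers of critical tasks; the task lists and criticality ratings below are hypothetical.

```python
# Sketch of the transportability decision rule described above.
# Assumed index: 2 * NC / (NO + NT), where NC = common critical tasks,
# NO / NT = critical tasks in the original / target job.

def critical_tasks(inventory, cutoff):
    """Tasks whose criticality rating meets the common cutoff."""
    return {task for task, criticality in inventory.items() if criticality >= cutoff}

def similarity_index(original, target, cutoff):
    nc_o = critical_tasks(original, cutoff)
    nc_t = critical_tasks(target, cutoff)
    common = nc_o & nc_t
    return 2 * len(common) / (len(nc_o) + len(nc_t))

# Hypothetical criticality ratings (e.g., time spent x importance) for two jobs.
original = {"operate register": 9, "stock shelves": 6, "train staff": 2}
target = {"operate register": 8, "stock shelves": 7, "schedule shifts": 5}

s = similarity_index(original, target, cutoff=5)
print(s, s >= 0.75)  # transport the validity evidence only if similarity >= .75
```

Note that the same cutoff is applied to both jobs before computing the index, mirroring the common-cutoff step in the Gibson and Caplinger example.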
Gibson and Caplinger (2007) also noted three additional important considerations for the
job analysis. First, the task inventory for the target job should allow for the job experts to
add new tasks that are not part of the original job. Second, the degree of specificity of the
task statement will affect the ease with which it is endorsed, which may in turn affect the
apparent similarity of jobs. Third, where differences between the original and target jobs
are discovered, it is important to consider the KSAOs required and whether the choice of
tests would be affected. In short, diligence is required throughout the transportability
study to avoid misleading results.
It has been argued that if a meta-analysis has been conducted for a certain job or job
family comparing a test with overall job performance (and assuming a positive result of
the meta-analysis), then it should be sufficient to complete a job analysis that shows that
the job of interest belongs to the job family in which the meta-analysis was completed
(e.g., Pearlman, Schmidt, & Hunter, 1980; Schmidt & Hunter, 1998). The
argument appears to have been that if the current job belongs to the family where the
validity generalization study has shown the test to be related to overall job performance
(the criterion of interest), then no further evidence of job relatedness is required.
Others have taken issue with the job family argument on various grounds. Some have
argued that the shape of the distribution of true effect sizes could result in erroneous
inferences, particularly if random-effects variation remains large (e.g., Kemery,
Mossholder, & Dunlap, 1989; Kisamore, 2008). Others have worried that unless the tests
and performance measures in the current job can be shown to be in some sense
equivalent to those in the meta-analysis, then the applicability of the meta-analysis to the
local study is doubtful and the argument is not very compelling (Brannick & Hall, 2003).
However, here we are concerned with the job analysis that might be used to support
validity generalization and how the information might be used. The idea is to determine
whether the current job is sufficiently similar to a job (or job family) that has been the
subject of a validity generalization study. One approach would be to match the target job
to the meta-analysis using a classification scheme such as the DOT (Pearlman et al., 1980)
or O*NET. In theory, the classification could be based on a variety of descriptors, both
work and worker oriented. The evaluation of the rule could be based at least in part on
classification accuracy. For example, what is the probability that a job with the title
“school psychologist” and tasks including counseling families and consulting teachers on
instruction of difficult students is in fact a job that fits the O*NET designation 19-3031.1 (School Psychologists)? Would a 95% probability be a reasonable standard?
McDaniel (2007) noted that in addition to the comparability of the work activities of the
target job and the job(s) in the meta-analysis, it is also necessary to consider the
comparability of the tests and criteria used in the meta-analysis. Clearly it is a greater
stretch when the tests and criteria contemplated for the target job diverge from those
represented in the studies summarized in the meta-analysis. Additional issues in applying
a meta-analysis to a target job mentioned by McDaniel (2007) deal with technical aspects
of the meta-analysis, namely the artifact corrections used (range restriction, reliability),
the representativeness of the studies included in the meta-analysis, and the general
competence in completing the meta-analysis (see, e.g., Sackett, 2003; Borenstein,
Hedges, Higgins, & Rothstein, 2009).
Sometimes an employment test essentially samples job content and presents such content
to the applicant in the form of a test. Such a test is said to be content valid to the degree
that the stimulus materials in the test faithfully represent the task contents of the job. It
seems to us, therefore, that the job analysis in support of claims of content validity would
need to carefully define the work, such as duties and tasks. Otherwise, it will be difficult
to show that the test components mirror (or are sampled from) the content of the job.
The notion of content validity has been somewhat controversial because, unlike other methods of test validation, it does not involve an empirical check of the link between test scores and job performance measures. Tests based on job content offer
some advantages, such as bypassing the naming of KSAOs and appearing fair to
applicants. Some jobs lend themselves well to such a practice. Translating spoken
material from one language to another might make a good test for an interpreter, for
example. For other jobs, such as that of a brain surgeon, a content valid test is probably a poor
choice. In any event, when the job involves risk to the public, careful tests of competence
of many sorts are used to ensure that the person is able to accomplish certain tasks, or at
least that he or she has the requisite knowledge to do so. In such instances, traditional
validation studies are not feasible (we do not want to hire just anyone who wants to be a
brain surgeon to evaluate a test).
In this section, we describe the development of tests for certification purposes because
they represent an approach to developing a test (employment or not) that can be
defended as representing the job domain of interest. Even though the material is covered
under certification testing, such an approach can be used to create content valid tests
because the job's KSAOs are carefully linked to the content of the test. In our view, a
simple sampling of the content of the job is possible, but the method described here is
likely to be easier to defend and to result in a good test.
Tests used in licensing and certification are almost invariably built and defended based on
their content. The logic is that the content of the test can be linked to the content
of the occupation of interest. Most often, the argument rests upon showing that the
knowledge required by the occupation is being tested in the examination used for
selection. Tests that are designed for certification follow the same logic. In such instances, there is no empirical link between test scores and job performance scores.
Instead the test developer must provide other data and arguments that support the
choice of KSAOs and the evidence for the pass/fail decision that was made.
Although the distinction between licensure and certification testing has become blurred,
there are differences between the two (Downing, Haladyna, & Thomas, 2006). Generally
speaking, licensure is required to perform a job, whereas certification is often voluntary.
Licensure implies minimal competence, whereas certification implies something higher than minimal competence.
In both cases, the regulatory body that licenses the dentist and the orthodontic board that
certifies the dentist must provide evidence that the decision to license or certify the
dentist is appropriate. Credentialing organizations can provide evidence of content
validity in a number of ways. One way is to document the relationship between the
assessment used to license or certify the individual and the job in which the individual is
licensed or certified (Kuehn, Stallings, & Holland, 1990). The first step in illustrating the
relationship between the job and the selection instrument to be used for the job is to
conduct a job analysis. Any job analysis method can be used, but the methods most used
by credentialing organizations are task inventories, DACUM, critical incident technique,
functional job analysis, position analysis questionnaire, and the professional practices
model (Knapp & Knapp, 1995; Nelson, Jacobs, & Breer, 1975; Raymond, 2001; Wang,
Schnipke, & Witt, 2005). These methods are preferred over other job analysis methods
because they provide the sort of detail needed for developing a test blueprint for
assessing job knowledge.
The second step in illustrating the relationship between the job and the selection
instrument is to conduct a verification study of the job analysis (in the literature on
certification testing, this step is often referred to as a “validation study,” but we use the
term “verification study” here so as to avoid confusion with traditional labels used in test
validation). The purpose of the verification study is twofold. First, the study is used to
verify that all of the components of the job were described in the job analysis and that no
aspects of the job were missed (Colton, Kane, Kingsbury, & Estes, 1991). This is critical,
as the selection instrument will be based on a test blueprint, and the test blueprint is
based on the job analysis. Second, the study is used to verify that all of the components of
the job analysis are actually required for the job (i.e., the tasks described in the job
analysis are all performed on the job, and the KSAOs required to perform those tasks are
in fact necessary). This is evaluated by asking participants in the verification study to rate
the components of the job analysis using one or more rating scales. Note the similarity to the concepts of criterion deficiency and contamination.
The third step is to create a test blueprint based on the job analysis. There are a number
of ways to combine ratings of tasks or KSAOs to arrive at an overall test blueprint. Kane,
Kingsbury, Colton, and Estes (1989) recommend using a multiplicative model to combine
task ratings of frequency and importance to determine the weights of the tasks on a test
blueprint (in their example, the tasks identified in the job analysis became the content
areas on the subsequent selection test). Lavely, Berger, Blackman, Bullock, Follman,
Kromrey, and Shibutani (1990) used a factor analysis of importance ratings to determine
the weighting on a test blueprint for a test used to select teachers for certification.
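A multiplicative model of the kind Kane, Kingsbury, Colton, and Estes (1989) recommend can be sketched as follows; the rating scales, the task list, and the normalization to proportions are our assumptions for illustration.

```python
# Sketch of a multiplicative model for test blueprint weights:
# weight is proportional to frequency x importance, normalized to sum to 1.
# Rating scales (1-5) and tasks are hypothetical.

def blueprint_weights(ratings):
    """ratings: task -> (frequency, importance); returns task -> blueprint weight."""
    products = {task: f * i for task, (f, i) in ratings.items()}
    total = sum(products.values())
    return {task: p / total for task, p in products.items()}

ratings = {
    "Compound prescriptions": (4, 5),   # frequent and highly important
    "Maintain drug inventory": (5, 3),
    "Counsel patients": (2, 4),
}
weights = blueprint_weights(ratings)
for task, w in weights.items():
    print(task, round(w, 2))
```

The resulting proportions would then dictate how many items on the examination are allocated to each content area.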
In addition to the link provided by the test blueprint, arguments in support of test contents are bolstered by the inclusion of SMEs throughout the job analysis and test
development process. SMEs, which include both job incumbents and supervisors, should
be involved in the initial job analysis, the verification study, the development of the
examination blueprint, and the development of the final selection instrument or
assessment. SMEs can affirm that the job analysis resulted in a complete depiction of the job; that the most important, critical, and frequent tasks receive greater emphasis than those that are less important, less critical, or performed less frequently; that the examination blueprint is based on the verification study; and that the content of the test items is congruent with the examination blueprint.
Minimum Qualifications
We often turn to work samples as a means to avoid making the inferential leap needed for
the precise specification of the required KSAOs. Precise statements about KSAOs can also
be avoided by directly specifying experience and education as a basis for predictions of
job success. Stated amounts and kinds of education and/or experience are almost
universally used in formulating what are referred to as the minimum qualifications (MQs)
for a job.
MQs are usually defined by the specified amounts and kinds of education and experience
deemed necessary for a person to perform a job adequately. The global nature and
complexity of such indicators in terms of the KSAOs they presumably index render them a
real challenge when attempting to develop and justify them by means of conventional job analysis (see Tesluk & Jacobs, 1998, for an elaborate discussion of what work experience encompasses, and Levine, Ash, & Levine, 2004, for both experience and education). Thus,
it is not surprising that MQs have been set and continue to be set by intuition, tradition,
trial and error, and expectations of yield in terms of numbers of qualified applicants,
including minorities and females. Such unsystematic approaches to developing MQs may
account in large part for the relatively poor levels of validity found in research on the use
of education and experience in selection (Schmidt & Hunter, 1998).
Perhaps because MQs are rarely challenged under Equal Employment Opportunity laws,
and perhaps because managers may have an exaggerated sense of their capacity to set
MQs, little research has been devoted to the development of job analysis methods that
could facilitate the formulation and validation of MQs (Levine, Maye, Ulm, & Gordon, 1997).
The approach developed by Levine et al. (1997) relies on the use of evidence from job
content as the basis for setting MQs. The method begins with a job analysis that first
seeks to establish a full inventory of tasks and KSAs. The resulting lists are evaluated and
edited by subject matter experts who rate them using scales that enable the winnowing of
these full lists to only those tasks and KSAs that can be expected to be performed
adequately (tasks) or possessed (KSAs) by barely acceptable employees upon hire. From
this restricted list, human resource specialists conduct careful research, consult with
SMEs, and use their own occupational knowledge to estimate what kinds and amounts of
training, education, and work experience indicate that the applicant possessed the task
proficiencies and/or the KSA levels required for barely acceptable performance. This
process leads to the development of so-called profiles of education and experience, any
one of which would suggest that the applicant may be expected to perform adequately.
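The winnowing step described above lends itself to a brief sketch. What follows is an illustrative sketch only, not the instrument of Levine et al. (1997): the rating values, the 3.0 cutoff, and the task names are hypothetical stand-ins for SME judgments about what a barely acceptable employee must handle at hire.

```python
# Sketch of the winnowing step: keep only items (tasks or KSAs) whose
# mean SME rating indicates a barely acceptable new hire must already
# perform or possess them. The scale and cutoff are hypothetical.

def winnow(items, ratings, cutoff=3.0):
    """Return items whose mean SME rating meets or exceeds the cutoff."""
    kept = []
    for item in items:
        scores = ratings[item]
        if sum(scores) / len(scores) >= cutoff:
            kept.append(item)
    return kept

tasks = ["label medication doses", "maintain drug inventory", "compound IV solutions"]
sme_ratings = {
    "label medication doses": [4, 5, 4],   # needed at hire
    "maintain drug inventory": [3, 4, 3],  # needed at hire
    "compound IV solutions": [1, 2, 2],    # trainable after hire; winnowed out
}
print(winnow(tasks, sme_ratings))
# -> ['label medication doses', 'maintain drug inventory']
```

From the restricted list that survives winnowing, analysts would then work backward to the kinds and amounts of education, training, and experience that signal those proficiencies.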
To illustrate the contrast between the old and new MQs developed using the new method,
we cite here the outcomes for one of the jobs analyzed by Levine et al. (1997), Pharmacy
Technician. The original MQs stated the need for “Two years of experience in assisting a
registered pharmacist in the compounding and dispensing of prescriptions.” At the end of
the process six profiles were deemed (p. 142) acceptable. Two of these were: (1) “Eighteen
months of experience assisting a pharmacist in a nonhospital setting. Such duties must
include maintaining patient records, setting up, packaging, and labeling medication
doses, and maintaining inventories of drugs and supplies”; (2) “Completion of a Hospital
Pharmacy Technician program accredited by the American Society of Hospital
Pharmacists.”
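The decision rule embedded in such profiles — an applicant qualifies if any single profile is fully satisfied — can be sketched as follows. The field names and profile encoding are hypothetical, loosely patterned on the Pharmacy Technician profiles quoted above.

```python
# Hypothetical encoding of education/experience profiles. An applicant
# meets the MQ if ANY single profile is fully satisfied (a disjunction
# of conjunctions of requirements).

def meets_profile(applicant, profile):
    """True if the applicant satisfies every requirement in one profile.
    Numeric requirements are minimums; other values must match exactly."""
    for key, required in profile.items():
        have = applicant.get(key)
        if isinstance(required, (int, float)) and not isinstance(required, bool):
            if have is None or have < required:
                return False
        elif have != required:
            return False
    return True

def meets_mq(applicant, profiles):
    return any(meets_profile(applicant, p) for p in profiles)

profiles = [
    {"months_assisting_pharmacist": 18, "setting": "nonhospital"},
    {"completed_accredited_tech_program": True},
]
applicant = {"months_assisting_pharmacist": 24, "setting": "nonhospital"}
print(meets_mq(applicant, profiles))   # -> True
```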
Buster et al. (2005) developed a method that shares some elements with the approach of
Levine et al., but differs in significant ways. The approach of Buster et al. (2005) focuses
first on the MQs themselves. Analysts meet with SMEs who are given a list of KSAs (but
not tasks) and are then asked individually to generate potential MQs. Subsequently, they
discuss as a group the MQs offered. A form is provided that specifies various options of
experience and education for SMEs. The selected options are bracketed in an MQ questionnaire, meaning that options slightly more demanding and slightly less demanding than those selected are also included.
Outcomes included a smaller set of MQ alternatives accepted for use in selection than Levine et al. (1997) found, reduced adverse impact for one job, and a successful defense of the method in a court hearing. No reliability data were provided.
Perhaps the most important difference between these approaches is the extent of reliance
on SMEs. Levine et al. (1997) found across numerous jobs that SMEs were unable to
provide useful MQ options. Instead they seemed to rely on traditional statements using
amounts of education and experience as the primary input, and there was relatively little
agreement across SMEs in the quality and content of the MQ statements. In reviewing the MQs resulting from the method employed by Buster et al., we judge that they reflect a stereotypical conception, raising the question of whether the elaborate method produced a qualitatively different set of MQs than simply asking a few SMEs would have.
For example, an MQ for a Civil Engineer Administrator asked for a high school diploma/
GED plus 16 years of engineering experience. Such an MQ is problematic in several
respects. First, the use of a high school diploma had been rejected as unlawful in the
landmark Griggs v. Duke Power case testing Title VII of the Civil Rights Act. Second, it is
unclear what a high school diploma actually measures in terms of KSAs (contrast this
with completing a hospital pharmacy technician program). Finally, as Levine et al. (2004) stated, “Length of experience beyond five years is unlikely to offer substantial incremental validity” (p. 293). Human resource analysts known to us, drawing on their experience with work experience measures in selection, as well as our own extensive use of MQs, also argue against very lengthy experience requirements because they may result in indefensible adverse impact, especially against women.
Reliance on SMEs for developing the MQs may also be counterproductive for various
other reasons. First, SMEs who are not psychometrically sophisticated may recommend
overly simplistic and unjustified substitutions of education for experience or vice versa.
For example, substituting experience for education could result in meeting MQs without
any relevant education, conceivably yielding employees deficient in some KSA that may
be acquired only in a formal educational setting. Second, SMEs may at times attempt to
manipulate the process to achieve a hidden agenda, such as “professionalizing” a job by
requiring more demanding MQs, or seeking to raise the pay level for the job by raising
the MQs, regardless of whether KSAs are being measured validly. Third, SMEs are often
The reliability and validity of MQs are difficult to assess. We are unaware of research
indicating the degree to which independent efforts to establish MQs for the same job
result in the same MQs. Levine et al. (1997) showed that judges could reliably determine
whether applicants met given MQs, which might seem obvious, but the nature of (p. 143)
experience is actually sometimes difficult to determine. Both methods suffer from the
problem of lack of evidence on the construct validity of key scales, including the scale
used to establish the linkage between MQs and job analysis items.
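Agreement among judges on whether applicants meet a given MQ can be quantified with a chance-corrected index such as Cohen's kappa. This is a generic illustration with invented decisions, not the analysis Levine et al. (1997) actually reported.

```python
# Cohen's kappa for two judges' meets/does-not-meet MQ decisions
# (1 = applicant meets the MQ, 0 = does not). Data are invented.

def cohens_kappa(a, b):
    """Chance-corrected agreement between two raters' categorical calls."""
    assert len(a) == len(b) and len(a) > 0
    n = len(a)
    p_obs = sum(x == y for x, y in zip(a, b)) / n          # observed agreement
    labels = set(a) | set(b)
    p_exp = sum((a.count(lab) / n) * (b.count(lab) / n)   # chance agreement
                for lab in labels)
    return (p_obs - p_exp) / (1 - p_exp)

judge1 = [1, 1, 0, 1, 0, 0, 1, 1]
judge2 = [1, 1, 0, 1, 0, 1, 1, 1]
print(round(cohens_kappa(judge1, judge2), 2))   # -> 0.71
```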
As yet there is little empirical research on the results of using these methods for setting
MQs. For example, we do not know whether the use of the MQs developed via one of
these methods results in better cohorts of hires than MQs set in the traditional,
superficial fashion. The adverse impact of using MQs as tests also needs attention (Lange,
2006). Clearly this domain is an important application of job analysis, and the gaps in our knowledge call for additional research.
Conclusions
The topic of this chapter is discovering a job's nature, including the tasks and the
knowledge, skills, abilities, and other characteristics believed to provide the underlying
link between employment tests and job performance. The process of discovery is called
job analysis, which may take many different forms, some of which were described as
conventional methods in this chapter (e.g., the task inventory, Functional Job Analysis).
Completing a job analysis study involves making a number of decisions, including which
descriptors to use, which sources of information to use, the amount of detail and context
to include, and whether to identify underlying KSAOs or to assume that tests based on
sampling job content will cover whatever KSAOs are necessary. The purpose or goal of
testing must also be considered (i.e., is this intended to select the best? To screen out
those totally unqualified?).
After defining terms and describing some of the decisions required in a job analysis, we
described some conventional methods of job analysis and provided a taxonomy of KSAOs
along with supporting references for further detail. Because the conventional methods
are described in many different texts, we did not provide a great deal of procedural detail
about them. Instead we emphasized applying the information on KSAOs to the choice or
development of tests and criteria. We stressed that in the conventional criterion-related validity design, it is important to choose tests and criteria saturated with the same KSAOs, so that predictor and criterion are matched and there is a good chance of finding a positive result. Finally, we turned to alternative validation strategies that might be used when the conventional criterion-related validity study is not feasible.
PAQ synthetic validity studies suggest that trained job analysts can provide information
about KSAOs that is related to correlations between test scores and job performance,
particularly for cognitive ability tests. There is also some empirical support for
noncognitive tests. However, legal considerations as well as professional opinion (e.g.,
Harvey, 1991) suggest that work activities (tasks, duties) continue to be an important part
of job analysis used to support personnel selection decisions.
This chapter emphasizes supporting test use through discovery and documentation of the
important underlying KSAOs responsible for job success. The chapter is unique in that it
stresses the decisions and judgments required throughout the job analysis and their
relation to test development and use. It also contributes by considering in one place not
only the conventional test validation design, but also the relations between job analysis
and tests set by judgment (minimum qualifications, certification/content testing) and
alternative validation strategies.
References
American Educational Research Association, American Psychological Association, &
National Council on Measurement in Education. (1999). Standards for educational and
psychological testing. Washington, DC: American Educational Research Association.
Barrick, M. R., & Mount, M. K. (1991). The Big Five personality dimensions and job
performance: A meta-analysis. Personnel Psychology, 44, 1–26.
Borenstein, M., Hedges, L. V., Higgins, J. P. T., & Rothstein, H. R. (2009). Introduction to
meta-analysis. Chichester, UK: John Wiley & Sons.
Borgen, F. H. (1988). Occupational reinforcer patterns. In S. Gael (Ed.), The job analysis
handbook for business, industry, and government (Vol. II, pp. 902–916). New York: John
Wiley & Sons.
Boyatzis, R. E. (1982). The competent manager. New York: John Wiley & Sons.
Brannick, M. T., Brannick, J. P., & Levine, E. L. (1992). Job analysis, personnel selection,
and the ADA. Human Resource Management Review, 2, 171–182.
Brannick, M. T., & Hall, S. M. (2003). Validity generalization from a Bayesian perspective.
In K. Murphy (Ed.), Validity generalization: A critical review (pp. 339–364). Mahwah, NJ:
Lawrence Erlbaum Associates.
Brannick, M. T., Levine, E. L., & Morgeson, F. P. (2007). Job and work analysis: Methods,
research and applications for human resource management. Thousand Oaks, CA: Sage.
Buster, M. A., Roth, P. L., & Bobko, P. (2005). A process for content validation of education
and experienced-based minimum qualifications: An approach resulting in Federal court
approval. Personnel Psychology, 58, 771–799.
Cascio, W. F., & Aguinis, H. (2011). Applied psychology in human resource management (7th ed.). Boston: Prentice Hall.
Christal, R. E., & Weissmuller, J. J. (1988). Job-task inventory analysis. In S. Gael (Ed.),
The job analysis handbook for business, industry, and government (Vol. II, pp. 1036–
1050). New York: John Wiley & Sons.
Colton, A., Kane, M. T., Kingsbury, C., & Estes, C. A. (1991). A strategy for examining the
validity of job analysis data. Journal of Educational Measurement, 28(4), 283–294.
Fine, S. A., & Cronshaw, S. F. (1999). Functional job analysis: A foundation for human
resources management. Mahwah, NJ: Lawrence Erlbaum Associates.
Flanagan, J. C. (1954). The critical incident technique. Psychological Bulletin, 51, 327–
358.
Gael, S. (1983). Job analysis: A guide to assessing work activities. San Francisco: Jossey-
Bass.
Gatewood, R. D., & Feild, H. S. (2001). Human resource selection (5th ed.). Orlando, FL:
Harcourt.
Griggs v. Duke Power Co. (1971). 401 U.S. 424 (1971) 91 S.Ct. 849 Certiorari to the
United States Court of Appeals for the Fourth Circuit No. 124.
Gutenberg, R. L., Arvey, R. D., Osburn, H. G., & Jeanneret, P. R. (1983). Moderating effects of decision-making/information processing job dimensions on test validities. Journal of Applied Psychology, 68, 602–608.
Harvey, R. J., & Wilson, M. A. (2000). Yes Virginia, there is an objective reality in job
analysis. Journal of Organizational Behavior, 21, 829–854.
Hoffman, C. C., Holden, L. M., & Gale, E. K. (2000). So many jobs, so little “N”: Applying
expanded validation models to support generalization of cognitive test validity. Personnel
Psychology, 53, 955–991.
Hoffman, C. C., & McPhail, S. M. (1998). Exploring options for supporting test use in
situations precluding local validation. Personnel Psychology, 51, 987–1003.
Hoffman, C. C., Rashkovsky, B., & D’Egidio, E. (2007). Job component validity:
Background, current research, and applications. In S. M. McPhail (Ed.), Alternative
validation strategies: Developing new and leveraging existing validity evidence (pp. 82–
121). San Francisco: John Wiley & Sons.
Hogan Assessment Systems. (2000). Job Evaluation Tool manual. Tulsa, OK: Hogan
Assessment Systems.
Hogan, J., Davies, S., & Hogan, R. (2007). Generalizing personality-based validity
evidence. In S. M. McPhail (Ed.), Alternative validation strategies: Developing new and
leveraging existing validity evidence (pp. 181–229). San Francisco: John Wiley & Sons.
Johnson, J. W., Carter, G. W., & Tippins, N. T. (2001, April). A synthetic validation approach
to the development of a selection system for multiple job families. In J. W. Johnson & G. W.
Carter (Chairs), Advances in the application of synthetic validity. Symposium conducted
at the 16th Annual Conference of the Society for Industrial and Organizational
Psychology, San Diego, CA.
Kane, M. T., Kingsbury, C., Colton, D., & Estes, C. (1989). Combining data on criticality
and frequency in developing test plans for licensure and certification examinations.
Journal of Educational Measurement, 26(1), 17–27.
Kemery, E. R., Mossholder, K. W., & Dunlap, W. P. (1989). Meta-analysis and moderator
variables: A cautionary note on transportability. Journal of Applied Psychology, 74, 168–
170.
Kuehn, P. A., Stallings, W. C., & Holland, C. L. (1990). Court-defined job analysis
requirements for validation of teacher certification tests. Educational Measurement:
Issues and Practice, 9(4), 21–24.
Kurz, R., & Bartram, D. (2002). Competency and individual performance: Modeling the
world of work. In I. T. Robertson, M. Callinan, & D. Bartram (Eds.), Organizational
effectiveness: The role of psychology (pp. 227–255). New York: John Wiley & Sons.
LaPolice, C. C., Carter, G. W., & Johnson, J. W. (2008). Linking O*NET descriptors to
occupational literacy requirements using job component validation. Personnel Psychology,
61, 405–441.
Latham, G. P., & Wexley, K. N. (1977). Behavioral observation scales for performance
appraisal purposes. Personnel Psychology, 30, 355–368.
Lavely, C., Berger, N., Blackman, J., Bullock, D., Follman, J., Kromrey, J., & Shibutani, H.
(1990). Factor analysis of importance of teacher initial certification test competency
ratings by practicing Florida teachers. Educational and Psychological Measurement, 50,
161–165.
Lee, J. A., & Mendoza, J. L. (1981). A comparison of techniques which test for job
differences. Personnel Psychology, 34, 731–748.
Levine, E. L. (1983). Everything you always wanted to know about job analysis. Tampa,
FL: Mariner.
Levine, E. L., Ash, R. A., Hall, H., & Sistrunk, F. (1983). Evaluation of job analysis
methods by experienced job analysts. Academy of Management Journal, 26, 339–348.
Levine, E. L., Ash, R. A., & Levine, J. D. (2004). Judgmental evaluation of job-related
experience, training, and education for use in human resource staffing. In J. C. Thomas
(Ed.), Comprehensive handbook of psychological assessment, Vol. 4, Industrial and
organizational assessment (pp. 269–296). Hoboken, NJ: John Wiley & Sons.
Levine, E. L., Maye, D. M., Ulm, R. A., & Gordon, T. R. (1997). A methodology for
developing and validating minimum qualifications (MQs). Personnel Psychology, 50, 1009–
1023.
McCormick, E. J. (1979). Job analysis: Methods and applications. New York: AMACOM.
McCormick, E. J., DeNisi, A. S., & Shaw, J. B. (1979). Use of the Position Analysis
Questionnaire for establishing the job component validity of tests. Journal of Applied
Psychology, 64, 51–56.
McCormick, E. J., Jeanneret, P. R., & Mecham, R. C. (1972). A study of job characteristics
and job dimensions as based on the position analysis questionnaire (PAQ). Journal of
Applied Psychology, 56, 347–368.
Nelson, E. C., Jacobs, A. R., & Breer, P. E. (1975). A study of the validity of the task
inventory method of job analysis. Medical Care, 13(2), 104–113.
Norton, R. E. (1985). DACUM handbook. Columbus, OH: Ohio State University National
Center for Research in Vocational Education.
Pace, V. L., & Brannick, M. T. (2010). How similar are personality scales of the ‘same’
construct? A meta-analytic investigation. Personality and Individual Differences, 49, 669–
676.
Pearlman, K., Schmidt, F. L., & Hunter, J. E. (1980). Validity generalization results for
tests used to predict job proficiency and training criteria in clerical occupations. Journal
of Applied Psychology, 65, 373–406.
Peterson, N. G., Mumford, M. D., Borman, W. C., Jeanneret, P. R., & Fleishman, E. A.
(Eds.). (1999). An occupational information system for the 21st century: The development
of O*NET. Washington, DC: American Psychological Association.
Peterson, N. J., Wise, L. L., Arabian, J., & Hoffman, G. (2001). Synthetic validation and
validity generalization: When empirical validation is not possible. In J. P. Campbell & D. J.
Knapp (Eds.), Exploring the limits of personnel selection and classification (pp. 411–451).
Mahwah, NJ: Lawrence Erlbaum Associates.
Primoff, E. S., & Eyde, L. D. (1988). Job element analysis. In S. Gael (Ed.), The job analysis
handbook for business, industry, and government (Vol. II, pp. 807–824). New York: John
Wiley & Sons.
the JCV model to include personality predictors. Paper presented at the annual meeting of
the Society for Industrial and Organizational Psychology, Los Angeles.
Raymark, P. H., Schmit, M. J., & Guion, R. M. (1997). Identifying potentially useful
personality constructs for employee selection. Personnel Psychology, 50, 723–736.
Raymond, M. R. (2001). Job analysis and the specification of content for licensure and
certification examinations. Applied Measurement in Education, 14(4), 369–415.
Sackett, P. R. (2003). The status of validity generalization research: Key issues in drawing
inferences from cumulative research findings. In K. R. Murphy (Ed.), Validity
generalization: A critical review (pp. 91–114). Mahwah, NJ: Lawrence Erlbaum
Associates.
Sanchez, J. I., & Fraser, S. L. (1994). An empirical procedure to identify job duty-skill
linkages in managerial jobs: A case example. Journal of Business and Psychology, 8, 309–
326.
Sanchez, J. I., & Levine, E. L. (1989). Determining important tasks within jobs: A policy-
capturing approach. Journal of Applied Psychology, 74, 336–342.
Seamster, T. L., Redding, R. E., & Kaempf, G. L. (1997). Applied cognitive task analysis in
aviation. Brookfield, VT: Ashgate.
Schippmann, J. S., Ash, R. A., Battista, M., Carr, L., Eyde, L. D., Hesketh, B., Kehoe, J.,
Pearlman, K., Prien, E. P., & Sanchez, J. I. (2000). The practice of competency modeling.
Personnel Psychology, 53, 703–740.
Spray, J. A., & Huang, C. (2000). Obtaining test blueprint weights from job analysis
surveys. Journal of Educational Measurement, 37(3), 187–201.
Tesluk, P. E., & Jacobs, R. R. (1998). Toward an integrated model of work experience.
Personnel Psychology, 51, 321–355.
Tett, R. P., Guterman, H. A., Bleier, A., & Murphy, P. J. (2000). Development and content
validation of a “hyperdimensional” taxonomy of managerial competence. Human
Performance, 13, 205–251.
Thompson, D. E., & Thompson, T. A. (1982). Court standards for job analysis in test
validation. Personnel Psychology, 35, 865–874.
Voskuijl, O. F., & Evers, A. (2008). Job analysis and competency modeling. In S.
Cartwright & C. L. Cooper (Eds.), The Oxford handbook of personnel psychology (pp. 139–
162). New York: Oxford University Press.
Wang, N., Schnipke, D., & Witt, E. A. (2005). Use of knowledge, skill, and ability
statements in developing licensure and certification examinations. Educational
Measurement: Issues and Practice, 24(1), 15–22.
Wernimont, P. F., & Campbell, J. P. (1968). Signs, samples, and criteria. Journal of Applied
Psychology, 52, 372–376.
Wilk, S. L., Desmarais, L., & Sackett, P. R. (1995). Gravitation to jobs commensurate with
ability: Longitudinal and cross-sectional tests. Journal of Applied Psychology, 80, 79–85.
Michael T. Brannick
Adrienne Cadle
Edward L. Levine