DOI: 10.1111/j.1754-9434.2008.00048.x


Industrial and Organizational Psychology, 1 (2008), 272–290.
Copyright © 2008 Society for Industrial and Organizational Psychology. 1754-9426/08

FOCAL ARTICLE

Personality Testing and Industrial–Organizational Psychology:
Reflections, Progress, and Prospects

LEAETTA M. HOUGH
The Dunnette Group, Ltd.
FREDERICK L. OSWALD
Michigan State University

Abstract
As the title suggests, this article takes a broad perspective on personality as it is conceptualized and measured in organizational research, and in the spirit of this Society for Industrial and Organizational Psychology journal, we frame the article as a series of seven questions. These questions deal with (1) personality and multidimensional models of performance, (2) personality taxonomies and the five-factor model, (3) the effects of situations on personality–performance relationships, (4) the incremental validity of personality over cognitive ability, (5) the need to differentiate personality constructs from personality measures, (6) the concern with faking on personality tests, and (7) the use of personality tests in attempting to address adverse impact. We dovetail these questions with our perspectives and insights in the hope that this will stimulate further discussion with our readership.

We introduce this article on the use of personality variables in industrial–organizational (I–O) psychology and personnel selection in particular by stating—with utmost confidence—what we will not be doing. This article is not another review or meta-analysis of validity findings for personality tests, as there are many current informative ones (e.g., for summaries of many meta-analyses, see Barrick, Mount, & Judge, 2001; Bartram, 2005; Hogan & Holland, 2003; and Hough & Furnham, 2003). This article makes no attempt to resolve the insidious problem that job applicants might distort their responses—and even lie—on personality tests. Although we do draw on the meta-analysis literature, and we do provide some thoughts and concerns about faking personality tests, our more general goal is to provide a current perspective on personality testing and its use in organizational research and practice. In the spirit of this journal's goal for stimulating conversation among all interested parties, the format of our article is to offer a series of questions about personality testing, dovetailed with our thoughts, to spur constructive discussion and future research and practice in I–O psychology. We hope that you find that this approach is valuable: not too abstract, not too concrete, and not too naïve.

Correspondence concerning this article should be addressed to Leaetta M. Hough. E-mail: leaetta@msn.com. Address: The Dunnette Group, Ltd., 370 Summit Avenue, St. Paul, MN 55142.
Leaetta M. Hough, The Dunnette Group, Ltd.; Frederick L. Oswald, Department of Psychology, Michigan State University.


A Bit of History—How We Got Here

Since the early 1990s, there has been a near-literal explosion of interest involving personality constructs in organizational research and practice. Prior to this was a two-decade lull in personality research that was an aftermath of the work in the 1960s by Guion and Gottier (1965), whose research concluded that personality variables have little or no systematic relationship to work-related criteria, and by Mischel's (1968) book that was interpreted as a full-scale attack on trait psychology. So what caused the resurgence of interest?

Project A, a project sponsored by the U.S. Army and conducted in the 1980s, was perhaps the most impactful personnel selection research project in our field in the second half of the 20th century. In addition to addressing the critical personnel selection issues at hand, it did nothing short of reorienting our field's thinking about both the predictor and the criterion space in a more sophisticated way that was multidimensional and construct oriented in nature. Regarding the measurement and modeling of personality variables, this meant incorporating the taxonomic thinking of the field of personality at that time. Thus, when the obligatory literature review on personality was undertaken for Project A, the research team contrasted the Guion and Gottier (1965) conclusion that operationally, personality variables were not useful predictors of work behavior or performance, with the very different conclusion of Ghiselli (1966) concerning the usefulness of personality variables in making personnel selection decisions. Ghiselli had indicated that he had included only personality scales that were conceptually appropriate for the job or work, but unfortunately, he did not describe his underlying rationale. Similar to Ghiselli, however, the Project A research team found that when they summarized the literature relating personality variables to job-related criteria according to both personality constructs and work-related criterion constructs taken together, then meaningful validity coefficients emerged (Hough, Eaton, Dunnette, Kamp, & McCloy, 1990). Note that the focal constructs for Ghiselli and for Project A were not the five constructs in the five-factor model (FFM) of personality.[1]

[1] These constructs did not duplicate the FFM constructs. For a more complete description of the approach and results, see Kamp and Hough (1986). See Schneider and Hough (1995) and Hough and Schneider (1996) for a history of the FFM and its origins.

In addition to this major literature review, Project A also included significant predictor and criterion development efforts as well as a large concurrent validation study. The conclusions from those activities were that when personality measures are based on theory-relevant constructs, and when these measures are administered to job incumbents and correlated with scores on job analysis–informed performance measures, then personality variables tended to predict discretionary "will do" criteria better than cognitive ability measures (McHenry, Hough, Toquam, Hanson, & Ashworth, 1990).[2] Importantly, these empirical relationships were consistent with the conclusions from the Project A literature review and Ghiselli's previous empirical findings.

[2] See Campbell and Knapp (2001) for a thorough description of the activities and findings of Project A.

Project A, together with earlier research by personality psychologists such as Harrison Gough, Robert Hogan, Douglas Jackson, Paul Costa, Lew Goldberg, Robert McCrae, and Auke Tellegen, set the stage for the Barrick and Mount (1991) meta-analysis of validities between Big Five personality variables and job performance, an article that became the most cited Personnel Psychology article in the 1990s (Hollenbeck, 1998).

We could spend time debating the advances and retreats in our field that have resulted since (see Hough, 2001, for a discussion of such advances, and see the exchange between Morgeson et al., 2007a, 2007b; Ones, Dilchert, Viswesvaran, & Judge, 2007; and Tett & Christiansen, 2007, for a stand-off on personality testing in selection settings). Instead, we prefer to focus more broadly on how our knowledge to date might improve our future understanding of individual behavior in work settings and to
stimulate an exchange of ideas that will aid researchers and practitioners who apply our science in the world of work.

Although personality traits are challenging to conceptualize and measure, arguably more so than for cognitive ability where the g factor drives much of the prediction, we want to present a sample of some of the major findings that should encourage personality researchers in I–O psychology to press on.

Personality and Major Life Outcomes

- Personality variables predict mortality, divorce, and occupational attainment. Correlations tend to be higher than those for cognitive ability predictors (meta-analysis of only prospective longitudinal studies: Roberts, Kuncel, Shiner, Caspi, & Goldberg, 2007).
- Conscientiousness and its facets predict health behaviors, drug use, and mortality (Roberts, Chernyshenko, Stark, & Goldberg, 2005; meta-analysis: Bogg & Roberts, 2004).
- Personality measures predict alcoholism (meta-analysis: Cooper-Hakim & Viswesvaran, 2002).

Personality and Performance

- Personality variables predict overall job performance, objective performance, and getting ahead (meta-analyses: Barrick et al., 2001; Dudley, Orvis, Lebiecki, & Cortina, 2006; Hogan & Holland, 2003).
- Personality variables predict task performance (meta-analyses: Dudley et al., 2006; Hurtz & Donovan, 2000).
- Personality variables predict training performance and learning and skill acquisition (meta-analyses: Barrick & Mount, 1991; Barrick et al., 2001; Colquitt, LePine, & Noe, 2000).
- Personality variables predict contextual performance, such as organizational citizenship, altruism, job dedication, interpersonal facilitation, and generalized compliance (meta-analyses: Borman, Penner, Allen, & Motowidlo, 2001; Dudley et al., 2006; Hurtz & Donovan, 2000; LePine, Erez, & Johnson, 2002; Organ & Ryan, 1995).
- Personality variables predict overall managerial effectiveness, promotion, and managerial level (meta-analysis: Hough, Ones, & Viswesvaran, 1998).
- Personality variables predict leader emergence and effectiveness as well as transformational leadership (meta-analyses: Bono & Judge, 2004; Judge, Bono, Ilies, & Gerhardt, 2002).
- Personality variables predict expatriate performance (meta-analysis: Mol, Born, Willemsen, & Van Der Molen, 2005).
- Personality variables predict goal setting and, conversely, procrastination (meta-analyses: Judge & Ilies, 2002; Steel, 2007).
- Personality variables predict creativity and innovation (Hough, 1992; meta-analyses: Feist, 1998; Hough & Dilchert, 2007).
- Personality-based integrity tests predict overall job performance (meta-analysis: Ones, Viswesvaran, & Schmidt, 1993).

Personality and Counterproductive Work Behaviors

- Personality variables predict counterproductive work behavior (CWB) (meta-analysis: Berry, Ones, & Sackett, 2007).
- Personality-based integrity tests predict CWB (meta-analysis: Ones et al., 1993).
- Personality-based integrity tests predict absenteeism (meta-analysis: Ones, Viswesvaran, & Schmidt, 2003).

Personality and Team Performance

- Personality variables predict team performance (meta-analysis: M. A. G. Peeters, Van Tuijl, Rutte, & Reymen, 2006).
- Personality variables predict getting along and teamwork (meta-analyses: Barrick et al., 2001; Hogan & Holland, 2003).

Personality and Job Satisfaction

- Personality variables predict job and career satisfaction (meta-analyses: Judge, Heller, & Mount, 2002; Ng, Eby, Sorensen, & Feldman, 2005).
- Personality variables are highly correlated with subjective well-being, with correlations up to .62 observed (.80 disattenuated) when substantively similar measures are grouped (meta-analysis: Steel, Schmidt, & Shultz, 2008).

Mediated Models Involving Personality

- The relationship between conscientiousness and learning is mediated by goal commitment (Klein & Lee, 2006).
- The relationships between personality variables and work-related criteria are affected by motivational and self-regulatory mechanisms (Barrick, Mount, & Strauss, 1993; Erez & Judge, 2001; Kanfer & Heggestad, 1999; F. K. Lee, Sheldon, & Turban, 2003).
- Positive emotions moderate stress reactivity and mediate recovery from stress. Over time, positive emotions help people high in psychological resilience effectively recover from daily stress (Ong, Bergeman, Bisconti, & Wallace, 2006).
- Status striving mediates the relationship between extraversion and sales performance, such that high extraverts are high-status strivers, and high-status strivers are better performers (Barrick, Stewart, & Piotrowski, 2002).

If you view this research as a large expenditure of interest, energy, time—and yes, money—as well as a zero-sum game with attendant opportunity costs (i.e., focusing on some areas might mean focusing less on other areas), then we assert that it is critical for experts in the field to spend more time together discussing where such resources might be strategically allocated to address organizational issues via research and practice (see a similar view in J. P. Campbell, 1990). To that end, we present seven questions to generate discussion.

1. Given the development of multidimensional models of job performance, can refined measures of personality improve the understanding and prediction of work behavior?

The 1990s reflected rampant enthusiasm for research and practice involving personality measures used for personnel selection and other organizational practices, and at the same time, the 1990s also ushered in an era of creating multidimensional models of the job performance domain. Multidimensional models of job performance have, by design and intent, included dimensions of performance that extend beyond core task performance. A multidimensional model of general job performance across jobs includes dimensions such as demonstrating effort, maintaining personal discipline, and facilitating peer and team performance (J. P. Campbell, McCloy, Oppler, & Sager, 1993). Other multidimensional performance models or frameworks address organizational citizenship behaviors (OCB) such as personal support, organizational support, and conscientious initiative (e.g., Borman, 2004) and CWBs such as theft, unsafe behaviors, and absenteeism (e.g., Sackett & DeVore, 2002), and subsequent research has meta-analyzed or otherwise investigated the relationships between OCBs and CWBs (e.g., Dalal, 2005; Sackett, Berry, Wiemann, & Laczo, 2006). Note that because many if not most of these performance dimensions largely reflect discretionary behaviors (i.e., "will do" behaviors), they tend to relate more strongly to personality constructs than to those task performance dimensions that more heavily reflect competence or ability-based behaviors (i.e., "can do" behaviors). Again, to reiterate, there has been a useful convergence in organizational research trends, where the resurgence of personality as a predictor of performance coincided with the theoretical expansion of the job performance domain into types of behaviors that are more discretionary or volitional and therefore more personality relevant. Adding to this coincidence is the finding that personality variables not only show incremental validity but also can have some effect
in reducing adverse impact (Hough, Oswald, & Ployhart, 2001), as we will discuss.

Given that these important refinements in the performance domain have been relatively recent, it perhaps should not be a complete surprise to us that it has taken time for measurement and modeling in personality testing to catch up by considering personality predictors at commensurate levels of refinement as the criterion. The idea of commensurate measurement is not new at all; it is in line with the tenets of Brunswik's Lens Model (Brunswik, 1952), which indicates that criterion-related validity will be higher when trait-relevant cues are more consistent with performance-relevant cues (setting aside other empirical concerns such as range restriction and differential measurement unreliability).

What is the most theoretically informative and practically useful level of commensurate measurement? Some have essentially argued that when predicting job performance, extremely broad measures of personality (e.g., integrity, core self-evaluation) yield the maximum criterion-related validity. Certainly, it is true that if the bandwidth of the criterion is wide, the most effective predictor will be a predictor that is matched to the criterion in terms of relevance and bandwidth. That is, if the criterion is complex, the most effective predictor composite is also complex (Hough & Ones, 2001). However, for breadth that is most informative, theory and data regarding relationships between narrow predictor and criterion constructs will help us more effectively understand validities for complex or aggregated criteria. Factor models and meta-analytic results of validities between homogeneous predictor constructs and homogeneous criterion constructs are informative to this end (Hough, 2001). To illustrate this claim, several studies (Hough, 1992; Hough et al., 1998; Roberts et al., 2005; Stewart, 1998; Tett, Steele, & Beauregard, 2003) taken together indicate that facets of conscientiousness show meaningful differential prediction with outcomes relevant to organizations. Hough et al. (1998) meta-analyzed validities of FFM personality factors and facets with overall managerial job performance, finding that the observed meta-analytic validity of extraversion was .05, whereas the observed validities of its facets were .16, -.01, and .12 for dominance, sociability, and energy level, respectively. In a similar vein, Rothstein and Goffin (2006) reported 11 studies with higher validities for narrow compared to broad measures. Other research also supports higher or more interpretable predictive validity for narrower facets than for broader factors of personality (Ashton, 1998; Mershon & Gorsuch, 1988; Paunonen, 1998; Paunonen & Ashton, 2001; Schneider, Hough, & Dunnette, 1996).

A recent meta-analysis of conscientiousness–performance relationships (Dudley et al., 2006) also indicated that patterns of criterion-related validities at the facet level were theoretically informative. For instance, achievement predicted task performance for sales jobs (meta-analytic r = .26)[3] but order did not (r = .03). Across jobs, dependability predicted counterproductive work performance (r = -.34) but achievement did not (r = .00). These results indicate that studies aggregating broad predictors and criteria potentially obscure our theoretical understanding and the prediction of practical outcomes (Hough & Schneider, 1996). Conversely, information on narrower predictor–criterion relationships has the potential to be more informative (assuming adequate sampling, measurement, statistical power, and so on), and practically speaking, it makes testing time more focused and effective by not including content from facets that are less theoretically relevant or predictive.

[3] Validities were corrected for unreliability on both the predictor and the criterion, but the predictor reliabilities were generally high (estimated to be around .80), and thus, these corrections tend to reflect low criterion reliability (estimated to be around .60).

Related to the breadth of a construct, there has been much recent discussion in the organizational literature about compound traits in personality. A compound trait can be defined as a set of trait constructs; for instance, the compound trait of integrity
has been defined as a combination of conscientiousness, emotional stability, and agreeableness measures based on the Big Five (Ones, Viswesvaran, & Dilchert, 2005). The previous concern regarding commensurate measurement is at play here: The criterion-related validities for integrity may be meaningful, but the patterns of correlation between the component traits and any particular criterion may be obscured. For example, the compound trait of integrity might yield the same validity for two different criteria; yet, its validity might be driven by conscientiousness when predicting attendance and by emotional stability when predicting customer service.[4] Alternatively, a compound trait can reflect a subset of items sampled from a set of trait measures. Take, for instance, the measure of core self-evaluation (Judge, Erez, Bono, & Thoresen, 2003). It contains 12 items with content that overlaps the constructs of self-esteem, generalized self-efficacy, emotional stability, and locus of control. An issue with this sort of compound trait is that the items potentially sacrifice the construct validity of the factors from which they arise because there are very few items (if any) that strongly represent any one factor, and thus, the criterion-related validity of the measure might be driven by item-specific covariances as much as by variance due to the compound trait measure as a general factor.

[4] Remember that conscientiousness and emotional stability contain facets and also might be considered compound traits. But the example conveys the spirit of the argument.

Taken together, both of these types of compound traits have the potential to overlook the important details underlying them. Although compound traits may exhibit reasonable or even substantial prediction (as can any unit-weighted composite of measures or items), they can obscure interesting theoretical relationships that involve their constituent components, just as a general job performance measure can obscure important multidimensional information on the criterion side. Hough and Furnham (2003) summarized meta-analytic validities of some finer grained constructs for predicting criterion constructs as well as for predicting overall work-related criteria. Their tables of meta-analyses organized by predictor construct and criterion construct show clearly that criterion-related validities vary widely depending upon the nature and breadth of both the predictor and the criterion construct, the type of job, and the relevance of the predictor for the criterion. Their tables also show the advantage of combining narrow constructs found to be relevant to a particular criterion construct into a compound variable that is found to correlate more highly with relevant criteria than any of the factors in the FFM. These sorts of compound variables are driven by careful examination of criterion-related validities; they are not items or traits combined solely on the basis of their own intercorrelations.

To be clear, we are not suggesting a level of refinement in personality measures that is impractical. What we are saying is that it may ultimately be more practical to make meaningful refinements of personality variables so that relevant variables are more carefully aligned with theoretical models that include our major dependent variables of interest: task and contextual performance, training performance, team performance, job commitment, turnover, and job satisfaction (see Hough, 1992, 1998b; Hough & Schneider, 1996; Kanfer, Ackerman, Murtha, & Goff, 1995; and Tett & Christiansen, 2007).

We urge I–O psychologists to develop better databases that contain descriptive statistics, correlations, and other relevant effects concerning narrow personality constructs and criterion dimensions. Ideally, such databases would contain personality and criterion information separated by measurement method, situation, and other known moderators[5] of the relationship (see Hough, 2001). Data such as these are invaluable for meta-analysis undertakings and can help establish more systematic and

[5] See Schneider and Hough (1995) and Hough and Schneider (1996) for a discussion of known moderators of the relationship between personality constructs and criterion constructs.
important relationships between personality and performance. Such a database was attempted in the Validity Information Exchange efforts of the 1950s and 1960s, but in the era of meta-analysis, the lack of such a database is virtually inexcusable for I–O psychology as a science.

2. How and why might personality research in I–O psychology be useful in extending its insights beyond the FFM?

Useful taxonomies are important milestones in the history and advancement of a science. An adequate taxonomy is important for communicating efficiently, effectively revealing patterns of relationships, reaching a deep understanding of research results, and generalizing results meaningfully to other contexts (Fleishman & Quaintance, 1984; Hough & Ones, 2001). As we mentioned previously, taxonomic structure of variables—both for predictors and criteria—is important for discovering and understanding relationships hitherto obscured as well as for replicating those relationships in future research and applications.

Over 50 years ago, Cronbach and Meehl (1955) published their classic article on construct validity of psychological tests and their nomological nets. Following this, Guion (1961), Dunnette (1963), Smith (1976), and others argued for more sophisticated construct-oriented thinking about variables in our field. More recently, since Barrick and Mount's (1991) influential meta-analysis of the validities of FFM personality constructs for predicting work-related variables, most I–O psychologists have embraced the FFM as the taxonomic structure for personality. Many researchers have concluded that the FFM of personality is robust and generalizes across different rating sources and cultures (e.g., Hogan & Ones, 1997; Saucier & Goldberg, 1998; Wiggins & Trapnell, 1997).[6] Although we harbor some criticisms of the FFM, we certainly do not want to lose the organizing benefits that broad-level taxonomies such as the FFM afford. We do not want to return to the "good old daze" (Hough, 1997, p. 233) of hundreds of personality scales, each supposedly measuring unique variables.

[6] However, see Hough and Schneider (1996) for a critique of this research.

We do acknowledge that the FFM has been important in understanding personality and provides an organizing structure for the myriad of personality scales. Nonetheless, we suggest that the breadth of the five factors of the FFM may be one reason why observed criterion-related validities are low and are criticized by some I–O psychologists as such (e.g., Morgeson et al., 2007a, 2007b). We do argue that even with low validities, personality tests remain useful in selection because they provide incremental validity over cognitive ability, and even modest amounts of validity can translate into significant amounts of utility to the organization when aggregated across individuals and over time (conversely, not administering a personality measure means losing this utility). But personality research shows that we can do better than the FFM in attempting to improve both our theories and our predictions.

Publications by Hough and her colleagues (Hough, 1992; Hough & Ones, 2001; Hough & Schneider, 1996; Schneider & Hough, 1995) have summarized research that has identified several additional constructs beyond the five in the FFM. More recently, K. Lee, Ashton, and de Vries (2005) also added an honesty–humility factor, evidence that indicates that the FFM needs to be expanded.

The FFM of personality certainly has its critics. Several highly respected personality psychologists (e.g., Block, 1995; Loevinger, 1994; McAdams, 1992; Pervin, 1994; Tellegen, 1993) criticize the FFM as theoretically inadequate. For example, the theories and constructs of moral development (see Kohlberg, Levine, & Hewer, 1994) and ego development (Loevinger, 1966, 1976) can make important contributions to our understanding of individual behavior and performance in organizations. Similarly, constructs such as self-regulation and ego depletion (Baumeister, Gailliot, DeWall, & Oaten, 2006) can make important contributions to our understanding of individual behavior
and performance in organizations. Hough (1997) provided general criticisms of the FFM as follows:

- The FFM overlooks important personality constructs; it is not comprehensive.
- The FFM combines constructs that are better left separate.
- The FFM is method bound and dependent upon the factor analysis of lexical terms.
- The FFM consists of constructs so broad and heterogeneous that accuracy of prediction is sacrificed.

Typical efforts to identify the taxonomic structure of personality variables such as the FFM involve some type of factor analysis that only involves the scales themselves. Hough and Ones (2001) argue that a better approach for developing a useful taxonomy is to create profiles of relationships between a target personality variable and other psychological variables, such as job performance variables and other individual difference variables, such as those from the ability, motivation, and interest domains. They labeled the approach "nomological-web clustering." Profiles across target variables can be cluster analyzed by taking either a rational or mathematical approach; then "the pattern of relationships that a variable has with other variables should be similar to other variables that are in the same taxon or construct" (Hough & Ones, 2001, p. 237). This is a much more stringent approach to understanding whether a set of personality variables functions similarly and is in line with McDonald (1999), who said that measures claimed to be similar should exhibit similar correlational patterns with external variables. Hough and Ones (2001, Appendix) used nomological-web clustering to develop conceptually similar personality constructs. We are unaware of any research that has followed up on this approach; yet, we believe work along these lines would be productive in helping us develop a better set of trait-based personality constructs for our field.

3. Are situations important and, if so, how can we incorporate the situation or context into measurement of personality variables?

Hattrup and Jackson (1996) urge I–O psychologists to learn more about individual differences by understanding their function within the context of situations and their salient characteristics. Hough (2003) optimistically reported that Hattrup and Jackson's plea was being taken seriously, citing examples of how type of task, type of team, and type of job, as well as research setting, culture, and fit, were recognized as moderator and mediator variables that interacted with individual difference variables such as personality to predict work behavior and performance. Her optimism may have been somewhat premature. Certainly, the controversy about the usefulness of self-report personality measures suggests that some critics who focus on the low validities of personality variables may not have accepted the fact that situational characteristics exert important moderator effects that make some validities higher and some lower across situational contexts (see Tett & Christiansen, 2007).

According to the Attraction–Selection–Attrition model (Schneider, Goldstein, & Smith, 1995), we know that individuals seek out and self-select into different jobs and work situations. For instance, we have evidence that employees perceive achievement-based or relationship-based situations in a manner consistent with their standing on measured personality traits (Westerman & Simmons, 2007). But going above and beyond these effects, some situational contexts may result in mean performance differences across individuals but do not change personality–performance correlations (i.e., situations exert main effects), and other situational contexts result in mean differences and also influence correlations between personality and performance (i.e., there are interactions). We seem to prefer simple and straightforward analyses with very few situational moderators. Yet, "the situation" is such a large and looming entity that one would be foolish to ignore its major features. For example, the relationship between cognitive variables and job performance is
280 L.M. Hough and F.L. Oswald

of job complexity: The relationship between cognitive ability and job performance tends to be lower for work characterized as low in complexity compared to high in complexity, corrected r = .23 for unskilled jobs versus .56 for high-complexity jobs (Hunter, 1980). The relationship tends to remain constant (r = .56), however, across job complexity when the dependent variable is performance in training programs (Schmidt & Hunter, 1998).

Regarding personality, we have evidence that the choice of personality variables based on a job analysis tends to result in higher validities than do validation studies that are exploratory in nature (Tett, Jackson, Rothstein, & Reddon, 1999). We also know that correlations between personality constructs and job-related constructs tend to be higher when item content is contextualized, that is, when the work situation is incorporated into the item (Schmit, Ryan, Stierwalt, & Powell, 1995).

The "strength" of a situation has a predictable effect on personality–performance validities. Situational strength is the extent to which the situation signals expectations about what is appropriate behavior (Mischel, 1977). In strong organizational situations, employee work behavior may be predictable without any knowledge of individual differences (e.g., highly scripted work on the assembly line), but in weak situations (e.g., individuals assemble cars as members of a team, with no assigned roles or timelines), individual behavior is more discretionary, and personality would therefore be more likely to predict work behavior. Evidence supports this notion. In high-autonomy jobs (i.e., weak situations), validities are higher for conscientiousness, extraversion, and agreeableness than in low-autonomy jobs (Barrick & Mount, 1993). More generally, the Big Five factors of personality show higher validities when predicting supervisory ratings of task and contextual performance in weak performance situations (when employees with similar jobs disagree about which tasks are most important) than in strong performance situations (when employees generally agree; Beaty, Cleveland, & Murphy, 2001). This finding is in line with research indicating that personality may be more predictive under typical performance conditions, because performance is more varied across individuals, than under maximum performance conditions, where all employees tend to be highly motivated, such as when they know their performance is being monitored (Klehe & Anderson, 2007; Sackett, Zedeck, & Fogli, 1988).

We call for I–O psychologists to continue work on existing situation taxonomies, such as those described by Saucier, Bel-Bahar, and Fernandez (2007), to refine and build on those taxonomies and to develop ways to incorporate them into more sophisticated moderator analyses. We have the well-developed analytical tool of hierarchical linear modeling (HLM) to accommodate such a taxonomy, in that situational characteristics can be entered as variables that predict group differences (means and correlations). Furthermore, an appropriate taxonomy of personality variables should be integrated with an appropriate taxonomy of situations. We agree with Tett and Christiansen (2007) that a theory of "trait activation" (Tett & Burnett, 2003) would highlight where and how personality variables are most likely to influence behavior and performance.

4. Does the combination of cognitive ability and personality measures increase the accuracy of our predictions of work performance and other work-related criteria?

We have already discussed how there is room for improving the measurement of personality variables and job-related criteria and thus for increasing the validity of these relationships. That said, the accumulated wisdom regarding the incremental validity of personality over cognitive ability is clear:

1. Personality variables show meaningful correlations with a wide array of work-relevant criteria (see the previous "A Bit of History—How We Got Here" section). Even with self-report measures of personality, the validity of integrity tests is high for jobs low in complexity, r = .41 (Ones et al., 1993).
2. Personality variables correlate at low levels with cognitive ability (see Ackerman & Heggestad, 1997; McHenry et al., 1990; and Ones, Viswesvaran, & Reiss, 1996, for estimates of these correlations), affording the possibility for incremental validity.

3. Personality variables do show incremental validity, thereby increasing the overall accuracy of our predictions of job performance and other job-related criteria.

Project A, with its sample size of approximately 4,000, provided clear evidence of the increase in validity when both cognitive ability and personality measures are included in the same predictor battery (McHenry et al., 1990). Again using Project A data, a model of the determinants of work performance that included personality variables accounted for more than twice the variance in supervisory performance ratings accounted for by Hunter's (1983) model, which included only cognitive ability, job knowledge, and performance (Borman, White, Pulakos, & Oppler, 1991).

We can also estimate the validity of such composites without conducting actual full-scale validity studies. If the correlations among the predictors and their criterion-related validities are known, the validity of the composite can be estimated (see Ghiselli, Campbell, & Zedeck, 1981; Trattner, 1982).

5. If we were to better differentiate personality and other organizationally relevant constructs from the way these constructs are measured, what benefits might accrue?

We are an applied field, and information about the usefulness of our interventions, such as personnel selection systems, is critically important. We have generated much useful knowledge about those interventions, but as a field, we have tended to confound constructs with measurement methods. In other words, we need to maintain the distinction between the psychological constructs we seek to measure and the measurement itself. Our ability to generalize personality findings is limited to the extent they are bound to specific modes of measurement. One of the best summaries of the validities found in our personnel selection literature, Schmidt and Hunter (1998), suffers from this problem. They summarized the literature as they found it, and that literature is often lacking in precision about the differences between constructs and measurement methods (e.g., cognitive ability, interviews, conscientiousness, assessment centers, and biographical inventories). Similarly, meta-analyses summarizing the validity of the interview have shown important results but often confound measurement method with the individual difference constructs it intends to measure. Validities tend to vary depending upon (a) the amount of structure involved in gathering as well as evaluating interview information, (b) the situational versus job-related versus psychological nature of the interview, and (c) individual versus board-conducted interviews (Huffcutt & Arthur, 1994; McDaniel, Whetzel, Schmidt, & Maurer, 1994). Interview validities are likely moderated by the type of constructs measured as well. That information is difficult to determine, especially in an unstructured interview, but it does not remove the confound between method and the construct being assessed.

Most, if not all, of the meta-analyses of validities of personality variables for predicting work-related criteria rely heavily on self-report measures of personality. These summaries are helpful in that they provide summaries of an important and very common way of measuring personality, and self-report personality measures do predict meaningful work-related outcomes. However, without other methods of assessment, self-report and construct measurement are confounded. Additionally, we know that others' ratings of a target person's personality can result in higher correlations between personality and work-related criteria (Mount, Barrick, & Strauss, 1994; Nilsen, 1995).

In short, we need to remember the time-tested insights of our field concerning the process of construct validation that incorporates multitrait-multimethod evidence (D. T. Campbell & Fiske, 1959). At the same time we seek to accumulate stable and disambiguated information about constructs and how they are measured, many I–O
practitioners involved in personnel selection face the challenge of predicting performance in today's world of work, where tasks, teams, and work contexts are constantly changing. New models for validation might involve databases that synthesize validities for ever-changing combinations of work activities and responsibilities. Even in a changing world of work, organizations need to decide who among their job applicants will add value, and they want to base those decisions on evidence that predicted performance levels are relatively accurate. Of course, this requires organizations to work with their I–O consultants to inform them what criterion constructs are important. In turn, the I–O consultants should ideally be able to conduct a work and situation analysis to identify the relevant KSAO constructs, refer to our empirical literature (both meta-analyses and primary studies), and synthesize the validity of the prediction equation to inform the organization of the utility of a recommended selection system.

6. How can we reframe the "faking issue" on personality tests in more productive ways?

Whether it has been considered a holy grail worth pursuing or a windmill at which to tilt, the topic of applicant faking on personality measures has been pursued by organizational researchers with something of a vengeance. This is not to say that research and practice have not been concerned about dissimulation on other selection measures; such research has been conducted on situational judgment tests (e.g., Cullen, Sackett, & Lievens, 2006; H. Peeters & Lievens, 2005), biographical data measures (e.g., Kluger & Collela, 1993; Schmitt & Kunce, 2002), employment interviews (e.g., Delery & Kacmar, 1998; Ellis, West, Ryan, & DeShon, 2002; Levashina & Campion, 2006), the assessment center (e.g., McFarland, Yun, Harold, Viera, & Moore, 2005), and the ubiquitous job application blank (e.g., Wood, Schmidke, & Decker, 2007). But it is safe to say that the lion's share of research on faking selection measures in the past two decades has been directed mostly toward personality tests.

We view these longstanding concerns about faking on personality tests as legitimate in concept but suggest that the empirical research on faking could benefit from a stronger theoretical perspective and language, focused on a framework that considers several distinct dimensions at the same time, such as:

• test-taking mode (e.g., paper-and-pencil, computerized, web-based),
• test-taking settings (e.g., timed/untimed, proctored/unproctored),
• test-taking instructions (e.g., to answer honestly, to answer quickly, that responses are verifiable),
• test format (e.g., ipsative/normative, long form/short form),
• test-taker individual differences (e.g., general and specific test experience, test anxiety, reading ability, and impression management),
• external motivating factors (e.g., a tight or loose labor market, job salary and benefits), and
• test-taking outcomes (e.g., differences in descriptive statistics, factorial structure, and criterion-related validities).

McFarland and Ryan (2006) recently embedded some of these dimensions of applicant faking within the Theory of Planned Behavior. Tourangeau and Yan (2007) cite some of the dimensions above when explaining response distortion in items that are particularly sensitive in nature. By taking a more systematic approach to faking that keeps the practical goal of generalizability to applicant settings in mind, we are suggesting that researchers abandon the hunt for unobserved "true" scores or elusive suppressors as keys that will unlock the treasure chest of higher criterion-related validities. Many hunts of this nature have been dismally unsuccessful. In fact, simulations indicate that even going to the extreme of flagging and removing individuals with high scores on an indicator of faking or social desirability leads to a minimal increase in predicted mean performance at best (Schmitt & Oswald, 2006). This means that partialling
social desirability out of personality scores would be even less effective. Instead of flagging extreme scores, if you assume that faking is uncorrelated with the personality constructs measured, then greater faking would in fact predict lower job performance outcomes (Komar, Brown, Komar, & Robie, 2008); however, these assumptions are unknowable.

In real-life employment settings, relationships between substantive personality scales and job performance do not appear to be moderated or suppressed by social desirability, at least as measured by scores on social desirability scales (Hough, 1998a; Hough & Ones, 2001; Hough et al., 1990; Ones et al., 1996). Studies that use instructed faking conditions can push average test scores around, particularly for "fake bad" but also for "fake good" instructions (Hough et al., 1990; Viswesvaran & Ones, 1999); however, such studies tend to be conducted in the lab. More informative studies depart from the lab study with its college samples, and they examine the dimensions above in field settings. Ellingson, Sackett, and Hough (1999) found that job applicants with high social desirability scores still provide differentiating information across multiple personality traits. Other studies have examined applicant retest effects on personality scores after applicants were rejected in previous application attempts. Some of those studies find negligible effects (Hogan, Barrett, & Hogan, 2007), whereas others have found quite significant effects (Young, 2003). Further research investigating these differences is warranted.

Keeping in mind these suggestions for productive avenues of future research, what conclusions can we make about faking based on the past literature?

• Social Desirability scales, also known as Unlikely Virtues scales, detect intentional distortion (Hough et al., 1990).
• Respondents can, when instructed to do so, distort their responses significantly in a positive direction and even more in a negative direction (Hough et al., 1990; Viswesvaran & Ones, 1999).
• Social desirability response bias in real-life employment settings is not as large as that produced in directed faking studies (e.g., Dunnette, McCartney, Carlson, & Kirchner, 1962; Hansen & McLellan, 1997; Heron, 1956; Hough, 1998a; Hough et al., 1990; Kirchner, 1962; Michaelis & Eysenck, 1971; Orpen, 1971; Ryan & Sackett, 1987; Schwab & Packard, 1973; Trent, 1987; versus Rosse, Stecher, Levin, & Miller, 1998).
• In real-life employment settings, relationships between substantive personality scales and job performance do not appear to be moderated or suppressed by social desirability, at least as measured by scores on social desirability scales (Hough, 1998a; Hough & Ones, 2001; Hough et al., 1990; Ones et al., 1996).
• Corrections to substantive personality scales based on response bias scale scores do not affect relationships between substantive personality scales and job performance measures (Hough, 1998a; Schmitt & Oswald, 2006).

In "bottom line" terms, hiring organizations seek to achieve the highest levels of their objectives (e.g., high performance, low turnover) in those applicants they select and train and in whom they invest time and money. Such a desire first assumes that organizations have a clear idea of how to conceptualize and operationalize their objectives (e.g., how to develop, administer, and weight multidimensional measures of job performance). To the extent that effective criterion measurement is not in place, and to the extent we cannot determine the type of test-score faking that would lead to harm to the organization (not hiring more effective individuals) and to the qualified applicant (being displaced by applicants who fake more), the effect of faking on criterion-related validity becomes a more difficult question to answer. We face these challenges in today's research.
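The flag-and-remove result cited above (Schmitt & Oswald, 2006) can be illustrated with a toy Monte Carlo sketch. This is our own illustration under assumed effect sizes (a weak trait–performance correlation, faking uncorrelated with the trait), not the published study's design: even when the applicants scoring highest on a social desirability indicator are screened out before top-down selection, the predicted mean performance of those selected barely moves.

```python
import random
import statistics

random.seed(42)

def simulate(n=20000, select_ratio=0.10, flag_ratio=0.05):
    """Toy illustration: does removing applicants flagged for high
    social-desirability scores raise the mean criterion score of the
    applicants selected top-down on the observed personality score?"""
    applicants = []
    for _ in range(n):
        trait = random.gauss(0, 1)           # latent personality construct
        faking = random.gauss(0, 1)          # social-desirability indicator
        observed = trait + 0.5 * faking      # score contaminated by faking
        criterion = 0.3 * trait + random.gauss(0, 1)  # job performance
        applicants.append((observed, faking, criterion))

    def mean_criterion(pool):
        # Top-down selection on the observed score.
        ranked = sorted(pool, key=lambda a: a[0], reverse=True)
        selected = ranked[: int(len(pool) * select_ratio)]
        return statistics.mean(a[2] for a in selected)

    # Baseline: select directly from the full applicant pool.
    baseline = mean_criterion(applicants)

    # Flag-and-remove: first drop the highest scorers on the faking indicator.
    cutoff = sorted(a[1] for a in applicants)[int(n * (1 - flag_ratio))]
    screened = [a for a in applicants if a[1] <= cutoff]
    flagged = mean_criterion(screened)

    return baseline, flagged

baseline, flagged = simulate()
print(round(flagged - baseline, 3))  # difference in predicted mean performance is small
```

Under these assumptions the gain from flagging is a small fraction of a standard deviation of the criterion, consistent with the "minimal increase in predicted mean performance" characterization above.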

7. How do groups protected under U.S. Civil Rights Acts score on personality variables? Do personality tests have adverse impact against protected groups? Can personality tests be used in combination with cognitive ability tests to increase validity and reduce adverse impact?

Most organizations around the world are interested in fair employment practices and in employing a work force that is representative of their communities. However, when different groups score differently on average, differential hiring rates (disparate or adverse impact) can result. In the United States, organizations whose employment practices result in disparate impact against members of protected classes are at risk of charges of alleged discrimination and possible lawsuits. An often-accepted indicator of adverse impact is a selection ratio for the protected group that is less than four fifths that of the majority group. An important determinant of adverse impact is a mean score difference between a protected group and the majority group on components or composites of the personnel selection measures.

Measures of cognitive ability, included in many personnel selection test batteries, often produce fairly substantial mean score differences, thereby contributing to differential hiring rates for Blacks, Hispanics, and Whites. On the other hand, adding measures of personality to a selection test battery can reduce adverse impact (J. P. Campbell, 1996). However, research, simulation results, and a mathematical analysis have all shown that countering the effects of a measure with a large mean difference between subgroups (e.g., the typical 1 SD mean Black–White difference on cognitive ability tests) requires grossly overweighting other measures showing little mean difference (e.g., De Corte, Lievens, & Sackett, 2007; Potosky, Bobko, & Roth, 2005; Sackett & Ellingson, 1997). For instance, even though a personality measure may have a smaller subgroup mean score difference than a cognitive ability measure, if the two measures are essentially uncorrelated with each other and the majority group scores higher on both measures, then adverse impact against the protected group may actually increase when the two measures are combined. With this general knowledge in hand, it is then important to know which measures show larger mean differences by subgroups of interest and which show smaller differences.

Hough et al. (2001) gathered and summarized standardized mean subgroup (ethnic/cultural, age, and gender) differences (d values) on personality, cognitive, and physical ability constructs at both broadly and more narrowly defined construct levels. They found:

    . . . some surprising results. Research clearly indicates that the setting, the sample, the construct and the level of construct specificity can all, either individually or in combination, moderate the magnitude of differences between groups. Employers using tests in employment settings need to assess accurately the requirements of work. When the exact nature of the work is specified, the appropriate predictors may or may not have adverse impact against some groups. (p. 152)

For example, they found very little difference between Blacks and Whites for broad-level constructs such as FFM conscientiousness and extraversion; nonetheless, at the facet level, some subgroup mean differences were as large as .2 or .3 standard deviations, with Blacks scoring higher or lower than Whites depending upon the facet. Other researchers provide information about these important subgroup differences as well. Foldes, Duehr, and Ones (in press) meta-analyzed subgroup differences in personality variables across five U.S. racial groups for FFM variables and their facets. Else-Quest, Hyde, Goldsmith, and Van Hulle (2006) meta-analyzed gender differences in personality, and Van Vugt, De Cremer, and Janssen (2007) summarized gender differences in cooperation and competition.

These findings provide important general guidelines for personnel selection practitioners: Use job analysis information to guide the selection of tests that measure the specific needed abilities and characteristics, realizing that subgroup differences will vary depending upon the predictor construct and its specificity, and that will affect the hiring rate of protected subgroups.
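The arithmetic behind the composite effects discussed above can be made concrete. For a unit-weighted composite of two standardized predictors with subgroup differences d1 and d2 and intercorrelation r12, the composite difference is d = (d1 + d2) / sqrt(2(1 + r12)), a standard result used in the analyses cited above (e.g., Sackett & Ellingson, 1997). The sketch below uses hypothetical values of our own choosing (cognitive d = 1.0, personality d = 0.1, r12 = .10) together with a normal-theory check of the four-fifths indicator described earlier:

```python
from math import sqrt
from statistics import NormalDist

def composite_d(d1, d2, r12):
    """Standardized subgroup difference of a unit-weighted composite
    of two standardized predictors with intercorrelation r12."""
    return (d1 + d2) / sqrt(2 * (1 + r12))

def impact_ratio(d, cut=1.0):
    """Ratio of minority to majority selection rates for a top-down cutoff
    placed `cut` SDs above the majority mean, assuming normal scores."""
    majority_rate = 1 - NormalDist(0, 1).cdf(cut)
    minority_rate = 1 - NormalDist(-d, 1).cdf(cut)
    return minority_rate / majority_rate

# Hypothetical illustration: pairing a d = 1.0 cognitive measure with a
# d = 0.1 personality measure (r12 = .10) still leaves a sizable difference.
d_comp = composite_d(1.0, 0.1, 0.10)
print(round(d_comp, 2))              # → 0.74
print(impact_ratio(d_comp) >= 0.8)   # → False: the composite still fails four-fifths
```

The example shows why simply adding a low-d personality measure at equal weight does not rescue the composite: to pull the composite difference down appreciably, the personality measure would have to be weighted far more heavily than the cognitive measure, which is the overweighting problem noted above.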

Selection batteries that include measures of cognitive ability almost never satisfy the four-fifths rule, whereas selection batteries that include only measures of personality typically (but not always) satisfy the four-fifths rule (Bobko, Roth, & Potosky, 1999; Hough & Oswald, 2000; Ryan, Ployhart, & Friedel, 1998; Schmitt, Rogers, Chan, Sheppard, & Jennings, 1997). Group mean differences and the relative weights for all the cognitive and personality variables contributing to a predictor composite should be carefully examined before making any assumptions about the composite. Campion et al. (2001) addressed the dilemma of the twin, but often competing, goals of achieving accurate prediction (criterion-related validity) while also achieving similar selection rates for subgroups (reduced adverse impact). Using linear programming methods, De Corte et al. (2007) proposed a procedure for forming a weighted composite that reduces adverse impact as much as possible, given a specified level of validity. We suggest examining this procedure to understand the sensitivity of predictor weights on adverse impact and validity outcomes. In particular, regression weights are designed to maximize validity in a sample and will therefore capitalize on chance to some extent. A different set of predictor weights that is robust across samples would show more consistent validities that do not capitalize on chance, and they may also demonstrate reduced adverse impact. Job-relevant predictor composites often contain cognitive ability measures with their likely adverse impact, but new statistical methods can help us understand these issues in a more precise manner.

Summary

In the 1980s, Project A reoriented our field's thinking about both the predictor and the criterion space, focusing on constructs and multifaceted models of performance. Personality constructs were shown to relate to performance constructs in meaningful ways. Since the 1990s, there has been a near-literal explosion of interest and research involving personality constructs and multidimensional models of performance that continues to this day. It has become clear that the complexity of the nomological nets of personality constructs is enormous. In comparison with cognitive ability, which is hierarchical and often driven by g in its validity, we have only begun to scratch the surface. Our theories and models of personality-based determinants of behavior and performance need to catch up with this inherent complexity. We need better models, better interventions, and more evidence-based practice as they relate to behavior and performance in organizational settings.

Our goal was to provide a current perspective on personality testing and its use in organizational research and practice by offering a series of questions about personality testing to spur constructive discussion and future research and practice in I–O psychology.

References

Ackerman, P. L., & Heggestad, E. D. (1997). Intelligence, personality, and interests: Evidence for overlapping traits. Psychological Bulletin, 121, 219–245.
Ashton, M. C. (1998). Personality and job performance: The importance of narrow traits. Journal of Organizational Behavior, 19, 289–303.
Barrick, M. R., & Mount, M. K. (1991). The Big Five personality dimensions and job performance: A meta-analysis. Personnel Psychology, 44, 1–26.
Barrick, M. R., & Mount, M. K. (1993). Autonomy as a moderator of the relationships between the Big Five personality dimensions and job performance. Journal of Applied Psychology, 78, 111–118.
Barrick, M. R., Mount, M. K., & Judge, T. A. (2001). Personality and performance at the beginning of the new millennium: What do we know and where do we go next? International Journal of Selection and Assessment, 9, 9–30.
Barrick, M. R., Mount, M. K., & Strauss, J. P. (1993). Conscientiousness and performance of sales representatives: Test of the mediating effects of goal setting. Journal of Applied Psychology, 78, 715–722.
Barrick, M. R., Stewart, G. L., & Piotrowski, M. (2002). Personality and job performance: Test of the mediating effects of motivation among sales representatives. Journal of Applied Psychology, 87, 43–51.
Bartram, D. (2005). The Great Eight competencies: A criterion-centric approach to validation. Journal of Applied Psychology, 90, 1185–1203.
Baumeister, R. F., Gailliot, M., DeWall, C. N., & Oaten, M. (2006). Self-regulation and personality: How
interventions increase regulatory success, and how Cooper-Hakim, A., & Viswesvaran, C. (2002). A meta-
depletion moderates the effect of traits on behavior. analytic review of the MacAndrew Alcoholism
Journal of Personality, 74, 1773–1801. scale. Educational and Psychological Measurement,
Beaty, J. C., Cleveland, J. N., & Murphy, K. R. (2001). The 62, 818–829.
relation between personality and contextual perfor- Cronbach, L. J., & Meehl, P. E. (1955). Construct validity
mance in ‘‘strong’’ versus ‘‘weak’’ situations. Human in psychological tests. Psychological Bulletin, 52,
Performance, 14, 125–148. 281–302.
Berry, C. M., Ones, D. S., & Sackett, P. R. (2007). Inter- Cullen, M. J., Sackett, P. R., & Lievens, F. (2006). Threats
personal deviance, organizational deviance, and to the operational use of situational judgment tests in
their common correlates: A review and meta-analy- the college admissions process. International
sis. Journal of Applied Psychology, 92, 410–424. Journal of Selection and Assessment, 14, 142–155.
Block, J. (1995). A contrarian view of the five-factor Dalal, R. (2005). A meta-analysis of the relationship
approach to personality description. Psychological between organizational citizenship behavior and
Bulletin, 117, 187–215. counterproductive work behavior. Journal of
Bobko, P., Roth, P. L., & Potosky, D. (1999). Derivation Applied Psychology, 90, 1241–1255.
and implications of a meta-analytic matrix incorpo- De Corte, W., Lievens, F., & Sackett, P. R. (2007). Comb-
rating cognitive ability, alternative predictors, and ing predictors to achieve optimal trade-offs between
job performance. Personnel Psychology, 52, 1–31. selection quality and adverse impact. Journal of
Bogg, T., & Roberts, B. W. (2004). Conscientiousness Applied Psychology, 92, 1380–1393.
and health behaviors: A meta-analysis of the leading Delery, J. E., & Kacmar, K. M. (1998). The influence of
behavioral contributors to mortality. Psychological applicant and interviewer characteristics on the use
Bulletin, 130, 887–919. of impression management. Journal of Applied
Bono, J. E., & Judge, T. A. (2004). Personality and Social Psychology, 28, 1649–1669.
transformational and transactional leadership: A Dudley, N. M., Orvis, K. A., Lebiecki, J. E., & Cortina,
meta-analysis. Journal of Applied Psychology, 89, J. M. (2006). A meta-analytic investigation of Con-
901–910. scientiousness in the prediction of job performance:
Borman, W. C. (2004). The concept of organizational Examining the intercorrelations and the incremental
citizenship. Current Directions in Psychological validity of narrow traits. Journal of Applied Psychol-
Science, 6, 238–241. ogy, 91, 40–57.
Borman, W. C., Penner, L. A., Allen, T. D., & Motowidlo, Dunnette, M. D. (1963). A note on the criterion. Journal
S. J. (2001). Personality predictors of citizenship per- of Applied Psychology, 47, 251–254.
formance. International Journal of Selection and Dunnette, M. D., McCartney, J., Carlson, H. C., &
Assessment, 9, 52–69. Kirchner, W. K. (1962). A study of faking behavior
Borman, W. C., White, L. A., Pulakos, E. D, & Oppler, S. on a forced-choice self-description checklist. Per-
H. (1991). Models of supervisory job performance sonnel Psychology, 15, 13–24.
ratings. Journal of Applied Psychology, 76, 863–872. Ellingson, J. E., Sackett, P. R., & Hough, L. M. (1999).
Brunswik, E. (1952). The conceptual framework of psy- Social desirability correction in personality measure-
chology. Chicago: University of Chicago Press. ment: Issues of applicant comparison and construct
Campbell, D. T., & Fiske, D. W. (1959). Convergent and validity. Journal of Applied Psychology, 84, 155–166.
discriminant validity in the multi-trait multi-method Ellis, A. P. J., West, B. J., Ryan, A. M., & DeShon, R. P.
matrix. Psychological Bulletin, 56, 81–105. (2002). The use of impression management tactics in
Campbell, J. P. (1990). The role of theory in industrial structured interviews: A function of question type?
and organizational psychology. In M. D. Dunnette & Journal of Applied Psychology, 87, 1200–1208.
L. M. Hough (Eds.), Handbook of industrial and organizational psychology (2nd ed., Vol. 1, pp. 39–73). Palo Alto, CA: Consulting Psychologists Press.
Campbell, J. P. (1996). Group differences and personnel decisions: Validity, fairness, and affirmative action. Journal of Vocational Behavior, 49, 122–158.
Campbell, J. P., & Knapp, D. (2001). Project A: Exploring the limits of performance improvement through personnel selection and classification. Hillsdale, NJ: Erlbaum.
Campbell, J. P., McCloy, R. A., Oppler, S. H., & Sager, C. E. (1993). A theory of performance. In N. Schmitt & W. C. Borman (Eds.), Personnel selection in organizations (pp. 35–70). San Francisco: Jossey-Bass.
Campion, M. A., Outtz, J. L., Zedeck, S., Schmidt, F. L., Kehoe, J. F., Murphy, K. R., et al. (2001). The controversy over score banding in personnel selection: Answers to 10 key questions. Personnel Psychology, 54, 149–185.
Colquitt, J. A., LePine, J. A., & Noe, R. A. (2000). Toward an integrative theory of training motivation: A meta-analytic path analysis of 20 years of research. Journal of Applied Psychology, 85, 678–707.
Else-Quest, N. M., Hyde, J. S., Goldsmith, H. H., & Van Hulle, C. A. (2006). Gender differences in temperament: A meta-analysis. Psychological Bulletin, 132, 33–72.
Erez, A., & Judge, T. A. (2001). Relationship of core self-evaluations to goal setting, motivation, and performance. Journal of Applied Psychology, 86, 1270–1279.
Feist, G. J. (1998). A meta-analysis of personality in scientific and artistic creativity. Personality and Social Psychology Review, 2, 290–309.
Fleishman, E. A., & Quaintance, M. K. (1984). Taxonomies of human performance: The description of human tasks. Orlando, FL: Academic Press.
Foldes, H., Duehr, E. E., & Ones, D. S. (in press). Group differences in personality: Meta-analyses comparing five U.S. racial groups. Personnel Psychology.
Ghiselli, E. E. (1966). The validity of occupational aptitude tests. New York: John Wiley & Sons.
Ghiselli, E. E., Campbell, J. P., & Zedeck, S. (1981). Measurement theory for the behavioral sciences. San Francisco: W. H. Freeman.
Guion, R. M. (1961). Criterion measurement and personnel judgments. Personnel Psychology, 14, 141–149.
Guion, R. M., & Gottier, R. F. (1965). Validity of personality measures in personnel selection. Personnel Psychology, 18, 135–164.
Hansen, T. L., & McLellan, R. A. (1997, April). Social desirability and item content. In G. J. Lautenschlager (Chair), Faking on non-cognitive measures: The extent, impact, and identification of dissimulation. Symposium conducted at the 12th Annual Conference of the Society for Industrial and Organizational Psychology, St. Louis, MO.
Hattrup, K., & Jackson, S. E. (1996). Learning about individual differences by taking situations seriously. In K. R. Murphy (Ed.), Individual differences and behavior in organizations (pp. 507–541). San Francisco: Jossey-Bass.
Heron, A. (1956). The effects of real-life motivation on questionnaire response. Journal of Applied Psychology, 40, 65–68.
Hogan, J., Barrett, P., & Hogan, R. (2007). Personality measurement, faking, and employment selection. Journal of Applied Psychology, 92, 1270–1285.
Hogan, J., & Holland, B. (2003). Using theory to evaluate personality and job-performance relations: A socioanalytic perspective. Journal of Applied Psychology, 88, 100–112.
Hogan, J., & Ones, D. S. (1997). Conscientiousness and integrity at work. In R. Hogan, J. Johnson, & S. Briggs (Eds.), Handbook of personality psychology (pp. 513–541). San Diego: Academic Press.
Hollenbeck, J. R. (1998). Personnel Psychology's citation leading articles: The first five decades [Introduction]. Personnel Psychology, 51, Editorial.
Hough, L. M. (1992). The "Big Five" personality variables—Construct confusion: Description versus prediction. Human Performance, 5, 139–155.
Hough, L. M. (1997). The millennium for personality psychology: New horizons or good old daze. Applied Psychology: An International Review, 47, 233–261.
Hough, L. M. (1998a). Effects of intentional distortion in personality measurement and evaluation of suggested palliatives. Human Performance, 11, 209–244.
Hough, L. M. (1998b). Personality at work: Issues and evidence. In M. Hakel (Ed.), Beyond multiple choice: Evaluating alternatives to traditional testing for selection (pp. 131–166). Hillsdale, NJ: Lawrence Erlbaum.
Hough, L. M. (2001). I/Owes its advances to personality. In B. Roberts & R. T. Hogan (Eds.), Personality psychology in the workplace (pp. 19–44). Washington, DC: American Psychological Association.
Hough, L. M. (2003). Emerging trends and needs in personality research and practice: Beyond main effects. In M. Barrick & A. Ryan (Eds.), Personality and work (pp. 289–325). New York: Wiley & Sons.
Hough, L. M., & Dilchert, S. (2007, October). Inventors, innovators, and their leaders: Selecting for Conscientiousness will keep you "inside the box." Paper presented at SIOP's 3rd Leading Edge Consortium: Enabling Innovation in Organizations, Kansas City, MO.
Hough, L. M., Eaton, N. L., Dunnette, M. D., Kamp, J. D., & McCloy, R. A. (1990). Criterion-related validities of personality constructs and the effect of response distortion on those validities [Monograph]. Journal of Applied Psychology, 75, 581–595.
Hough, L. M., & Furnham, A. (2003). Importance and use of personality variables in work settings. In I. B. Weiner (Ed.-in-Chief) & W. Borman, D. Ilgen, & R. Klimoski (Vol. Eds.), Comprehensive handbook of psychology: Industrial and organizational psychology (Vol. 12, pp. 131–169). New York: Wiley & Sons.
Hough, L. M., & Ones, D. S. (2001). The structure, measurement, validity, and use of personality variables in industrial, work, and organizational psychology. In N. Anderson, D. S. Ones, H. K. Sinangil, & C. Viswesvaran (Eds.), Handbook of industrial, work and organizational psychology (Vol. 1, pp. 233–377). London: Sage.
Hough, L. M., Ones, D. S., & Viswesvaran, C. (1998, April). Personality correlates of managerial performance constructs. In R. Page (Chair), Personality determinants of managerial potential, performance, progression and ascendancy. Symposium conducted at the 13th Annual Conference of the Society for Industrial and Organizational Psychology, Dallas, TX.
Hough, L. M., & Oswald, F. L. (2000). Personnel selection: Looking toward the future—Remembering the past. Annual Review of Psychology, 51, 631–664.
Hough, L. M., Oswald, F. L., & Ployhart, R. E. (2001). Determinants, detection, and amelioration of adverse impact in personnel selection procedures: Issues, evidence, and lessons learned. International Journal of Selection and Assessment, 9, 152–194.
Hough, L. M., & Schneider, R. J. (1996). Personality traits, taxonomies, and applications in organizations. In K. Murphy (Ed.), Individual differences and behavior in organizations (pp. 31–88). San Francisco: Jossey-Bass.
Huffcutt, A. I., & Arthur, W., Jr. (1994). Hunter and Hunter (1984) revisited: Interview validity for entry-level jobs. Journal of Applied Psychology, 79, 184–190.
Hunter, J. E. (1980). Validity generalization for 12,000 jobs: An application of synthetic validity and validity generalization to the General Aptitude Test Battery (GATB). Washington, DC: U.S. Department of Labor, Employment Service.
Hunter, J. E. (1983). A causal analysis of cognitive ability, job knowledge, job performance, and supervisor ratings. In F. Landy, S. Zedeck, & J. Cleveland (Eds.), Performance measurement and theory (pp. 257–266). Mahwah, NJ: Erlbaum.
Hurtz, G. M., & Donovan, J. J. (2000). Personality and job performance: The Big Five revisited. Journal of Applied Psychology, 85, 869–879.
Judge, T. A., Bono, J. E., Ilies, R., & Gerhardt, M. W. (2002). Personality and leadership: A qualitative and quantitative review. Journal of Applied Psychology, 87, 765–780.
Judge, T. A., Erez, A., Bono, J. E., & Thoresen, C. J. (2003). The core self-evaluations scale: Development of a measure. Personnel Psychology, 56, 303–331.
Judge, T. A., Heller, D., & Mount, M. K. (2002). Five-factor model of personality and job satisfaction: A meta-analysis. Journal of Applied Psychology, 87, 530–541.
Judge, T. A., & Ilies, R. (2002). Relationship of personality to performance motivation: A meta-analytic review. Journal of Applied Psychology, 87, 797–807.
Kamp, J. D., & Hough, L. M. (1986). Utility of personality assessment: A review and integration of the
literature. In L. M. Hough (Ed.), Utility of temperament, biodata, and interest assessment for predicting job performance: A review and integration of the literature (ARI Research Note No. 88-02, pp. 1–90). Alexandria, VA: U.S. Army Research Institute for the Behavioral and Social Sciences.
Kanfer, R., Ackerman, P. L., Murtha, T., & Goff, M. (1995). Personality and intelligence in industrial and organizational psychology. In D. H. Saklofske & M. Zeidner (Eds.), International handbook of personality and intelligence (pp. 577–602). New York: Plenum Publishing.
Kanfer, R., & Heggestad, E. D. (1999). Individual differences in motivation: Traits and self-regulatory skills. In P. L. Ackerman & P. C. Kyllonen (Eds.), Learning and individual differences: Process, trait, and content determinants (pp. 293–313). Washington, DC: American Psychological Association.
Kirchner, W. K. (1962). "Real-life" faking on the Edwards Personal Preference Schedule by sales applicants. Journal of Applied Psychology, 46, 128–130.
Klehe, U. C., & Anderson, N. (2007). Working hard and working smart: Motivation and ability during typical and maximum performance. Journal of Applied Psychology, 92, 978–992.
Klein, H. J., & Lee, S. (2006). The effects of personality on learning: The mediating role of goal setting. Human Performance, 19, 43–66.
Kluger, A. N., & Colella, A. (1993). Beyond the mean bias: Effects of warning against faking on biodata item variances. Personnel Psychology, 46, 763–780.
Kohlberg, L., Levine, C., & Hewer, A. (1994). Moral stages: A current formulation and a response to critics. In B. Puka (Ed.), New research in moral development (Moral development: A compendium, Vol. 5, pp. 126–188). New York: Garland Publishing.
Komar, S., Brown, D. G., Komar, J. A., & Robie, C. (2008). Faking and the validity of conscientiousness: A Monte Carlo investigation. Journal of Applied Psychology, 93, 140–154.
Lee, F. K., Sheldon, K. M., & Turban, D. B. (2003). Personality and the goal-striving process: The influence of achievement goal patterns, goal level, and mental focus on performance and enjoyment. Journal of Applied Psychology, 88, 256–265.
Lee, K., Ashton, M. C., & de Vries, R. E. (2005). Predicting workplace delinquency and integrity with the HEXACO and five-factor models of personality structure. Human Performance, 18, 179–197.
LePine, J. A., Erez, A., & Johnson, D. E. (2002). The nature and dimensionality of organizational citizenship behavior: A critical review and meta-analysis. Journal of Applied Psychology, 87, 52–65.
Levashina, J., & Campion, M. A. (2006). A model of faking likelihood in the employment interview. International Journal of Selection and Assessment, 14, 299–316.
Loevinger, J. (1966). The meaning and measurement of ego development. American Psychologist, 21, 195–206.
Loevinger, J. (1976). Ego development: Conceptions and theories. San Francisco: Jossey-Bass.
Loevinger, J. (1994). In search of grand theory. Psychological Inquiry, 5, 142–144.
McAdams, D. P. (1992). The five-factor model in personality: A critical appraisal. Journal of Personality, 60, 329–361.
McDaniel, M. A., Whetzel, D. L., Schmidt, F. L., & Maurer, S. D. (1994). The validity of employment interviews: A comprehensive review and meta-analysis. Journal of Applied Psychology, 79, 599–616.
McDonald, R. P. (1999). Test theory: A unified treatment. Mahwah, NJ: Erlbaum.
McFarland, L., & Ryan, A. M. (2006). Toward an integrated model of applicant faking behavior. Journal of Applied Social Psychology, 36, 979–1016.
McFarland, L., Yun, G., Harold, C. M., Viera, R., Jr., & Moore, L. (2005). An examination of impression management use and effectiveness across assessment center exercises: The role of competency demands. Personnel Psychology, 58, 949–980.
McHenry, J. J., Hough, L. M., Toquam, J. L., Hanson, M. A., & Ashworth, S. (1990). Project A validity results: The relationship between predictor and criterion domains. Personnel Psychology, 43, 335–354.
Mershon, B., & Gorsuch, R. L. (1988). Number of factors in personality sphere: Does increase in factors increase predictability of real life criteria? Journal of Personality and Social Psychology, 55, 675–680.
Michaelis, W., & Eysenck, H. J. (1971). The determination of personality inventory factor pattern and intercorrelations by changes in real-life motivation. Journal of Genetic Psychology, 118, 223–234.
Mischel, W. (1968). Personality and assessment. New York: Wiley.
Mischel, W. (1977). The interaction of person and situation. In D. Magnusson & N. S. Endler (Eds.), Personality at the crossroads: Current issues in interactional psychology (pp. 333–352). Hillsdale, NJ: Erlbaum.
Mol, S. T., Born, M. P. H., Willemsen, M. E., & Van Der Molen, H. T. (2005). Predicting expatriate job performance for selection purposes: A quantitative review. Journal of Cross-Cultural Psychology, 36, 590–620.
Morgeson, F. P., Campion, M. A., Dipboye, R. L., Hollenbeck, J. R., Murphy, K., & Schmitt, N. (2007a). Are we getting fooled again? Coming to terms with limitations in the use of personality tests for personnel selection. Personnel Psychology, 60, 1029–1049.
Morgeson, F. P., Campion, M. A., Dipboye, R. L., Hollenbeck, J. R., Murphy, K., & Schmitt, N. (2007b). Reconsidering the use of personality tests in personnel selection contexts. Personnel Psychology, 60, 683–729.
Mount, M. K., Barrick, M. R., & Strauss, J. P. (1994). Validity of observer ratings of the Big Five personality factors. Journal of Applied Psychology, 79, 272–280.
Ng, T. W. H., Eby, L. T., Sorensen, K. L., & Feldman, D. C. (2005). Predictors of objective and subjective career success: A meta-analysis. Personnel Psychology, 58, 367–408.
Nilsen, D. (1995). Investigation of the relationship between personality and leadership performance. Unpublished doctoral dissertation, University of Minnesota, Minneapolis.
Ones, D. S., Dilchert, S., Viswesvaran, C., & Judge, T. A. (2007). In support of personality assessment in organizational settings. Personnel Psychology, 60, 995–1027.
Ones, D. S., Viswesvaran, C., & Dilchert, S. (2005). Personality at work: Raising awareness and correcting misconceptions. Human Performance, 18, 389–404.
Ones, D. S., Viswesvaran, C., & Reiss, A. D. (1996). Role of social desirability in personality testing for personnel selection: The red herring. Journal of Applied Psychology, 81, 660–679.
Ones, D. S., Viswesvaran, C., & Schmidt, F. L. (1993). Comprehensive meta-analysis of integrity test validities: Findings and implications for personnel selection and theories of job performance. Journal of Applied Psychology, 78, 679–703.
Ones, D. S., Viswesvaran, C., & Schmidt, F. L. (2003). Personality and absenteeism: A meta-analysis of integrity tests. European Journal of Personality, 17(Suppl.), S19–S38.
Ong, A. D., Bergeman, C. S., Bisconti, T. L., & Wallace, K. A. (2006). Psychological resilience, positive emotions, and successful adaptation to stress in later life. Journal of Personality and Social Psychology, 91, 730–749.
Organ, D. W., & Ryan, K. (1995). A meta-analytic review of attitudinal and dispositional predictors of organizational citizenship behavior. Personnel Psychology, 48, 775–802.
Orpen, C. (1971). Fakability of the Edwards Personal Preference Schedule in personnel selection. Personnel Psychology, 24, 1–4.
Paunonen, S. V. (1998). Hierarchical organization of personality and prediction of behavior. Journal of Personality and Social Psychology, 74, 538–556.
Paunonen, S. V., & Ashton, M. C. (2001). Big Five factors and facets and the prediction of behavior. Journal of Personality and Social Psychology, 81, 524–539.
Peeters, H., & Lievens, F. (2005). Situational judgment tests and their predictiveness of college students' success: The influence of faking. Educational and Psychological Measurement, 65, 70–89.
Peeters, M. A. G., Van Tuijl, H. F. J. M., Rutte, C. G., & Reymen, I. M. M. J. (2006). Personality and team performance: A meta-analysis. European Journal of Personality, 20, 377–396.
Pervin, L. A. (1994). A critical analysis of current trait theory. Psychological Inquiry, 5, 103–113.
Potosky, D., Bobko, P., & Roth, P. L. (2005). Forming composites of cognitive ability and alternative measures to predict job performance and reduce adverse impact: Corrected estimates and realistic expectations. International Journal of Selection and Assessment, 13, 304–315.
Roberts, B. W., Chernyshenko, O. S., Stark, S., & Goldberg, L. R. (2005). The structure of conscientiousness: An empirical investigation based on seven major personality questionnaires. Personnel Psychology, 58, 103–139.
Roberts, B. W., Kuncel, N. R., Shiner, R., Caspi, A., & Goldberg, L. R. (2007). The power of personality: The comparative validity of personality traits, socioeconomic status, and cognitive ability for predicting important life outcomes. Perspectives on Psychological Science, 2, 313–345.
Rosse, J. G., Stecher, M. D., Levin, R. A., & Miller, J. L. (1998). The impact of response distortion on preemployment personality testing and hiring decisions. Journal of Applied Psychology, 83, 634–644.
Rothstein, M. G., & Goffin, R. D. (2006). The use of personality measures in personnel selection: What does current research support? Human Resource Management Review, 16, 155–180.
Ryan, A. M., Ployhart, R. E., & Friedel, L. A. (1998). Using personality testing to reduce adverse impact: A cautionary note. Journal of Applied Psychology, 83, 298–307.
Ryan, A. M., & Sackett, P. R. (1987). Pre-employment honesty testing: Fakability, reactions of test takers, and company image. Journal of Business and Psychology, 1, 248–256.
Sackett, P. R., Berry, C. M., Wiemann, S. A., & Laczo, R. M. (2006). Citizenship and counterproductive behavior: Clarifying relations between the two domains. Human Performance, 19, 441–464.
Sackett, P. R., & DeVore, C. J. (2002). Counterproductive behaviors at work. In N. Anderson, D. S. Ones, H. K. Sinangil, & C. Viswesvaran (Eds.), Handbook of industrial, work and organizational psychology (Vol. 1, pp. 145–164). Thousand Oaks, CA: Sage.
Sackett, P. R., & Ellingson, J. E. (1997). The effects of forming multi-predictor composites on group differences and adverse impact. Personnel Psychology, 50, 707–721.
Sackett, P. R., Zedeck, S., & Fogli, L. (1988). Relations between measures of typical and maximum job performance. Journal of Applied Psychology, 73, 482–486.
Saucier, G., Bel-Bahar, T., & Fernandez, C. (2007). What modifies the expression of personality tendencies? Defining basic domains of situation variables. Journal of Personality, 75, 479–503.
Saucier, G., & Goldberg, L. R. (1998). What is beyond the Big Five? Journal of Personality, 66, 495–524.
Schmidt, F. L., & Hunter, J. E. (1998). The validity and utility of selection methods in personnel psychology: Practical and theoretical implications of 85 years of research findings. Psychological Bulletin, 124, 262–274.
Schmit, M. J., Ryan, A. M., Stierwalt, S. L., & Powell, A. B. (1995). Frame-of-reference effects on personality scale scores and criterion-related validity. Journal of Applied Psychology, 80, 607–620.
Schmitt, N., & Kunce, C. (2002). The effects of required elaboration of answers to biodata questions. Personnel Psychology, 55, 569–587.
Schmitt, N., & Oswald, F. L. (2006). The impact of corrections for faking on the validity of noncognitive measures in selection settings. Journal of Applied Psychology, 91, 613–621.
Schmitt, N., Rogers, W., Chan, D., Sheppard, L., & Jennings, D. (1997). Adverse impact and predictive efficiency of various predictor combinations. Journal of Applied Psychology, 82, 719–730.
Schneider, B., Goldstein, H. W., & Smith, D. B. (1995). The ASA framework: An update. Personnel Psychology, 48, 747–779.
Schneider, R. J., & Hough, L. M. (1995). Personality and industrial/organizational psychology. In C. L. Cooper & I. T. Robertson (Eds.), International review of industrial and organizational psychology (pp. 75–129). Chichester, UK: Wiley.
Schneider, R. J., Hough, L. M., & Dunnette, M. D. (1996). Broadsided by broad traits: How to sink science in five dimensions or less. Journal of Organizational Behavior, 17, 639–655.
Schwab, D. P., & Packard, G. L. (1973). Response distortion on the "Gordon Personal Inventory" and the "Gordon Personal Profile" in a selection context: Some implications for predicting employee tenure. Journal of Applied Psychology, 58, 372–374.
Smith, P. C. (1976). Behaviors, results, and organizational effectiveness: The problem of criteria. In M. D. Dunnette (Ed.), Handbook of industrial and organizational psychology (pp. 745–775). Chicago: Rand McNally College Publishing.
Steel, P. (2007). The nature of procrastination: A meta-analytic and theoretical review of quintessential self-regulatory failure. Psychological Bulletin, 133, 65–94.
Steel, P., Schmidt, J., & Shultz, J. (2008). Refining the relationship between personality and subjective well-being. Psychological Bulletin, 134, 138–161.
Stewart, G. (1998). Trait bandwidth and stages of job performance: Assessing differential effects for conscientiousness and its subtraits. Journal of Applied Psychology, 84, 959–968.
Tellegen, A. (1993). Folk concepts and psychological concepts of personality and personality disorder. Psychological Inquiry, 4, 122–130.
Tett, R. P., & Burnett, D. B. (2003). A personality trait-based interactionist model of job performance. Journal of Applied Psychology, 88, 500–517.
Tett, R. P., & Christiansen, N. D. (2007). Personality tests at the crossroads: A response to Morgeson, Campion, Dipboye, Hollenbeck, Murphy, and Schmitt (2007). Personnel Psychology, 60, 967–993.
Tett, R. P., Jackson, D. N., Rothstein, M., & Reddon, J. R. (1999). Meta-analysis of bi-directional relations in personality-job performance research. Human Performance, 12, 1–29.
Tett, R. P., Steele, J. R., & Beauregard, R. S. (2003). Broad and narrow measures on both sides of the personality-job performance relationship. Journal of Organizational Behavior, 24, 335–356.
Tourangeau, R., & Yan, T. (2007). Sensitive questions in surveys. Psychological Bulletin, 133, 859–883.
Trattner, M. H. (1982). Synthetic validity and its application to the Uniform Guidelines validation requirements. Personnel Psychology, 35, 383–397.
Trent, T. (1987, August). Armed forces adaptability screening: The problem of item response distortion. Paper presented at the 95th Annual Convention of the American Psychological Association, New York City.
Van Vugt, M., De Cremer, D., & Janssen, D. P. (2007). Gender differences in cooperation and competition: The male-warrior hypothesis. Psychological Science, 18, 19–23.
Viswesvaran, C., & Ones, D. S. (1999). Meta-analyses of fakability estimates: Implications for personality measurement. Educational and Psychological Measurement, 59, 197–210.
Westerman, J. W., & Simmons, B. L. (2007). The effects of work environment on the personality-performance relationship: An exploratory study. Journal of Managerial Issues, 19, 288–305.
Wiggins, J. S., & Trapnell, P. D. (1997). Personality structure: The return of the Big Five. In R. Hogan, J. Johnson, & S. Briggs (Eds.), Handbook of personality psychology (pp. 737–765). San Diego: Academic Press.
Wood, J. L., Schmidke, J. M., & Decker, D. L. (2007). Lying on job applications: The effects of job relevance, commission, and human resource management experience. Journal of Business and Psychology, 22, 1–9.
Young, M. C. (2003, June). Effects of retesting on a new Army measure of motivational attributes: Implications for response distortion, test validity, and operational use. Paper presented at the 27th IPMAAC Conference on Personnel Assessment, Baltimore, MD.