
The Impact of Applicant Faking on Selection Measures, Hiring Decisions, and Employee Performance
Author(s): John J. Donovan, Stephen A. Dwight and Dan Schneider
Source: Journal of Business and Psychology, Vol. 29, No. 3 (September 2014), pp. 479-493
Published by: Springer
Stable URL: https://www.jstor.org/stable/24709848
Accessed: 02-04-2020 19:39 UTC

JSTOR is a not-for-profit service that helps scholars, researchers, and students discover, use, and build upon a wide
range of content in a trusted digital archive. We use information technology and tools to increase productivity and
facilitate new forms of scholarship. For more information about JSTOR, please contact support@jstor.org.

Your use of the JSTOR archive indicates your acceptance of the Terms & Conditions of Use, available at
https://about.jstor.org/terms

Springer is collaborating with JSTOR to digitize, preserve and extend access to Journal of
Business and Psychology

This content downloaded from 132.174.251.61 on Thu, 02 Apr 2020 19:39:21 UTC
All use subject to https://about.jstor.org/terms
J Bus Psychol (2014) 29:479–493
DOI 10.1007/s10869-013-9318-5

The Impact of Applicant Faking on Selection Measures, Hiring Decisions, and Employee Performance

John J. Donovan · Stephen A. Dwight · Dan Schneider

Published online: 6 August 2013


© Springer Science+Business Media New York 2013

Abstract

Purpose The purpose of this study was to examine the prevalence of applicant faking and its impact on the psychometric properties of the selection measure, the quality of hiring decisions, and employee performance.

Design/Methodology/Approach This study utilized a within-subjects design where responses on a self-report measure were obtained for 162 individuals both when they applied for a pharmaceutical sales position, and after they were hired. Training performance data was collected at the completion of sales training, and sales data was collected 5 months later.

Findings Applicant faking was a common occurrence, with approximately half of the individuals being classified as a faker on at least one of the dimensions contained in the self-report measure. In addition, faking was found to negatively impact the psychometric properties of the selection measure, as well as the quality of potential hiring decisions made by the organization. Further, fakers exhibited lower levels of performance than non-fakers.

Implications These findings indicate that past conclusions that applicant faking is either uncommon or does not negatively impact the selection system and/or organizational performance may be unwarranted.

Originality/Value Remarkably few studies have examined applicant faking using a within-subjects design with actual job applicants, which has limited our understanding of applicant faking. Even fewer studies have attempted to link faking to criterion data to evaluate the impact of faking on employee performance. By utilizing this design and setting, the present study provides a unique glimpse into both the prevalence of faking and the significant impact faking can have on organizations.

Keywords Applicant faking · Response distortion · Within-subjects design · Field study · Employee selection

J. J. Donovan (✉)
Department of Management, Rider University,
2083 Lawrenceville Road, Lawrenceville, NJ 08648, USA
e-mail: jdonovan@rider.edu

S. A. Dwight
Novo Nordisk, Plainsboro, NJ, USA

D. Schneider
BTG International, West Conshohocken, PA, USA

Introduction

The past two decades have seen a resurgence in the use of self-report, non-cognitive measures in employee selection (e.g., personality testing; Oswald and Hough 2008). This growth in popularity appears to be due in part to empirical evidence demonstrating that these types of measures exhibit positive relationships with performance criteria for a variety of jobs (e.g., Barrick and Mount 1991; Hurtz and Donovan 2000), and may provide incremental validity in predicting performance over and above more traditional selection techniques such as cognitive ability testing (e.g., Day and Silverman 1989; McHenry et al. 1990) and assessment centers (e.g., Goffin et al. 1996).

Despite these benefits, some researchers and practitioners are reluctant to accept this type of testing as a valid means of selecting employees due to concerns over applicant faking (Hough and Oswald 2008). Applicant faking refers to situations where applicants distort their responses to self-report measures, presumably as a means of making



themselves look more attractive to the company in order to maximize their chances of being hired. The concern is that there are individuals who can and will fake these measures and they will be misclassified as being among the most qualified individuals for a position. The organization in turn will make poor hiring decisions, which may lead to negative outcomes such as low levels of job performance among those hired, poor person-job fit, or an increased likelihood of hiring those likely to engage in counterproductive behaviors (e.g., Morgeson et al. 2007). The goal of the present study was to examine the validity of these concerns regarding the use of self-report measures for employee selection using empirical evidence obtained in an actual employment setting.

Research on Applicant Faking

A sizable number of studies have been conducted on the topic of applicant faking to determine if commonly voiced concerns over this phenomenon in selection settings are warranted. The first, and perhaps most obvious, question that comes to mind when attempting to determine if faking represents a real concern in employee selection is whether or not applicants can actually inflate their scores on the types of non-cognitive measures used in hiring. Research examining this question has clearly and unambiguously concluded that individuals are able to inflate their scores on these types of measures, with fakability estimates ranging between one-half of a standard deviation (Viswesvaran and Ones 1999) all the way up to over one full standard deviation (e.g., Alliger and Dwight 2000).

However, the fact that applicants can effectively inflate their scores on these measures would only be a concern if applicants are actually engaging in this behavior in selection settings. In other words, although the studies cited above tell us that applicants can fake, they provide us with no information about whether applicants actually do fake. Unfortunately, in contrast to the rather clear research findings regarding the ability of applicants to fake, there is much less agreement concerning the prevalence of faking among applicants. While some have concluded that faking is a relatively uncommon occurrence or is minimal in nature (e.g., Dunnette et al. 1962; Hough et al. 1990), others have found faking to be relatively common in selection settings (e.g., Anderson et al. 1984; Donovan et al. 2003; Pannone 1984; Rosse et al. 1998). Given these mixed findings, it is clear that additional research is needed to shed light on the prevalence of applicant faking. As such, our first research question focused on providing an estimate of the prevalence of applicant faking in a real-world selection setting.

RQ 1 How common is applicant faking in an applied selection setting?

Moving beyond the issue of faking prevalence, there is also considerable disagreement regarding whether or not faking has harmful effects on the psychometric properties of non-cognitive, self-report selection measures. For example, while some suggest that faking has the potential to degrade the factor structure (e.g., Cellar et al. 1996; Schmit and Ryan 1993) and psychometric functioning (e.g., Stark et al. 2001) of these measures, other researchers have concluded that faking does not pose a significant threat to the measurement properties of self-report selection measures (e.g., Smith and Ellingson 2002; Smith et al. 2001). Once again, these inconsistent findings suggest that additional research examining the impact of faking on the measurement properties of self-report selection instruments is warranted. In line with this assertion, our second research question focused on examining the effects of applicant faking on the psychometric properties of selection measures.

RQ 2 How does faking impact the psychometric properties of the self-report measure?

A final issue in the faking literature that has generated substantial debate is whether applicant faking is actually harmful to organizational selection systems. As noted by several researchers, faking may not be problematic from an organizational perspective unless faking impacts the performance of those that are hired (Mueller-Hanson et al. 2003; Rosse et al. 1998). Not surprisingly, there is an equal amount of disagreement on this issue, with a similarly inconsistent research literature. More specifically, while some have concluded that the effects of faking on selection systems are minimal (e.g., Hogan et al. 2007; Hough et al. 1990; Ones et al. 1996), others have demonstrated that faking has the potential to decrease the criterion-related validity of these self-report measures (e.g., Douglas et al. 1996; Komar et al. 2008), and impair the overall utility of selection systems (Anderson et al. 1984; Christiansen et al. 1994; Pannone 1984; Rosse et al. 1998). Given these mixed findings, our final research question focused on examining the impact of applicant faking on the overall utility of a selection system.

RQ 3 Does faking impact the utility of selection systems?

Taken together, these mixed and often inconsistent findings make it clear that additional research is needed to shed light on the prevalence and effects of applicant faking. However, as noted by several researchers (e.g., Ellingson et al. 2007; Griffith et al. 2007; Hogan et al. 2007), in order for us to provide informative answers to these key questions about applicant faking, researchers in this area need


to refine the methodologies used to study these topics. That is, it's not that we simply need more studies of faking to help us answer these questions, it's that we need new studies of faking that surmount the methodological limitations of past research in this domain.

Methodological Limitations of Past Faking Research

Although the body of faking research accumulated thus far has certainly provided us with useful information regarding this phenomenon, much of this research has suffered from three significant limitations that hinder our ability to draw firm conclusions about applicant faking.

First, with few exceptions (e.g., Ellingson et al. 2007; Rosse et al. 1998), the majority of faking research conducted to date has taken place in either a laboratory setting or in a simulated employment scenario. Although this research can provide us with helpful information regarding the potential prevalence and outcomes of faking, these studies do not provide us with estimates of the prevalence and impact of faking in actual organizations. To provide truly informative answers about the prevalence and effects of faking in real-world selection settings, researchers need to shift their focus to conducting field research examining these very questions using actual job applicants. Given the broad usage of non-cognitive, self-report measures in applied settings and their susceptibility to response distortion, one cannot overestimate the need for well-conducted field studies to help illuminate whether faking occurs and whether or not applicant faking negatively impacts the utility of selection systems. Although we recognize that this sort of research is often more difficult and time-consuming to conduct, the value of the information that can be gathered far outweighs the costs.

A second limitation present in research on applicant faking is that very few studies conducted thus far have collected any form of real-world criterion data to assess the impact of observed faking on employee performance in organizational settings (for notable exceptions see O'Connell et al. 2011; Peterson et al. 2011a, b). Instead, much of the research has focused on assessing the prevalence of faking, the impact of faking on selection measures themselves, or evaluating how faking impacts hiring decisions. However, as noted by Mueller-Hanson et al. (2003), applicant faking in and of itself may not be problematic unless this faking results in lower levels of performance among fakers that are hired as compared to those who responded honestly to the selection measure. As such, the lack of research examining the impact of applicant faking on employee performance makes it rather difficult to provide a confident answer to the practical question of "Does faking matter for organizations?"

The final major limitation of past faking research is that many of these studies have attempted to examine faking from a between-subjects perspective. To illustrate, one of the more common methods of studying faking in the past involved comparing the scores of groups of individuals obtained in distinct settings (e.g., applicants vs. incumbents) on the selection measures of interest and drawing inferences about faking based upon observed differences (e.g., Becker and Colquitt 1992; Dunnette et al. 1962; Hogan et al. 2007, Study 3; Hough et al. 1990; Rosse et al. 1998). As noted by past researchers (e.g., Ellingson et al. 2007; Guion and Cranny 1982; Viswesvaran and Ones 1999), there are a number of important differences that exist between job applicants and incumbents which limit the utility and validity of these types of group mean comparisons to assess faking. For example, this type of research design clearly cannot utilize random assignment to these groups in order to rule out confounding influences on test scores. Perhaps more importantly, these types of between-subjects designs ignore the variability in faking behavior among applicants that has been demonstrated in past research (e.g., McFarland and Ryan 2000; Zickar et al. 2004). Both Ellingson et al. (2007) and Viswesvaran and Ones (1999) note that the limitations of a between-subjects approach to studying faking call into question the accuracy of the estimates provided by these studies, and suggest that a within-subjects approach to studying faking may be more appropriate and valid. Unfortunately, there have been remarkably few within-subjects studies examining applicant faking, and even fewer that have taken place in actual organizational settings.

In a promising sign, recent work by Ellingson et al. (2007) and Hogan et al. (2007) has attempted to address these methodological limitations by utilizing within-subjects designs in real-world settings. However, the nature of the data collected in these studies suffers from limitations that may hinder our ability to draw generalizable conclusions about the nature of applicant faking. First, the two data points gathered for each individual by Ellingson et al. (2007) from an archival data set to assess faking were not uniform regarding the amount of time that had elapsed between the first and second administrations, with time intervals ranging from 12 days to 7 years. Second, the personality inventories examined in both Hogan et al. (2007) and Ellingson et al. (2007) utilized a dichotomous response format, which necessarily limits the magnitude of faking that can be observed as compared to response formats commonly used by other self-report measures (e.g., a 5-point or 7-point Likert scale). As a result, research conducted with these dichotomous response scales has the potential to significantly underestimate the magnitude of applicant faking. In addition, neither of these two studies


collected any form of criterion data to evaluate the impact of faking on employee performance.

More recent work by O'Connell et al. (2011) and Peterson et al. (2011a, b) has also attempted to provide data on the critical relationship between applicant faking and subsequent employee performance. O'Connell et al. (2011) examined the link between faking behavior and both subjective and objective measures of performance in a sample of manufacturing employees, finding that various indices of faking were weakly correlated to the outcomes of contextual performance and tardiness, but not supervisor ratings of task performance. Peterson et al. (2011a, b) examined the relationship between faking and counterproductive work behaviors in a sample of manufacturing employees, observing that faking was weakly correlated with self-reports of counterproductive work behaviors. Together, these studies represent an excellent first step toward developing our understanding of the impact of faking on real-world performance. However, the inconsistency of the relationships observed in O'Connell et al. (2011), the use of a self-report measure of performance by Peterson et al. (2011a, b), and the use of manufacturing employees for both studies suggest that more research examining the faking-performance relationship using other forms of performance data and other organizational populations is needed.

Thus, although these four recent studies are a step in the right direction, limitations associated with this empirical work hinder our ability to draw firm conclusions about the prevalence and impact of applicant faking in real-world settings.

Purpose of Present Study

Given the inconsistent findings in the faking literature along with the methodological limitations that have plagued much of this research, the purpose of the present study was twofold: (a) to conduct a field study designed to assess the prevalence of applicant faking, and (b) to evaluate the impact of this faking on the construct validity of the selection measure, the quality of hiring decisions made, and employee performance. To accomplish this, we collected and examined responses to a stable, non-cognitive selection measure (dispositional goal orientation; VandeWalle 1997) obtained from a group of individuals first as job applicants, then as incumbents after they were hired (i.e., using a within-subjects design).

Methods

Participants

Participants in this field study were 162 individuals who applied for a sales position in a pharmaceutical company and were subsequently hired and asked to attend a sales training program.

Procedure

As a part of the job application process, all individuals were asked to complete a dispositional goal orientation questionnaire specifically designed to assess goal orientation tendencies in workers (VandeWalle 1997). It is important to note that this measure was conceptualized and designed as a measure of stable goal orientation related to the workplace, as opposed to other measures of goal orientation designed to either assess state-like orientations (e.g., Elliot and Church 1997) or assess goal orientation tendencies in non-work domains (e.g., Midgley et al. 1998). These responses were collected along with several other selection measures during the application process, but were not used in any way as a means of selecting or rejecting applicants. Following the completion of this hiring event, those applicants that had been selected were required to attend a training program designed to increase their product knowledge, understanding of the marketplace, and selling skills. In this training program, representatives participated in a high-fidelity sales simulation where they were given the opportunity to make 3-5 min simulated "sales calls" to trainers playing the role of physicians. The number of calls made and the timing of these calls were left entirely up to the discretion of the trainee. Each sales call was scored by the trainer using a 5-point behaviorally anchored rating scale along two dimensions (product knowledge and selling skills), and trainees were given their score immediately following the call. In addition, cumulative scores for trainees were posted each day in a public location.

This training program commenced approximately 4 months after the hiring event. At the start of this training program, all individuals once again completed the goal orientation instrument, along with a number of other training-related measures. As such, all participants provided us with self-report dispositional goal orientation data as both applicants and incumbents, thus creating a within-subjects design that allowed us to circumvent many of the threats to validity associated with the use of between-subjects designs in faking research (Ellingson et al. 2007; Viswesvaran and Ones 1999).

It is important to note that for each administration of the goal orientation survey, respondents were simply told that they were expected to complete the questionnaire as a part of the hiring event (when they were applicants) or as a part of the training event (when they were incumbents). No additional information regarding the purpose or use of the instrument was provided to individuals in this study. In addition, individuals were not provided with any explanation for why they were taking this instrument a second time


(as incumbents) or reminded that they had already completed this same instrument once before.

Measures

Dispositional Goal Orientation

Dispositional goal orientation was assessed using a 13-item measure designed by VandeWalle (1997) to assess stable goal orientation tendencies among workers. Within this inventory, four items are used to assess each of the performance orientation dimensions (performance-prove goal orientation, PPGO; performance-avoid goal orientation, PAGO), and five items assess the individual's learning goal orientation (LGO). Although past research on faking and selection has focused predominantly on Big Five measures of personality, we felt that goal orientation represents a reasonable alternative to these Big Five measures for several reasons. First, similar to empirical work on the Big Five, recent work in the field of motivation has suggested that goal orientation exhibits meaningful relationships with important behaviors related to sales performance, such as effort, planning, feedback seeking, and goal setting (e.g., Porath and Bateman 2006; VandeWalle et al. 1999). Second, given its link to sales performance, goal orientation has been suggested as a possible selection device for sales people (e.g., Silver et al. 2006; VandeWalle et al. 1999). Third, this particular measure of dispositional goal orientation has well-established psychometric properties. More specifically, past research on this inventory (e.g., VandeWalle 1997) has demonstrated a consistent factor structure composed of three dimensions, and acceptable levels of both internal consistency (α > .80) and test-retest reliability (LGO = .66, PPGO = .60, PAGO = .57 over a three-month span) that are comparable to the levels commonly observed with other non-cognitive, self-report measures used in employee selection. To illustrate, VandeWalle (1997) reported a test-retest reliability of .66 for the LGO dimension, which is quite similar to the reported test-retest reliability of the Reid Report (Brooks and Arnold 1988), an integrity test that has been widely used in employee selection contexts. Finally, research conducted by Tan and Hall (2005) has indicated that, similar to Big Five measures of personality, this measure of goal orientation is susceptible to socially desirable responding. Based upon these characteristics and the similarity of this measure to other commonly examined self-report selection measures, we felt goal orientation was well suited to the purposes of the present study.

Training Performance

Performance was operationalized as the number of points earned by representatives during simulated sales calls throughout the entire training program (i.e., cumulative performance). Each sales call could earn representatives from 2 to 10 points, based upon the ratings assigned by the trainers playing the role of "physicians". The training performance observed among individuals in this study ranged from 8 to 453 (M = 121.25, SD = 46.10).

Job Performance

Job performance was operationalized by calculating the change in market share percentage for each individual's geographic region of sales responsibility observed in the 5 months following the sales training program described previously (i.e., market share at 5 months following training minus market share when the individual began their job). The job performance scores observed among individuals in this study ranged from −6.67 to 26.75 (M = 4.17, SD = 4.61).

Results

Comparison of Applicant and Incumbent Scores

Prior to investigating the effects of faking on our selection measure, we first sought to determine the extent to which individuals in the present study had engaged in faking by comparing applicant and incumbent scores across the three goal orientation dimensions. If applicants were faking, this should lead to more socially desirable responses in the applicant setting (i.e., higher LGO scores, lower PPGO and PAGO scores) as compared to the incumbent setting, where there is less motivation to fake. An examination of the mean scores obtained across these two settings (Table 1) indicated that, while applicant and incumbent scores were not significantly different for the PAGO dimension (M_Applicant = 19.70, M_Incumbent = 19.64), applicants scored significantly higher on the LGO dimension (M_Applicant = 19.49, M_Incumbent = 18.92, t(161) = 2.04, p < .05, d = .17) and significantly lower on the PPGO dimension (M_Applicant = 14.97, M_Incumbent = 15.41, t(161) = −2.35, p < .05, d = −.22). Although the effect sizes obtained for these differences are somewhat lower than those reported in prior research, these findings nonetheless provide initial evidence that applicants were engaging in faking by inflating their scores on a socially desirable dimension (LGO) and lowering their scores on an undesirable dimension (PPGO).

Identification of Fakers (RQ 1)

While mean differences among applicant and incumbent scores suggest the presence of applicant faking, they do not provide us with any specific information regarding the


Table 1 Descriptive statistics and dimension intercorrelations for the goal orientation instrument

Variable            M       SD     1      2       3      4      5      6
Applicant scores
1. LGO              19.49   3.67   (.57)
2. PPGO             14.97   1.75   .63*   (-.16)
3. PAGO             19.70   2.78   .61*   .47*    (.42)
Incumbent scores
4. LGO              18.92   3.06   .41*   .26*    .23*   (.80)
5. PPGO             15.41   2.34   .24*   .30*    .13    .62*   (.82)
6. PAGO             19.64   2.67   .18*   .01     .26*   .62*   .30*   (.86)

N = 162. Values on the diagonal represent reliability estimates.
LGO Learning Goal Orientation, PPGO Performance-Prove Goal Orientation, PAGO Performance-Avoid Goal Orientation
* A correlation that is significant at the .05 level

prevalence of faking among these individuals. To this end, we conducted a more focused set of analyses that identified individuals who demonstrated changes in their goal orientation scores in a socially desirable direction when comparing their applicant scores to their incumbent scores. The logic behind these analyses is that one can reasonably assume that individuals have less motivation to fake (i.e., their responses are more honest) when responding to these instruments as an incumbent, since they have already been hired and there is little benefit to be gained from faking. As such, these incumbent scores can serve as a logical baseline against which to compare the scores obtained by these same individuals as applicants to ascertain which individuals appear to have engaged in faking. Recognizing that a score change of one or two points can be entirely attributable to measurement error (rather than applicant faking), we utilized the standard error of measurement (SEM) for each goal orientation dimension to create a 95 % confidence interval around each individual's incumbent score that represents the range of applicant scores that might be expected for this individual given the random measurement error associated with each goal orientation dimension (cf. Griffith et al. 2007). The SEM for each goal orientation dimension was computed as:

SEM = s √(1 − r12)

where s is the standard deviation of the incumbent scores of the goal orientation dimension of interest and r12 is the test-retest reliability for that dimension. While past researchers (e.g., Griffith et al. 2007) have tended to utilize the internal consistency estimates obtained from "honest responses" in their samples to form their SEM, we chose to utilize the test-retest reliability estimates reported by VandeWalle (1997; see "Methods") for two reasons. First, we felt that the test-retest reliability estimate was more appropriate given that our administrations of the goal orientation instrument were separated by 4 months (as compared to other faking studies in which the administrations are separated by very short periods of time). Second, the estimates provided by VandeWalle (1997) were observed over a time period (3 months) quite similar to that in the present study with participants that had no incentive to engage in faking. The resulting SEMs for the goal orientation dimensions were 2.02 (LGO), 1.41 (PPGO), and 1.75 (PAGO). The 95 % confidence intervals¹ were then created for each dimension by multiplying this SEM by 1.96 to determine the relevant upper or lower bound of the confidence interval (see Fig. 1). We then classified people as "fakers" if their applicant score was above the upper limit of this confidence interval for the LGO dimension, and below the lower limit of this confidence interval for the PAGO and PPGO dimensions (since lower PAGO and PPGO scores would be considered more socially desirable for job applicants; Tan and Hall 2005). In other words, individuals were classified as "fakers" when their scores as applicants (a) departed from their incumbent scores in a socially desirable direction, and (b) this departure was larger than would be expected on the basis of measurement error alone.

The results of this classification process (presented in Table 2) indicated that roughly 22 % of the individuals could be classified as fakers on the LGO dimension, while 14 % and 25 % could be classified as fakers on the PPGO and PAGO dimensions, respectively. These findings clearly indicate that there were a substantial number of individuals displaying significant score changes across the applicant and incumbent settings. Although we cannot definitively conclude that these changes are solely due to response distortion on the part of applicants, these changes in scores are larger than what one would expect to see simply due to measurement error.

We then examined faking across the three goal orientation dimensions to examine the extent to which individuals were classified as fakers on at least one scale. The results of this analysis (Table 2) revealed that 50.3 % of the individuals in this study were not classified as fakers for any of the three goal orientation dimensions. While this could be considered reassuring evidence that a sizable number of individuals do not engage in faking, it also indicates that approximately half of the individuals in this study (49.7 %) are categorized as fakers on one or more dimensions. These findings also suggest that applicant

¹ While others have suggested alternatives to the 95 % confidence interval (e.g., Arthur et al. 2010), we felt that our approach would facilitate comparison and integration with past research that has tended to use this 95 % confidence interval (e.g., Griffith et al. 2007; Peterson et al. 2011a, b).


Fig. 1 Illustration of analyses used to identify fakers. [Figure: for each goal orientation dimension, a 95 % confidence interval spanning 1.96 SEMs on either side of the individual's incumbent score is drawn along a low-to-high axis of applicant scores; applicants whose scores fall outside this interval in the socially desirable direction (above the interval for LGO, below it for PPGO and PAGO) are classified as fakers.]
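The classification rule depicted in Fig. 1 can be expressed in a few lines of code. This is an illustrative sketch rather than the authors' actual procedure; the function names and the example reliability and score values below are our own, and the direction of the "faking" region follows the figure (above the interval for LGO, below it for PPGO and PAGO).

```python
import math

def sem(incumbent_sd, test_retest_r):
    """Standard error of measurement: SEM = s * sqrt(1 - r12)."""
    return incumbent_sd * math.sqrt(1.0 - test_retest_r)

def is_faker(applicant_score, incumbent_score, sem_value, higher_is_desirable=True):
    """Flag an applicant whose score falls outside the 95 % confidence
    interval (1.96 SEMs) around their own incumbent score, in the
    socially desirable direction for that dimension."""
    half_width = 1.96 * sem_value
    if higher_is_desirable:  # e.g., LGO
        return applicant_score > incumbent_score + half_width
    # e.g., PPGO or PAGO
    return applicant_score < incumbent_score - half_width
```

For example, with an (invented) incumbent standard deviation of 4.0 and a test-retest reliability of .66, the SEM is about 2.33, so an applicant scoring more than roughly 4.6 points above their incumbent LGO score would be flagged.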

Table 2 Results of "Faker" identification analyses

By goal orientation dimension (% of individuals identified as fakers):
  LGO    21.7 % (n = 35)
  PPGO   13.7 % (n = 22)
  PAGO   24.8 % (n = 40)

Number of dimensions faked, across all three dimensions (percentage of respondents):
  0      50.3 % (n = 81)
  1      40.4 % (n = 65)
  2       8.7 % (n = 14)
  3        .6 % (n = 1)

faking was quite prevalent when examined across the entire goal orientation instrument, rather than looking at each of the three individual scales. Interestingly, the results also indicated that more extreme forms of faking (i.e., faking on more than one scale) were present in this sample; 9.3 % of the respondents were categorized as fakers on two or more of the dimensions, indicating that, although less common, there were a meaningful number of individuals engaged in these behaviors.

Psychometric Impact of Applicant Faking (RQ 2)

Having established that applicant faking was prevalent in this sample, we sought to determine what effect this faking had on the psychometric properties of the goal orientation scales by comparing the internal consistency and factor structure of these measures across the applicant and incumbent settings.

Internal Consistency

As a first step in these analyses, we examined the internal consistency of the three dimensions of goal orientation across the applicant and incumbent settings. If faking impacts the internal consistency of self-report measures, then we would expect to observe differences in these estimates between the incumbent setting (where there is little incentive to engage in faking) and the applicant setting (where there is ample incentive to fake). Consistent with past research, the internal consistency estimates for all three goal orientation dimensions using incumbent responses exceeded .80 (LGO = .80, PAGO = .86, PPGO = .82), indicating that all three dimensions displayed acceptable levels of reliability. However, an examination of the responses provided by these same individuals to these very same scales when applying for a job indicated that all three of the dimensions demonstrated unacceptably low levels of


Table 3 Select item-level means across the applicant and incumbent settings

Item                                                              M_applicant  M_incumbent  t        d

PPGO items focusing on high ability/performance
  I am concerned with showing that I can perform better
  than my coworkers                                               5.75         3.99         13.55*   1.63
  I try to figure out what it takes to prove my ability
  at work                                                         5.72         4.45         10.87*   1.28

PPGO items focusing on wanting others to know how good they are
  I enjoy it when others at work are aware of how well
  I am doing                                                      1.55         4.51        -26.59*  -2.82
  I prefer to work on projects where I can prove my ability
  to others                                                       2.02         4.42        -17.34*  -1.97

LGO item focusing on risk taking
  For me, development of my work ability is important
  enough to take risks                                            5.75         5.87         -2.24*   -.22

* p < .05
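The applicant-versus-incumbent comparisons reported in Table 3 amount to paired t tests plus a standardized mean difference (d). A minimal sketch follows; the data are invented, and averaging the two settings' variances for d is an assumed pooling choice, not necessarily the exact formula used in the study.

```python
import math
from statistics import mean, stdev

def paired_t(applicant, incumbent):
    """Paired t statistic for the same individuals' item responses
    in the applicant vs. incumbent settings."""
    diffs = [a - b for a, b in zip(applicant, incumbent)]
    return mean(diffs) / (stdev(diffs) / math.sqrt(len(diffs)))

def cohens_d(applicant, incumbent):
    """Standardized mean difference; the pooled SD here averages the
    two settings' variances (an assumed convention)."""
    pooled_sd = math.sqrt((stdev(applicant) ** 2 + stdev(incumbent) ** 2) / 2)
    return (mean(applicant) - mean(incumbent)) / pooled_sd
```

A positive d indicates higher means in the applicant setting, matching the sign convention in Table 3.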

internal consistency, with coefficient alphas ranging from a high of .57 (LGO) to a low of -.16 (PPGO), with PAGO falling in the middle (α = .42). All three of these internal consistency estimates are much lower than those typically reported for these scales, as well as the estimates obtained for the same respondents once they had been hired, suggesting that these scales were not functioning as they normally do. Even more striking was the observation of a negative internal consistency estimate for the PPGO dimension. Negative estimates typically only arise from problems associated with reverse coding of items or poorly worded items. However, the PPGO scale does not utilize any reverse-coded items and the item wording did not appear to be a problem when these individuals responded as job incumbents (see above).

In an attempt to determine the possible causes of these low levels of internal consistency, we explored the item-level statistics obtained for each of the goal orientation dimensions from applicants. First, we examined the PPGO scale to try to determine why these four items were exhibiting a negative internal consistency estimate. An examination of the item-total correlations revealed that these four items appeared to be split into two subgroups of two items, with each subgroup displaying a negative correlation with items in the other subgroup. These types of inter-item correlations were somewhat surprising given that all items are designed to assess the same dimension of goal orientation and that previous research using this instrument has never observed this type of item performance. A closer look at the content of these two groups of items, however, indicates that this factor split may be the result of socially desirable responding (i.e., faking). More specifically, the items in each of the identified subgroups could be classified as focusing on either: (a) demonstrating high ability and high performance, or (b) wanting others to know how good they are. Inasmuch as the first group of items represents characteristics that would likely be considered desirable when applying for a job (i.e., it's a good thing to let a potential employer know that you are focused on being a high performer), while the second group of items might be considered undesirable (i.e., "showing off" to others may be seen as a negative characteristic), one could explain the formation of these 2-item composites within the PPGO dimension as an indicator that these applicants were trying to emphasize positive characteristics and downplay negative characteristics during the application process. In other words, this bifurcation of the PPGO dimension could be due to applicant faking. To address this possibility, exploratory analyses were conducted examining the item means for this dimension across the applicant and incumbent settings. The results of these analyses (presented in Table 3) support the proposition that applicant faking may have been responsible for the internal consistency issues observed in the applicant setting. More specifically, for the two items that assessed the desire to demonstrate high levels of ability and performance, item means were significantly higher when individuals responded as applicants as compared to their responses as incumbents (t (161) = 13.55 and 10.87, both p < .05, d = 1.63 and 1.28). In contrast, for the two items that assessed the desire to "show off" to others, item means were significantly lower when individuals responded as applicants as compared to their incumbent responses (t (161) = -26.59 and -17.34, both p < .05, d = -2.82 and -1.97). Although these analyses are post hoc, they nonetheless provide support for the assertion that applicant faking could be responsible for the poor internal consistency estimates evidenced for the PPGO scale in the applicant setting.

Similarly, the items that form the LGO dimension were examined in an attempt to pinpoint the source of the poor internal consistency estimates. The item-total correlations for this dimension revealed that one item appeared to be largely responsible for this low level of scale homogeneity. As with the PPGO scale, the item causing this problem was distinct from the remainder of the items in this scale in that it focused on a willingness to take risks on the job, while the remainder of the items focuses on taking on challenges


and looking for ways to improve oneself. If one considers that risk taking may be something that is viewed as undesirable on this job, while taking on challenges and trying to improve on past performance are desirable characteristics, these item-level results could once again be attributed to socially desirable responding on the part of the applicants. Examination of the mean for this item across the two settings (Table 3) indicated that a lower item mean was observed for applicant responses as compared to incumbent responses (t (161) = -2.24, p < .05, d = -.22), providing additional support for the proposition that applicant faking may have harmed the internal consistency of these instruments.

Together, these analyses indicate that the items in these instruments functioned differently across the two settings (applicant vs. incumbent) and that applicant faking may have the effect of reducing the internal consistency of these measures. To further explore the impact of faking on the psychometric functioning of these scales, we turned to confirmatory factor analysis (CFA) to assess the fit of an established measurement model to the responses obtained in these two settings.

Factor Structure

Although the internal consistency findings obtained above clearly suggest that the factor structure of the goal orientation measure is likely to be altered, we nonetheless conducted a CFA on the responses obtained from our respondents as applicants and job incumbents to determine the impact of faking on the factor structure of this measure. As noted previously, past work using this scale in both research and applied settings has demonstrated a consistent fit of these scales to a three-factor model (e.g., VandeWalle 1997).

The results of the CFA for incumbent responses indicated that the three-factor solution demonstrated in past research fit the data reasonably well (RMSEA = .05, SRMR = .05, GFI = .92, AGFI = .89, CFI = .97). In contrast, the results of the CFA for applicant responses indicated that a three-factor solution provided a very poor fit to the data (RMSEA = .25, SRMR = .19, GFI = .60, AGFI = .41, CFI = .74), with all indices falling well outside of the acceptable range. These results reinforce the earlier suggestion that the goal orientation scales were functioning quite differently when individuals were responding as applicants compared to when these same individuals were responding as incumbents. To further examine the factor structure of applicant responses, an exploratory factor analysis using maximum likelihood estimation was used to determine if a meaningful factor structure existed within these responses as well as to determine whether the "ideal employee" factor observed by Schmit and Ryan (1993) would emerge. However, these analyses failed to produce any interpretable factor structure displaying acceptable fit.

Taken as a whole, the internal consistency estimates and CFA results observed for the three dimensions of goal orientation provide us with evidence that this personality inventory does not function normally when completed in a selection setting. When completing these measures as incumbents (with little motivation to engage in faking), the scales functioned as expected. In contrast, when responding as applicants (where there is likely to be substantially more motivation to fake), the scales exhibited rather poor psychometric properties. Although these conclusions rely to a certain degree on post hoc analyses, these findings are clearly compatible with the proposition that applicants were engaging in faking and that this type of responding damaged the measurement properties of an otherwise psychometrically sound measure.

Impact of Faking on the Utility of Selection Systems (RQ 3)

Although the results thus far have clearly indicated that response distortion appears to be a relatively common occurrence and that the response patterns of job applicants may harm the psychometric properties of self-report measures used for selection, these observations still do not indicate whether applicant faking is harmful for the overall utility of selection procedures. That is, an examination of faking at the group level does not indicate whether or not faking has practically significant results in a traditional, top-down employee selection context (Alliger and Dwight 2000; Mueller-Hanson et al. 2003; Rosse et al. 1998). To address this issue, we conducted individual-level analyses in which we examined the extent to which fakers would be "hired" under different selection scenarios (i.e., varying selection ratios). Individuals were rank ordered on their applicant LGO scores (since this is the primary dimension that has been suggested as a selection tool for salespeople) and simulated top-down selection decisions were made based on selection ratios of .05, .10, .15, and .202 to provide an examination of whether fakers "rise to the top" in a variety of selection scenarios (cf. Dwight and Donovan 2003; Mueller-Hanson et al. 2003; Rosse et al. 1998). The results of these analyses (presented in Table 4) indicated that a substantial number of the individuals who would be hired under these different selection settings were those that we classified as fakers using our categorization

2 Because of the presence of ties among individuals on their LGO scores, the actual selection ratios differed slightly from the stated selection ratios of .05, .10, .15, and .20. The obtained selection ratios were .07, .11, .19, and .29.


Table 4 Simulated hiring results across various selection ratios

Selection ratio   Fakers hired                        Non-fakers hired
.05               N = 8 (73 %)                        N = 3 (27 %)
                  M_applicant = 25.63                 M_applicant = 27.00
                  M_incumbent = 20.88                 M_incumbent = 26.67
                  d = 3.63                            d = .19
                  22.8 % of faker population hired    2.4 % of non-faker population hired
.10               N = 10 (56 %)                       N = 8 (44 %)
                  M_applicant = 25.20                 M_applicant = 24.75
                  M_incumbent = 20.40                 M_incumbent = 24.25
                  d = 3.00                            d = .22
                  28.6 % of faker population hired    6.3 % of non-faker population hired
.15               N = 14 (47 %)                       N = 16 (53 %)
                  M_applicant = 24.29                 M_applicant = 23.38
                  M_incumbent = 19.64                 M_incumbent = 23.19
                  d = 2.34                            d = .09
                  40.0 % of faker population hired    12.6 % of non-faker population hired
.20               N = 18 (38 %)                       N = 30 (62 %)
                  M_applicant = 23.56                 M_applicant = 22.27
                  M_incumbent = 18.94                 M_incumbent = 22.20
                  d = 2.05                            d = .03
                  51.4 % of faker population hired    23.6 % of non-faker population hired

Note: N = the number of individuals hired in the selection ratio that are classified as either fakers or non-fakers; M_applicant = the mean LGO score obtained by individuals in this grouping when responding as applicants; M_incumbent = the mean LGO score obtained by individuals in this grouping when responding as incumbents; d = the mean standardized difference between applicant and incumbent scores for each group; % of faker/non-faker population hired = the percentage of the total population of fakers (n = 35, see Table 2) and non-fakers (n = 127) that were hired in each of the selection ratios.
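The top-down hiring simulation summarized in Table 4 reduces to sorting candidates on their applicant-setting scores and hiring the top fraction. A simplified sketch follows (scores and faker labels are invented; unlike the study, ties are broken arbitrarily here rather than by widening the obtained selection ratio):

```python
def simulate_top_down_hiring(applicant_scores, is_faker_flags, selection_ratio):
    """Rank candidates on their (possibly inflated) applicant scores,
    hire the top fraction, and count how many hires were labeled fakers."""
    ranked = sorted(
        zip(applicant_scores, is_faker_flags),
        key=lambda pair: pair[0],
        reverse=True,
    )
    n_hired = max(1, round(len(ranked) * selection_ratio))
    hired = ranked[:n_hired]
    n_fakers_hired = sum(1 for _, faked in hired if faked)
    return n_hired, n_fakers_hired
```

Because fakers carry inflated scores, tightening the selection ratio concentrates them at the top of the hired group, which is the pattern reported in Table 4.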

strategy. 73 % (8 out of 11) of those hired under a .05 selection ratio would be classified as fakers, while 56 % (10/18), 47 % (14/30), and 38 % (18/48) of those hired under selection ratios of .10, .15, and .20 (respectively) would be categorized as fakers. These observations that fakers tend to rise to the top of the selection rankings are quite consistent with past simulation and experimental research looking at the effects of faking on the utility of selection systems (e.g., Dwight and Donovan 2003; Mueller-Hanson et al. 2003; Rosse et al. 1998), and clearly suggest that applicant faking will impact the hiring decisions made by an organization utilizing these types of self-report measures, particularly with small selection ratios.

Table 4 also presents the mean LGO scores reported for individuals within the faking and non-faking groups as both applicants and incumbents. Examination of these means and the associated effect sizes (d) illustrates the substantial magnitude of the faking that was occurring among those identified as fakers (d values ranging from 2.05 to 3.63) within each of these selection ratios, while also demonstrating the relative stability of LGO scores among the non-fakers. Finally, Table 4 provides the percentage of the faker (n = 35, see Table 2) and non-faker (n = 127) populations in our total sample that were being hired under each selection ratio. For example, under a .05 selection ratio, 22.8 % (8 out of 35) of the fakers present in our total sample were hired in this selection scenario, as compared to 2.4 % (3 out of 127) of the non-faker population in our study. Looking at these percentages, one can see additional evidence of the disproportionate selection of fakers present in each of the selection ratios. Perhaps the most striking finding is that under a .20 selection ratio, over 50 % of the population of fakers identified in our study would be hired (i.e., a selection ratio greater than .50), while less than 24 % of the non-faker population would be hired under that same selection scenario.

To get a better sense of whether the fakers that were hired using applicant scores would be hired using incumbent scores (which should contain little to no faking), we created a simulated hiring situation using incumbent LGO scores with a selection ratio of .20. We then compared the individuals that were hired under this scenario with those that were hired using the same selection ratio using applicant LGO scores. If faking truly does not impact the hiring decisions made by an organization, then the individuals hired using incumbent scores should be no different than those hired using applicant scores. The results of this analysis indicated that the majority of non-fakers (17 out of 30; 55 %) that were hired using applicant scores would also be hired if we used incumbent scores, suggesting a reasonable degree of consistency in hiring outcomes. In contrast, only 4 individuals (out of 18; 22 %) identified as fakers that were hired using applicant scores would also be hired using incumbent scores. In other


words, the vast majority of these individuals would not be hired using scores that likely are more representative of their true standing on the LGO dimension. This once again suggests that faking significantly impacts the hiring process when organizations use self-report selection measures in a top-down fashion.

However, as noted by Mueller-Hanson et al. (2003), hiring those individuals who engage in faking is not problematic unless these same individuals exhibit lower levels of performance than those who responded honestly to the selection measure. To examine this possibility, we gathered two forms of performance data (see "Methods") for study participants from their organization. First, we obtained a numerical measure of training performance that assessed their success in learning and applying information presented during their organizational training. Second, we obtained a measure of on-the-job performance that assessed the increase in product market share achieved by each individual in their geographic region over a 5-month period following completion of the training program. Using these performance measures, we examined the performance of those individuals who were "hired" under the .20 selection ratio (the only ratio that provides us with a reasonable number of individuals to test for mean performance differences) to determine if the fakers were different from the non-fakers in terms of performance. The results of this analysis indicated that those who were classified as fakers (n = 18) demonstrated significantly lower levels of overall training performance (t (46) = 2.48, p < .05, d = .76) than non-fakers (n = 30).3 In addition, although the difference was not statistically significant at traditional levels (t (46) = 1.34, p < .10, one tailed), the mean standardized difference between fakers and non-fakers in terms of market share change was nonetheless moderate in magnitude (d = .39). In combination, these effect sizes suggest that there are substantial differences in the performance levels exhibited by these two groups of individuals, indicating that faking has the capacity to not only influence who is hired, but also impact the level of performance that these individuals ultimately display once on the job.

Although our stated research questions did not address the impact of faking on criterion-related validity (primarily because the use of LGO as a predictor of job performance is a relatively new proposition), we nonetheless conducted exploratory analyses in which we examined the level of prediction obtained by our LGO hiring scores across the faking and non-faking groups, as this particular question has been a topic of some debate in the research literature (e.g., McDaniel et al. 1996). More specifically, we compared the correlations between hiring LGO scores and our two performance criteria for those identified as fakers on this goal orientation dimension (N = 35) with the correlations observed among the non-fakers (N = 127).4 The results of this analysis indicated that, although there was no significant correlation between LGO scores and either training performance (r = .10, ns) or market share performance (r = .04, ns) among non-fakers, the correlations observed for the faking group were negative and somewhat substantial in size (r = -.30 and -.18, respectively), although these correlations were not statistically significant. While the lack of statistical significance for these correlations precludes us from concluding that faking status moderated the predictive validity of the LGO dimension, these observed correlations hint at the potentially negative outcomes that may arise when hiring individuals based upon an inflated LGO score attributable to faking; higher LGO scores among those identified as fakers were negatively related to both measures of performance. While these results are by no means definitive proof that faking harms criterion-related validity, they are nonetheless informative regarding the possible effects of applicant faking that organizations should be aware of. More importantly, when taken together with the earlier results demonstrating that faking alters the factor structure and internal consistency of these measures, these results suggest that, at a broader level, faking impacts the construct validity of these measures.

3 A comparison of all fakers versus non-fakers (not just those selected in this simulation) also demonstrates that fakers obtained significantly lower training performance scores than non-fakers (t (160) = 2.03, p < .05, d = .47).

4 The results from moderated multiple regression analyses to assess whether faking status moderated the relationship between LGO and both performance measures indicated that faking status was not a significant moderator of either relationship.
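The exploratory validity comparison described above amounts to computing the predictor-criterion correlation separately within the faker and non-faker subgroups. A sketch using a from-scratch Pearson r (all data and function names below are invented for illustration; the study's own r values are not reproduced):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def validity_by_group(scores, criteria, faker_flags):
    """Predictor-criterion correlation computed separately for
    fakers and non-fakers (returns the pair of correlations)."""
    fakers = [(s, c) for s, c, f in zip(scores, criteria, faker_flags) if f]
    honest = [(s, c) for s, c, f in zip(scores, criteria, faker_flags) if not f]
    return pearson_r(*zip(*fakers)), pearson_r(*zip(*honest))
```

Diverging correlations across the two subgroups, as reported above, are what a moderation of criterion-related validity by faking status would look like at the descriptive level.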


Prevalence of Faking

The results of the present study suggest that applicant faking may be a common occurrence in selection settings. Roughly half (49.7 %) of the applicants were classified as fakers on at least one of the three dimensions of goal orientation employed in this study. In addition, almost 10 % of the individuals appear to have faked two or more of the three dimensions. Based upon these results, we can draw two conclusions about applicant faking. First, it is clear that applicant faking was a commonplace behavior among these individuals, which suggests that past assertions about the universally low base rate for this type of behavior (e.g., Dunnette et al. 1962; Hogan 1991; Hough et al. 1990) may be inaccurate. Interestingly, the prevalence estimates obtained in this study are remarkably similar to the results obtained by both Donovan et al. (2003), who found that 55 % of the applicants in their study admitted to having inflated their positive characteristics when filling out a personality inventory, and O'Connell et al. (2011), who estimated that up to 43 % of their sample could be identified as fakers. In comparison, Peterson et al. (2011a, b) identified only 24 % of their sample as fakers. However, unlike the present study and O'Connell et al. (2011), Peterson et al. only examined faking on a single self-report measure, whereas O'Connell et al. examined faking across multiple measures and the present study examined faking on three scales of a self-report inventory. The second conclusion that can be drawn is that, although less common than more moderate forms of faking, a meaningful number of individuals were engaging in more severe forms of faking (e.g., faking two or more dimensions of the goal orientation instrument). Once again, this finding is consistent with the past research suggesting that extreme forms of faking are likely to occur, but are less common than more minor forms (e.g., Donovan et al. 2003).

However, while these findings are consistent with some of the faking literature, they stand in stark contrast to the conclusions drawn by recent studies by Hogan et al. (2007) and Ellingson et al. (2007), both of which were conducted using actual job applicants in a within-subjects design, and both of which concluded that applicant faking was a very uncommon occurrence. While these contradictory findings may be somewhat unsettling, an examination of the methodologies used by Hogan et al. (2007) and Ellingson et al. (2007) provides two potential explanations for these discrepancies. First, although both of these studies examined faking on non-cognitive, self-report measures used for selection, it is important to realize that the measures utilized in these studies were distinct from the measure used in this study in meaningful ways. While the Hogan et al. (2007) and the Ellingson et al. (2007) studies utilized a personality measure with a dichotomous response format (i.e., the HPI and CPI, respectively), the present study used an inventory that asks individuals to respond along a continuous 7-point scale. While this distinction may seem trivial, the use of a dichotomous item format provides less opportunity for faking in that individuals trying to present themselves in a desirable manner only have one option to do so (by choosing the "right" answer). In contrast, personality measures collecting responses on a continuous format provide a much greater opportunity for faking. To illustrate, consider the case in which an applicant is presented with a statement about a clearly desirable attribute that they do not actually possess (e.g., "I am dependable"). When faced with a dichotomous response format, there is only one option for faking on this item: indicating that the statement is true. However, when answering that very same question on a continuous scale, the applicant can attempt to enhance their chances of getting hired by indicating minimal agreement, moderate agreement, or strong agreement. That is, there is more opportunity to fake on items with varying degrees of agreement. This distinction, in and of itself, may be responsible for the divergent results obtained in the present study in that the measures used by Hogan et al. (2007) and Ellingson et al. (2007) may have limited the opportunity to fake, thereby leading to their lower estimates regarding the prevalence of faking. This is not to say that their studies are misleading or less informative, but rather that their estimates and the conclusions drawn from these estimates may be best thought of as characterizing faking on dichotomous response personality inventories, but not necessarily faking at a more general level.

A second difference that may account for the divergent findings obtained in the present study is the type of participants utilized by Hogan et al. (2007). In their study, participants were job applicants who had completed the selection measures at one point in time, were subsequently rejected by the organization for employment, and then completed the measures a second time as a means of trying to obtain employment with the same organization. Thus, although we have two data points for the same individuals (i.e., a within-subjects design), in both instances individuals were applying for a job (i.e., were applicants). While Hogan et al. (2007) argue that there would be a greater motivation to fake the second time around, there is no empirical evidence provided that suggests that these applicants were more motivated to fake at this time. One could just as easily argue that the applicants should have been equally motivated to fake both times. If this is true, then it makes sense that one would not observe evidence of faking, since faking was operationalized by changes in test scores across the two data collection points. That is, if people were faking in both instances, we would observe no change in test scores, which would result in the incorrect

Springer

This content downloaded from 132.174.251.61 on Thu, 02 Apr 2020 19:39:21 UTC
All use subject to https://about.jstor.org/terms
J Bus Psychol (2014) 29:479^*93 491

conclusion that faking was absent in this setting (based upon the operationalization used by Hogan et al.). In contrast, the present study collected data from individuals as applicants and incumbents, two distinct settings that clearly provide differing levels of motivation to fake, which should provide us with a clearer examination of changes in self-report scores that may occur via faking.

Psychometric Impact of Faking

Given that applicant faking was a common occurrence in our sample, our next set of analyses focused on determining if applicant faking had harmful effects on the psychometric properties of the measure utilized in this study. The results of internal consistency and confirmatory factor analyses clearly indicated that, although this inventory functioned in a satisfactory manner when individuals responded as job incumbents, this very same inventory functioned rather poorly when individuals were responding in an applicant setting. More specifically, the internal consistency estimates obtained from applicants were far below the estimates obtained for these very same respondents as incumbents. Similarly, while the expected factor structure of this measure was supported using incumbent responses, this same factor structure provided a poor fit to the responses provided as applicants. These findings are particularly informative given that there is considerable debate in the literature regarding whether faking has deleterious effects on the measurement properties of personality inventories (e.g., Cellar et al. 1996; Ellingson et al. 2001; Smith et al. 2001). The results of our psychometric analyses become even more interesting when one realizes that these analyses were conducted within the context of a repeated-measures design. In contrast to past research attempting to assess the effects of faking by comparing factor structure across distinct groups of individuals (e.g., Ellingson et al. 2001; Schmit and Ryan 1993; Smith et al. 2001), the present study examined factor structure within the same group of individuals across two settings. As such, we can be much more confident that our changes in internal consistency and factor structure cannot be attributed to stable differences between our applicant and incumbent groups. Given that there is considerable motivation to engage in socially desirable responding when attempting to obtain employment and little motivation to engage in this type of responding once one already has the job in question, we can infer that the degradation of the psychometric properties of this self-report measure observed in the applicant setting is at least partially due to faking on the part of these applicants. Although past research has provided equivocal results as to whether faking has a negative impact on the psychometric properties of non-cognitive, self-report measures, it is important to realize that the present study is one of the first to examine this question utilizing a within-subjects design in an actual organizational setting. Given that within-subjects designs are likely to be more sensitive than between-subjects designs in faking research (Viswesvaran and Ones 1999), this study provides unique information that is highly relevant to the question of whether faking matters. When taken together with our results suggesting that the relationships between these self-report measures and our employee performance criteria differ for those engaging in faking and those responding honestly, these findings indicate that faking has the potential to alter the construct validity of non-cognitive, self-report measures. In other words, these measures may operate differently in settings where applicant faking is present as compared to situations where there is little incentive to misrepresent oneself. In essence, the construct validity of these scales appears to change across testing contexts. While the evidence presented here is by no means definitive, it does suggest that the impact of faking on our selection measures is more significant than past research would lead one to believe.

Is Faking Harmful to Selection Decisions?

Analyses conducted to examine how faking impacts the hiring decisions that would be made by an organization provided further evidence that applicant faking can be harmful. Simulated top-down selection analyses indicated that fakers rose to the top of the selection rankings, thus increasing the likelihood that an organization would hire these individuals. More importantly, in line with research by O'Connell et al. (2011) and Peterson et al. (2011a, b), the results of our analyses indicated that fakers tended to have lower levels of performance in two domains (training performance and sales performance) than non-fakers. Although only one of these mean performance comparisons was statistically significant, the effect sizes associated with these differences (d = .76 and .39, respectively) suggest that these performance differences are far from trivial and that the hiring of fakers could have negative consequences for the overall performance of an organization. These findings are particularly significant given the dearth of research establishing a link between applicant faking and employee performance, and suggest that the negative effects of faking may go far beyond harming the measurement properties of a personality measure to impact the quality of individuals hired within an organization. These results clearly are at odds with the opinion that faking does not matter in organizations (e.g., Hogan 1991; Hough et al. 1990; Ones et al. 1996) and support the link between faking behavior and subsequent job performance tentatively established by O'Connell et al. (2011). It is worth noting that, as our analyses focused exclusively on using our self-report measures in a top-down hiring process, these results may not represent the impact of faking when using personality measures in a select-out manner (i.e., as a screen-out tool).

Study Implications

One clear implication of this study is that further research should be conducted to elaborate our understanding of applicant faking in an organizational context using a within-subjects design. This type of design provides us with information that simply cannot be obtained from simulations or lab studies and allows us to make more confident conclusions about the outcomes of applicant faking in real-world settings. A second implication of this study is that practitioners should begin to recognize that the issue of applicant faking can have serious ramifications for the functioning of their organization. Although past studies have attempted to link applicant faking behavior with measures of performance, the present study provides some of the strongest evidence to date that faking may have a negative relationship with job performance in a real-world environment. Given this information, it appears that a logical next step for practitioners is to focus their attention on identifying and developing methods for dealing with applicant faking (e.g., the use of warnings).

Study Limitations

Although the present study is unique in its utilization of a repeated-measures design within an organizational context to examine applicant faking, this research is somewhat limited by the fact that we did not have a direct measure of actual applicant faking (e.g., bogus statement scales; Anderson et al. 1984; Dwight and Donovan 2003). Instead, we relied on statistical inference in determining who could be classified as a faker. Although this approach has been used successfully by past researchers (e.g., Griffith et al. 2007), it is still not a direct measure of how much faking the applicant was engaging in.

A second potential limitation of this study is the use of goal orientation as our focal measure in studying applicant faking. Although we feel that our goal orientation measure is quite similar to commonly used individual difference measures (e.g., conscientiousness) in terms of content, response format, and psychometric properties, goal orientation is nonetheless a departure from the measures typically used to study faking. In addition, given that our self-report measure utilized a Likert response scale with multiple response options, the results observed here may not generalize to instruments that rely on dichotomous response scales.

Conclusion

Taken as a whole, the results of this study provide initial evidence to support the claim that applicant faking is a common occurrence and that these faking behaviors have the potential not only to harm the psychometric properties of selection measures, but also to reduce the quality of selection decisions that are made with regard to actual job performance. These results are even more significant in light of the fact that this study addressed three of the more serious limitations present in past faking research by utilizing a within-person design with real job applicants, and collecting actual performance data for these individuals. Future research should look to continue this trend by emphasizing field research on applicant faking as a means of obtaining information about not only the prevalence of faking, but also the relationship between applicant faking and important organizational outcomes (e.g., job performance, turnover, counterproductive behaviors).

References

Alliger, G. M., & Dwight, S. A. (2000). A meta-analytic investigation of the susceptibility of integrity tests to coaching and faking. Educational and Psychological Measurement, 60, 59-72.
Anderson, C. D., Warner, J. L., & Spencer, C. C. (1984). Inflation bias in self-assessment examinations: Implications for valid employee selection. Journal of Applied Psychology, 69, 574-580.
Arthur, W., Glaze, R. M., Villado, A. J., & Taylor, J. E. (2010). The magnitude and extent of cheating and response distortion effects on unproctored internet-based tests of cognitive ability and personality. International Journal of Selection and Assessment, 18, 1-16.
Barrick, M. R., & Mount, M. K. (1991). The Big Five personality dimensions and job performance: A meta-analysis. Personnel Psychology, 44, 1-26.
Becker, T. E., & Colquitt, A. L. (1992). Potential versus actual faking of a biodata form: An analysis along several dimensions of item type. Personnel Psychology, 45, 389-406.
Brooks, P., & Arnold, D. (1988). Reid Report test-retest reliability (Research Memorandum No. 27). Chicago: Reid Psychological Systems.
Cellar, D. F., Miller, M. L., Doverspike, D. D., & Klawsky, J. D. (1996). Comparison of factor structures and criterion-related validity coefficients for two measures of personality based on the five factor model. Journal of Applied Psychology, 81, 694-704.
Christiansen, N. D., Goffin, R. D., Johnston, N. G., & Rothstein, M. G. (1994). Correcting the 16PF for faking: Effects on criterion-related validity and individual hiring decisions. Personnel Psychology, 47, 847-860.
Day, D. A., & Silverman, S. B. (1989). Personality and job performance: Evidence of incremental validity. Personnel Psychology, 42, 25-36.
Donovan, J. J., Dwight, S. A., & Hurtz, G. (2003). An assessment of the prevalence, severity, and verifiability of entry-level applicant faking using the randomized response technique. Human Performance, 16, 81-106.
Douglas, E. F., McDaniel, M. A., & Snell, A. F. (1996). The validity of non-cognitive measures decays when applicants fake. Academy of Management Proceedings, 16, 127-131.
Dunnette, M. D., McCartney, J., Carlson, H. C., & Kirchner, W. K. (1962). A study of faking behavior on a forced-choice self-description checklist. Personnel Psychology, 15, 13-24.
Dwight, S. A., & Donovan, J. J. (2003). Do warnings not to fake actually reduce faking? Human Performance, 16, 1-23.
Ellingson, J. E., Sackett, P. R., & Connelly, B. S. (2007). Personality assessment across selection and development contexts: Insights into response distortion. Journal of Applied Psychology, 92, 386-395.
Ellingson, J. E., Smith, D. B., & Sackett, P. R. (2001). Investigating the influence of social desirability on personality factor structure. Journal of Applied Psychology, 86, 122-133.
Elliot, A. J., & Church, M. A. (1997). A hierarchical model of approach and avoidance achievement motivation. Journal of Personality and Social Psychology, 72, 218-232.
Goffin, R. D., Rothstein, M. G., & Johnston, N. G. (1996). Personality testing and the assessment center: Incremental validity for managerial selection. Journal of Applied Psychology, 81, 746-756.
Griffith, R. L., Chmielowski, T., & Yoshita, Y. (2007). Do applicants fake? An examination of the frequency of applicant faking behavior. Personnel Review, 36, 341-355.
Guion, R. M., & Cranny, C. J. (1982). A note on concurrent and predictive validity designs: A critical reanalysis. Journal of Applied Psychology, 67, 239-244.
Hogan, R. T. (1991). Personality and personality measurement. In M. D. Dunnette & L. M. Hough (Eds.), Handbook of industrial and organizational psychology (2nd ed., Vol. 2). Palo Alto: Consulting Psychologists Press.
Hogan, J., Barrett, P., & Hogan, R. (2007). Personality measurement, faking, and employment selection. Journal of Applied Psychology, 92, 1270-1285.
Hough, L. M., Eaton, N. K., Dunnette, M. D., Kamp, J. D., & McCloy, R. A. (1990). Criterion-related validities of personality constructs and the effect of response distortion on those validities. Journal of Applied Psychology, 75, 581-595.
Hough, L. M., & Oswald, F. L. (2008). Personality testing and I-O psychology: Reflections, progress and prospects. Industrial and Organizational Psychology: Perspectives on Science and Practice, 1, 272-290.
Hurtz, G. M., & Donovan, J. J. (2000). Personality and job performance: The Big Five revisited. Journal of Applied Psychology, 85, 869-879.
Komar, S. G., Brown, D. J., Komar, J. A., & Robie, C. (2008). Faking and the validity of conscientiousness: A Monte Carlo investigation. Journal of Applied Psychology, 93, 140-154.
McFarland, L. A., & Ryan, A. M. (2000). Variance in faking across noncognitive measures. Journal of Applied Psychology, 85, 812-821.
McHenry, J. J., Hough, L. M., Toquam, J. L., Hanson, M. A., & Ashworth, S. (1990). Project A validity results: The relationship between predictor and criterion domains. Personnel Psychology, 43, 335-354.
Midgley, C., Kaplan, A., Middleton, M., Maehr, M. L., Urdan, T., Anderman, L. H., et al. (1998). The development and validation of scales assessing students' achievement goal orientations. Contemporary Educational Psychology, 23, 113-131.
Morgeson, F. P., Campion, M. A., Dipboye, R. L., Hollenbeck, J. R., Murphy, K., & Schmitt, N. (2007). Reconsidering the use of personality tests in personnel selection contexts. Personnel Psychology, 60, 683-729.
Mueller-Hanson, R., Heggestad, E. D., & Thornton, G. C. (2003). Faking and selection: Considering the use of personality from select-in and select-out perspectives. Journal of Applied Psychology, 88, 348-355.
O'Connell, M. S., Kung, M., & Tristan, E. (2011). Beyond impression management: Evaluating three measures of response distortion and their relationship to job performance. International Journal of Selection and Assessment, 19, 340-351.
Ones, D. S., Viswesvaran, C., & Reiss, A. D. (1996). Role of social desirability in personality testing for personnel selection: The red herring. Journal of Applied Psychology, 81, 660-679.
Oswald, F. L., & Hough, L. M. (2008). Personality testing and I-O psychology: A productive exchange and some future directions. Industrial and Organizational Psychology: Perspectives on Science and Practice, 1, 323-332.
Pannone, R. D. (1984). Predicting test performance: A content valid approach to screening applicants. Personnel Psychology, 37, 507-514.
Peterson, M. H., Griffith, R. L., Converse, P. D., & Gammon, A. R. (2011a, April). Using within-subjects designs to detect applicant faking. Paper presented at the 26th Annual Conference for the Society for Industrial/Organizational Psychology, Chicago, IL.
Peterson, M. H., Griffith, R. L., Isaacson, J. A., O'Connell, M. S., & Mangos, P. M. (2011b). Applicant faking, social desirability, and the prediction of counterproductive work behaviors. Human Performance, 24, 270-290.
Porath, C. L., & Bateman, T. S. (2006). Self-regulation: From goal orientation to job performance. Journal of Applied Psychology, 91, 185-192.
Rosse, J. G., Stecher, M. D., Miller, J. L., & Levin, R. A. (1998). The impact of response distortion on preemployment personality testing and hiring decisions. Journal of Applied Psychology, 83, 634-644.
Schmit, M. J., & Ryan, A. M. (1993). The Big Five in personnel selection: Factor structure in applicant and nonapplicant populations. Journal of Applied Psychology, 78, 966-974.
Silver, L. S., Dwyer, S., & Alford, B. (2006). Learning and performance goal orientation of salespeople revisited: The role of performance-approach and performance-avoidance orientations. Journal of Personal Selling & Sales Management, 26, 27-38.
Smith, D. B., & Ellingson, J. E. (2002). Substance versus style: A new look at social desirability in motivating contexts. Journal of Applied Psychology, 87, 211-219.
Smith, D. B., Hanges, P. J., & Dickson, M. W. (2001). Personnel selection and the five factor model: Reexamining the effects of applicant's frame of reference. Journal of Applied Psychology, 86, 304-315.
Stark, S., Chernyshenko, O. S., Chan, K. Y., Lee, W. C., & Drasgow, F. (2001). Effects of the testing situation on item responding: Cause for concern. Journal of Applied Psychology, 86, 943-953.
Tan, J. A., & Hall, R. J. (2005). The effects of social desirability bias on applied measures of goal orientation. Personality and Individual Differences, 38, 1891-1902.
VandeWalle, D. (1997). Development and validation of a work domain goal orientation instrument. Educational and Psychological Measurement, 57, 995-1015.
VandeWalle, D., Brown, S. P., Cron, W. L., & Slocum, J. W. (1999). The influence of goal orientation and self-regulation tactics on sales performance: A longitudinal field test. Journal of Applied Psychology, 84, 249-259.
Viswesvaran, C., & Ones, D. S. (1999). Meta-analysis of fakability estimates: Implications for personality measurement. Educational and Psychological Measurement, 59, 197-210.
Zickar, M. J., Gibby, R. E., & Robie, C. (2004). Uncovering faking samples in applicant, incumbent, and experimental data sets: An application of mixed-model item response theory. Organizational Research Methods, 7, 168-190.