Journal of Personnel Psychology (2020), 19(3), 142–149. https://doi.org/10.1027/1866-5888/a000258


Research Note

AI Evaluation in Selection
Effects on Application and Pursuit Intentions
Agata Mirowska

Department of Management & Organization, Rennes School of Business, Rennes, France

Abstract. This study investigates how information provided prior to the application stage of the selection process affects application intentions toward the job and organization. Existing research has focused on applicants who have already entered the selection process; however, information revealed prior to application may cause candidates to self-select out of the process. Utilizing a randomized experimental design, participants read a job ad specifying that their prerecorded interviews would be reviewed by a human or an artificial intelligence-based evaluator. The results show increased intentions to apply and pursue the job in the human evaluation condition.
Keywords: application intentions, intentions to pursue, technology in selection

During the recruitment process, potential applicants interpret organizational signals (Bangerter, Roulin, & König, 2012) to make subjective perceptions of potential fit (Chapman, Uggerslev, Carroll, Piasentin, & Jones, 2005; Ehrhart & Ziegert, 2005). However, factors like the adoption of new technologies may not be interpreted as intended by all audiences (Drover, Wood, & Corbett, 2018), increasing the risk of signal misinterpretation. This may affect potential applicants' choice to self-select into or withdraw from the selection process (Born, Hiemstra, & Oostrom, 2018; Langer, König, & Krause, 2017).

Interviews are one of the most widely used (Buehl & Melchers, 2018; McCarthy et al., 2017), yet time and resource intensive, selection tools. Not surprisingly, technology designed to automate the interpretation of verbal and nonverbal behaviors has attracted interest (Chamorro-Premuzic, Akhtar, Winsborough, & Sherman, 2017). Most research to date on this subject has evaluated applicant reactions to technology once they have entered the selection process (e.g., Langer et al., 2017), ignoring the potential effects on the makeup of the applicant pool. This study applies signaling theory in a first step to understanding how awareness of the use of human versus artificial intelligence (AI)-based evaluation in the selection process affects intentions to apply for and pursue a job with a given organization. These dependent variables were chosen as they represent preentry behavioral intentions, which have been found to mediate the relationship between attitudinal variables and the choice of job or organization as an employer and may therefore impact the quality of a company's final hire (Chapman et al., 2005; Gilliland, 1993; Highhouse, Lievens, & Sinar, 2003; Reeve & Schultz, 2004). Throughout the paper, the term "AI-based" will be used to refer to any technology that automates the decision stage, largely augmenting or replacing the human role in choosing among decision options (Parasuraman, Sheridan, & Wickens, 2000).

Hypothesis Development

Signaling theory predicts that in situations of goal misalignment and imperfect information, parties will seek to learn about each other's unseen characteristics by interpreting readily observable signals (Bangerter et al., 2012; Spence, 1973). In the hiring context, the classic example is education as a signal of applicant ability (Spence, 1973); however, signals can include reputation, letters of recommendation, and even clothing – anything that is costly or hard to fake for the sender (Bangerter et al., 2012). The choice of selection tools may be used to signal what employees can expect from a particular employer (Bowen & Ostroff, 2004) and may be costly in terms of lost applications if not targeted properly.

However, such signals are only useful insofar as their interpretation is in line with the intended message, which may not always be the case (Bowen & Ostroff, 2004; Drover et al., 2018). When signals are not yet well established, the two sides must go through a process of interpreting signals, validating this interpretation, and taking into account any adaptations from either side before a steady state is reached where something can be safely interpreted as a valid signal of a given characteristic (Bangerter et al., 2012). With respect to the use of AI-based evaluation technology, companies may initially aim to


make the process more standardized, objective, and cost-efficient or to signal financial fitness, efficiency, objectivity, or a culture of innovation (Chamorro-Premuzic et al., 2017). Alternatively, the adoption of this technology may itself be an adaptation that increases cheating costs in response to impression management and faking behaviors (Bangerter et al., 2012; Buehl & Melchers, 2018).

Previous research on the use of digital and highly automated interviews shows that they are perceived as less fair and controllable (Langer, König, & Fitili, 2019) and may give rise to feelings of creepiness and greater privacy concerns (Langer et al., 2017). Additionally, they may hamper applicants' ability to engage in various influence tactics (Dürr & Klehe, 2018), as they do not provide transparency and an unstructured format that would allow applicants to influence the progression of the interview (Levashina & Campion, 2006). It is this lower explainability of the decision systems underlying this technology (Biran & Cotton, 2017), coupled with lower social bandwidth and interactivity (Langer et al., 2019; Potosky, 2008), which may make applicants hesitant to subject themselves to evaluation by AI-based technology.

Candidates may also extrapolate from the selection process to company values and the expected work environment (Bowen & Ostroff, 2004). Interviews are seen as a communication exchange (Potosky, 2008), where both verbal and nonverbal behaviors convey important information (McCarthy et al., 2017). Human nonverbal behaviors are highly contextualized and, given that their interpretation relies on understanding of context, hierarchy, and social structures (Bonaccio, O'Reilly, O'Sullivan, & Chiocchio, 2016), are best evaluated by other human beings. AI-based evaluation may be seen as ill-suited to these aims and, absent other information about the company, may be interpreted as signaling poor future interpersonal treatment (Langer et al., 2017).

Hypothesis 1: Participants will report higher intention to apply to jobs with human, rather than AI-based, evaluators.

Hypothesis 2: Participants will report higher intentions to pursue jobs with human, rather than AI-based, evaluators.

Method

Participants

Participants were senior-level marketing students recruited via a school-wide research participation platform and given course credit in return for their participation. This sample was representative of potential job seekers as all students are required to complete end-of-year internships throughout the duration of their studies. Out of a total of 259 participants who attempted the survey, 64 were not allowed to continue due to failing one of the manipulation or attention checks, and another seven chose not to complete the survey (i.e., stopped somewhere in the process) and their responses were not included in the final dataset. Finally, four students had taken the survey twice, so their second responses were removed, resulting in a final sample of 184 participants (111 female; Mage = 22.3, SD = 1.54).

Procedure

Each participant was randomly assigned to read one of two job ads, developed specifically for this study, featuring a fictional company entitled "Misopton Technologies" to avoid confounding effects with familiarity with existing organizations (Harrison & Stone, 2018). Job ads were chosen as an example of the type of first contact that an applicant may have with an organization. There is some evidence that job advertisement content affects perceptions of the organization; however, this effect was found to be weaker for those with less work experience (Walker, Feild, Giles, & Bernerth, 2008), which would apply to the sample in this study. Additionally, there is some evidence that female students were more likely to apply for jobs based on the wording of the profile being sought (Born & Taris, 2010); however, in the current study, the job ads differed only regarding information about the upcoming selection process: Participants were told their recorded interviews would be reviewed either by a company representative (Human Resources (HR) evaluator condition; n = 97) or by AI interview assessment software (AI evaluator condition; n = 87). Participants were not provided any additional information about the selection process.

Measures

All measures were evaluated on a 1 (strongly disagree) to 5 (strongly agree) scale, unless noted otherwise.

Dependent Variables
Application intentions were captured using Taylor and Bergmann's (1987) two-item measure (α = .86). Intention to pursue a job with the organization was measured using Highhouse et al.'s (2003) 5-item scale (α = .77).
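The reliabilities reported for these scales are Cronbach's alphas. As a quick reference, a minimal sketch of the standard item-variance computation follows; the response matrix is invented for illustration and is not study data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix.

    alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)
    """
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var_sum / total_var)

# Fabricated 1-5 Likert responses (5 respondents x 2 items), mimicking a
# two-item measure such as application intentions
scores = np.array([[4, 5],
                   [3, 3],
                   [5, 4],
                   [2, 2],
                   [4, 4]])
print(round(cronbach_alpha(scores), 2))  # -> 0.89
```

Alpha rises as the items covary more strongly relative to their individual variances; two perfectly parallel items would yield exactly 1.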


Table 1. Descriptive statistics and correlations

Variable                           M      SD    1     2     3      4      5      6      7      8
1. Age                            22.26   1.54  (–)
2. Gender                          0.39   0.49  .05   (–)
3. Comfort with technology         4.19   0.76  .03   .02   (.89)
4. Organizational attractiveness   3.39   0.77  .00   .06   .03    (.83)
5. Organizational prestige         3.59   0.56  .03   .08   .06    .58**  (.78)
6. HR vs. AI evaluator             0.47   0.50  .07   .02   .07    .02    .11    (–)
7. Application intentions          3.46   0.97  .06   .02   .01    .76**  .48**  −.12   (.86)
8. Intentions to pursue            3.46   0.68  .02   .08   .01    .71**  .61**  −.15*  .69**  (.77)

Note. N = 184. Internal consistency reliability coefficients (Cronbach's α) appear in parentheses along the main diagonal. Gender: 0 = female, 1 = male; HR evaluator = 0, AI evaluator = 1. AI = artificial intelligence. *p < .05, **p < .01.
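Because the evaluator condition is dichotomous (HR = 0, AI = 1), its entries in Table 1 are point-biserial correlations, which map directly onto the standardized mean differences (Cohen's d) reported in the Analysis and Results section. A minimal sketch of both computations follows, using the pooled-SD definition of d and the standard d-to-r conversion for unequal group sizes; the example call uses the study's group sizes, but the functions themselves are generic illustrations.

```python
import numpy as np

def cohens_d(a, b):
    """Standardized mean difference between two independent groups (pooled SD)."""
    na, nb = len(a), len(b)
    pooled_sd = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                        / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled_sd

def d_to_point_biserial(d, na, nb):
    """Convert Cohen's d to a point-biserial r, allowing unequal group sizes."""
    a = (na + nb) ** 2 / (na * nb)  # correction factor; equals 4 when na == nb
    return d / np.sqrt(d ** 2 + a)

# With the study's group sizes (HR n = 97, AI n = 87), d = 0.30 corresponds to
# r of about .15 -- the magnitude shown for intentions to pursue in Table 1
print(round(d_to_point_biserial(0.30, 97, 87), 2))  # -> 0.15
```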

Table 2. Summary of regression results

                                  Dependent variable: application intentions      Dependent variable: intentions to pursue
                                  Model 1            Model 2                      Model 1            Model 2
Variables                         B    SE B  β       B     SE B  β                B    SE B  β       B     SE B  β
Control variables
  Age                             .04  .03   .06     .04   .03   .06              .01  .02   .01     .00   .02   .01
  Gender                          .06  .10   .03     .06   .10   .03              .03  .07   .02     .03   .07   .02
  Comfort with technology         .04  .06   .03     .05   .06   .04              .04  .04   .04     .05   .04   .06
  Organizational attractiveness   .91  .08   .72**   .92   .08   .73**            .47  .05   .54**   .48   .05   .54**
  Organizational prestige         .12  .10   .07     .10   .10   .05              .36  .07   .30**   .35   .07   .29**
Main effect
  HR vs. AI evaluator                                −.19  .09   −.10†                               −.14  .07   −.11*
Adjusted R²                       .57                .58                          .55                .56
ΔR²                                                  .01†                                            .01*
ΔF                                                   3.85                                            4.65
Cohen's f²                                           .03                                             .03

Note. N = 184. Gender: 0 = female, 1 = male; HR evaluator = 0, AI evaluator = 1. AI = artificial intelligence. *p < .05, **p < .01, †p = .05.
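Table 2 follows a two-step hierarchical logic: the controls are entered first, and the evaluator condition is added in a second step, with ΔR² and Cohen's f² = ΔR²/(1 − R²_full) quantifying the increment. A minimal sketch of that computation follows; the variable names and simulated data are placeholders, not the study's dataset.

```python
import numpy as np

def r_squared(X, y):
    """R^2 from an OLS fit with an intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

def increment(X_controls, x_focal, y):
    """Delta R^2 and Cohen's f^2 for adding one focal predictor over the controls."""
    r2_reduced = r_squared(X_controls, y)
    r2_full = r_squared(np.column_stack([X_controls, x_focal]), y)
    delta_r2 = r2_full - r2_reduced
    f2 = delta_r2 / (1 - r2_full)  # effect size relative to unexplained variance
    return delta_r2, f2

# Placeholder data: 184 cases, 5 controls, one 0/1 experimental condition
rng = np.random.default_rng(7)
controls = rng.normal(size=(184, 5))
condition = rng.integers(0, 2, size=184)
y = controls @ np.array([0.9, 0.1, 0.0, 0.0, 0.0]) - 0.2 * condition + rng.normal(size=184)
delta_r2, f2 = increment(controls, condition, y)
```

By Cohen's conventional benchmarks, f² = .02 is a small effect, which is why the .03 reported in Table 2 is described as small despite being significant.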

Control Variables
Organizational attractiveness (α = .83) and prestige (α = .78) were measured using Highhouse et al.'s (2003) 5-item scales. These variables have been found to be important predictors of recruitment outcomes and may serve as antecedents of pursuit intentions (Chapman et al., 2005; Highhouse et al., 2003; Uggerslev, Fassina, & Kraichy, 2012). Participants were also asked to report their age, gender, and how comfortable they were with new technology [4-item scale adapted from Lassar, Mannolis, and Lassar (2005), measured on a 1 (very uncomfortable) to 5 (very comfortable) scale; α = .89].

Analysis and Results

Several questions were included to ensure participants had read the job ad and were aware of the selection procedure to come; participants providing incorrect responses to these attention and manipulation check questions were not included in the final sample. Table 1 shows descriptive statistics and correlations among study variables, while Table 2 presents the results of the regression analysis.

Considering standardized mean differences, the effect of human versus AI-based evaluation on both application intentions and intention to pursue was small but significant, Cohen's d = .24 and .30, respectively. When including the control variables in the regression model, there was a negative effect of AI evaluation on both dependent variables. Being faced with an AI-based evaluation significantly predicts lower application intentions (β = −.10, p = .05) and intentions to pursue the job (β = −.11, p < .05). Table 2 also reports the Cohen's f² for both models, as an alternative method to quantify the effect size of the independent variable, taking into account the variation left unexplained by the control variables. For both models, Cohen's f² = .03, indicating a small but significant effect


Table 3. Results of relative weight analysis

                                  Dependent variable: application intentions     Dependent variable: intentions to pursue
Predictor variables               RWraw (95% CI)        Rescaled RW (%)          RWraw (95% CI)        Rescaled RW (%)
Age                               0.00 (−0.01, 0.03)     0.54                    0.00 (−0.02, 0.01)     0.04
Gender                            0.00 (−0.01, 0.01)     0.08                    0.00 (−0.01, 0.03)     0.46
Comfort with technology           0.00 (−0.01, 0.02)     0.14                    0.00 (−0.01, 0.03)     0.29
Organizational attractiveness     0.46 (0.38, 0.56)     78.07                    0.35 (0.28, 0.43)     60.37
Organizational prestige           0.15 (0.06, 0.18)     19.43                    0.21 (0.14, 0.29)     36.18
HR vs. AI evaluator               0.01 (−0.00, 0.05)     1.73                    0.02 (−0.00, 0.06)     2.66
Total                                                   100                                           100

Note. N = 184. AI = artificial intelligence; Rescaled RW = relative weight as a percentage of R²; RWraw = raw relative weight.
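The relative weight analysis of Table 3 partitions the model R² among correlated predictors by relating the criterion to an orthogonal counterpart of the predictor set (Tonidandel & LeBreton, 2015). A minimal sketch of the core computation follows, written from the published formulas rather than the authors' code; the bootstrap confidence intervals in Table 3 are omitted, and the simulated data are placeholders.

```python
import numpy as np

def relative_weights(X, y):
    """Raw and rescaled relative weights: a partition of R^2 among correlated predictors."""
    n = len(y)
    Xs = (X - X.mean(0)) / X.std(0, ddof=1)
    ys = (y - y.mean()) / y.std(ddof=1)
    Rxx = Xs.T @ Xs / (n - 1)                # predictor intercorrelation matrix
    rxy = Xs.T @ ys / (n - 1)                # predictor-criterion correlations
    vals, vecs = np.linalg.eigh(Rxx)
    Rxx_half = vecs @ np.diag(np.sqrt(vals)) @ vecs.T  # symmetric square root of Rxx
    beta = np.linalg.solve(Rxx_half, rxy)    # betas of the orthogonalized predictors
    raw = (Rxx_half ** 2) @ (beta ** 2)      # raw relative weights; these sum to R^2
    return raw, 100 * raw / raw.sum()        # raw and rescaled (% of R^2), as in Table 3

# Placeholder data: two correlated attitude-like predictors and a binary condition
rng = np.random.default_rng(3)
z = rng.normal(size=200)
X = np.column_stack([z + rng.normal(size=200),       # e.g., attractiveness
                     z + rng.normal(size=200),       # e.g., prestige (correlated)
                     rng.integers(0, 2, size=200)])  # e.g., evaluator condition
y = X @ np.array([0.6, 0.2, -0.2]) + rng.normal(size=200)
raw, rescaled = relative_weights(X, y)
```

The raw weights sum to the model R² and the rescaled weights to 100%, mirroring the two column types in Table 3.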

size (Selya, Rose, Dierker, Hedeker, & Mermelstein, 2012). Interestingly, in both regressions, organizational attractiveness is a significant predictor of the dependent variable, but organizational prestige is only significant when intention to pursue is considered (β = .35, p < .01).

Although the high correlation between organizational prestige and attractiveness gives rise to potential concerns regarding multicollinearity, all variance inflation factor values were below 1.55, well below the recommended conservative threshold of 4 (Salmerón, García, García, & Martín, 2017). Nevertheless, a relative weights analysis (Tonidandel & LeBreton, 2015) was conducted to better interpret the contribution of each variable in the model. As Table 3 shows, the results largely mirror those reported above. The highest weight in both models belongs to organizational attractiveness (78.07% and 60.37%), with organizational prestige also showing a significant contribution, albeit this weight is stronger in the model with the intention to pursue as the dependent variable (36.18%) than with application intentions (19.43%). Finally, the experimental condition shows a small relative weight (1.73% and 2.66%), in both cases approaching significance at the 5% level, although this technique has been found to lead to higher than expected Type II error rates (Tonidandel, LeBreton, & Johnson, 2009).

Due to the strong effects of organizational attraction and prestige, and previous research (see, for example, Uggerslev et al., 2012) supporting a potential mediating effect of the attitudinal variables, organizational attraction and prestige, a post-hoc analysis was conducted. When looking at the mediating effect of organizational attraction, the direct effect of the experimental condition was significant for both dependent variables, application intentions (b = −.20, SE = .09, p < .05) and intentions to pursue (b = −.18, SE = .07, p < .05). However, the same cannot be said for the indirect effect, as the 95% CIs for both application intentions (indirect effect = −.03, 95% CI [−0.24, 0.18]) and intentions to pursue (indirect effect = −.02, 95% CI [−0.17, 0.12]) contained zero. When considering organizational prestige as the mediating variable, the direct effect of the experimental condition was insignificant for application intentions (b = −.13, SE = .13, p > .05) and intention to pursue (b = −.11, SE = .08, p > .05), as were the indirect effects, application intentions (indirect effect = −.10, 95% CI [−0.25, 0.03]) and intentions to pursue (indirect effect = −.09, 95% CI [−0.22, 0.03]). Therefore, there is no mediating effect of either attitudinal variable. However, there appears to be a suppressor effect of organizational attraction, as in both cases the direct effect of the experimental condition is strengthened with the inclusion of this attitudinal variable in the model.


suppression effect implies that they are capturing some variance of the dependent variables that is not related to the experimental condition, and their inclusion allows us to clarify the relationship of interest (Watson, Clark, Chmielewski, & Kotov, 2013). Therefore, it appears that there is something about knowing one will be faced with AI-aided versus human evaluation that is affecting behavioral intention, above and beyond the established path via attitudes toward an organization, underscoring the importance of considering additional factors beyond those already established in the literature.

It appears that the selection process to come may act as a signal, albeit one that is not interpreted the same way by both companies and applicants. Applicants seemingly have a negative view of AI-based evaluation technology, which may affect their willingness to enter the applicant pool for a given job or to fully pursue an application that is made. In the case of application intentions, the effect was on the cusp of significance, which may reflect an assumption that there will be other opportunities within the selection process for face-to-face interaction, thereby relieving some initial applicant concerns. These are questions that should be explored more deeply using qualitative research designs to determine what the source of this aversion is and what actions a company could take to overcome it. However, the choice to actually work for an organization may require the interpretation of signals to gain information about the company's values or culture as well as how others will react to one's association with the organization. The inclusion of an AI-based evaluation tool may signal an impersonal nature, which turns applicants off from pursuing employment with the organization. Organizational prestige captures perceived social consensus about the value of a company's characteristics (Highhouse et al., 2003). Its significance only when predicting intention to pursue, a more long-term oriented action intention, supports the fact that one's chosen workplace may have repercussions on one's social identity (Ashforth & Mael, 1989; Carter & Highhouse, 2014).

Another potential explanation for the differing effects on these two dependent variables may be found through the lens of the theory of planned behavior (TPB). TPB argues that intentions are affected not only by one's attitudes and the perceived social pressure to engage in a given behavior but also by one's perceived control over the success of that behavior (Ajzen, 1991). Being faced with an AI evaluation, which may represent an unknown and seemingly less controllable situation than a human evaluator, may affect one's perceived self-efficacy of achieving the ultimate goal of working for the organization. The effect on application intentions is smaller, as applying for the job is not affected by the evaluation system to follow, although participants may have still considered the utility of applying for a selection process where they find it more difficult to gauge their probability of success. Intentions to pursue the job, however, require one to successfully pass through the selection process; this intention may therefore be affected by one's perceptions of lower control over the desired outcome (being hired) due to an element outside of one's control (AI evaluation in the selection process).

Despite previous research showing that characteristics of the recruitment and selection process affect attitudes toward an organization (Uggerslev et al., 2012), there was no significant effect of the knowledge of a human versus AI-based evaluation on either organizational attraction or prestige ratings. Therefore, it appears that people have a reaction to the idea of AI evaluation independent of any ideas about the specific organization utilizing this technology. This opens up important avenues for future research, considering people's reactions to the idea of being evaluated by AI-based technologies in general, not just in recruitment contexts.

If this is the case, organizations should consider providing more information about the selection process to temper the signaling effects of only certain elements being known to candidates. Such information may not only affect candidate reactions but also their subsequent performance and motivation when moving through the selection process (Truxillo, Bodner, Bertolino, Bauer, & Yonce, 2009). For example, organizations could reveal various selection steps to candidates early in the process, so candidates can clearly see the proportion of technology-based versus interpersonal assessments involved. Previous research shows that more information can increase perceptions of openness and fairness, increasing attraction to the company (Langer et al., 2019; Truxillo et al., 2009). Additionally, as AI-based evaluation is still a relatively new tool, companies may provide further information regarding their choice to use this technology in order to support the derivation and ritualization processes of the signal, thereby increasing the chances that this choice is interpreted correctly by potential applicants. For example, organizations could consider explaining why they chose to use AI-based evaluation, clearly tying it to the organizational attributes – efficiency, innovation, objectivity – they believe it reflects.

Given that the organization presented was a fictional one, with which the participants would not have had any familiarity, it is possible that they did not have any existing information structures, or "nodes," in memory to which to attach this information about the selection process (Cable & Turban, 2001; Collins & Kanar, 2014). Their ratings of organizational attraction and prestige may have been global, preliminary attitudes, albeit ones that did not incorporate specific details that would be better assimilated into the memory structures of participants familiar with the focal organization. This supports Cable and Turban's (2001) recommendations that companies should


be aware of the level of organizational familiarity of their company as an employer before deciding what additional information to provide. Comparative or longitudinal studies should be conducted to further investigate whether such documented effects as paying more attention to and reacting more strongly to familiar companies than unfamiliar ones (Brooks, Highhouse, Russell, & Mohr, 2003; Dineen & Allen, 2016), and better recall of information about familiar versus unfamiliar companies at a later date (Cable & Turban, 2003), also hold for information regarding the selection process, particularly as it applies to the use of new technologies. Potentially, the experimental condition would have had a stronger effect on attitudes toward already recognized employers. Alternatively, there may be something about the reactions to AI-assisted evaluation specifically that operates independently of attitudes about the company in question. For example, would a similar effect hold for the use of AI evaluation in performance appraisals, where individuals would presumably already be familiar with the organization using the technology?

The self-report and cross-sectional nature of the data give rise to concerns of common method variance, although these are tempered somewhat by the finding of significant differences between experimental conditions. The study utilized a young sample and a technological company, both of which may have positively affected the acceptance of technological innovations in the selection process (Fox & Connolly, 2018; Venkatesh & Bala, 2008). If so, then these results may underreport true effects, as participants nonetheless showed a preference for human evaluation. Finally, the study may have lacked realism due to the fictional nature of the job ad, lack of a true application and selection process, and inclusion of selection process information in the ad itself. However, the study design was meant to capture the phenomenon of applicants possessing information about the selection process, due to either the popularity of employer review sites and social media or compliance with ethical or legal requirements (i.e., the General Data Protection Regulation).1 Given the relatively small effect sizes seen here, potential moderators such as industry or job level should be investigated. Additionally, the suppression effect of the attitudinal variables was key in being able to see the effect of the experimental condition. Although this could be a sign that the true effect of AI evaluation on the intention variables is obscured by the variance accounted for by the attitudinal variables (Watson et al., 2013), further research may want to better parcel out the various components of the attitudinal and criterion variables to better understand which components are affected by elements of the selection process, and in what way.

Future research should also consider mediators as potential explanatory mechanisms of the results shown. The current study was not well suited to capturing these mediating mechanisms, but as scholars continue to investigate reactions to various elements of the selection process, AI-based technologies should be included in this list. Exploring perceptions of justice and fairness is a particularly promising area, as these have been found to predict organizational attractiveness, job pursuit, and acceptance intentions (McCarthy et al., 2017; Truxillo et al., 2009). Although AI-based systems may provide the consistency of evaluation that companies are looking for to maximize the validity of their selection systems (Lievens & Sackett, 2017), prediction and explainability are important features when considering the acceptance of AI technologies (Biran & Cotton, 2017). If potential candidates do not understand how they are being evaluated in the selection system, they may not consider the evaluation fair and may be reluctant to enter the selection process (Gilliland, 1993). This issue may remedy itself as AI evaluation systems become more commonly used, but organizations may want to consider providing information about what they are looking to measure to create more transparency around the process. Other elements of classic justice models, such as perceptions of interpersonal justice via the lack of two-way communication (Gilliland, 1993), also deserve attention.

A final point worth mentioning concerns the choice of wording in the current study. Terms such as "AI-based evaluation" may carry certain meaning for some individuals, ranging from extremely objective and reliable measurement to an impersonal tool shrouded in secrecy. However, this difference in meaning gets at the heart of the signaling problem: Organizations may be drawn to evaluation systems touted as AI-based, while potential applicants may be turned off for the same reason. Future research could investigate whether the terminology around this new technology – alternatively calling it AI-based or simply automated evaluation – leads to different perceptions, reactions, and behavioral intentions.

References

Ashforth, B. E., & Mael, F. (1989). Social identity theory and the organization. Academy of Management Review, 14, 20–39. https://doi.org/10.5465/AMR.1989.4278999

1 I would like to thank an anonymous reviewer for this suggestion.


Ajzen, I. (1991). The theory of planned behavior. Organizational Behavior and Human Decision Processes, 50, 179–211. https://doi.org/10.1016/0749-5978(91)90020-T

Bangerter, A., Roulin, N., & König, C. J. (2012). Personnel selection as a signaling game. Journal of Applied Psychology, 97, 719–738. https://doi.org/10.1037/a0026078

Biran, O., & Cotton, C. (2017). Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI), 8, 8–13.

Bonaccio, S., O'Reilly, J., O'Sullivan, S. L., & Chiocchio, F. (2016). Nonverbal behavior and communication in the workplace: A review and an agenda for research. Journal of Management, 42, 1044–1074. https://doi.org/10.1177/0149206315621146

Born, M. Ph., Hiemstra, A. M. F., & Oostrom, J. K. (2018). Applicants' role as (pro-)active agents in the recruitment and selection process: A frequently overlooked perspective. Journal of Personnel Psychology, 17, 103–106. https://doi.org/10.1027/1866-5888/a000215

Born, M. Ph., & Taris, T. W. (2010). The impact of the wording of employment advertisements on students' inclination to apply for a job. The Journal of Social Psychology, 150, 485–502. https://doi.org/10.1080/00224540903365422

Bowen, D. E., & Ostroff, C. (2004). Understanding HRM-firm performance linkages: The role of the "strength" of the HRM system. The Academy of Management Review, 29, 203–221. https://doi.org/10.2307/20159029

Brooks, M. E., Highhouse, S., Russell, S. S., & Mohr, D. C. (2003). Familiarity, ambivalence, and firm reputation: Is corporate fame a double-edged sword? Journal of Applied Psychology, 88, 904–914. https://doi.org/10.1037/0021-9010.88.5.904

Buehl, A.-K., & Melchers, K. G. (2018). Do attractiveness and competition influence faking intentions in selection interviews? Journal of Personnel Psychology, 17, 204–208. https://doi.org/10.1027/1866-5888/a000208

Cable, D. M., & Turban, D. B. (2001). Establishing the dimensions, sources and value of job seekers' employer knowledge during recruitment. In G. Ferris (Ed.), Research in personnel and human resources management (Vol. 20, pp. 115–163). Bingley, UK: Emerald Group Publishing Limited.

Cable, D. M., & Turban, D. B. (2003). The value of organizational reputation in the recruitment context: A brand-equity perspective. Journal of Applied Social Psychology, 33, 2244–2266. https://doi.org/10.1111/j.1559-1816.2003.tb01883.x

Carter, N. T., & Highhouse, S. (2014). You will be known by the company you keep: Understanding the social-identity concerns of job seekers. In D. M. Cable & K. Y. T. Yu (Eds.), The Oxford handbook of recruitment (pp. 454–462). New York, NY: Oxford University Press.

Chamorro-Premuzic, T., Akhtar, R., Winsborough, D., & Sherman, R. A. (2017). The datafication of talent: How technology is advancing the science of human potential at work. Current Opinion in Behavioral Sciences, 18, 13–16. https://doi.org/10.1016/j.cobeha.2017.04.007

Chapman, D. S., Uggerslev, K. L., Carroll, S. A., Piasentin, K. A., & Jones, D. A. (2005). Applicant attraction to organizations and job choice: A meta-analytic review of the correlates of recruiting outcomes. Journal of Applied Psychology, 90, 928–944. https://doi.org/10.1037/0021-9010.90.5.928

Cheung, G. W., & Lau, R. S. (2008). Testing mediation and suppression effects of latent variables: Bootstrapping with structural equation models. Organizational Research Methods, 11, 296–325. https://doi.org/10.1177/1094428107300343

Collins, C. J., & Kanar, A. M. (2014). Employer brand equity and recruitment research. In D. M. Cable & K. Y. T. Yu (Eds.), The Oxford handbook of recruitment (pp. 284–297). New York, NY: Oxford University Press.

Dineen, B. R., & Allen, D. G. (2016). Third party employment branding: Human capital inflows and outflows following "Best Places to Work" certifications. Academy of Management Journal, 59, 90–112. https://doi.org/10.5465/amj.2013.1091

Drover, W., Wood, M. S., & Corbett, A. C. (2018). Toward a cognitive view of signalling theory: Individual attention and signal set interpretation. Journal of Management Studies, 55, 209–231. https://doi.org/10.1111/joms.12282

Dürr, D., & Klehe, U.-C. (2018). Using the theory of planned behavior to predict faking in selection exercises varying in fidelity. Journal of Personnel Psychology, 17, 155–160. https://doi.org/10.1027/1866-5888/a000211

Ehrhart, K. H., & Ziegert, J. C. (2005). Why are individuals attracted to organizations? Journal of Management, 31, 901–919. https://doi.org/10.1177/0149206305279759

Fox, G., & Connolly, R. (2018). Mobile health technology adoption across generations: Narrowing the digital divide. Information Systems Journal, 28, 995–1019. https://doi.org/10.1111/isj.12179

Gilliland, S. W. (1993). The perceived fairness of selection systems: An organizational justice perspective. Academy of Management Review, 18, 694–734. https://doi.org/10.5465/amr.1993.9402210155

Harrison, T., & Stone, D. L. (2018). Effects of organizational values and employee contact on e-recruiting. Journal of Managerial Psychology, 33, 311–324. https://doi.org/10.1108/JMP-03-2017-0118

Highhouse, S., Lievens, F., & Sinar, E. F. (2003). Measuring attraction to organizations. Educational and Psychological Measurement, 63, 986–1001. https://doi.org/10.1177/0013164403258403

Langer, M., König, C. J., & Fitili, A. (2019). Information as a double-edged sword: The role of computer experience and information on applicant reactions towards novel technologies for personnel selection. Computers in Human Behavior, 81, 19–30. https://doi.org/10.1016/j.chb.2017.11.036

Langer, M., König, C. J., & Krause, K. (2017). Examining digital interviews for personnel selection: Applicant reactions and interviewer ratings. International Journal of Selection and Assessment, 25, 371–382. https://doi.org/10.1111/ijsa.12191

Levashina, J., & Campion, M. A. (2006). A model of faking likelihood in the employment interview. International Journal of Selection and Assessment, 14, 299–316. https://doi.org/10.1111/j.1468-2389.2006.00353.x

Lievens, F., & Sackett, P. R. (2017). The effects of predictor method factors on selection outcomes: A modular approach to personnel selection procedures. Journal of Applied Psychology, 102(1), 43–66. https://doi.org/10.1037/apl0000160

McCarthy, J. M., Bauer, T. N., Truxillo, D. M., Anderson, N. R., Costa, A. C., & Ahmed, S. M. (2017). Applicant perspectives during selection: A review addressing "so what?," "what's new?," and "where to next?". Journal of Management, 43, 1693–1725. https://doi.org/10.1177/0149206316681846

Parasuraman, R., Sheridan, T. B., & Wickens, C. D. (2000). A model for types and levels of human interaction with automation. IEEE Transactions on Systems, Man, and Cybernetics – Part A: Systems and Humans, 30, 286–297. https://doi.org/10.1109/3468.844354

Potosky, D. (2008). A conceptual framework for the role of the administration medium in the personnel assessment process. Academy of Management Review, 33, 629–648. https://doi.org/10.5465/amr.2008.32465704

Reeve, C. L., & Schultz, L. (2004). Job-seeker reactions to selection process information in job ads. International Journal of Selection and Assessment, 12, 343–355. https://doi.org/10.1111/j.0965-075X.2004.00289.x

Salmerón, R., García, J., García, C. B., & Martín, M. M. L. (2017). A note about the corrected VIF. Statistical Papers, 58, 929–945. https://doi.org/10.1007/s00362-015-0732-9




Selya, A. S., Rose, J. S., Dierker, L. C., Hedeker, D., & Mermelstein, R. J. (2012). A practical guide to calculating Cohen's f², a measure of local effect size, from PROC MIXED. Frontiers in Psychology, 3, 111. https://doi.org/10.3389/fpsyg.2012.00111

Spence, M. (1973). Job market signaling. The Quarterly Journal of Economics, 87, 355–374. https://doi.org/10.2307/1882010

Tonidandel, S., LeBreton, J. M., & Johnson, J. W. (2009). Determining the statistical significance of relative weights. Psychological Methods, 14, 387–399. https://doi.org/10.1037/a0017735

Tonidandel, S., & LeBreton, J. M. (2015). RWA web: A free, comprehensive, web-based, and user-friendly tool for relative weight analyses. Journal of Business and Psychology, 30, 207–216. https://doi.org/10.1007/s10869-014-9351-z

Truxillo, D. M., Bodner, T. E., Bertolino, M., Bauer, T. N., & Yonce, C. A. (2009). Effects of explanations on applicant reactions: A meta-analytic review. International Journal of Selection and Assessment, 17, 346–361. https://doi.org/10.1111/j.1468-2389.2009.00478.x

Uggerslev, K. L., Fassina, N. E., & Kraichy, D. (2012). Recruiting through the stages: A meta-analytic test of predictors of applicant attraction at different stages of the recruiting process. Personnel Psychology, 65, 597–660. https://doi.org/10.1111/j.1744-6570.2012.01254.x

Venkatesh, V., & Bala, H. (2008). Technology acceptance model 3 and a research agenda on interventions. Decision Sciences, 39, 273–315. https://doi.org/10.1111/j.1540-5915.2008.00192.x

Walker, H. J., Feild, H. S., Giles, W. F., & Bernerth, J. B. (2008). The interactive effects of job advertisement characteristics and applicant experience on reactions to recruitment messages. Journal of Occupational and Organizational Psychology, 81, 619–638. https://doi.org/10.1348/096317907X252487

Watson, D., Clark, L. A., Chmielewski, M., & Kotov, R. (2013). The value of suppressor effects in explicating the construct validity of symptom measures. Psychological Assessment, 25, 929–941. https://doi.org/10.1037/a0032781

History
Received September 10, 2019
Revision received April 27, 2020
Accepted April 27, 2020
Published online August 19, 2020

ORCID
Agata Mirowska
https://orcid.org/0000-0002-5475-5826

Agata Mirowska
Rennes School of Business
2 rue Robert d'Arbrissel
35065 Rennes
France
Agata.mirowska@rennes-sb.com
