Selection System Final Paper
Jessica Stelter
Brief General Job Analysis
Executive Vice President of Human Resources
KSAs
● Knowledge of human resources procedures, such as recruiting, selection,
compensation and benefits, payroll, etc.
● Excellent interpersonal communication, conflict resolution, supervisory, and
leadership skills (“Vice President, Human Resources”, n.d.)
● Ability to deductively reason and express ideas well
Task based
● Work with executives of Zambeel, Inc. to ensure personnel supports the
organization’s mission and long-term goals
● Train, manage, and provide guidance to HR management-level staff in HR
procedures
● Maintain best practices in employment law; “handling complaints, settling disputes,
and resolving grievances and conflicts, or otherwise negotiating with others”
(“Summary Report”, 2019)
Vice President of Jungle Exploration and New Product Development
KSAs
● Deep knowledge of exotic flora and its potential uses for new product development
● Expertise in botany, chemistry, and biology
● Knowledge of/ability to develop innovative products from start to finish, especially as
it relates to developing natural products
Task based
● Manage projects involving a large workforce and budget; keep the team on schedule
and ensure project goals remain stable
● Lead team in the process of bringing new products from concept to
commercialization
● Spend extended time traveling in unfamiliar, exotic environments, keeping in
mind the safety hazards/procedures of uncharted territory
Developing a selection system
The first step would, of course, be to conduct a job analysis of the positions I am
developing a selection system for. I would do this via observational learning to see what an
Executive Vice President of Human Resources (EVPHR) and a Vice President of Jungle
Exploration and New Product Development (VPJENPD) does on the job. This is likely
something I can do with the EVPHR position, but can only conduct this observational
analysis with the in-office part of the VPJENPD position. For the remainder of the job
analysis (for both positions), I would interview people in similar positions to get an idea of
what KSAs are needed to do the job effectively (Pulakos, 2005). If I could not find these
experts, especially because VPJENPD is a more rare position, I would at least interview VPs,
maybe even VPs of startups, to see what is needed to be a leader of a team developing
innovative projects. I would survey the positions' supervisors and incumbents to get their
input on what are the most important KSAs needed for the job (Pulakos, 2005). Lastly, based
on this information, I would need to see what kinds of assessments would be most effective
at measuring the tasks and KSAs I just learned about. For high-potential positions, I would
focus on the applicants’ aptitude, attitude, and application; meaning: “What is the candidate’s
ability to learn?”, “What are the candidate’s tendencies?”, and “How can the candidate apply
their knowledge and skills to the job?”
After reviewing the requirements for the position, I would strategically select a
combination of assessments that are as relevant and valid as possible to measuring these
requirements for the job. If I were collecting data for a job analysis on an air traffic
controller, for example, I might have learned from observation, interviews, and surveys of
incumbents that the job requires communicating effectively with aircraft personnel in
high-stress situations. With this knowledge, I might
have them take a cognitive ability test to analyze their verbal reasoning and communication
skills and couple that with a personality test to analyze their levels of neuroticism in order to
get the most valid evaluation of their ability to communicate calmly in a high-stress
environment.
Because both of these Vice President positions are high-level leadership positions, it
is likely that a job knowledge test will not be needed, as they should have experience in the
general practices of the job. This is something we can simply observe and screen candidates
on by using their résumés, experience, and references to determine if they meet the basic
requirements for the position.
The things I would be looking for in regards to these upper-level positions would be
more focused on project management and leading teams to develop new products. I would
conduct a cognitive ability test, as intelligence often strongly correlates with effectiveness in
leaders (Judge et al., 2004). I would couple that with a personality test to see if the candidate
has personality traits that correlate with leadership performance, such as extraversion and
openness. Studies show that managers who are agreeable, extraverted, emotionally stable
(low in neuroticism), and high in openness to experiences have higher performance than their
counterparts (Camps et al., 2016). Because both of these executive positions would be
managing teams, this makes me want to conduct a personality assessment such as the Hogan
Personality Inventory as it relates to the Five Factor Model in the workplace (Goldberg et al.,
2006).
For the final stage, I would conduct an assessment center, combining a few types of
assessment methods focused on supervisory and managerial competencies, using
simulations such as role-plays, work samples, and SJTs to test the candidate’s strategic
decision making and team management abilities (Phillips, 2020). Assessment centers have
proven valid for selecting leaders, especially when including “dimensions such as global
awareness and strategic vision”
(Thornton et al., 2010). Given Zambeel, Inc. is an international company, and given the
VPJENPD will be traveling to exotic territories, global awareness and strategic vision are
imperative.
These types of assessment processes overlap in how we test the applicants’ contextual
job performance by mainly asking questions relating to the applicants’ soft skills of leading
and decision making rather than their ability to do the technical skills of the job, such as their
knowledge of HR policies or familiarity with plants of a particular region. Overall,
correlations between cognitive ability tests and personality tests are generally low (less than
.20), so when coupled together, these tests would be effective in providing incremental
validity (Schmitt, 2014). These assessments also differ between positions, in that the role of
the VPJENPD is more specialized; the simulations would represent this difference and, of the
assessments, would vary the most between these two positions. For instance, a work sample
for the EVPHR may test how the candidate would handle a complaint or grievance, whereas
a work sample for the VPJENPD may test how the candidate would lead a new product from
concept to commercialization.
Furthermore, there may be an additional aspect of the process for the VPJENPD.
During the interview, the candidate may be asked questions about specific physical abilities
as it “relate[s] to the applicant’s ability to perform the core duties of the role” (Barker, 2012).
While keeping legal issues and adverse impact in mind, it is important to ask questions
specific to the job requirements in order to determine how capable they are of exploring
foreign, exotic territories.
Overall, it’s important to have a few different predictors that do not correlate (such as
cognitive ability and personality tests) to maximize incremental validity (Schmitt, 2014).
However, predictive validity due to a variety of predictors is only beneficial to a certain
extent, as having more than three predictors is likely to cause overlap rather than give more
predictive value.
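The incremental-validity arithmetic here can be sketched with the standard two-predictor multiple-correlation formula. The validity coefficients below are the Schmidt and Hunter (1998) estimates cited in this paper; the near-zero intercorrelation between GMA and integrity tests is an assumption for illustration:

```python
from math import sqrt

def composite_validity(r1: float, r2: float, r12: float) -> float:
    """Multiple correlation R of a two-predictor composite with the criterion,
    given each predictor's validity (r1, r2) and the correlation between
    the two predictors (r12)."""
    return sqrt((r1**2 + r2**2 - 2 * r1 * r2 * r12) / (1 - r12**2))

# GMA validity ~.51 and integrity-test validity ~.41 (Schmidt & Hunter, 1998);
# assuming the two predictors are essentially uncorrelated (r12 = 0):
print(round(composite_validity(0.51, 0.41, 0.0), 2))  # -> 0.65

# If the two predictors overlap substantially, the incremental gain shrinks:
print(round(composite_validity(0.51, 0.41, 0.6), 2))  # -> 0.53
```

This is why uncorrelated predictors reproduce the .65 composite validity cited below, while highly correlated predictors mostly duplicate each other.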
There are several types of assessments that would be applicable to both positions
(biodata, integrity tests, work samples, and more); however, they may not all have the
greatest utility for these executive positions in particular. Furthermore, it would be extremely
expensive, time consuming, and could result in negative reactions from applicants if too many of
these assessments were to be used. In order to narrow down these options into an efficient
assessment center, my next step would be to evaluate the tests' validity and reliability as it
relates to the individual job at hand. Then, I would use a strategic combination of tests that
makes the most sense and has the highest validity when coupled together.
According to meta-analytic research, the assessment methods with the highest predictive
criterion validity for job performance are work
samples, GMA tests, structured interviews, and job knowledge tests (Schmidt & Hunter,
1998). Too many tests can be overwhelming for candidates and for the organization's
resources; and as stated previously, a job knowledge test may not be necessary for such
high-level positions as we can get an idea of the candidates’ knowledge, experience, and
skills from their résumés. Therefore, the tests I would choose, due to their high validity for
these leadership positions, are a GMA/cognitive ability test, a personality questionnaire, and
a structured interview including situational judgement tests (SJTs) and work samples.
I chose a GMA/cognitive ability test because it is one of the most reliable assessments
for evaluating leaders and their individual differences (Campbell, 1990). Correlations
between IQ and job performance of “around 0.5 have been regularly cited as evidence of test
validity” (Richardson, 2015). Cognitive ability tests evaluate constructs deeper than job
knowledge or achievement that executives should already have at this point in their career.
Instead, these tests evaluate the applicant’s ability to learn on the job, an important trait for
any Vice President. I would administer this test first because it is highly valid and relatively
inexpensive to administer (“Information to Consider...”, n.d.). This would be the first filter
before conducting a personality
questionnaire.
While personality tests may not be sufficient on their own (Cash, 2017), when
coupled with other assessments, a personality questionnaire can effectively measure social
skills and emotional intelligence, which are important KSAs listed in the brief job
description for both positions (Kuncel et al., 2004). A meta-analysis of criterion validity of
personality tests shows that while conscientiousness indicates consistent correlations with job
performance across all job types, extraversion is a “valid predictor” in managerial positions
specifically (Barrick & Mount, 1991). The same meta-analysis also shows that “both openness to
experience and extraversion were valid predictors of the training proficiency criterion”
(Barrick & Mount, 1991). These are important traits to look for in our assessments, as both
Vice Presidents will be leading and training teams.
Lastly, I would conduct a structured interview with work samples and situational
judgement tests because this step would take the most amount of time and resources, so it
would be ideal to have filtered out several candidates by this point. This dynamic
combination of a structured interview with SJTs and work samples would allow the hiring
managers to gain an idea of how the candidate would apply their knowledge and skills on the
job, while adding incremental validity (Schmidt & Hunter, 1998). Upon further research, I
found that a combination of a GMA test and an integrity test (which measures mostly
conscientiousness) has the highest validity (.65) for predicting job performance (Schmidt &
Hunter, 1998).
I chose a GMA test plus a personality test (rather than an integrity test) because I would like
to assess broader traits such as extraversion, openness, and low neuroticism, as they are
congruous with executive leadership (Camps et al., 2016).
Pairing GMA tests with structured interviews also results in high validity (.63), “which may
in part measure conscientiousness and related personality traits” (Schmidt & Hunter, 1998).
In order to gain content validity in the structured interview, the simulations created should
demonstrate the competencies needed, as discovered via the job analysis (Fetzer & Tuzinski,
2013). The exercises should represent essential job functions and scenarios that an EVPHR or
VPJENPD would experience in practice. This is where the job analysis, interviews, and
observations would be the most valuable in establishing high content validity of the
situational exercises.
In order to attain a comprehensive and valid selection system with little adverse
impact, while conserving money, time, and resources, I would combine these assessments
with a hurdle system to filter the
candidates taking these tests (Phillips, 2020). After doing a general screening of applications
to see if they have the basic requirements and experience for the job, as listed in the job
description, the first official assessment would be the GMA test. This would measure the
candidates’ cognitive abilities as they relate to the job (such as evaluating their ability to
deductively reason). I would use a cutoff system for this to filter out those who do not meet
certain benchmarks. If they achieve the required score, they would continue to the next
hurdle, which would be the personality test, then finally the structured interview.
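The hurdle flow described above can be sketched as a simple filter pipeline. The candidate names, scores, and cutoffs below are hypothetical illustrations, not values from any real assessment:

```python
# Minimal sketch of a multiple-hurdle screen: candidates must clear each
# cutoff, in order, before the next (more expensive) assessment is given.
candidates = [
    {"name": "A", "gma": 82, "personality": 84, "interview": 88},
    {"name": "B", "gma": 65, "personality": 90, "interview": 91},  # fails the GMA hurdle
    {"name": "C", "gma": 78, "personality": 81, "interview": 70},  # fails the interview hurdle
]

# Hurdles are ordered from cheapest to most expensive to administer.
hurdles = [("gma", 70), ("personality", 75), ("interview", 80)]

def run_hurdles(pool, hurdles):
    survivors = pool
    for test, cutoff in hurdles:
        # Only candidates who met the cutoff advance to the next assessment.
        survivors = [c for c in survivors if c[test] >= cutoff]
    return [c["name"] for c in survivors]

print(run_hurdles(candidates, hurdles))  # -> ['A']
```

Ordering the hurdles by cost is the point of the design: the expensive structured interview is only ever given to candidates who already cleared the cheaper tests.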
When the assessments are through and we have narrowed down our candidates, I
would make decisions and offers using a top down system, ensuring we get the best
candidate available who scored the highest on all of the tests overall (Phillips, 2020).
However, it may be beneficial to consider looking at each top candidate’s scores per
individual assessment rather than their overall composite score. The reason for this is that a
high score on certain assessments may be more valuable than a high score on other
assessments (Phillips, 2020). For example, perhaps it is more important for a Vice President
to possess executive leadership qualities rather than knowing the details of a specific task. If
this is the case, a candidate who placed 3rd on his or her overall composite score, but scored
higher on an important SJT than a candidate who placed 1st in the composite score might be
a better hire.
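That trade-off can be sketched numerically: with hypothetical scores (all numbers invented for illustration), a candidate who ranks lower on the unweighted composite can come out ahead once the most job-relevant assessment is weighted more heavily:

```python
# Hypothetical scores on three assessments for two shortlisted candidates.
scores = {
    "ranked_1st": {"gma": 95, "personality": 90, "sjt": 70},
    "ranked_3rd": {"gma": 78, "personality": 80, "sjt": 95},
}

def composite(candidate, weights):
    """Weighted composite of a candidate's assessment scores."""
    return sum(candidate[test] * w for test, w in weights.items())

equal = {"gma": 1 / 3, "personality": 1 / 3, "sjt": 1 / 3}
weighted = {"gma": 0.25, "personality": 0.25, "sjt": 0.50}  # SJT judged most job-relevant

# Equal weighting preserves the original ranking...
print(composite(scores["ranked_1st"], equal) > composite(scores["ranked_3rd"], equal))  # -> True
# ...but weighting the SJT flips it.
print(composite(scores["ranked_3rd"], weighted) > composite(scores["ranked_1st"], weighted))  # -> True
```

The choice of weights would itself need to be justified by the job analysis, not set after seeing the scores.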
In addition to that, certain assessments may predict job performance better than
others. However, these factors bring certain issues with top-down selection, especially
regarding cognitive
ability tests, as this could lead to adverse impact (Goldstein et al., 2010). For these reasons, I
would use a banding technique in combination with a top-down technique so that candidates
in the higher scoring bands are considered first, but candidates with lower scores are
evaluated equally with their higher-scoring counterparts within that band (Kehoe, 2010). For
example, within a band of candidates whose composite scores fall between 80 and 90, the
candidate who scored an 80 but has a lot of experience could be equivalent to the candidate
who scored a 90 but has little experience. I would evaluate factors other than composite score
(such as previous experience and the score of each individual test) to see if they potentially
have a better fitted profile overall than someone who scored higher in their band (Phillips,
2020).
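A minimal sketch of that banding logic follows; the band width, scores, and the use of experience as the secondary criterion are hypothetical choices for illustration:

```python
# Top-down selection with a score band: everyone whose composite falls
# within `band_width` of the top scorer is treated as equivalent, and a
# secondary factor (years of relevant experience here) breaks ties.
def band_then_rank(candidates, band_width=10):
    top = max(c["score"] for c in candidates)
    band = [c for c in candidates if c["score"] >= top - band_width]
    # Within the band, lower scorers with stronger profiles can outrank
    # higher scorers; experience serves as the secondary criterion.
    return sorted(band, key=lambda c: c["experience"], reverse=True)

pool = [
    {"name": "scored_90", "score": 90, "experience": 2},
    {"name": "scored_80", "score": 80, "experience": 12},
    {"name": "scored_75", "score": 75, "experience": 20},  # outside the band
]

print([c["name"] for c in band_then_rank(pool)])  # -> ['scored_80', 'scored_90']
```

In practice the band width would be derived from the standard error of the composite rather than picked arbitrarily, so that scores within a band are statistically indistinguishable.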
Legal Issues
Adverse impact
Because general ability is not just determined by one’s DNA or natural cognitive
ability, but also by one’s educational and socio-economic background, there is a relatively
high likelihood of adverse impact among these tests (Kandola & Kartara, 2008). While
cognitive ability and GMA tests have high adverse impact against minorities, I chose two
assessments with low adverse impact (personality tests and structured interviews) to couple
the GMA test with. Validity can be "enhanced and adverse impact reduced by assessing a
comprehensive array of skills and abilities that are related to both technical task performance
and contextual job performance"
(Pulakos, 2005). Research shows that assessing soft skills/behavioral skills versus hard
skills/technical skills may lead to lower adverse impact (Pulakos, 2005). This is also why I
chose the assessments of cognitive ability, personality, and situational judgement tests
because they test important soft skills needed for high-potential leadership positions.
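Adverse impact itself is commonly screened with the EEOC's four-fifths (80%) rule: a group whose selection rate falls below 80% of the highest group's rate flags potential adverse impact. A minimal sketch, with hypothetical applicant counts:

```python
# Four-fifths rule check: compare each group's selection rate to the
# highest group's rate; a ratio under 0.80 flags potential adverse impact.
def four_fifths_check(groups):
    rates = {g: hired / applied for g, (applied, hired) in groups.items()}
    top = max(rates.values())
    return {g: (rate / top) < 0.80 for g, rate in rates.items()}

# (applicants, hires) per group -- hypothetical numbers for illustration.
groups = {"group_a": (100, 40), "group_b": (100, 28)}

# group_b's rate (.28) is 70% of group_a's (.40), so it is flagged.
print(four_fifths_check(groups))  # -> {'group_a': False, 'group_b': True}
```

A flag under this rule is a screening signal, not proof of discrimination; validated, job-related assessments can survive such a disparity, which is exactly why the validity evidence discussed above matters.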
Still, even in structured interviews and simulations, adverse impact can occur. To
avoid adverse impact in simulations, one should ensure that the group being tested represents
the potential candidate pool in terms of diversity. Not only the candidate pool, but if possible,
the test makers and evaluators themselves should be a diverse group of professionals to
reduce bias. Adverse impact also depends on the makeup of applicants and incumbents; a
result that can be dependent on the recruitment process (Fetzer & Tuzinski, 2013).
Recruiting higher qualified minorities, elders, and protected populations will help to reduce
adverse impact from the start.
Applicants’ reactions
Research has identified several dimensions that shape how candidates react to an
assessment; such as: if the candidate feels the assessment gave them the opportunity to
perform, if the candidate feels the assessment respected their privacy, if the candidate feels
the assessment had face validity, and more (Anderson et al., 2010). The types of assessments
most favorable among applicants when looking at all of these dimensions as a whole are
interviews, work samples, résumés, and references. Cognitive ability tests, personality tests,
and biodata were found moderately favorable, while personal contacts, honesty tests, and
graphology were found the least favorable among applicants (Goldstein et al., 2017).
According to this meta-research, my selection system should have moderate to high reactions
among applicants, as it uses cognitive ability and personality tests, but also interviews and
work samples (and initial screening with résumés and references). Unstructured interviews
are preferred by applicants over structured interviews, but they are, unfortunately, not
as valid (Goldstein et al., 2017). In order to balance this, including work samples or SJTs in
the structured interviews improves candidates’ reactions because it gives the candidates a
realistic job preview, so they know what to expect on the job (Phillips, 2020).
While there is still the risk of candidates having negative reactions and potentially
causing legal issues due to factors such as cultural differences, there may be ways to mediate
or prevent these reactions. Applicants sometimes have negative reactions to cognitive ability
tests due to adverse impact, so it is important to ensure the test hits on a wide array of
knowledge and skills, is coupled with other tests, and that every dimension tested is relevant
to the job (Pulakos, 2005). Furthermore, offering explanations to candidates about the
validity of the assessments improves the candidates’ perception of fairness, and thus, their
overall reactions to the selection process.
International/global issues
Because Zambeel, Inc. is a multinational organization, and recruiting will likely occur
internationally, it is important to consider potential global issues that may occur. One of the
issues with this selection system is that personality tests can be culturally biased. While
there is considerable evidence that people of different cultures vary in their attitudes and
behaviors, what we aim to assess is not the candidates’ demographics or nationality, but
rather the specific characteristics as they relate to their personality. However, these
characteristics of their culture will shine through in
certain assessments, such as personality tests, which may cause adverse impact against
certain cultures. Take individualism versus collectivism, for example, one of Hofstede’s
cultural dimensions. After conducting the necessary mission/goal analysis of the
organization, an I/O Psychologist may find that individualism is valued in the culture of
Zambeel, Inc. This may serve as an advantage for candidates from individualistic cultures; a
candidate from Japan, for example, a collectivistic country, may score poorly on the
personality segment that assesses individualistic versus collectivistic tendencies. Could
candidates instead be evaluated based on their individual traits as they relate to the societal
norms of their specific culture?
Cost and utility
It may be expensive to develop this selection system as it has many predictors and
simulations. The ongoing cost of administering these assessments “can include usage fees,”
so it may be more economical to purchase pre-existing, validated tests rather than developing
new ones from scratch. While purchasing a test for a limited number
of uses may seem more costly as “few test publishers will license a test for unlimited usage”,
Zambeel, Inc. will likely not need to use these tests very often, as both positions are
high-potential positions and will likely not have a high turnover rate (“Information to
Consider...”, n.d.). Developing your own “high-quality and effective test requires time,
money, and people to research and develop, revise, and validate the tests” (“Information to
Consider...”, n.d.). In some cases, this may be worth it in the long run if you plan on using it
several times; for example, in the case of an entry level position. For these VP positions,
however, it will be worth purchasing a pre-existing test that we know is reliable and valid
and do not need to use an unlimited number of times in order to keep implementation costs low.
Overall, I am assuming the company has ample resources as a Fortune 500 company;
however, it is essential to weigh the costs and benefits with the company’s budget and
business plan. Therefore, a deeper company analysis is needed to ensure these assessments
fit within the organization’s budget.
When getting buy-in from stakeholders, the low implementation costs referred to
above will be a selling point. The first step would be to conduct validation research on
the created selection assessment to prove its validity, then translate the importance of that
research into economic terms (Fetzer & Tuzinski, 2013). Mainly, we will be wanting to
persuade stakeholders in their language, bringing attention to the high ROI of the selection
system (Phillips, 2020). Investing in proper assessments that can predict job performance can
save the organization the steep costs of a poor hire. Sharing important points such as the fact
that “the average cost of on-boarding a new employee can be as high as $240,000, while the
US Department of Labor suggests the price of a bad hire is 30% of first-year earnings” can
change their perspective on the importance of a quality selection system. The system should
also be able to adapt in a changing environment as not only technology advances, but the
needs of the organization change.
For example, the SJTs and simulations in the structured interview may start off as
in-person/paper and pencil to keep implementation costs low. However, “as the Internet
continues to evolve, applicants increasingly expect assessments that are interesting,
engaging, and delivered and evaluated instantly”, an important aspect while considering
applicants’ reactions (Armstrong, et al., 2016, p. 674). Furthermore, as validity analyses are
conducted post-assessment, I/O psychologists may find that developing new technological
elements may increase the soundness of the assessment. For example, web-based
simulations could add validity, testing the candidate’s multitasking ability, processing speed,
and attention to detail.
Another thing to consider to enhance the validity and utility of this assessment is
analyzing fit on different dimensions: positional/job fit, team fit, and organizational fit of the
candidate. Notably, improving "fit at one level of analysis can lead to declines in fit at other
levels of analysis" (Farr & Tippins, 2017). For example, when analyzing the positional fit of
the VPJENPD, extraversion may be complementary for leading a team; however, the
independence needed for some duties may conflict with settings where teamwork is valued
(Farr & Tippins, 2017). Further research would be advantageous to understanding these fit
dynamics at Zambeel, Inc.
Conclusion
For these high-potential positions, I focused on the applicants’ aptitude, attitude, and
application (“The Three A’s”, as I may call them), measuring: “What is the candidate’s
ability to learn?”, “What are the candidate’s tendencies?”, and “How can the candidate apply
their knowledge and skills to the job?”. The tests that would best correspond with these
constructs
and also have high validity in measuring leadership effectiveness are as follows: a cognitive
ability or general mental ability test (GMA), a personality questionnaire, and a structured
interview with a work sample and/or situational judgement test. Upon further research, I
found that a selection system very similar to this already exists and is called an Individual
Psychological Assessment (Jeanneret & Silzer, 2011).
Of course, there are important things to be wary of while creating this selection
system, such as the potential for legal issues with adverse impact and candidates’ reactions.
However, carefully choosing assessments that evaluate the constructs of aptitude, attitude,
and application positively
affects the applicant’s reactions, the diversity of the workforce, and the validity and
reliability of the selection system as it, hopefully, parallels to job performance. Another
important consideration is the company budget and business plan, especially while
developing the final stage of the process (the “application” stage), as this will be the most
costly. I would want to interview supervisors and subordinates of the positions in order to
determine which types of situational exercises and simulations would be valuable enough to
justify their cost. Overall, the key evaluative standards (validity, reliability, utility, applicant
reactions, and cost) are well represented in this selection system
(Pulakos, 2005). While these tests may be valid, reliable, and valuable for determining
needed KSAs, they should, ultimately, be used as a guide to make decisions. As I do not
know very much about Zambeel, Inc., I would like to gather more information on “the
qualities they are looking for, their staffing strategy and philosophy, and their past success
with different measures” while developing a selection system for the organization (Thornton
et al., 2010). Rather than relying on these test scores alone (especially the composite of the
test scores alone), I would treat them as one of many elements, in addition to behavior,
attitudes, previous experience, and more, while making the final decision of who is best
suited for the position.
References
Anderson, N., Salgado, J. F., & Hülsheger, U. R. (2010). Applicant reactions in selection: Comprehensive meta-analysis into reaction generalization versus situational specificity. International Journal of Selection and Assessment, 18(3). doi:10.1111/j.1468-2389.2010.00512.x
Armstrong, M. B., Ferrell, J. Z., Collmus, A. B., & Landers, R. N. (2016). Correcting misconceptions about gamification of assessment: More than SJTs and badges. Industrial and Organizational Psychology, 9(3).
Barker, K. (2012). Legal Q&A: What health questions can employers ask during the recruitment process? Personnel Today. www.personneltoday.com/hr/legal-qa-what-health-questions-can-employers-ask-during-the-recruitment-process/
Barrick, M. R., & Mount, M. K. (1991). The Big Five personality dimensions and job performance: A meta-analysis. Personnel Psychology, 44(1), 1–26. https://doi.org/10.1111/j.1744-6570.1991.tb00688.x
Campbell, J. P. (1990). Modeling the performance prediction problem in industrial and organizational psychology. In M. D. Dunnette & L. M. Hough (Eds.), Handbook of industrial and organizational psychology (Vol. 1, pp. 687–732). Palo Alto, CA: Consulting Psychologists Press.
Camps, J., et al. (2016). Frontiers in Psychology, 7. doi:10.3389/fpsyg.2016.00112
Cash (2017). Predicting job performance: Do personality tests work? SuccessFinder. successfinder.com/predicting-job-performance-do-personality-tests-work/
Farr, J. L., & Tippins, N. T. (2017). Handbook of employee selection. New York, NY: Routledge.
Fetzer, M., & Tuzinski, K. (2013). Simulations for personnel selection. New York, NY: Springer.
Goldberg, L. R., et al. (2006). The International Personality Item Pool and the future of public-domain personality measures. Journal of Research in Personality, 40(1), 84–96.
Goldstein, H., et al. (2010). In J. L. Outtz (Ed.), Adverse impact: Implications for organizational staffing and high stakes selection (pp. 95–134). New York, NY: Routledge/Taylor & Francis Group.
Goldstein, H. W., Pulakos, E. D., Semedo, C., & Passmore, J. (2017). The Wiley Blackwell handbook of the psychology of recruitment, selection and employee retention. Hoboken, NJ: Wiley Blackwell.
Information to consider when creating or purchasing an employment test. (n.d.). SIOP. www.siop.org/Business-Resources/Employment-Testing/Considerations
Jeanneret, R., & Silzer, R. (2011). Individual psychological assessment: A practice and science in search of common ground. Industrial and Organizational Psychology, 4(3).
Judge, T. A., Colbert, A. E., & Ilies, R. (2004). Intelligence and leadership: A quantitative review and test of theoretical propositions. Journal of Applied Psychology, 89(3), 542–552.
Kandola, R., & Kartara, S. (2008). Diversity and individual development. doi:10.1002/9780470753392.ch20
Kehoe, J. F. (2010). Cut scores and adverse impact. In J. L. Outtz (Ed.), Adverse impact: Implications for organizational staffing and high stakes selection (pp. 289–323). New York, NY: Routledge.
Kuncel, N. R., Hezlett, S. A., & Ones, D. S. (2004). Academic performance, career potential, creativity, and job performance: Can one construct predict them all? Journal of Personality and Social Psychology, 86(1), 148–161.
Pulakos, E. D. (2005). Selection assessment methods: A guide to implementing formal assessments to build a high-quality workforce. Alexandria, VA: SHRM Foundation. www.shrm.org/foundation
Schmidt, F. L., & Hunter, J. E. (1998). The validity and utility of selection methods in personnel psychology: Practical and theoretical implications of 85 years of research findings. Psychological Bulletin, 124(2), 262–274.
Summary report for: 11-3121.00 – Human Resources Managers. (2019). O*NET OnLine. www.onetonline.org/link/summary/11-3121.00?redir=11-3040.00
Theron, C. (2009). The diversity-validity dilemma: In search of minimum adverse impact and maximum utility. SA Journal of Industrial Psychology, 35(1), 183–195.
Thornton, G. C., & Johnson, S. K. (2010). Selecting leaders: Executives and high potentials.
Truxillo, D. M., & Bauer, T. N. (1999). Applicant reactions to test score banding in entry-level and promotional contexts. Journal of Applied Psychology, 84(3).
Vice President, Human Resources. (n.d.). SHRM. www.shrm.org/resourcesandtools/tools-and-samples/job-descriptions/pages/vice-president,-human-resources.aspx