
Developing and Evaluating a Selection System for Executive Positions

Jessica Stelter
Brief General Job Analysis

Executive Vice President of Human Resources

KSAs
● Knowledge of human resources procedures, such as recruiting, selection,
compensation and benefits, payroll, etc.
● Excellent interpersonal communication, conflict resolution, supervisory, and leadership skills (“Vice President, Human Resources”, n.d.)
● Ability to reason deductively and express ideas clearly

Task based
● Work with executives of Zambeel, Inc. to ensure personnel support the organization’s mission and long-term goals
● Train, manage, and provide guidance to HR management-level staff in HR
procedures
● Maintain best practices in employment law: “handling complaints, settling disputes, and resolving grievances and conflicts, or otherwise negotiating with others” (“Summary Report”, 2019)

Vice President of Jungle Exploration and New Product Development

KSAs
● Deep knowledge of exotic flora and its potential uses for new product development
● Expertise in botany, chemistry, and biology
● Knowledge of and ability to develop innovative products from start to finish, especially as it relates to developing natural products

Task based
● Manage a large workforce and budget as a project, keep the team on schedule, and ensure project goals remain stable
● Lead the team in the process of bringing new products from concept to commercialization
● Travel extensively in unfamiliar, exotic environments, keeping in mind the safety hazards and procedures of uncharted territory
Developing a selection system

The first step would, of course, be to conduct a job analysis of the positions I am developing a selection system for. I would do this via observation to see what an Executive Vice President of Human Resources (EVPHR) and a Vice President of Jungle Exploration and New Product Development (VPJENPD) do on the job. This is likely feasible for the EVPHR position, but for the VPJENPD position it would only cover the in-office portion of the work. For the remainder of the job analysis (for both positions), I would interview people in similar positions to get an idea of what KSAs are needed to do the job effectively (Pulakos, 2005). If I could not find these experts, especially because VPJENPD is a rarer position, I would at least interview VPs, perhaps even VPs of startups, to see what is needed to lead a team developing innovative products. I would also survey the positions' supervisors and incumbents for their input on the most important KSAs for the job (Pulakos, 2005). Lastly, based on this information, I would determine which kinds of assessments would be most effective at measuring the tasks and KSAs identified. For high-potential positions, I would focus on the applicants’ aptitude, attitude, and application; that is: “What is the candidate’s ability to learn?”, “What are the candidate’s tendencies?”, and “How can the candidate apply their knowledge and skills to the job?”


Types of assessments

After reviewing the requirements for the position, I would strategically select a combination of assessments that are as relevant and valid as possible for measuring those requirements. If I were collecting job-analysis data on an air traffic controller, for example, I might have learned from observation, interviews, and surveys of incumbents and supervisors that air traffic controllers need to communicate calmly and effectively with aircraft personnel in high-stress situations. With this knowledge, I might have candidates take a cognitive ability test to analyze their verbal reasoning and communication skills and couple that with a personality test to analyze their levels of neuroticism, in order to get the most valid evaluation of their ability to communicate calmly in a high-stress environment.

Because both of these Vice President positions are high-level leadership positions, a job knowledge test will likely not be needed, as candidates should already have experience in the general practices of the job. This is something we can simply screen for using their résumés, experience, and references to determine whether they meet the basic requirements of the job.

What I would be looking for in these upper-level positions is more focused on project management and leading teams to develop new products. I would conduct a cognitive ability test, as intelligence often correlates strongly with leader effectiveness (Judge et al., 2004). I would couple that with a personality test to see if the candidate has personality traits that correlate with leadership performance, such as extraversion and openness. Studies show that managers who are agreeable, extraverted, emotionally stable (low in neuroticism), and high in openness to experience perform better than their counterparts (Camps et al., 2016). Because both of these executive positions would be managing teams, I would conduct a personality assessment such as the Hogan Personality Inventory, which maps onto the Five Factor Model in the workplace (Goldberg et al., 2006).

Because we are dealing with high-potential positions, it would be worth utilizing an assessment center: a combination of assessment methods focused on supervisory and managerial competencies. In addition to cognitive ability and personality, I would include simulations such as role-plays, work samples, and situational judgement tests (SJTs) to test the candidate’s strategic decision making and team management abilities (Phillips, 2020). Assessment centers have proven to be a valid and advantageous method of evaluating executives in particular, especially when they include “dimensions such as global awareness and strategic vision” (Thornton et al., 2010). Given that Zambeel, Inc. is an international company, and that the VPJENPD will be traveling to exotic territories, global awareness and strategic vision are imperative.

How these assessments overlap and how they differ

These assessment processes overlap in how they test the applicants’ contextual job performance: they mainly ask questions about the applicants’ soft skills of leading and decision making rather than their ability to perform the technical skills of the job, such as their knowledge of HR policies or familiarity with the plants of a particular region. Like personality assessments, GMA tests also indirectly measure innate characteristics, specifically conscientiousness, when coupled with either personality tests or structured interviews (Schmidt & Hunter, 1998).

However, correlations between cognitive ability tests and personality tests are generally low (less than .20), so when coupled together these tests would be effective in measuring performance, as they would not be measuring overlapping predictors (Schmitt, 2014). The assessments also differ between positions, in that the role of the VPJENPD is more hands-on and set in international environments. The simulations, of course, would represent this difference and, of all the assessments, would vary the most between the two positions. For instance, a work sample for the EVPHR might test how the candidate would handle a complaint or grievance, whereas a work sample for the VPJENPD might test the candidate’s ability to manage a large workforce or budget to develop new products.

Furthermore, there may be an additional step in the process for the VPJENPD. During the interview, the candidate may be asked questions about specific physical abilities as it “relate[s] to the applicant’s ability to perform the core duties of the role” (Barker, 2012). While keeping legal issues and adverse impact in mind, it is important to ask questions specific to the job requirements in order to determine how capable candidates are of exploring foreign lands in potentially demanding climates or terrain.

Overall, it is important to have a few predictors that do not correlate with one another (such as cognitive ability and personality tests) to maximize incremental validity (Schmitt, 2014). However, adding predictors is only beneficial to a certain extent: having more than three predictors is likely to cause overlap rather than give more insight into the criterion (Phillips, 2020).
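The arithmetic behind this point can be made concrete with the multiple-correlation formula for two predictors, which shows how predictor intercorrelation erodes the composite's validity. The sketch below uses the validities Schmidt and Hunter (1998) report for GMA (about .51) and integrity tests (about .41); the function name and the .50 comparison value are illustrative assumptions:

```python
from math import sqrt

def composite_validity(r1, r2, r12):
    """Multiple correlation R of two predictors with the criterion,
    given their individual validities (r1, r2) and their
    intercorrelation (r12)."""
    return sqrt((r1**2 + r2**2 - 2 * r1 * r2 * r12) / (1 - r12**2))

# Near-zero intercorrelation reproduces the cited composite of ~.65
print(round(composite_validity(0.51, 0.41, 0.0), 2))  # 0.65

# The same two predictors correlated at .50 add far less incremental validity
print(round(composite_validity(0.51, 0.41, 0.5), 2))  # 0.54
```

The second call illustrates why pairing a GMA test with a highly correlated second test would waste assessment time: the composite barely improves on GMA alone.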

Choosing the right assessments: Validity and utility

There are several types of assessments that would be applicable to both positions (biodata, integrity tests, work samples, and more); however, they may not all have the greatest utility for these executive positions in particular. Furthermore, using too many of these assessments would be extremely expensive and time consuming and would provoke negative reactions from applicants. In order to narrow the options down into an efficient assessment center, my next step would be to evaluate each test's validity and reliability as it relates to the individual job at hand. Then, I would use a strategic combination of tests that makes the most sense and has the highest validity when coupled together.

According to a study analyzing 85 years of research, the selection assessments with the highest criterion-related validity for predicting job performance are work samples, GMA tests, structured interviews, and job knowledge tests (Schmidt & Hunter, 1998). Too many tests can be overwhelming for candidates and for the organization's resources, and, as stated previously, a job knowledge test may not be necessary for such high-level positions, since we can get an idea of the candidates’ knowledge, experience, and skills from their résumés. Therefore, the tests I would choose, based on their high validity for executives and their ability to test a combination of dimensions, would be a GMA/cognitive ability test, a personality test, and a structured interview with situational simulations/work samples.

I chose a GMA/cognitive ability test because it is one of the most reliable assessments for evaluating leaders and their individual differences (Campbell, 1990). Correlations between IQ and job performance of “around 0.5 have been regularly cited as evidence of test validity” (Richardson, 2015). Cognitive ability tests evaluate constructs deeper than the job knowledge or achievement that executives should already have at this point in their careers. Instead, these tests evaluate the applicant’s ability to learn on the job, an important trait for any Vice President. I would administer this test first because it is highly valid, a necessary screening requirement, and low in cost and resources to administer (“Information to Consider...”, n.d.). It would be the first filter before the personality questionnaire.

While personality tests may not be sufficient on their own (Cash, 2017), when coupled with other assessments a personality questionnaire can effectively measure social skills and emotional intelligence, which are important KSAs listed in the brief job description for both positions (Kuncel et al., 2004). A meta-analysis of the criterion validity of personality tests shows that while conscientiousness correlates consistently with job performance across all job types, extraversion is a “valid predictor” in managerial positions specifically (Barrick & Mount, 1991). The same meta-analysis also shows that “both openness to experience and extraversion were valid predictors of the training proficiency criterion” (Barrick & Mount, 1991). These are important traits to look for in our assessments, as both Vice President positions will be managing, and perhaps training, their subordinates.

Lastly, I would conduct a structured interview with work samples and situational judgement tests. Because this step would take the most time and resources, it would be ideal to have filtered out several candidates by this point. This dynamically structured interview, combining situational exercises/simulations such as SJTs and work samples, would allow the hiring managers to gain an idea of how the candidate would apply their KSAs in practice (Pulakos, 2005).

Studies show that combining a variety of assessments increases a selection system’s incremental validity (Schmidt & Hunter, 1998). Upon further research, I found that the combination of a GMA test and an integrity test (which measures mostly conscientiousness) has the highest validity (.65) for predicting job performance (Schmidt & Hunter, 1998). I chose a GMA test plus a personality test (rather than an integrity test) because I would like to measure more dimensions of personality, such as openness, agreeableness, extraversion, and low neuroticism, as they are congruous with executive leadership (Camps et al., 2016). Pairing GMA tests with structured interviews also yields high validity (.63), as such interviews “may in part measure conscientiousness and related personality traits” (Schmidt & Hunter, 1998). In order to achieve content validity in the structured interview, the simulations created should demonstrate the competencies discovered via the job analysis (Fetzer & Tuzinski, 2013). The exercises should represent essential job functions and scenarios that an EVPHR or VPJENPD would experience in practice. This is where the job analysis, interviews, and observations would be the most valuable in establishing high content validity for the situational exercises.

How decisions will be made

In order to attain a comprehensive and valid selection system with little adverse impact, I would use a combination of techniques to make decisions. I would create a compensatory assessment center to measure a range of competencies. In consideration of money, time, and resources, I would combine that with a multiple hurdle system to filter the candidates taking these tests (Phillips, 2020). After a general screening of applications to see whether candidates have the basic requirements and experience listed in the job description, the first official assessment would be the GMA test. This would measure the candidates’ cognitive abilities as they relate to the job (such as the ability to reason deductively). I would use a cutoff score here to filter out those who do not meet certain benchmarks. Candidates who achieve the required score would continue to the next hurdle, the personality test, and then finally to the structured interview.
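The screening sequence above can be sketched as a multiple-hurdle filter, with the cheapest test applied first and the resource-intensive interview last. The candidate names, scores, and cutoff values below are purely hypothetical:

```python
# Hypothetical candidates and assessment scores (illustrative only)
candidates = {
    "A": {"gma": 82, "personality": 71, "interview": 88},
    "B": {"gma": 64, "personality": 90, "interview": 95},
    "C": {"gma": 91, "personality": 78, "interview": 74},
}

# Hurdles in administration order: (assessment, cutoff score).
# Cheap GMA screen first, costly structured interview last.
hurdles = [("gma", 70), ("personality", 65), ("interview", 80)]

def run_hurdles(candidates, hurdles):
    """Apply each cutoff in sequence; only candidates who clear
    every hurdle survive to the final pool."""
    remaining = list(candidates)
    for test, cutoff in hurdles:
        remaining = [c for c in remaining if candidates[c][test] >= cutoff]
    return remaining

print(run_hurdles(candidates, hurdles))  # ['A']: B fails GMA, C fails interview
```

Ordering the hurdles by cost is what makes the system economical: most applicants are filtered out before the organization pays for the expensive stages.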

When the assessments are complete and we have narrowed down our candidates, I would make decisions and offers using a top-down system, ensuring we get the best available candidate, the one who scored highest across all of the tests overall (Phillips, 2020). That said, it may be beneficial to look at each top candidate's score on each individual assessment rather than only the overall composite score. The reason is that a high score on certain assessments may be more valuable than a high score on others (Phillips, 2020). For example, perhaps it is more important for a Vice President to possess executive leadership qualities than to know the details of a specific task. If so, a candidate who placed third on the overall composite score but outscored the first-place candidate on an important SJT might be the better hire.

In addition, certain assessments may predict job performance better than others and could therefore be weighted as more important. Both of these factors raise issues with top-down selection, especially regarding cognitive ability tests, as it can lead to adverse impact (Goldstein et al., 2010). For these reasons, I would use a banding technique in combination with the top-down approach, so that candidates in the higher-scoring bands are considered first, but lower-scoring candidates within a band are evaluated equally with their higher-scoring counterparts in that band (Kehoe, 2010). For example, in a band of candidates whose composite scores fall between 80 and 90, the candidate who scored an 80 but has extensive experience could be considered equivalent to the candidate who scored a 90 but has little experience. I would evaluate factors other than the composite score (such as previous experience and the score on each individual test) to see whether a candidate has a better-fitting profile overall than someone who scored higher within the same band (Phillips, 2020).
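A minimal sketch of this banding logic, using fixed-width bands anchored at the top score, is shown below. The scores and the band width of 10 are hypothetical; in practice the width is often derived from the standard error of the difference between scores (Kehoe, 2010):

```python
def band_then_rank(scores, band_width):
    """Group composite scores into fixed-width bands from the top score
    down. Candidates within a band are treated as equivalent and can then
    be compared on other factors (experience, individual test scores)."""
    ordered = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    bands, current, top = [], [], ordered[0][1]
    for name, score in ordered:
        if top - score > band_width:   # score falls outside the current band
            bands.append(current)      # close the band...
            current, top = [], score   # ...and anchor a new one here
        current.append(name)
    bands.append(current)
    return bands

# Hypothetical composite scores
composites = {"A": 90, "B": 88, "C": 80, "D": 79, "E": 68}
print(band_then_rank(composites, band_width=10))
# [['A', 'B', 'C'], ['D'], ['E']]
```

Candidates A, B, and C land in one band, so the 80-scorer with strong experience can legitimately be chosen over the 90-scorer, exactly the scenario described above.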

Legal Issues

Adverse impact

Because general ability is determined not just by one’s DNA or natural cognitive ability but also by one’s educational and socio-economic background, there is a relatively high likelihood of adverse impact among these tests (Kandola & Kartara, 2008). While cognitive ability and GMA tests have high adverse impact against minorities, I chose two assessments with low adverse impact (personality tests and structured interviews) to couple with the GMA test in an attempt to minimize adverse impact on minority applicants.
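One common way to monitor this risk is the EEOC's four-fifths heuristic: a protected group's selection rate should be at least 80% of the highest group's rate. A minimal sketch, with entirely hypothetical applicant counts:

```python
def adverse_impact_ratio(selected_a, applied_a, selected_b, applied_b):
    """Ratio of group A's selection rate to group B's selection rate.
    Under the EEOC four-fifths heuristic, a ratio below 0.80 flags
    potential adverse impact and warrants closer review."""
    rate_a = selected_a / applied_a
    rate_b = selected_b / applied_b
    return rate_a / rate_b

# Hypothetical applicant-flow data: rates of .30 vs .50
ratio = adverse_impact_ratio(6, 20, 15, 30)
print(round(ratio, 2), "-> flagged" if ratio < 0.8 else "-> ok")
```

Tracking this ratio per hurdle, not just for the overall system, would show whether the GMA cutoff in particular is driving any disparity.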

Furthermore, research shows that “the validity of an assessment process can be enhanced and adverse impact reduced by assessing a comprehensive array of skills and abilities that are related to both technical task performance and contextual job performance" (Pulakos, 2005). Research also shows that assessing soft/behavioral skills rather than hard/technical skills may lead to lower adverse impact (Pulakos, 2005). This is another reason I chose cognitive ability, personality, and situational judgement tests: they assess important soft skills needed for high-potential leadership positions and will therefore, hopefully, diminish discriminatory effects.

Still, adverse impact can occur even in structured interviews and simulations. To help avoid adverse impact in simulations, one should ensure that the group on which the exercises are developed and tried out represents the potential candidate pool in terms of diversity. If possible, the test developers and evaluators themselves should also be a diverse group of professionals to ensure impartiality (Theron, 2009). The participants used to develop the exercises should, preferably, be high-performing incumbents, an outcome that can depend on the recruitment process (Fetzer & Tuzinski, 2013). Recruiting highly qualified minorities, older workers, and members of other protected populations will help to prevent adverse impact (Pulakos, 2005).


Applicant reactions

There are many dimensions of applicant reactions to consider when evaluating an assessment, such as whether the candidate feels the assessment gave them the opportunity to perform, respected their privacy, had face validity, and more (Anderson et al., 2010). Across these dimensions as a whole, the assessment types most favorable among applicants are interviews, work samples, résumés, and references. Cognitive ability tests, personality tests, and biodata were found moderately favorable, while personal contacts, honesty tests, and graphology were found least favorable (Goldstein et al., 2017). According to this meta-research, my selection system should produce moderate to high reactions among applicants, as it uses cognitive ability and personality tests but also interviews and work samples (and an initial screening with résumés and references). Unstructured interviews are preferred by applicants over structured interviews, but they are, unfortunately, not as valid (Goldstein et al., 2017). To balance this, including work samples or SJTs in the structured interviews improves candidates’ reactions because it gives candidates a realistic job preview, so they know what to expect on the job (Phillips, 2020).

While there is still the risk of candidates having negative reactions and potentially raising legal issues due to factors such as cultural differences, there are ways to mediate or prevent these reactions. Applicants sometimes react negatively to cognitive ability tests because of adverse impact, so it is important to ensure that the test covers a wide array of knowledge and skills, is coupled with other tests, and that every dimension tested is relevant to the job (Pulakos, 2005). Furthermore, offering candidates explanations about the validity of the assessments improves their perception of fairness, and thus their overall reactions to the assessment (Truxillo et al., 1999).

International/global issues

Because Zambeel, Inc. is a multinational organization and recruiting will likely occur internationally, it is important to consider potential global issues. One issue with this selection system is that personality tests can be culturally biased. Despite “considerable evidence that people of different cultures vary in their attitudes and values, the translation of these differences into organizational behaviors ‘remains problematic’” (Ogilvie, 2000).

A potential solution would be to avoid making judgements based on the candidates’ demographics or nationality and instead to focus on the specific characteristics of their personality. However, characteristics of a candidate's culture will shine through in certain assessments, such as personality tests, which may cause adverse impact against certain cultures. Take individualism versus collectivism, for example, one of Hofstede’s cultural dimensions (Hofstede & Bond, 1984). As an illustration, after conducting a necessary mission/goal analysis of the organization, an I/O psychologist may find that individualism is valued in the culture of Zambeel, Inc. This may serve as an advantage for applicants from the United States, an individualistic country. Applicants from Japan, a collectivistic country, may instead score poorly on the personality segment that assesses individualistic versus collectivistic tendencies. Multiple assessments will be conducted to help balance these biases, but is that enough?

The question is: is it fair to determine a candidate’s cultural fit in an organization based on individual traits that reflect the societal norms of their specific culture?

Costs and resources

It may be expensive to develop this selection system, as it has many predictors and simulations. The ongoing cost of administering these assessments “can include usage fees, recruiter or administrator time, and equipment/computer costs if the simulation is administered on-site” (Fetzer & Tuzinski, 2013).

For these particular positions, it may be beneficial to purchase existing assessments rather than developing new ones from scratch. While purchasing a test for a limited number of uses may seem more costly, as “few test publishers will license a test for unlimited usage”, Zambeel, Inc. will likely not need to use these tests very often, since both positions are high-potential positions and will likely not have a high turnover rate (“Information to Consider...”, n.d.). Developing your own “high-quality and effective test requires time, money, and people to research and develop, revise, and validate the tests” (“Information to Consider...”, n.d.). In some cases this may be worth it in the long run if you plan on using the test many times, for example for an entry-level position. For these VP positions, however, it will be worth purchasing a pre-existing test that we know is reliable and valid and that we do not need to administer an unlimited number of times, in order to keep implementation costs low.
Overall, I am assuming the company has ample resources as a Fortune 500 company; however, it is essential to weigh the costs and benefits against the company’s budget and business plan. A deeper company analysis is therefore needed to ensure these assessments are worth the investment.

Getting buy-in from stakeholders

When seeking buy-in from stakeholders, the low implementation costs referred to above are worth mentioning. Additionally, I would conduct validation research on the created selection assessment to demonstrate its validity, then translate the importance of that research into economic terms (Fetzer & Tuzinski, 2013). Mainly, we want to persuade stakeholders in their own language, drawing attention to the high ROI of the selection system (Phillips, 2020). Investing in proper assessments that can predict job performance can prevent every HR professional’s and stakeholder’s nightmare: bad hires. Reminding stakeholders of facts such as “the average cost of on-boarding a new employee can be as high as $240,000, while the US Department of Labor suggests the price of a bad hire is 30% of first-year earnings” can change their perspective on the importance of a quality selection system (Cash, 2017).

The future of this assessment

The evolution of any assessment should reflect developments in the current environment, not only as technology advances but as the needs of the organization change. For example, the SJTs and simulations in the structured interview may start off in person or paper-and-pencil to keep implementation costs low. However, “as the Internet continues to evolve, applicants increasingly expect assessments that are interesting, engaging, and delivered and evaluated instantly”, an important aspect when considering applicants’ reactions (Armstrong et al., 2016, p. 674). Furthermore, as validity analyses are conducted post-assessment, I/O psychologists may find that developing new technological elements increases the soundness of the assessment. For example, web-based simulations, as opposed to traditional assessments, show higher levels of incremental validity, testing the candidate’s “multitasking [ability], processing speed, and attention to detail” (Fetzer & Tuzinski, 2013).

Another way to enhance the validity and utility of this assessment is to analyze fit on different dimensions: positional/job fit, team fit, and organizational fit. Especially because this assessment consists of so many dimensions, "maximizing fit at one level of analysis can lead to declines in fit at other levels of analysis" (Farr & Tippins, 2017). For example, when analyzing the positional fit of the VPJENPD, extraversion may be complementary for leading a team; however, the "independence needed for innovation potential in a research and development (R&D) scientist may militate against...climate fit in an organization that possesses a strong climate in which conformity is valued" (Farr & Tippins, 2017). Further research would be advantageous for understanding the complexities of multilevel fit in this particular selection system.

Conclusion

In conclusion, my assessments would cover the constructs of aptitude, attitude, and application (“The Three A’s”, I may call it), measuring “What is the candidate’s ability to learn?”, “What are the candidate’s tendencies?”, and “How can the candidate apply their knowledge and skills to the job?” The tests that best correspond to these constructs and also have high validity in measuring leadership effectiveness are: a cognitive ability or general mental ability (GMA) test, a personality questionnaire, and a structured interview with a work sample and/or situational judgement test. Upon further research, I found that a very similar selection system already exists, called an Individual Psychological Assessment, which combines tests of cognitive ability, personality, and interviews (Thornton et al., 2010). Gauging executives’ competencies, in particular, is a focal point of that assessment (Jeanneret & Silzer, 2011).

Of course, there are important things to be wary of while creating this selection system, such as the potential for legal issues arising from adverse impact and candidates’ reactions. We want to incorporate tactics (such as banding composite scores) to avoid discriminatory effects or feelings of unfairness in general. Having a combination of assessments that evaluate the constructs of aptitude, attitude, and application positively affects applicants’ reactions, the diversity of the workforce, and the validity and reliability of the selection system as it, hopefully, maps onto job performance. Another important consideration is the company budget and business plan, especially while developing the final stage of the process (the “application” stage), as this will be the most costly. I would want to interview supervisors and subordinates of the positions in order to determine which types of situational exercises and simulations would be valuable enough to outweigh the costs.


As covered in this paper, the four dimensions of selection assessments (validity, utility, applicant reactions, and cost) are well represented in this selection system (Pulakos, 2005). While these tests may be valid, reliable, and valuable for measuring the needed KSAs, they should ultimately be used as a guide to decision making. As I do not know very much about Zambeel, Inc., I would like to gather more information on “the qualities they are looking for, their staffing strategy and philosophy, and their past success with different measures” while developing a selection system for the organization (Thornton et al., 2010). Rather than relying on these test scores alone (especially the composite of the test scores alone), I would treat them as one of many elements, alongside behavior, attitudes, previous experience, and more, in determining the final decision of who is best fit for the job.


References

Anderson, N., Salgado, J., & Hülsheger, U. (2010). Applicant reactions in selection: Comprehensive meta-analysis into reaction generalization versus situational specificity. International Journal of Selection and Assessment, 19, 291-304. doi:10.1111/j.1468-2389.2010.00512.x

Armstrong, M., Ferrell, J., Collmus, A., & Landers, R. (2016). Correcting misconceptions about gamification of assessment: More than SJTs and badges. Industrial and Organizational Psychology, 9, 671-677. doi:10.1017/iop.2016.69

Barker, K. (2012, September 5). Legal Q&A: What health questions can employers ask during the recruitment process? Personnel Today. www.personneltoday.com/hr/legal-qa-what-health-questions-can-employers-ask-during-the-recruitment-process/

Barrick, M. R., & Mount, M. K. (1991). The big five personality dimensions and job performance: A meta-analysis. Personnel Psychology, 44. https://doi.org/10.1111/j.1744-6570.1991.tb00688.x

Campbell, J. P. (1990). Modeling the performance prediction problem in industrial and organizational psychology. In M. D. Dunnette & L. M. Hough (Eds.), Handbook of industrial and organizational psychology (Vol. 1, pp. 687-732). Palo Alto, CA: Consulting Psychologists Press.

Camps, J., et al. (2016). The relation between supervisors’ big five personality traits and employees’ experiences of abusive supervision. Frontiers in Psychology, 7. doi:10.3389/fpsyg.2016.00112

Cash, L. (2017). Predicting job performance: Do personality tests work? https://www.successfinder.com/predicting-job-performance-do-personality-tests-work/

Farr, J. L., & Tippins, N. T. (2017). Handbook of employee selection. New York: Routledge.

Fetzer, M., & Tuzinski, K. (2013). Simulations for personnel selection. New York, NY: Springer.

Goldberg, L. R., et al. (2006). The International Personality Item Pool and the future of public-domain personality measures. Journal of Research in Personality, 40(1), 84. doi:10.1016/j.jrp.2005.08.007

Goldstein, H., Pulakos, E. D., Semedo, C., & Passmore, J. (2017). The Wiley Blackwell handbook of the psychology of recruitment, selection and employee retention. Chichester, West Sussex: Wiley Blackwell.

Goldstein, H. W., Scherbaum, C. A., & Yusko, K. P. (2010). Revisiting g: Intelligence, adverse impact, and personnel selection. In J. L. Outtz (Ed.), Adverse impact: Implications for organizational staffing and high stakes selection (pp. 95-134). New York, NY: Routledge/Taylor & Francis Group.

Hofstede, G., & Bond, M. H. (1984). Hofstede's culture dimensions. Journal of Cross-Cultural Psychology, 15(4), 417-433. doi:10.1177/0022002184015004003

Information to consider when creating or purchasing an employment test. (n.d.). www.siop.org/Business-Resources/Employment-Testing/Considerations

Jeanneret, R., & Silzer, R. (2011). Individual psychological assessment: A practice and science in search of common ground. Industrial and Organizational Psychology, 4(3), 270-296. doi:10.1111/j.1754-9434.2011.01341.x

Judge, T., Colbert, A., & Ilies, R. (2004). Intelligence and leadership: A quantitative review and test of theoretical propositions. Journal of Applied Psychology, 89, 542-552. doi:10.1037/0021-9010.89.3.542

Kandola, R., & Kartara, S. (2008). Diversity and individual development. Individual Differences and Development in Organisations, 363-378. doi:10.1002/9780470753392.ch20

Kehoe, J. F. (2000). Managing selection in changing organizations: Human resource strategies (pp. 346-348). Jossey-Bass.

Kehoe, J. F. (2010). Cut scores and adverse impact. In J. L. Outtz (Ed.), Adverse impact: Implications for organizational staffing and high stakes selection (pp. 289-323). New York, NY: Routledge Taylor & Francis Group.

Kuncel, N. R., Hezlett, S. A., & Ones, D. S. (2004). Academic performance, career potential, creativity, and job performance: Can one construct predict them all? Journal of Personality and Social Psychology, 86, 148-161.

Phillips, J. (2020). Strategic staffing. Chicago Business Press.

Pulakos, E. D. (2005). Selection assessment methods: A guide to implementing formal assessments to build a high-quality workforce. SHRM Foundation. www.shrm.org/foundation

Schmidt, F. L., & Hunter, J. E. (1998). The validity and utility of selection methods in personnel psychology: Practical and theoretical implications of 85 years of research findings. Psychological Bulletin, 124(2), 262-274.

Schmitt, N. (2014). Personality and cognitive ability as predictors of effective performance at work. Annual Review of Organizational Psychology and Organizational Behavior, 1(1), 46. doi:10.1146/annurev-orgpsych-031413-091255

Summary report for: 11-3121.00 - Human resources managers. (2019). O*NET OnLine. www.onetonline.org/link/summary/11-3121.00?redir=11-3040.00

Theron, C. (2009). The diversity-validity dilemma: In search of minimum adverse impact and maximum utility. SA Journal of Industrial Psychology, 35(1), 183-195. http://www.scielo.org.za/scielo.php?script=sci_arttext&pid=S2071-07632009000100021&lng=en&tlng=en

Thornton, G. C., & Johnson, S. K. (2010). Selecting leaders: Executives and high potentials. In G. P. Hollenbeck (Ed.), Handbook of employee selection (pp. 825-829). Routledge.

Truxillo, D. M., & Bauer, T. N. (1999). Applicant reactions to test scores banding in entry-level and promotional contexts. Journal of Applied Psychology, 94, 322-339.

Vice President, Human Resources. (n.d.). Society for Human Resource Management. www.shrm.org/resourcesandtools/tools-and-samples/job-descriptions/pages/vice-president,-human-resources.aspx
