
LECTURE 11

11-11-20

CHAPTER NO 06

Why Employee Selection is Important

After reviewing the applicants’ résumés, the manager turns to selecting the best candidate
for the job. This usually means reducing the applicant pool using screening tools. The aim
of employee selection is to achieve person-job fit: matching the knowledge, skills,
abilities, and other competencies (KSACs) required for performing the job (based on job
analysis) with the applicant’s KSACs.

Negligent hiring

Hiring workers with questionable backgrounds without proper safeguards

Reliability

Reliability is a selection tool’s first requirement and refers to its consistency: “A reliable
test is one that yields consistent scores when a person takes two alternate forms of the test or when
he or she takes the same test on two or more different occasions.” 7 If a person scores 90 on an
intelligence test on a Monday and 130 when retested on Tuesday, you probably wouldn’t have
much faith in the test.

You can measure reliability in several ways. One is to administer a test to a group one day,
readminister the same test several days later to the same group, and then correlate the first set of
scores with the second (called test-retest reliability estimates). Or you could administer a test and
then administer what experts believe to be an equivalent test later; this would be an equivalent or
alternate form estimate. (The Scholastic Assessment Test [SAT] is one example.) Or, compare the
test taker’s answers to certain questions on the test with his or her answers to a separate set of
questions on the same test aimed at measuring the same thing. This is an internal comparison
estimate. For example, a psychologist includes 10 items on a test believing that they all measure
interest in working outdoors, and then determines the degree to which responses to these 10 items
vary together.
Many things cause a test to be unreliable. These include physical conditions (quiet one day,
noisy the next), differences in the test taker (healthy one day, sick the next), and differences in test
administration (courteous one day, curt the next). Or the questions may do a poor job of sampling
the material; for example, test one focuses more on Chapters 1 and 3, while test two focuses more
on Chapters 2 and 4. Because measuring reliability generally involves comparing two measures
that assess the same thing, it is typical to judge a test’s reliability in terms of a reliability coefficient.
This basically shows the degree to which the two measures (say, test score one day and test score
the next day) are correlated. Figure 6-1 illustrates correlation. In both the left and the right scatter
plots, the psychologist compared each applicant’s time 1 test score (on the x-axis) with his or her
subsequent (time 2) test score (on the y-axis). On the left, the scatter plot points (each point
showing one applicant’s test score and subsequent test performance) are dispersed. There seems
to be no correlation between test scores obtained at time 1 and at time 2. On the right, the
psychologist tried a new test. Here the resulting points fall in a predictable pattern. This suggests
that the applicants’ test scores correlate closely with their previous scores.
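The reliability coefficient described above is typically a Pearson correlation between the two sets of scores. As a minimal sketch (the scores and the `pearson_r` helper below are invented for illustration, not taken from the text), test-retest reliability could be estimated like this:

```python
def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Each applicant's score at time 1 and time 2 (hypothetical data).
time1 = [90, 75, 110, 130, 85, 100]
time2 = [92, 78, 105, 128, 88, 101]

r = pearson_r(time1, time2)
print(round(r, 2))  # prints 0.99 -- close to 1, so scores are consistent
```

A coefficient near 1.0 corresponds to the right-hand scatter plot in Figure 6-1 (points falling in a predictable pattern); a coefficient near 0 corresponds to the dispersed left-hand plot.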

Validity

Reliability, while indispensable, tells you only that the test is measuring something
consistently. Validity tells you whether the test is measuring what you think it’s supposed to be
measuring. 9 Test validity answers the question “Does this test measure what it’s supposed to
measure?” Put another way, it refers to the correctness of the inferences that we can make based
on the test. 10 For example, if Jane’s scores on mechanical comprehension tests are higher than
Jim’s, can we be sure that Jane possesses more mechanical comprehension than Jim? 11 With
employee selection tests, validity often refers to evidence that the test is job related—in other
words, that performance on the test accurately predicts job performance. A selection test must be
valid since, without proof of validity, there is no logical or legally permissible reason to use it to
screen job applicants. A test, as we said, is a sample of a person’s behavior, but some tests are
more clearly representative of the behavior being sampled than others. A swimming test clearly
corresponds to a lifeguard’s on-the-job behavior. On the other hand, there may be no apparent
relationship between the test and the behavior. Thus, in Figure 6-2, the psychologist asks the person
to interpret the picture, and then draws conclusions about the person’s personality and behavior.
Here it is more difficult to prove that the tests are measuring what they are said to measure, in this
case, some trait of the person’s personality—in other words, prove that they’re valid. There are
several ways to demonstrate a test’s validity. 12 Criterion validity involves demonstrating
statistically a relationship between scores on a selection procedure and job performance of a
sample of workers. For example, it means demonstrating that those who do well on the test also
do well on the job, and that those who do poorly on the test do poorly on the job. The test has
validity to the extent that the people with higher test scores perform better on the job. In
psychological measurement, a predictor is the measurement (in this case, the test score) that you
are trying to relate to a criterion, such as performance on the job. The term criterion validity reflects
that terminology.

Content validity is a demonstration that the content of a selection procedure is


representative of important aspects of performance on the job. For example, employers may
demonstrate the content validity of a test by showing that the test constitutes a fair sample of the
job’s content. The basic procedure here is to identify job tasks that are critical to performance, and
then randomly select a sample of those tasks to test. In selecting students for dental school, one
might give applicants chunks of chalk, and ask them to carve something like a tooth. If the content
you choose for the test is a representative sample of the job, then the test is probably content valid.
Clumsy dental students need not apply. Subject matter experts (SMEs, such as practicing dentists)
help choose the tasks.

Construct validity means demonstrating that (1) a selection procedure measures a construct
(an abstract idea such as morale or honesty) and (2) that the construct is important for successful
job performance

At best, invalid tests are a waste of time; at worst, they are discriminatory. Tests you buy
“off the shelf” should include information on their validity. 13 But ideally, you should revalidate
the tests for the job(s) at hand. In any case, tests rarely predict performance with 100% accuracy
(or anywhere near it). Therefore, don’t make tests your only selection tool; also use other tools like
interviews and background checks.

Talent analytics is revolutionizing employee selection. 14 Its number-crunching data
analysis tools (including statistical techniques, algorithms, data mining, and problem solving) let
employers search through their employee data to identify patterns and correlations that show what
types of people succeed or fail. For example, department store chain Bon-Ton Stores Inc. had very
high turnover among its cosmetics sales associates. Bon-Ton asked 450 current cosmetics
associates to fill out anonymous surveys aimed at identifying employee traits. By using talent
analytics to analyze these and other data, the company identified the cosmetics associates’ traits that
correlated with performance and tenure. Bon-Ton had assumed that the best associates were
friendly and enthusiastic about cosmetics. However, the best were actually problem solvers: they
took information about what the customer wanted and needed, and solved the problem. 15 Talent
analytics thereby helped Bon-Ton formulate better selection criteria. Talent analytics is gaining
popularity in India as well. The leading IT firm HCL Technologies implemented talent analytics
using an intelligent neural network engine that analyzed a database of 5 million applicants’ records
and employee data. The results helped improve the quality of hiring by providing
insights about workforce characteristics. Using this information, recruiters could hire the right
candidates. 16 Companies like Stock Holding Corporation of India and Taj Hotels also use
talent analytics to improve their hiring processes.
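The core of this kind of analysis is checking which measured traits correlate with outcomes like performance or tenure. The toy sketch below illustrates the idea; the trait names, scores, and tenure figures are invented for illustration and do not come from Bon-Ton's actual study:

```python
def correlation(xs, ys):
    """Pearson correlation between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical survey data: per-associate trait scores (1-10)
# and months of tenure.
records = [
    {"friendliness": 9, "problem_solving": 4, "tenure": 6},
    {"friendliness": 8, "problem_solving": 9, "tenure": 30},
    {"friendliness": 5, "problem_solving": 8, "tenure": 26},
    {"friendliness": 7, "problem_solving": 3, "tenure": 8},
    {"friendliness": 6, "problem_solving": 7, "tenure": 22},
]

tenure = [r["tenure"] for r in records]
for trait in ("friendliness", "problem_solving"):
    scores = [r[trait] for r in records]
    print(trait, round(correlation(scores, tenure), 2))
# prints:
# friendliness -0.47
# problem_solving 0.98
```

In this invented data set, problem solving (not friendliness) predicts tenure, mirroring the surprise in the Bon-Ton example; a real study would of course use far more employees, traits, and statistical safeguards.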

Lecture no 12

12-11-20

Types of Tests

1. Tests of Cognitive Abilities (The way you think) (Thinking patterns)

Cognitive tests include tests of general reasoning ability (intelligence) and tests of specific mental
abilities like memory and inductive reasoning.

• INTELLIGENCE TESTS

Intelligence (IQ) tests are tests of general intellectual abilities. They measure not a single trait but
rather a range of abilities, including memory, vocabulary, verbal fluency, and numerical ability.
Examples include NTS tests.
• SPECIFIC COGNITIVE ABILITIES

There are also measures of specific mental abilities, such as deductive reasoning, verbal
comprehension, memory, and numerical ability. Psychologists often call such tests aptitude tests,
since they purport to measure aptitude for the job in question.

2. Tests of Motor and Physical Abilities (physical reactions)

You might also want to measure motor abilities, such as finger dexterity, manual dexterity,
and (if hiring pilots) reaction time. These include static strength (such as lifting weights), dynamic
strength (pull-ups), body coordination (jumping rope), and stamina. Applicants for the Indian
Armed Forces or police must pass a physical test, which includes running a specific distance within
an allotted time.

• Measuring Personality and Interests

A person’s cognitive and physical abilities alone seldom explain his or her job
performance. As one consultant put it, most people are hired based on qualifications, but are fired
because of attitude, motivation, and temperament. Personality tests measure basic aspects of an
applicant’s personality, such as introversion, stability, and motivation. Industrial psychologists
often focus on the “big five” personality dimensions: extraversion, emotional stability/neuroticism,
agreeableness, conscientiousness, and openness to experience.

Some personality tests are projective. The psychologist presents an ambiguous stimulus
(like an inkblot or clouded picture) and the person reacts. The person supposedly projects into the
ambiguous picture his or her attitudes, such as insecurity. Other projective techniques include
Make a Picture Story (MAPS) and the Forer Structured Sentence Completion Test.

• INTEREST INVENTORIES

Interest inventories compare one’s interests with those of people in various occupations. Thus, the
Strong-Campbell Interest Inventory provides a report comparing one’s interests to those of people
already in occupations like accounting or engineering. Someone taking the Self-Directed Search
(SDS) (www.selfdirected-search.com) uses it to identify likely high-fit occupations. The
assumption is that someone will do better in occupations in which he or she is interested, and
indeed such inventories can predict employee performance and turnover.
3. Achievement Tests
Achievement tests measure what someone has learned. Most of the tests you take in school are
achievement tests. They measure your “job knowledge” in areas like economics, marketing, or
human resources. Achievement tests are also popular at work. For example, the Purdue Test for
Machinists and Machine Operators tests the job knowledge of experienced machinists with
questions like “What is meant by ‘tolerance’?” Some achievement tests measure the applicant’s
abilities; a swimming test is one example. Computerized and/or online testing is increasingly
replacing paper-and-pencil tests. Zing HR, an HR software system, has a feature for
online testing of recruitment candidates.

4. COMPUTERIZED MULTIMEDIA CANDIDATE ASSESSMENT TOOLS

Development Dimensions International developed a computerized multimedia skill test that Ford
Motor Company uses for hiring assembly workers. “The company can test everything from how
people tighten the bolt, to whether they followed a certain procedure correctly, to using a weight
sensitive mat on the floor that, when stepped on at the wrong time, will mark a candidate down in
a safety category.”

Work Samples and Simulations (these are demonstrations)


With work samples, you present examinees with situations representative of the job for which
they’re applying, and evaluate their responses.

1. Using Work Sampling for Employee Selection


The work sampling technique tries to predict job performance by requiring job candidates to
perform one or more samples of the job’s tasks. For example, work samples for a cashier may
include operating a cash register and counting money.

Work sampling has several advantages. It measures actual job tasks, so it’s harder to fake
answers. The work sample’s content—the actual tasks the person must perform—is not as likely
to be unfair to minorities (as might a personnel test that possibly emphasizes middle-class concepts
and values). Work sampling doesn’t delve into the applicant’s personality, so there’s almost no
chance of applicants viewing it as an invasion of privacy. Designed properly, work samples also
exhibit better validity than do other tests designed to predict performance.

The basic procedure is to select a sample of several tasks crucial to performing the job, and
then to test applicants on them. An observer monitors performance on each task, and indicates on
a checklist how well the applicant performs.

2. Situational Judgment Tests


Situational judgment tests are personnel tests “designed to assess an applicant’s judgment
regarding a situation encountered in the workplace.” In such a test, a person is observed as he or
she responds to a situation drawn from an actual sample of the job or role to be performed.

Management Assessment Centers


An assessment center is a process where candidates are examined to determine their suitability
for specific types of employment, especially management or military command. These centres are
used mainly for managerial posts, typically to hire people for positions between middle and top management.

A management assessment center is a 2- to 3-day simulation in which 10 to 12 candidates


perform realistic management tasks (like making presentations) under the observation of experts
who appraise each candidate’s leadership potential. For example, The Cheesecake Factory created
its Professional Assessment and Development Center to help select promotable managers.
Candidates undergo 2 days of exercises, simulations, and classroom learning to see if they have
the skills for key management positions.

Typical simulated tasks include:

The in-basket. The candidate gets reports, memos, notes of incoming phone calls, e-mails, and
other materials collected in the actual or computerized in-basket of the simulated job he or she is
about to start. The candidate must take appropriate action on each item. Trained evaluators review
the candidate’s efforts.

Leaderless group discussion. Trainers give a leaderless group a discussion question and tell
members to arrive at a group decision. They then evaluate each group member’s interpersonal
skills, acceptance by the group, leadership ability, and individual influence.
Management games. Participants solve realistic problems as members of simulated companies
competing in a marketplace.

Individual oral presentations. Here trainers evaluate each participant’s communication skills and
persuasiveness.

Testing. These may include tests of personality, mental ability, interests, and achievements.

The interview. Most require an interview with a trainer to assess interests, past performance, and
motivation.

Lecture 13

18-11-20
Situational Testing and Video-Based Situational Testing

Situational tests require examinees to respond to situations representative of the job.

The video-based simulation presents the candidate with several online or computer video
situations, each followed by one or more multiple-choice questions. For example, the scenario
might depict an employee handling a situation on the job. At a critical moment, the scenario ends
and the video asks the candidate to choose from several courses of action.

The Miniature Job Training and Evaluation Approach

Miniature job training and evaluation involves training candidates to perform several of the job’s
tasks, and then evaluating their performance prior to hire. The approach assumes that a person who
demonstrates that he or she can learn and perform the sample of tasks will be able to learn and
perform the job itself. Like work sampling, miniature job training and evaluation tests applicants
with actual samples of the job, so it is inherently content relevant and valid.
Realistic Job Previews

A Realistic Job Preview (RJP) is a recruiting tool used to communicate both the good and bad
aspects of a job. Essentially, it is used to provide a prospective employee a realistic view of what
the job entails. Sometimes, a dose of realism makes the best screening tool.

Background Investigations and Other Selection Methods

Testing is only part of an employer’s selection process. Other tools may include background
investigations and reference checks, pre-employment information services, honesty testing, and
substance abuse screening.

Using Pre-employment Information Services

Pre-employment tests are an objective, standardized way of gathering data on candidates


during the hiring process. All professionally developed, well-validated pre-employment
tests have one thing in common: they are an efficient and reliable means of gaining insights
into the capabilities and traits of prospective employees. Depending on the type of test being
used, pre-employment assessments can provide relevant information on a job applicant's
ability to perform in the workplace.

• The Polygraph and Honesty Testing

The polygraph is a device that measures physiological changes like increased perspiration. The
assumption is that such changes reflect changes in emotional state that accompany lying.

• Written honesty tests

These are psychological tests designed to predict job applicants’ proneness to dishonesty and other
forms of counterproductivity. Most measure attitudes regarding things like tolerance of others
who steal and admission of theft-related activities.

• Graphology

Graphology is the use of handwriting analysis to determine the writer’s basic personality traits. It
thus has some resemblance to projective personality tests, although graphology’s validity is highly
suspect. The handwriting analyst studies an applicant’s handwriting and signature to discover the
person’s needs, desires, and psychological makeup.

• “Human Lie Detectors”

Some employers are using so-called “human lie detectors,” experts who may (or may not) be able
to identify lying just by watching candidates. One Wall Street firm uses a former FBI agent. He
sits in on interviews and watches for signs of candidate deceptiveness. Signs include pupils
changing size (fear), irregular breathing (nervousness), crossing legs (“liars distance themselves
from an untruth”), and quick verbal responses (scripted statements).

Physical Exams
Once the employer extends the person a job offer, a medical exam is often the next step in
selection (although it may also occur after the new employee starts work). There are several
reasons for pre-employment medical exams: to verify that the applicant meets the job’s physical
requirements, to discover any medical limitations you should consider in placement, and to
establish a baseline for future workers’ compensation claims. Exams can also reduce absenteeism
and accidents and detect communicable diseases.

Lecture 14

19-11-20

Chapter No 7

Interviewing Candidates
Three ways to conduct interviews:

• Structure-wise
• Content-wise
• Process-wise
Structured Versus Unstructured Interviews

In unstructured (or nondirective) interviews, the manager follows no set format. A few questions
might be specified in advance, but they’re usually not, and there is seldom a formal guide for
scoring “right” or “wrong” answers. Typical questions here might include, for instance, “Tell me
about yourself,” “Why do you think you’d do a good job here?” and “What would you say are
your main strengths and weaknesses?” Some describe this type of interview as little more than a
general conversation.

At the other extreme, in structured (or directive) interviews, the employer lists questions ahead of
time, and may even weight possible alternative answers for appropriateness. 5 McMurray’s
Patterned Interview was one early example. The interviewer followed a printed form to ask a series
of questions, such as “How was the person’s present job obtained?” Comments printed beneath
the questions (such as “Has he/she shown self-reliance in getting his/her jobs?”) then guide the
interviewer in evaluating the answers. Some experts still restrict the term structured interview to
interviews like these, which are based on carefully selected job-related questions with
predetermined answers.

In practice, interview structure is a matter of degree. Sometimes the manager may just want to
ensure he or she has a set list of questions to ask so as to avoid skipping any questions. Here, he
or she might choose questions from a list.

Structured (or directive) Interview An interview following a set sequence of questions.

Unstructured (or nondirective) Interview An unstructured conversational-style interview in which the
interviewer pursues points of interest as they come up in response to questions.

Situational Interview, Behavioral Interview

Interview Content (What Types of Questions to Ask)

We can also classify interviews based on the “content” or the types of questions interviewers ask.

Whereas situational interviews ask applicants to describe how they would react to a hypothetical
situation today or tomorrow, behavioral interviews ask applicants to describe how they reacted to
actual situations in the past.
Situational questions start with phrases such as, “Suppose you were faced with the
following situation…. What would you do?” Behavioral questions start with phrases like, “Can
you think of a time when…. What did you do?”

Situational Interview A series of job-related questions that focus on how the candidate would behave in a
given situation.

Behavioral Interview A series of job-related questions that focus on how the candidate reacted to actual
situations in the past.

Job-related Interview
In a job-related interview, the interviewer asks applicants questions about job-relevant past
experiences. The questions here don’t revolve around hypothetical or actual situations or scenarios.
Instead, the interviewer asks questions such as, “Which courses did you like best in business
school?” The aim is to draw conclusions about, say, the candidate’s ability to handle the financial
aspects of the job in question.

Job-related Interview A series of job-related questions that focus on relevant past job-related behaviors.

Stress Interview
There are other, lesser-used types of questions. In a stress interview, the interviewer seeks to make
the applicant uncomfortable with occasionally rude questions. The aim is supposedly to spot
sensitive applicants and those with low (or high) stress tolerance. Thus, a candidate for a customer
relations manager position who obligingly mentions having had four jobs in the past 2 years might
be told that frequent job changes reflect irresponsible and immature behavior. If the applicant then
responds with a reasonable explanation of why the job changes were necessary, the interviewer
might pursue another topic. On the other hand, if the formerly tranquil applicant reacts explosively,
the interviewer might deduce that the person has a low tolerance for stress.

Stress Interview An interview in which the applicant is made uncomfortable by a series of often rude
questions. This technique helps identify hypersensitive applicants and those with low or high stress
tolerance.
Unstructured Sequential and Structured Sequential
In a one-on-one interview, two people meet alone, and one interviews the other by seeking oral
responses to oral inquiries. Employers tend to schedule these interviews sequentially. In a
sequential (or serial) interview, several persons interview the applicant, in sequence, one-on-one,
and then make their hiring decision. In an unstructured sequential interview, each interviewer
generally just asks questions as they come to mind. In a structured sequential interview, each
interviewer rates the candidates on a standard evaluation form, using standardized questions. The
hiring manager then reviews these ratings before deciding whom to hire.

Unstructured sequential Interview An interview in which each interviewer forms an independent opinion
after asking different questions

Structured sequential Interview An interview in which the applicant is interviewed sequentially by several
persons; each rates the applicant on a standard form.

Panel interview
A panel interview, also known as a board interview, is an interview conducted by a team of
interviewers (usually two to three), who together question each candidate and then combine their
ratings of each candidate’s answers into a final panel score. This contrasts with the one-on-one
interview (in which one interviewer meets one candidate) and a serial interview (in which several
interviewers assess a single candidate one-on-one, sequentially).

Panel Interview An interview in which a group of interviewers questions the applicant.

The panel format enables interviewers to ask follow-up questions, much as reporters do in press
conferences. This may elicit more meaningful responses than a series of one-on-one interviews.
On the other hand, some candidates find panel interviews more stressful, so they may actually
inhibit responses. An even more stressful variant is the mass interview (also referred to as group
interviewing or discussion-based interviewing), in which a panel interviews several candidates
simultaneously. The panel might pose a problem and then watch to see which candidate takes
the lead in formulating an answer.

Mass Interview A panel interviews several candidates simultaneously


Whether panel interviews are more or less reliable and valid than sequential interviews depends
on how the employer actually does the panel interview. For example, structured panel interviews
in which members use scoring sheets with descriptive scoring examples for sample answers are
more reliable and valid than those that don’t. Training panel interviewers may boost interview
reliability.

For better or worse, some employers use “speed dating” interviewing. One sent e-mails to all
applicants for an advertised position. Four hundred (of 800 applicants) showed up. Over several
hours, applicants first mingled with employees, and then (in a so-called “speed dating area”) had
one-on-one contacts with employees for a few minutes. Based on this, the recruiting team chose
68 candidates for follow-up interviews.

PHONE INTERVIEWS
Employers also conduct interviews via phone. Somewhat counterintuitively, these can actually be
more useful than face-to-face interviews for judging one’s conscientiousness, intelligence, and
interpersonal skills. Because they needn’t worry about appearance or handshakes, each party can
focus on answers. And perhaps candidates—somewhat surprised by an unplanned call from the
recruiter—give more spontaneous answers.

Avoiding Errors That Can Undermine an Interview’s Usefulness

First Impressions (Snap Judgments)

Probably the most widespread error is that interviewers tend to jump to conclusions—make snap
judgments—about candidates during the first few minutes of the interview (or even before the
interview starts, based on test scores or résumé data).

Not Clarifying What the Job Requires

Interviewers who don’t have an accurate picture of what the job entails and what sort of candidate
is best for it usually make their decisions based on incorrect impressions or stereotypes of what a
good applicant is. They then erroneously match interviewees with their incorrect stereotypes. You
should clarify what sorts of traits you’re looking for, and why, before starting the interview.
Candidate-Order (Contrast) Error and Pressure to Hire

Candidate-order (or contrast) error means that the order in which you see applicants affects how
you rate them.

Nonverbal Behavior and Impression Management

The applicant’s nonverbal behavior (smiling, avoiding your gaze, and so on) can also have a
surprisingly large impact on his or her rating.

Impression management

Clever candidates capitalize on that fact. One study found that some used ingratiation to persuade
interviewers to like them. For instance, the candidates praised the interviewers or appeared to agree
with their opinions, thus signaling they shared similar beliefs. Sensing that a perceived similarity
in attitudes may influence how the interviewer rates them, some interviewees try to emphasize (or
fabricate) such similarities. Others make self-promoting comments about their accomplishments.
Self-promotion means promoting one’s own skills and abilities to create the impression of
competence. Psychologists call using techniques like ingratiation and self-promotion “impression
management.” Self-promotion is an effective tactic, but faking or lying generally backfires.

Effect of Personal Characteristics: Attractiveness, Gender and Race

Unfortunately, physical attributes also distort assessments. For example, people usually ascribe
more favorable traits and more successful life outcomes to attractive people. Similarly, race can
play a role, depending on how you conduct the interview. In all cases, however, structured interviews
produce less of a difference between minority and white interviewees than unstructured ones do.
Interviewers’ reactions to various stereotypes are complex.

Interviewer Behavior
Finally, the interviewer’s behavior affects interviewee performance and rating. For example, some
interviewers inadvertently telegraph the expected answers, 68 as in: “This job involves a lot of
stress. You can handle that, can’t you?” Even subtle cues (like a smile or nod) can telegraph the
desired answer. Some interviewers talk so much that applicants have no time to answer questions.
At the other extreme, some interviewers let the applicant dominate the interview, and so don’t ask
all their questions. When interviewers have favorable pre-interview impressions of the applicant,
they tend to act more positively toward that person (smiling more, for instance). Other interviewers
play interrogator, forgetting that it’s uncivil to play “gotcha” by gleefully pouncing on
inconsistencies. Some interviewers play amateur psychologist, unprofessionally probing for
hidden meanings in what the applicant says. Others ask improper questions, forgetting that
discriminatory questions “had a significant negative effect on participant’s reactions to the
interview and interviewer.” Other interviewers just can’t conduct interviews.
