Strategic Human Resource Management

Published by Apeksha Kadam on Jan 20, 2011. Copyright: Attribution Non-commercial.

Q.1. Explain briefly what is meant by individual differences, and state their importance.

That people differ from each other is obvious. How and why they differ is less clear and is the subject of the study of individual differences (IDs). Although to study individual differences seems to be to study variance (how do people differ?), it is also to study central tendency (how well can a person be described in terms of an overall within-person average?). Indeed, perhaps the most important question of individual differences is whether people are more similar to themselves over time and across situations than they are to others, and whether the variation within a single person across time and situation is less than the variation between people. A related question is that of similarity, for people differ in their similarities to each other. Questions of whether particular groups (e.g., groupings by sex, culture, age, or ethnicity) are more similar within than between groups are also questions of individual differences.

Personality psychology addresses the questions of shared human nature, dimensions of individual differences, and unique patterns of individuals. Research in IDs ranges from analyses of genetic codes to the study of sexual, social, ethnic, and cultural differences, and includes research on cognitive abilities, interpersonal styles, and emotional reactivity. Methods range from laboratory experiments to longitudinal field studies and include data reduction techniques such as Factor Analysis and Principal Components Analysis, as well as Structural Modeling and Multi-Level Modeling procedures. The measurement issues of most importance are those of reliability and stability of individual differences. Research in individual differences addresses three broad questions: 1) developing an adequate descriptive taxonomy of how people differ; 2) applying differences in one situation to predict differences in other situations; and 3) testing theoretical explanations of the structure and dynamics of individual differences.

Taxonomies of Individual Differences
Taxonomic work has focused on categorizing the infinite ways in which individuals differ in terms of a limited number of latent or unobservable constructs. This is a multi-step, cyclical process of intuition, observation, deduction, induction, and verification that has gradually converged on a consensual descriptive organization of broad classes of variables, as well as on methods for analyzing them. Most of the measurement and taxonomic techniques used throughout the field have been developed in response to the demand for selection for schooling, training, and business applications.

Test Theory
Consider the case of differences in vocabulary in a particular language (e.g., English). Although it is logically possible to organize people in terms of the specific words they know, the more than 2^(500,000) possible response patterns that could be found by quizzing people on each of the more than 500,000 words in English introduces more complexity rather than less. Classical Test Theory (CTT) ignores individual response patterns and estimates an individual's total vocabulary size by measuring performance on small samples of words. Words are seen as random replicates of each other, and thus individual differences in total vocabulary size are estimated from observed differences on these smaller samples. The Pearson Product Moment Correlation Coefficient (r) compares the degree of covariance between these samples with the variance within samples. As the number of words sampled increases, the correlation of the individual differences within each sample with those in the total domain increases accordingly.
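This sampling logic can be sketched in a few lines of Python (an illustrative sketch, not from the source): `pearson_r` is the standard product moment formula, and the Spearman-Brown prophecy formula is the classical result describing how reliability rises when a word sample is lengthened by a factor k.

```python
import math

def pearson_r(x, y):
    """Pearson product moment correlation between two lists of scores."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

def spearman_brown(r, k):
    """Predicted reliability of a test lengthened by a factor of k,
    given the reliability r of the original (shorter) word sample."""
    return k * r / (1 + (k - 1) * r)
```

For example, if two short word samples correlate .50, a sample four times as long is predicted by `spearman_brown(0.5, 4)` to correlate about .80 with the total domain, which is the sense in which longer samples estimate total vocabulary more faithfully.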

Estimates of ability based upon Item Response Theory (IRT) take into account parameters of the words themselves (i.e., the difficulty and discriminability of each word) and estimate a single ability parameter for each individual. Although CTT and IRT estimates are highly correlated, CTT statistics are based on decomposing the sources of variance within and between individuals, while IRT statistics focus on the precision of an individual estimate without requiring differences between individuals. CTT estimates of reliability of ability measures are assessed across similar items (internal consistency), across alternate forms, and across different forms of assessment, as well as over time (stability). Tests are reliable to the extent that differences within individuals are small compared to those between individuals when generalizing across items, forms, or occasions; CTT reliability thus requires between-subject variability. IRT estimates, on the other hand, are concerned with the precision of measurement for a particular person in terms of a metric defined by item difficulty.

The test theory developed to account for sampling differences within domains can be generalized to account for differences between domains. Just as different samples of words will yield somewhat different estimates of vocabulary, different cognitive tasks (e.g., vocabulary and arithmetic performance) will yield different estimates of performance. Using multivariate procedures such as Principal Components Analysis or Factor Analysis, it is possible to decompose the total variation into between-domain covariance, within-domain covariance, and within-domain variance. One of the most replicable observations in the study of individual differences is that almost all tests thought to assess cognitive ability have a general factor (g) that is shared with other tests of ability.
That is, although each test has specific variance associated with content (e.g., linguistic, spatial), form of administration (e.g., auditory, visual), or operations involved (e.g., perceptual speed, memory storage, memory retrieval, abstract reasoning), there is general variance that is common to all tests of cognitive ability.

Personality and Ability
Although to some the term personality refers to all aspects of a person's individuality, typical usage divides the field into studies of ability and personality. Tests of ability are viewed as maximal performance measures: ability is construed as the best one can do on a particular measure in a limited time (speed test) or with unlimited time (power test). Personality measures are estimates of average performance and typically include reports of preferences and estimates of what one normally does and how one perceives oneself and is perceived by others.

The same procedures used to clarify the structure of cognitive abilities have been applied to the question of identifying the domains of personality. Many of the early and current personality inventories use self-descriptive questions (e.g., "Do you like to go to lively parties?"; "Are you sometimes nervous?") that are rationally or theoretically relevant to some domain of interest for a particular investigator. Although there is substantial consistency across inventories developed this way, some of this agreement could be due to conceptually overlapping item pools. Other researchers have advocated a lexical approach to the taxonomic problem, following the basic assumption that words in the natural language describe all important individual differences. This shifts the taxonomic question from how individuals are similar to and different from each other to how the words used to describe individuals (e.g., lively, talkative, nervous, anxious) are similar to and different from each other. Dimensional analyses of tests developed on lexical, rational, or theoretical bases suggest that a limited number (between three and seven) of higher-order trait domains adequately organize the thousands of words that describe individual differences and the logically infinite ways these words can be combined into self- or peer-report items. The broadest domains are those of introversion-extraversion and emotional stability-neuroticism, with the domains of agreeableness, conscientiousness, and intellectual openness (or culture) close behind. These domains can be seen as asking the questions one wants answered about a stranger or a potential mate: are they energetic and dominant (extraverted), emotionally stable (low neurotic), trustworthy (conscientious), loveable (agreeable), and interesting (intelligent and open)?
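To make the mechanics of self-report inventories concrete, here is a minimal, hypothetical scoring sketch. The 1-5 rating scale, the item keys, and the reverse-keying convention are illustrative assumptions, not taken from any specific published inventory.

```python
def score_scale(responses, keys, max_rating=5):
    """Average a set of item responses into one trait score.

    responses: ratings on a 1..max_rating scale (hypothetical items).
    keys: +1 for items keyed toward the trait (e.g., "talkative" for
    extraversion), -1 for reverse-keyed items (e.g., "quiet"), which
    are flipped before averaging.
    """
    assert len(responses) == len(keys)
    total = 0.0
    for r, k in zip(responses, keys):
        total += r if k == 1 else (max_rating + 1 - r)
    return total / len(responses)
```

For instance, ratings of 5 and 4 on two positively keyed extraversion items and 2 on a reverse-keyed item yield `score_scale([5, 4, 2], [1, 1, -1])`, i.e., (5 + 4 + 4) / 3.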
Measures of ability and personality reflect observations aggregated across time and occasion and require inferences about stable latent traits thought to account for the variety of observed behaviors. However, there are other individual differences that are readily apparent to outside observers and require little or no inference about latent traits. The most obvious of such variables include sex, age, height, and weight. Differences that require some knowledge and inference are differences in ethnicity and socioeconomic status. These obvious group differences are sometimes analyzed in terms of the more subtle measures of personality and ability or of real-life outcomes (e.g., sex differences in neuroticism, mathematics ability, or income).

Predictive Validity
Individual differences are important only to the extent that they make a difference. Does knowing that people differ on a trait X help in predicting the likelihood of their doing behavior Y? For many important outcome variables the answer is a resounding yes. In their review of 85 years of selection in personnel psychology, Frank Schmidt and John Hunter (Psychological Bulletin, 1998, 124, 262-274) show how differences in cognitive ability predict differences in job performance, with correlations averaging about .50 for mid-complexity jobs. These correlations are moderated by job complexity and are much higher for professional-managerial positions than they are for completely unskilled jobs. In terms of applications to personnel psychology, a superior manager (one standard deviation above the mean ability for managers) produces almost 50% more than an average manager. These relationships diminish as a function of years of experience and degree of training. General mental ability (g) also has substantial power in predicting non-job-related outcomes, such as likelihood of completing college, risk for divorce, and even risk for criminality.

The non-cognitive measures of individual differences also predict important real-life criteria. Extraversion is highly correlated with total sales in dollars among salespeople. Similarly, conscientiousness, when added to g, substantially increases the predictability of job performance. Although the size of the correlation is much lower, conscientiousness measured in adolescence predicts premature mortality over the next fifty years. Similarly, impulsivity can be used to predict traffic violations.

Sources of Individual Differences
The taxonomic and predictive studies of individual differences are descriptive organizations of thoughts, feelings, and behaviors that go together and of how they relate to other outcomes. But this categorization is descriptive rather than causal and is analogous to grouping rocks in terms of density and hardness rather than atomic or molecular structure. Causal theories of individual differences are being developed but are in a much earlier stage than are the descriptive taxonomies.

Descriptive taxonomies are used to organize the results of studies that examine the genetic bases of individual differences. The most common family configurations used are comparisons of identical (monozygotic) with fraternal (dizygotic) twins; additional designs include twins reared together or apart, and biological versus adoptive parents, children, and siblings. By applying structural modeling techniques to the variances and covariances associated with various family constellations, it is possible to decompose phenotypic trait variance into separate sources of genetic and environmental variance. Conclusions from behavioral genetics for most personality traits tend to be similar: across different designs, with different samples from different countries, roughly 40-60% of the phenotypic variance seems to be under genetic control, with only a very small part of the remaining environmental variance associated with shared family environmental effects. Additional results suggest that genetic sources of individual differences remain important across the lifespan. However, this should not be taken to mean that people do not change as they mature, but rather that the paths one takes through life are similar to those taken by genetically similar individuals.

Genes do not code for thoughts, feelings, or behavior but rather code for proteins that regulate and modulate biological systems. Although promising work has been done searching for the biological bases of individual differences, it is possible to sketch out these bases only in the broadest of terms. Specific neurotransmitters and brain structures can be associated with a broad class of approach behaviors and positive affects, while other neurotransmitters and structures can be associated with a similarly broad class of avoidance behaviors and negative affects. Subtle differences in neurotransmitter availability and re-uptake vary the sensitivity of individuals to cues about their environment that predict future resource availability and external rewards and punishments. It is the way these cues are detected, attended to, stored, and integrated with previous experiences that makes each individual unique. Reports relating specific alleles to specific personality traits emphasize that the broad personality traits are most likely under polygenic influence and are moderated by environmental experience. Current work on the bases of individual differences is concerned with understanding this delicate interplay of biological propensities with environmental opportunities and constraints as they are ultimately represented in an individual's information-processing system. With time we can expect to increase our taxonomic and predictive power by using these causal bio-social theories of individual differences.

The study of individual differences is essential because important variation between individuals can be masked by averaging. For example, suppose a researcher is interested in resting metabolic rate in humans. The researcher gathers a sample of men, women, and children, measures their metabolic rates, and gets a single average. The researcher then tells the whole population that they should be eating 1,900 calories a day. What's wrong with this study? The researcher has neglected individual differences in activity level, age, sex, body size, and other factors that influence metabolic rate, so his or her conclusions are misleading if not outright false. The average reported is masking multiple dimensions that should be used to determine daily caloric intake. This is an extreme example to make a point, but it illustrates the problems that can arise by averaging across groups.
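The twin-based decomposition described above can be illustrated with Falconer's classic formulas, a deliberate simplification of the structural-modeling approach the text mentions; the example correlations below are hypothetical, not data from the source.

```python
def falconer_decomposition(r_mz, r_dz):
    """Falconer's approximation for decomposing phenotypic variance.

    r_mz, r_dz: trait correlations for identical (monozygotic) and
    fraternal (dizygotic) twin pairs. Returns the proportions of
    variance attributed to genes, shared environment, and the rest.
    """
    h2 = 2 * (r_mz - r_dz)   # additive genetic variance (heritability)
    c2 = 2 * r_dz - r_mz     # shared family environment
    e2 = 1 - r_mz            # unique environment plus measurement error
    return h2, c2, e2
```

With hypothetical twin correlations of .50 (MZ) and .25 (DZ), the sketch yields a heritability of .50 and essentially no shared-environment effect, the same qualitative pattern the text reports for most personality traits.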

Q.2. What are group ability tests? How different are they from individual tests?

Formal identification of a child with very high intellectual abilities often involves having the child take some sort of ability test. Various definitions of the term "intelligence" exist, and different intelligence tests hit upon different components of intelligence; what abilities are measured varies somewhat from test to test. One means of validating an intelligence test involves correlating the scores obtained by individuals on the new test with scores obtained by those same individuals on an older, valid intelligence test. Individually administered intelligence (IQ) tests are the gold standard for identifying high ability, but school districts usually utilize group ability tests in lieu of individually administered IQ tests in identifying gifted children due to cost constraints.

Group ability tests, such as the CogAT or OLSAT, can work as screening tools to give us an approximation of an individual's intelligence, but they have their weaknesses. While group tests do correlate with IQ scores for many people, they are not intelligence tests. The publisher of the CogAT states that it is designed to test developed abilities, not innate abilities or intelligence. They do not give us an IQ score, though they do produce a composite, such as a "Standard Age Score" or SAS, that looks a lot like an IQ number. Carolyn K., owner of Hoagies' Gifted Education Page, notes that "while an average child will score very similarly on a group test and an individual IQ test, a gifted child may not score similarly at all."

Group tests have some additional drawbacks for the gifted. Both the CogAT (after grade 2) and the OLSAT are timed tests, much like older versions of the SAT; gifted children who are not fast processors are at a distinct disadvantage here. Group tests are also multiple choice and do not allow the test administrator to ask the child to "tell me more." Gifted children who are divergent thinkers are disadvantaged relative to more convergent-thinking children; divergent thinking can be an indicator of gifted cognition, yet it is a disadvantage on these tests.

Additionally, none of the widely used IQ tests was designed to distinguish well between degrees of giftedness. Even on an IQ test, distinguishing between a moderately gifted child (approximately 98th percentile composite score) and a highly gifted child (approximately 99th percentile composite score) can be fraught with uncertainty. Some experts do not consider intelligence scores obtained by young children to be fully stable; reports on ability and achievement score fluctuation in elementary school children illustrate how these scores can vary over time and how it isn't always prudent to assume that one high score indicates permanent superiority. Finally, it is important to remember that the test is only as good as the test administrator: poor administration can artificially inflate or deflate a score, and an individual will rarely score identically on two different ability tests or even on two separate administrations of the same test.

With these points in mind, it may be more helpful to look at other indicators of giftedness, despite some of the scores, to attempt to ascertain how gifted the child is. The greater body of evidence over time and truly getting to know the child may give us more insight into which scores stand up to the test of time. The overall message is that identifying gifted children isn't as exact a science as we might hope, and parents and educators shouldn't use one score or one subtest of an ability test to rule in or rule out giftedness in a child.

Source: "Using group ability tests and individual ability tests for identifying gifted children," Fort Collins gifted education, Examiner.com: http://www.examiner.com/gifted-education-in-fortcollins/using-group-ability-tests-and-individual-ability-tests-for-identifying-gifted-children
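The percentile cutoffs mentioned above can be made concrete using the conventional deviation-IQ scaling (mean 100, standard deviation 15). Those parameters are the usual convention, assumed here for illustration rather than stated in the article.

```python
from statistics import NormalDist

# Conventional deviation-IQ scaling: mean 100, standard deviation 15.
iq = NormalDist(mu=100, sigma=15)

def iq_percentile(score):
    """Percentile rank implied by an IQ-style composite score."""
    return 100 * iq.cdf(score)
```

Under this scaling, a composite of 130 sits near the 98th percentile and 135 near the 99th, so only a few raw points separate a "moderately gifted" from a "highly gifted" classification, which is exactly why such distinctions are fraught with uncertainty.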

Q.3. What are criterion-referenced tests?

Definition: A criterion-referenced test (CRT) is a test in which questions are written according to specific predetermined criteria. CRTs are used to evaluate an individual's comprehension and skills with regard to a specific subject. The test is comprised of questions involving the predetermined criteria and reports the individual's score in relation to a designated point; the CRT is based solely on an individual's performance and not in reference to the performance of others. This is what makes criterion-referenced testing different from norm-referenced tests (NRTs).

1. History: Criterion-referenced testing gets around a lot of the confusion that comes with norm-referenced tests. A student's grade on an NRT is attained in relation to the performance of a sizable collection of comparable students who were given the same test; the norm group could be made up of students from different classes, schools, and grade levels, and the percentile a student receives is a rank within that group. For example, if a student took a norm-referenced test in English and scored in the 75th percentile, her skill set would be assessed according to the norm group; if he scored at the 58th percentile, it would mean that he performed the same as or better than 58 percent of the students in the norm group. The results would only be conclusive after the norm group was established, and their meaning would depend on whether or not the NRT was consistent with what the student was taught in the curriculum. Since an NRT is not based directly on the curriculum the students followed, and because they don't know ahead of time exactly what will be on the test, some of the contents may be unfamiliar to them. This makes it harder to determine the competency of an individual student, seeing as the legitimacy of the score will vary: the judgment is subjective as opposed to objective.

2. Significance: Criterion-referenced tests are administered to show whether a student has mastered the information taught in a particular topic or grade. In the education arena, CRTs are given by teachers to establish how well their students have learned the material and skills that were taught in class. After a student completes a CRT, the teacher can immediately interpret the results; via this method, instructors can judge the students' strong areas and those that need work, and it may also give them insight into how they can teach more effectively.

3. Features: The tests are created from predetermined criteria, which can be determined by a school, city, state, government, or independent organization. The questions on the test are directly correlated with the class's overall objectives, and the designated point or "cut-off" score is determined in advance. Prior to taking a CRT, the students are familiar with what they are responsible for knowing: a student knows what the standards are for passing and only competes against him or herself while completing the test. For example, a 5th-grade English CRT might include questions on grammar, vocabulary, sentence structure, and reading comprehension; all students who took that 5th-grade English class should be able to pass the test if they were taught effectively and absorbed the content. There would not be any material that was unfamiliar or that was not gone over in the class.
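The contrast between the two scoring philosophies can be sketched in a few lines of Python (a hypothetical illustration; the scores, cutoff, and norm group below are invented for the example):

```python
def crt_result(score, cutoff):
    """Criterion-referenced: pass/fail against a predetermined cut-off,
    regardless of how anyone else performed."""
    return "pass" if score >= cutoff else "fail"

def nrt_percentile(score, norm_group_scores):
    """Norm-referenced: percent of the norm group scoring at or below
    this student; the same score ranks differently in different groups."""
    at_or_below = sum(1 for s in norm_group_scores if s <= score)
    return 100 * at_or_below / len(norm_group_scores)
```

With a cutoff of 70, a score of 72 simply passes; against a hypothetical norm group of [50, 60, 70, 72, 90], the same 72 ranks at the 80th percentile, and that rank would change if the norm group changed.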

4. Considerations: The influence of test scores, particularly those that carry vital stakes (like entrance exams), has remained a topic of debate for quite some time. A number of educational institutions place enormous weight on the results of such scores, while others don't consider standardized tests to be an accurate measure of one's overall standing and capability. There are many cases where test standards have been criticized because of their inconsistencies or limitations. Still, test standards imposed by states tend to have a strong impact on the core curriculum and teaching at the local level. The results of a CRT, on the other hand, reflect a solid criterion that is relevant to the person taking the test.

5. Theories/Speculation: The Education Policy Analysis Archives is a peer-reviewed scholarly journal that contains invaluable information and data about issues such as these. Of particular note is the article "Educational Assessment Reassessed: The Usefulness of Standardized and Alternative Measures of Student Achievement as Indicators for the Assessment of Educational Outcomes," which can be viewed at epaa.asu.edu/epaa/v3n6.html. Supplementary information can be found on the website of Practical Home Schooling Magazine at home-school.com.

Q.4. What is validity and reliability? Explain the difference between them.

Reliability

Definition: Reliability is the consistency of your measurement, or the degree to which an instrument measures the same way each time it is used under the same conditions with the same subjects. In short, it is the repeatability of your measurement. A measure is considered reliable if a person's score on the same test given twice is similar. It is important to remember that reliability is not measured; it is estimated. There are two ways that reliability is usually estimated: test/retest and internal consistency.

Test/Retest: Test/retest is the more conservative method to estimate reliability. Simply put, the idea behind test/retest is that you should get the same score on test 1 as you do on test 2. The three main components of this method are as follows: 1) implement your measurement instrument at two separate times for each subject; 2) compute the correlation between the two separate measurements; and 3) assume there is no change in the underlying condition (or trait you are trying to measure) between test 1 and test 2.
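A reliability estimate also tells you how precise any single score is, via the classical-test-theory standard error of measurement, SEM = SD x sqrt(1 - r). This formula is standard, but the SD of 15 and reliability of .91 below are illustrative values, not figures from the source.

```python
import math

def sem(sd, reliability):
    """Standard error of measurement implied by a reliability estimate.

    sd: standard deviation of observed scores on the scale.
    reliability: an estimate such as a test/retest correlation.
    """
    return sd * math.sqrt(1 - reliability)
```

For an IQ-style scale (SD 15) with a test/retest reliability of .91, the SEM is about 4.5 points, so two administrations of the same test can easily differ by several points without any real change in the trait.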

Internal Consistency: Internal consistency estimates reliability by grouping questions in a questionnaire that measure the same concept. For example, you could write two sets of three questions that measure the same concept (say, class participation) and, after collecting the responses, run a correlation between those two groups of three questions to determine whether your instrument is reliably measuring that concept. One common way of computing correlation values among the questions on your instrument is Cronbach's Alpha. Cronbach's alpha splits all the questions on your instrument every possible way and computes correlation values for them all (we use a computer program for this part). In the end, your computer output generates one number for Cronbach's alpha, and, just like a correlation coefficient, the closer it is to one, the higher the reliability estimate of your instrument. Cronbach's alpha is a less conservative estimate of reliability than test/retest. The primary difference between test/retest and internal consistency estimates of reliability is that test/retest involves two administrations of the measurement instrument, whereas the internal consistency method involves only one administration of that instrument.
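Cronbach's alpha can be computed directly from its standard formula, alpha = k/(k-1) x (1 - sum of item variances / variance of totals). The sketch below is a minimal pure-Python illustration; in practice a statistics package would be used.

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a questionnaire.

    item_scores: one list per item, each holding the scores of the
    same respondents in the same order.
    """
    k = len(item_scores)
    n = len(item_scores[0])
    # Total score for each respondent across all items.
    totals = [sum(item[i] for item in item_scores) for i in range(n)]
    sum_item_var = sum(pvariance(item) for item in item_scores)
    return (k / (k - 1)) * (1 - sum_item_var / pvariance(totals))
```

Two items that rank respondents identically give an alpha of 1.0; as items agree less about who scores high and who scores low, alpha falls toward zero.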

Construct validity is the hardest to understand in my opinion. External validity refers to our ability to generalize the results of our study to other settings. all of these threats can be greatly reduced by adding a control group that is comparable to your program group to your study. not the stricter attendance policy.increased class participation . could we generalize our results to other classrooms? Threats To Internal Validity There are three main types of threats to internal validity .1. In our earlier example. and did our measured outcome . A Maturation Threat to internal validity occurs when standard events over the course of time cause your outcome. we are trying to generalize our conceptualized treatment and outcomes to broader constructs of the same concepts. in our example. Conclusion validity asks is there a relationship between the program and the observed outcome? Or. is it a causal relationship? For example. Internal Validity asks if there is a relationship between the program and the outcome we saw. A History Threat occurs when an historical event affects your program group such that it causes the outcome you observe (rather than your treatment being the cause). the expulsion of several students due to low participation from school impacted your program group such that they increased their participation as a result. 4. but rather. this would mean that the stricter attendance policy did not cause an increase in class participation. In our example.reflect the construct of participation? Overall. the students who participated in your study on class participation all "grew up" naturally and realized that class participation increased their learning (how likely is that?) . . multiple group and social interaction threats. Thus. if by chance.single group. did our treatment (attendance policy) reflect the construct of attendance. is there a connection between the attendance policy and the increased participation we saw? 2. 
Single Group Threats apply when you are studying a single group receiving a program or treatment. For example.that could be the cause of your increased participation. It asks if there is there a relationship between how I operationalized my concepts in this study to the actual causal relationship I'm trying to study/? Or in our example. did the attendance policy cause class participation to increase? 3.

A Testing Threat to internal validity occurs when the act of taking a pretest affects how that group does on the posttest. For example, if you measured class participation prior to implementing your new attendance policy, students became forewarned that there was about to be an emphasis on participation, and they may have increased it simply as a result of involvement in the pretest measure, not your treatment. Your outcome could thus be the result of a testing threat. Because this is a common occurrence, it is easily remedied through either the inclusion of a control group or a carefully designed research plan (discussed later).

An Instrumentation Threat to internal validity could occur if the apparent effect of increased participation is really due to the way in which the pretest was implemented.

A Mortality Threat to internal validity occurs when subjects drop out of your study. For example, if as a result of a stricter attendance policy most students drop out of a class, leaving only the more serious students (those who would participate at a high level naturally), your effect is overestimated and suffers from a mortality threat.

The last single group threat to internal validity is a Regression Threat. This is the most intimidating of them all (its name alone makes one panic), but don't. Simply put, a regression threat means that there is a tendency for the sample (those students you study, for example) to score closer to the average (or mean) of the larger population from the pretest to the posttest, and this leads to an inflated measure of your effect. It is a common phenomenon and will happen between almost any two variables of which you take two measures. For a great discussion of regression threats, go to Bill Trochim's Center for Social Research Methods.

In sum, these single group threats must be addressed in your research for it to remain credible. One primary way to accomplish this is to include a control group comparable to your program group. This, however, does not solve all our problems, as I'll now highlight with the multiple group threats to internal validity.

Multiple Group Threats to internal validity involve the comparability of the two groups in your study, and whether or not any factor other than your treatment causes the outcome. They also (conveniently) mirror the single group threats:

A Selection-History threat occurs when an event occurring between the pre and post test affects the two groups differently. A Selection-Maturation threat occurs when there are different rates of growth between the two groups between the pre and post test. A Selection-Testing threat is the result of a different effect from taking tests between the two groups. A Selection-Instrumentation threat occurs when the test implementation affects the groups differently between the pre and post test. A Selection-Mortality Threat occurs when there are different rates of dropout between the groups, which leads to you detecting an effect that may not actually occur. Finally, a Selection-Regression threat occurs when the two groups regress towards the mean at different rates.

How do we minimize these threats without going insane in the process? The best advice I've been given is to use two groups when possible, and if you do, make sure they are as comparable as is humanly possible. Whether you conduct a randomized experiment or a non-random study, YOUR GROUPS MUST BE AS EQUIVALENT AS POSSIBLE! This is the best way to strengthen the internal validity of your research.

The last type of threat to discuss involves the social pressures in the research context that can impact your results. These are known as social interaction threats to internal validity.

Diffusion or "Imitation of Treatment" occurs when the comparison group learns about the program group and imitates them, which leads to an equalization of outcomes between the groups (you will not see an effect as easily).

Compensatory Rivalry means that the comparison group develops a competitive attitude towards the program group, and this also makes it harder to detect an effect due to your treatment rather than to the comparison group's reaction to the program group.

Resentful Demoralization is a threat to internal validity that exaggerates the posttest differences between the two groups. This is because the comparison group, upon learning of the program group, gets discouraged and no longer tries to achieve on its own.

Compensatory Equalization of Treatment is the only threat that is a result of the actions of the research staff: it occurs when the staff begin to compensate the comparison group to be "fair" in their opinion, which leads to an equalization of outcomes between the groups (you will not see an effect as easily).
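A regression threat is easy to demonstrate numerically. The following simulation is an illustration I am adding (the population mean, spread, and noise level are arbitrary assumptions): students selected for extreme pretest scores drift back toward the population mean on the posttest even though no treatment at all is applied.

```python
import random

# Regression-to-the-mean demo: observed score = stable ability + noise.
random.seed(42)

def noisy_score(true_ability: float) -> float:
    """One observed measurement of a stable underlying ability."""
    return true_ability + random.gauss(0, 10)

abilities = [random.gauss(50, 10) for _ in range(10_000)]
pretest = [noisy_score(a) for a in abilities]
posttest = [noisy_score(a) for a in abilities]

# Select the lowest-scoring 10% on the pretest (an "at-risk" group).
cutoff = sorted(pretest)[len(pretest) // 10]
low_group = [i for i, s in enumerate(pretest) if s <= cutoff]

pre_mean = sum(pretest[i] for i in low_group) / len(low_group)
post_mean = sum(posttest[i] for i in low_group) / len(low_group)

# The selected group "improves" with no intervention whatsoever:
# pure regression toward the population mean of 50.
print(round(pre_mean, 1), round(post_mean, 1))
```

Any real gain from a treatment given to such a group would ride on top of this artifact, which is why a comparable control group is needed to separate the two.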

Threats to Construct Validity

I know, you're thinking, "no, I just can't go on." Don't be intimidated by the lengthy academic names; I'll provide an English translation. Let's take a deep breath, and I'll remind you what construct validity is; then we'll look at the threats to it one at a time. Construct validity is the degree to which inferences we have made from our study can be generalized to the concepts underlying our program in the first place. For example, if we are measuring self-esteem as an outcome, can our definition (operationalization) of that term in our study be generalized to the rest of the world's concept of self-esteem? OK? OK, let's address the threats to construct validity slowly.

Inadequate Preoperational Explication of Constructs simply means we did not define our concepts very well before we measured them or implemented our treatment. The solution? Define your concepts well before proceeding to the measurement phase of your study.

Mono-operation bias simply means we only used one version of our independent variable (our program or treatment) in our study, which limits the breadth of our study's results. The solution? Try to implement multiple versions of your program to increase your study's utility.

Mono-method bias, simply put, means that you only used one measure or observation of an important concept, which in the end reduces the evidence that your measure is a valid one. The solution? Implement multiple measures of key concepts and do pilot studies to try to demonstrate that your measures are valid.

Interaction of Different Treatments means that it was a combination of our treatment and other things that brought about the effect, so your outcome is really not due solely to the program. For example, if you were studying the ability of Tylenol to reduce headaches and in actuality it was a combination of Tylenol and Advil, or Tylenol and exercise, that reduced headaches, you would have an interaction of different treatments threatening your construct validity. The solution? Label your treatment accurately.

Interaction of Testing and Treatment occurs when the testing, in combination with the treatment, produces an effect. Thus you have inadequately defined your "treatment," as testing becomes part of it due to its influence on the outcome.

Restricted Generalizability Across Constructs, simply put, means that there were some unanticipated effects from your program that may make it difficult to say your program was effective.

Confounding Constructs occurs when you are unable to detect an effect from your program because you may have mislabeled your constructs or because the level of your treatment wasn't enough to cause an effect.

As with internal validity, there are a few social threats to construct validity also. These include:

1. Hypothesis Guessing: when participants base their behavior on what they think your study is about, so your outcome is really due not solely to the program but also to the participants' reaction to you and your study.
2. Evaluator Apprehension: when participants are fearful of your study to the point that it influences the treatment effect you detect.
3. Experimenter Expectancies: when researcher reactions shape the participants' responses, so you mislabel the treatment effect you see as due to the program when it is more likely due to the researcher's behavior.

See, that wasn't so bad. We broke things down and attacked them one at a time.

Summary

The real difference between reliability and validity is mostly a matter of definition. Reliability estimates the consistency of your measurement, or, more simply, the degree to which an instrument measures the same way each time it is used under the same conditions with the same subjects. Validity, on the other hand, involves the degree to which you are measuring what you are supposed to: more simply, the accuracy of your measurement. It is my belief that validity is more important than reliability, because if an instrument does not accurately measure what it is supposed to, there is no reason to use it even if it measures consistently (reliably).

You may be wondering why I haven't given you a long list of threats to conclusion and external validity. The simple answer is that the more critical threats involve internal and construct validity, and the means by which we improve conclusion and external validity will be highlighted in the section on Strengthening Your Analysis.

Q.5. What do you mean by Assessment Centers?

Description
The Assessment Center is an approach to selection whereby a battery of tests and exercises is administered to a person or a group of people across a number of hours (usually within a single day). Assessment centers are particularly useful where:
- Required skills are complex and cannot easily be assessed with an interview or simple tests.
- Required skills include significant interpersonal elements (e.g. management roles).
- Multiple candidates are available and it is acceptable for them to interact with one another.

Group exercises
Group exercises test how people interact in a group. These can be used to assess such skills as negotiation, persuasion, teamwork, leadership, communication and interpersonal skills. Leaderless group discussions (often of a group of candidates) start with everyone in a relatively equal position (although this may be affected by such factors as the shape of the table), being observed (as with other exercises) by the assessor(s), for example showing in practice the Belbin Team Roles that they take. A typical variant is to assign roles to each candidate and give them a brief of which the others are unaware. Another variant is simply to give the group a topic to discuss (this has less face validity; relevant topics increase face validity). Business simulations may also be used, sometimes with computers being used to add information and determine the outcomes of decisions. These often work in 'turns' made up of data given to the group, followed by a discussion and a decision which is entered into the computer to give the results for the next round.

Individual exercises
Individual exercises provide information on how the person works by themselves. The classic exercise is the in-tray, of which there are many variants, but which have a common theme of giving the person an unstructured large pile of work and then seeing how they go about doing it. Other variants include planning exercises (here are problems; how will you address them?) and case analysis (here's a scenario; what's wrong? how would you fix it?). Individual exercises (and especially the in-tray) are very common and have a correlation with cognitive ability, as well as other job-related knowledge and skills.

One-to-one exercises
In one-to-one exercises, the candidate interacts in various ways with another person. In role-play exercises, the person takes on a role (possibly the job being applied for) and interacts with someone who is acting (possibly one of the assessors) in a defined scenario. This may range from dealing with a disaffected employee, to putting a persuasive argument, to conducting a fact-finding interview. Other exercises may have elements of role-play but are in more 'normal' positions, such as making a presentation or doing an interview (an interesting reversal!). These are often used to assess listening, decision-making, and planning and organization.

Development
Developing assessment centers involves much test development, although much can be selected 'off the shelf'.

Identify criteria
Identify the criteria by which you will assess the candidates, on whose basis candidates will be rejected and selected. Derive these from a sound job analysis. Keep the number of criteria low (fewer than six is good) in order to help assessors remember and focus. This also helps simplify the final judgment process.

Develop exercises
Make exercises as realistic as possible. This will help both candidates and assessors, and will give a good idea of what the candidate is like in real situations. Design the exercises around the criteria so the criteria can be identified, rather than finding a nice exercise and seeing if you can spot any useful criteria in it. Allow for both confirmation and disconfirmation of criteria. Triangulate results across multiple exercises so each exercise supports the others, showing different facets of the person and their behavior against the criteria. Include clear guidelines for players so they can get 'into' the exercises as easily as possible. Include guidelines also for role-players, assessors, and those who will set up the exercises (e.g. how to set them up ready for use, what parts to include in exercise packs, etc.).

Self-assessment exercises
A neat trick is to ask candidates to assess themselves, for example by asking them to rate themselves after each exercise. There is usually a high correlation between candidate and assessor ratings (indicating honesty). Those with low self-assessment accuracy are likely to find behavioral modification and adaptation difficult (perhaps because they have low emotional intelligence). Ways of improving these exercises include:
- Increasing the length of the assessment form to include behavioral dimensions based on selection competencies
- Changing instructions to promote a more realistic appraisal by applicants of their skills
- Implying that the candidate will be held accountable if a discrepancy is found between their ratings and the assessors'

Select assessors
Select assessors based on their ability to make effective judgments. Gender is not an important factor, but age and rank are. Studies (Bass, 1954) have shown high inter-rater reliability (.82) and test-retest results (.72). There are two approaches to selecting assessors: you can use a small pool of assessors who become better at the job, or you can use many people to help diffuse acceptance of the candidates and the selection method. Do use assessors who are aware of organizational norms and values (this militates against using external assessors), but do also include specialists, e.g. organizational psychologists (who may well be external, unless you are in a large company). A key area of preparation is with assessors: you should be assessing candidates on the exercise, not on the assessors' memory.
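The inter-rater figure cited above (.82 from Bass, 1954) is a correlation between assessors' ratings. As a minimal sketch, assuming two assessors rate the same eight candidates on a 1-5 scale (the ratings below are hypothetical, not data from the text), the coefficient can be computed as a Pearson correlation:

```python
# Inter-rater reliability as the Pearson correlation between two raters.

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length rating lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical ratings of the same eight candidates by two assessors.
rater_a = [4, 3, 5, 2, 4, 3, 5, 1]
rater_b = [5, 3, 4, 2, 4, 2, 5, 2]

r = pearson(rater_a, rater_b)
print(round(r, 2))  # -> 0.85, i.e. the raters largely agree
```

A test-retest coefficient like the .72 cited is computed the same way, only correlating the same rater's (or instrument's) scores across two administrations rather than two raters.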

Develop tools for assessors
Asking assessors to make unaided personal judgments is likely to result in bias. Tools can be developed to help them score candidates accurately and consistently. Include behavioral checklists (lists of behaviors that display criteria) and behavioral coding that uses prepared data-gathering sheets (this standardizes data between gatherers). Traditional assessment has a process of observe, record, classify, evaluate. Schema-based assessment instead works from examples of poor, average and good behavior (there is no separation of evaluation and observation).

Prepare assessors and others
Ensure the people who will be assessing are ready beforehand. Two days of training are better than one; the assessment center should not be a learning exercise for assessors. Include the theory of social information processing, social cognition and decision-making theory, interpersonal judgment, role-playing, etc. Organizational psychologists can be of particular value in assessing and identifying the subtler aspects of behavior. Make assessors responsible for giving feedback to candidates and accountable to the organization for their decisions. This encourages them to be careful with their assessments.

Run the assessment center
If you have planned everything well, it will go well. Things to remember include:
- Directions to the center sent well beforehand, including by road, rail and air.
- A timetable for everyone that runs on time.
- A welcome for candidates, with refreshments and a waiting area between exercises.
- A focus with assessors on criteria, and swift and smooth correction of assessors who are not using the criteria.
- Capturing feedback from assessors immediately after sessions.
- Finishing the exercises in time for the assessors to do the final scoring/discussion session.
- Lunch! Coffee breaks!
- Thanks to everyone involved.

Follow-up
After the center, follow up with candidates and assessors as appropriate. A good practice is to give helpful feedback to candidates who are unsuccessful so they can understand their strengths and weaknesses.

Discussion
Assessment centers are not cheap to put on and require multiple assessors who must be available. They allow a wide range of criteria to be assessed, including group activity and aggregations of higher-level managerial competences. Assessment centers allow assessment of potential skill and so are good when seeking new recruits. There is a lower adverse effect on individuals than with separate tests (e.g. psychometric tests).

Assessment centers have grown hugely in popularity. In 1973 only about 7% of companies were using them. By the mid-1980s this had grown to 20%, and by the end of the 1990s it had leapt again to 65%.

Origins
The assessment center was originated by AT&T, who included the following nine components:
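As an illustration of the kind of scoring tool described above (the exercise names, criteria, and 1-5 rating scale here are hypothetical assumptions, not from the text), ratings recorded per exercise and criterion can be aggregated mechanically, so the final score per criterion triangulates across exercises rather than resting on one assessor's impression:

```python
from collections import defaultdict

# (exercise, criterion) -> the assessors' ratings on a 1-5 behavioral checklist.
ratings = {
    ("in_tray", "planning"): [4, 5],
    ("group_discussion", "planning"): [3, 4],
    ("role_play", "persuasion"): [5, 4],
    ("group_discussion", "persuasion"): [4, 4],
}

def criterion_scores(ratings):
    """Average each criterion over every exercise and assessor that rated it."""
    pooled = defaultdict(list)
    for (exercise, criterion), scores in ratings.items():
        pooled[criterion].extend(scores)
    return {c: sum(s) / len(s) for c, s in pooled.items()}

scores = criterion_scores(ratings)
print(scores)  # each criterion averaged across exercises and assessors
```

Keeping the aggregation rule explicit like this is one way to counter the criticism, noted below, that assessors tend to collapse multiple criteria into a single generic "performance" judgment.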

1. Business game
2. Leaderless group discussion
3. In-tray exercise
4. Two-hour interview
5. Projective test
6. Personality test
7. 'Q-sort'
8. Intelligence tests
9. Autobiographical essay and questionnaire

Validity
Establishing reliability and validity is difficult, as there are so many parts and so much variation. A 1966 study showed high validity in identifying middle managers.

Criticisms
The outcomes of assessment centers are based on the judgments of the assessors, and hence on the quality of those judgments. Not only are judgments subject to human bias, but they are also affected by the group-psychology effects of assessors interacting. Assessors often deviate from marking schemes, often collapsing multiple criteria into a generic 'performance' criterion. This is often due to overburdening assessors with more than 4-5 criteria (so use fewer). Assessors even use their own private criteria, especially organizational fit. More attention is often given to direct observation than to other data (e.g. psychometrics).

Q.6. What is Thomas Profiling? Explain its application in various fields.

Thomas International was started in the UK in 1981 with the aim of bringing to the fore the Personal Profile Analysis (PPA), a system in human resources development created by Prof. Tom Hendrickson of the USA, based on William Marston's theory of "Emotions of Normal People". The Human Matrix offers a range of Thomas Management tools which are administered by Yogesh Pahuja, who is a certified Thomas Profiler and also a postgraduate in HR from XLRI. Across the world, more than 30,000 clients, including 300 of the Fortune 500 companies, are using the following Thomas instruments and systems in 51 countries.

1.0 How can Thomas Systems help your organization?
Thomas International Management Systems take the guesswork out of making decisions that involve people. They help you make the right choices in the development and selection of staff. The system is highly effective in the following key areas:

- Recruitment and development of employees (if carried out with HJA - Human Job Analysis)
- Identifying individual management styles
- Assessing jobholder and job compatibility
- Identifying achievable training needs
- Identifying personal stress and/or job dissonance
- Self-evaluation and improvement

2.0 Thomas International - Instrument - PPA - Personal Profile Analysis

Profiling People
The Personal Profile Analysis (PPA) is used to capture and ascertain a person's behavior in the workplace, that is to say, the unique behavioral characteristics that define a person. The PPA is a workplace-related behavioral inventory. The person is asked to fill in the PPA form: thinking of himself at the workplace, he chooses the words that describe him most and least. The form is then analyzed by the Thomas Key software, which provides different reports with information about a person's behavioral makeup and working style: the preferred style of the person, motivators, fears, strengths and limitations, value to the organization, expected behavior under pressure conditions, behavioral modifications at the workplace, extent of fit with the job, etc.

The reports describe a person on three different aspects:
- Self Image: Describes an individual's inherent or core behavior. This indicates the basic behavioral orientation of a person and also gives insights into what motivates that person and the presence of stress/frustrations traced to their possible causes.
- Work Mask: Describes an individual's behavior at the workplace and the modifications he is making as compared to his self-image.
- Pressure Profile: Describes how an individual is expected to behave under pressure situations.

Q.7. What are the various intelligence tests?

Q.8. How do you validate selection tests?

Validity is a measure of the effectiveness of a given approach, relative to a specific purpose. A selection process is not valid on its own; rather, a selection process is valid if it helps you increase the chances of hiring the right person for the job. It is possible to evaluate hiring decisions in terms of such valued outcomes as high picking speed, low absenteeism, or a good safety record. A critical component of validity is reliability. Validity embodies not only what positive outcomes a selection approach may predict, but also how consistently (i.e., reliably) it does so. For example, a test that effectively predicts the work quality of strawberry pickers may be useless in the selection of a capable crew foreman. In this chapter we will (1) review ways of improving the consistency or reliability of the selection process, (2) discuss two methods for measuring validity, and (3) present two cases that illustrate these methods. First, however, let's consider a legal issue that is closely connected to validity: employment discrimination.

Q.9. What are projective tests? How would you apply them to corporate life?

In psychology, a projective test is a type of personality test in which the individual offers responses to ambiguous scenes, words or images. This type of test emerged from the psychoanalytic school of thought, which suggested that people have unconscious thoughts or urges. Projective tests were intended to uncover such unconscious desires that are hidden from conscious awareness.

How Do Projective Tests Work?

In many projective tests, the participant is shown an ambiguous image and then asked to give the first response that comes to mind. The key to projective tests is the ambiguity of the stimuli. According to the theory behind such tests, clearly defined questions result in answers that are carefully crafted by the conscious mind; by providing the participant with a question or stimulus that is not clear, the underlying and unconscious motivations or attitudes are revealed.

Types of Projective Tests
There are a number of different types of projective tests. The following are just a few examples of some of the best known.

The Rorschach Inkblot Test
The Rorschach Inkblot Test was one of the first projective tests, and it continues to be one of the best known. Developed by Swiss psychiatrist Hermann Rorschach in 1921, the test consists of 10 different cards that depict an ambiguous inkblot. The participant is shown one card at a time and asked to describe what he or she sees in the image. The responses are recorded verbatim by the tester; gestures, tone of voice, and other reactions are also noted. The results of the test can vary depending on which scoring system the examiner uses, of which many different systems exist.

The Thematic Apperception Test (TAT)
In the Thematic Apperception Test, an individual is asked to look at a series of ambiguous scenes. The participant is then asked to tell a story describing each scene, including what is happening, how the characters are feeling and how the story will end. The examiner then scores the test based on the needs, motivations and anxieties of the main character, as well as how the story eventually turns out.

Strengths and Weaknesses of Projective Tests
Projective tests are most frequently used in therapeutic settings. In many cases, therapists use these tests to learn qualitative information about a client. Some therapists may use projective tests as a sort of icebreaker to encourage the client to discuss issues or examine thoughts and emotions.

While projective tests have some benefits, they also have a number of weaknesses and limitations. In particular, projective tests lack both validity and reliability. Validity refers to whether or not a test is measuring what it purports to measure, while reliability refers to the consistency of the test results. Scoring projective tests is also highly subjective, so interpretations of answers can vary dramatically from one examiner to the next. Additionally, the respondent's answers can be heavily influenced by the examiner's attitudes or the test setting.

Q.10. What are the types of scales?

The scale of a chart is the ratio of a given distance on the chart to the actual distance which it represents on the earth. It may be expressed in various ways. The most common are:

1. A simple ratio or fraction, known as the representative fraction. For example, 1:80,000 or 1/80,000 means that one unit (such as a meter) on the chart represents 80,000 of the same unit on the surface of the earth. This scale is sometimes called the natural or fractional scale.

2. A statement that a given distance on the earth equals a given measure on the chart, or vice versa. For example, "30 miles to the inch" means that 1 inch on the chart represents 30 miles of the earth's surface. Similarly, "2 inches to a mile" indicates that 2 inches on the chart represent 1 mile on the earth. This is sometimes called the numerical scale.

3. A line or bar called a graphic scale may be drawn at a convenient place on the chart and subdivided into nautical miles, meters, etc. On most nautical charts the east and west borders are subdivided to facilitate distance measurements.

All charts vary somewhat in scale from point to point, and in some projections the scale is not the same in all directions about a single point. A single subdivided line or bar for use over an entire chart is shown only when the chart is of such scale and projection that the scale varies a negligible amount over the chart, usually one of about 1:75,000 or larger. Since 1 minute of latitude is very nearly equal to 1 nautical mile, the latitude scale serves as an approximate graphic scale. On a Mercator chart the scale varies with the latitude, which is noticeable on a chart covering a relatively large distance in a north-south direction. On such a chart the border scale near the latitude in question should be used for measuring distances.

The ways of expressing the scale of a chart are readily interchangeable. For instance, in a nautical mile there are about 72,913.39 inches. If the natural scale of a chart is 1:80,000, one inch of the chart represents 80,000 inches of the earth, or a little more than a mile. To find the exact amount, divide the scale by the number of inches in a mile: 80,000/72,913.39 = 1.097 (approximately 1.1) miles to an inch. Stated another way, there are 72,913.39/80,000 = 0.911 (approximately 0.9) inch to a mile. Similarly, if the scale is 60 nautical miles to an inch, the representative fraction is 1:(60 x 72,913.39) = 1:4,374,803.

A chart covering a relatively large area is called a small-scale chart, and one covering a relatively small area is called a large-scale chart. Since the terms are relative, there is no sharp division between the two. Thus, a chart of scale 1:100,000 is large scale when compared with a chart of 1:1,000,000, but small scale when compared with one of 1:25,000. As scale decreases, the amount of detail which can be shown decreases also. The amount of detail shown depends on several factors, among them the coverage of the area at larger scales and the intended use of the chart. Cartographers selectively decrease the detail in a process called generalization when producing small-scale charts using large-scale charts as sources.
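The interchange between the natural scale and the miles-per-inch form above is simple arithmetic. A small sketch (the function names are my own, not standard cartographic terminology) reproduces the worked figures:

```python
# Chart-scale conversions, using 72,913.39 inches per nautical mile
# (the figure given in the text).

INCHES_PER_NAUTICAL_MILE = 72_913.39

def miles_per_inch(natural_scale: float) -> float:
    """Nautical miles represented by one inch on a 1:natural_scale chart."""
    return natural_scale / INCHES_PER_NAUTICAL_MILE

def representative_fraction(nm_per_inch: float) -> float:
    """Denominator of the representative fraction for a miles-per-inch scale."""
    return nm_per_inch * INCHES_PER_NAUTICAL_MILE

# A 1:80,000 chart: one inch covers about 1.097 nautical miles.
print(round(miles_per_inch(80_000), 3))    # -> 1.097
# A "60 miles to the inch" chart: a representative fraction of about 1:4,374,803.
print(round(representative_fraction(60)))  # -> 4374803
```

Note that both functions assume nautical miles; converting a statement like "2 inches to a (statute) mile" would use 63,360 inches per statute mile instead.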

Q.11. What are the various types of interviews?

Types of Interviews
All job interviews have the same objective, but employers reach that objective in a variety of ways. You might enter the room expecting to tell stories about your professional successes and instead find yourself selling the interviewer a bridge or editing code at a computer. One strategy for performing your best during an interview is to know the rules of the particular game you are playing when you walk through the door.

Screening | Informational | Directive | Meandering | Stress | Behavioral | Audition | Group | Tag-Team | Mealtime | Follow-up

The Screening Interview
Companies use screening tools to ensure that candidates meet minimum qualification requirements. Computer programs are among the tools used to weed out unqualified candidates. (This is why you need a digital resume that is screening-friendly. See our resume center for help.) Sometimes human professionals are the gatekeepers. Screening interviewers often have honed skills to determine whether there is anything that might disqualify you for the position. Remember, they do not need to know whether you are the best fit for the position, only whether you are not a match. For this reason, screeners tend to dig for dirt. Screeners will hone in on gaps in your employment history or pieces of information that look inconsistent. They also will want to know from the outset whether you will be too expensive for the company. Some tips for maintaining confidence during screening interviews:
- Highlight your accomplishments and qualifications.
- Get into the straightforward groove. Personality is not as important to the screener as verifying your qualifications. Answer questions directly and succinctly. Save your winning personality for the person making hiring decisions!
- Be tactful about addressing income requirements. Give a range, and try to avoid giving specifics by replying, "I would be willing to consider your best offer."
- If the interview is conducted by phone, it is helpful to have note cards with your vital information sitting next to the phone. That way, whether the interviewer catches you sleeping or vacuuming the floor, you will be able to switch gears quickly.

The Informational Interview
On the opposite end of the stress spectrum from screening interviews is the informational interview, a meeting that you initiate. Astounding as this is, the informational interview is underutilized by job-seekers who might otherwise consider themselves savvy to the merits of networking. Job seekers ostensibly secure informational meetings in order to seek the advice of someone in their current or desired field, as well as to gain further references to people who can lend insight. Employers that like to stay apprised of available talent, even when they do not have current job openings, are often open to informational interviews, especially if they like to share their knowledge, feel flattered by your interest, or esteem the mutual friend that connected you to them. During an informational interview, the jobseeker and employer exchange information and get to know one another better without reference to a specific job opening. This takes off some of the performance pressure, but be intentional nonetheless:
- Come prepared with thoughtful questions about the field and the company.
- Gain references to other people, and make sure that the interviewer would be comfortable if you contact those people and use his or her name.
- Give the interviewer your card, contact information and resume.
- Write a thank-you note to the interviewer.

The Directive Style
In this style of interview, the interviewer has a clear agenda that he or she follows unflinchingly. Sometimes companies use this rigid format to ensure parity between interviews; when interviewers ask each candidate the same series of questions, they can more readily compare the results. Directive interviewers rely upon their own questions and methods to tease from you what they wish to know. You might feel like you are being steam-rolled, or you might find the conversation develops naturally. Their style does not necessarily mean that they have dominance issues, although you should keep an eye open for these if the interviewer would be your supervisor. Either way, remember:
- Flex with the interviewer, following his or her lead.
- Do not relinquish complete control of the interview. If the interviewer does not ask you for information that you think is important to proving your superiority as a candidate, politely interject it.

The Meandering Style
This interview type, usually used by inexperienced interviewers, relies on you to lead the discussion. It might begin with a statement like "tell me about yourself," which you can use to your advantage. The interviewer might ask you another broad, open-ended question before falling into silence. This interview style allows you tactfully to guide the discussion in a way that best serves you. The following strategies, which are helpful for any interview, are particularly important when interviewers use a non-directive approach:
- Come to the interview prepared with highlights and anecdotes of your skills, qualities and experiences. Do not rely on the interviewer to spark your memory; jot down some notes that you can reference throughout the interview.
- Prepare and memorize your main message before walking through the door. If you are flustered, you will better maintain clarity of mind if you do not have to wing your responses.
- Remain alert to the interviewer. If he or she becomes more directive during the interview, adjust. Although the open format allows you significantly to shape the interview, running with your own agenda and dominating the conversation means that you run the risk of missing important information about the company and its needs. Ask well-placed questions. Even if you feel like you can take the driver's seat and go in any direction you wish, remain respectful of the interviewer's role.

The Stress Interview
Astounding as this is, the Greek hazing system has made its way into professional interviews. Either employers view the stress interview as a legitimate way of determining candidates' aptness for a position, or someone has latent maniacal tendencies. You might be held in the waiting room for an hour before the interviewer greets you. You might face long silences or cold stares. The interviewer might openly challenge your beliefs or judgment. You might be called upon to perform an impossible task on the fly, like convincing the interviewer to exchange shoes with you. Insults and miscommunication are common. All this is designed to see whether you have the mettle to withstand the company culture, the clients or other potential stress. Besides wearing a strong anti-perspirant, you will do well to:
- Remember that this is a game. It is not personal. View it as the surreal interaction that it is.
- Go into the interview relaxed and rested. If you go into it feeling stressed, you will have a more difficult time keeping a cool perspective.
- Remain calm and tactful, even if the interviewer is rude.

The Behavioral Interview

An audition can be enormously useful to you as well. but there are a few tips that will help you navigate the group interview successfully: . and identifying the results of your actions. and you should demonstrate to the prospective employer that you make the effort to do things right the first time by minimizing confusion. or do you compete for authority? The interviewer also wants to view what your tools of persuasion are: do you use argumentation and careful reasoning to gain support or do you divide and conquer? The interviewer might call on you to discuss an issue with the other candidates. y The Audition For some positions. they might take you through a simulation or brief exercise in order to evaluate your skills. Keep your responses concise and present them in less than two minutes. initiative or stress management. leadership. y y The Group Interview Interviewing simultaneously with other candidates can be disconcerting. Depending upon the responsibilities of the job and the working environment. Take ownership of your work. Reflect on your own professional. Communication is half the battle in real life. multi-tasking. educational and personal experience to develop brief stories that highlight these skills and qualities in you. logically highlighting your actions in the situation. but also organization. such as computer programmers or trainers. You should have a story for each of the competencies on your resume as well as those you anticipate the job requires.Many companies increasingly rely on behavior interviews since they use your previous behavior to indicate your future performance. Your responses require not only reflection. In these interviews. conflict resolution. Treat the situation as if you are a professional with responsibility for the task laid before you. For this reason. adaptability. This environment might seem overwhelming or hard to control. requesting an audition can help level the playing field. volunteer. 
You will be asked how you dealt with the situations. The simulations and exercises should also give you a simplified sense of what the job would be like. remember to: y Clearly understand the instructions and expectations for the exercise. but it provides the company with a sense of your leadership potential and style. Brush up on your skills before an interview if you think they might be tested. since it allows you to demonstrate your abilities in interactive ways that are likely familiar to you. Review your resume. you might be asked to describe a time that required problem-solving skills. companies want to see you in action before they make their decision. To maximize your responses in the behavioral format: y y y Anticipate the transferable skills and personal qualities that are required for the job. employers use standardized methods to mine information relevant to your competency in a particular area or position. If you sense that other candidates have an edge on you in terms of experience or other qualifications. The group interview helps the company get a glimpse of how you interact with peers-are you timid or bossy. Prepare stories by identifying the context. Any of the qualities and skills you have included in your resume are fair game for an interviewer to press. or discuss your peculiar qualifications in front of the other candidates. are you attentive or do you seek attention. do others turn to you instinctively. To maximize on auditions. solve a problem collectively.

you will proceed through a series of one-on-one interviews. If you are unsure of what is expected from you. marriages. This method of interviewing is often attractive for companies that rely heavily on team cooperation. Order something slightly less extravagant than your interviewer. If he badly wants you to try a particular dish. In some companies. If she and the other guests discuss their upcoming travel plans or their families. Keep an eye on the interviewer throughout the process so that you do not miss important cues. If there are several people in the room at once. if possible. Glenn. Gain each person's business card at the beginning of the meeting. Particularly when your job requires interpersonal acuity. Do not sit down until your host does. Do not begin eating until he does. companies want to know what you are like in a social setting. With some preparation and psychological readjustment. Mealtime interviews rely on this logic. Just as each interviewer has a different function in the company. If he orders coffee and dessert. interviewing over a meal sounds like a professional and digestive catastrophe in the making. and expand it. you can enjoy the process. and the serving staff. Some helpful tips for maximizing on this interview format: y Treat each person as an important individual. do not launch into business. Are you relaxed and charming or awkward and evasive? Companies want to observe not only how you handle a fork. ask for clarification from the interviewer. any other guests. Companies often want to gain the insights of various people when interviewing candidates. Glenn. do so. Meals often have a cementing social effectbreaking bread together tends to facilitate deals. this could be a challenge. In other companies. Stay focused and adjustable. Treat others with respect while exerting influence over others. multiple people will interview you simultaneously. which will make you look uncooperative and immature. 
Prepare psychologically to expend more energy and be more alert than you would in a one-onone interview. Avoid overt power conflicts. y y y The Mealtime Interview For many. Not only does the company want to know whether your skills balance that of the company. oblige him. remembering that you are the guest.y y y y Observe to determine the dynamics the interviewer establishes and try to discern the rules of the game. two of her staff. friendships. Make eye contact with each person and speak directly to the person asking each question. If he recommends an appetizer to you. he likely intends to order one himself. Some basic social tips help ease the complexity of mixing food with business: y Take cues from your interviewer. you might find yourself in a room with four other people: Ms. When asking questions. do not leave him eating alone. and the Sales Director. but also whether you can get along with the other workers. If your interviewer wants to talk business. they each have a unique perspective. Use the opportunity to gain as much information about the company as you can. and religious communion. If you have difficulty chewing gum while walking. be sensitive not to place anyone in a position that invites him to compromise confidentiality or loyalty. but also how you treat your host. and refer to each person by name. you might wish to scribble down their names on a sheet of paper according to where each is sitting. Bring at least double the anecdotes and sound-bites to the interview as you would for a traditional one-on-one interview. The Tag-Team Interview Expecting to meet with Ms. y . Be ready to illustrate your main message in a variety of ways to a variety of people.

12. Other times.y y y y y Try to set aside dietary restrictions and preferences. Q. be as tactful as you can. You can focus on cementing rapport. the interviewer is your host." Choose manageable food items. Find a discrete way to check your teeth after eating. Remember. It does not directly determine pay levels. You might find yourself negotiating a compensation package. Sometimes they just want to confirm that you are the amazing worker they first thought you to be. Thank your interviewer for the meal. Practice eating and discussing something important simultaneously. Write a short note on: a) Job evaluation Job evaluation is a practical technique." or "Shrimp makes my eyes swell and water. The second method is one of awarding points for various aspects of the job. you might find that you are starting from the beginning with a new person. Still. Alternatively. if possible. Avoid phrases like: "I do not eat mammals. The two most common methods of job evaluation that have been used are first. designed to enable trained and experienced staff to judge the size of one job relative to others. but will establish the basis for an internal ranking of jobs. The Follow-up Interview Companies bring candidates back for second and sometimes third or fourth interviews for a number of reasons. When meeting with the same person again. Be prepared for anything: to relax with an employer or to address the company's qualms about you. and you must prepare for each of them. where jobs are taken as a whole and ranked against each other. understanding where the company is going and how your skills mesh with the company vision and culture. Some tips for managing second interviews: y y y y Be confident. Sometimes they are having difficulty deciding between a short-list of candidates. Probe tactfully to discover more information about the internal company dynamics and culture. The second interview could go in a variety of directions. 
the interviewer should view you as the answer to their needs. you do not need to be as assertive in your communication of your skills. It is rude to be finicky unless you absolutely must. the interviewer's supervisor or other decision makers in the company want to gain a sense of you before signing a hiring decision. If you must. Walk through the front door with a plan for negotiating a salary. Avoid barbeque ribs and spaghetti. Excuse yourself from the table for a moment. Accentuate what you have to offer and your interest in the position. In the points system . whole job ranking.

In the points system, various aspects or parts of the job, such as the education and experience required to perform it, are assessed and a points value awarded: the higher the educational requirements of the job, the higher the points scored. The best-known points scheme was introduced by Hay management consultants in 1951. This scheme evaluates job responsibilities in the light of three major factors: know-how, problem solving and accountability.

Some Principles of Job Evaluation
- Clearly defined and identifiable jobs must exist. These jobs will be accurately described in an agreed job description.
- All jobs in an organisation will be evaluated using an agreed job evaluation scheme.
- Job evaluators will need to gain a thorough understanding of the job.
- Job evaluation is concerned with jobs, not people. It is not the person that is being evaluated. The job is assessed as if it were being carried out in a fully competent and acceptable manner.
- Job evaluation is based on judgement and is not scientific. However, if applied correctly it can enable objective judgements to be made.
- It is possible to make a judgement about a job's contribution relative to other jobs in an organisation.
- The real test of the evaluation results is their acceptability to all participants.
- Job evaluation can aid organisational problem solving, as it highlights duplication of tasks and gaps between jobs and functions.

Job Evaluation - BSI definition (32542)
BSI definition 32529: "Any method, ranking the relative worth of jobs, which can then be used as a basis for a remuneration system." It is essentially a comparative process.

Job Evaluation - More
Job evaluation is essentially one part of a tripartite subject, collectively referred to as Job Study (other names exist). The three parts are Job Analysis, Job Evaluation and Merit Rating. Job evaluation evaluates selected job factors, which are regarded as important for the effective performance of the job, according to one of several alternative methods. The information collected is evaluated using a numerical scale or a ranking and rating methodology. The resulting numerical gradings can form the basis of an equitable structure of job gradings. The job grades may or may not be used for status or payment purposes.

Job Evaluation - The Future
As organisations constantly evolve and new organisations emerge, there will be challenges to the existing principles of job evaluation. Whether existing job evaluation techniques and accompanying schemes remain relevant in a faster-moving and constantly changing world, where new jobs and roles are invented on a regular basis, remains to be seen. The formal points systems used by so many organisations are often already seen to be inflexible, and sticking rigidly to an existing scheme may impose barriers to change. Constantly updating and writing new job descriptions, together with the time that has to be spent administering job evaluation schemes, may become too cumbersome and time-consuming for the benefits that are derived. Does this mean that we will see existing schemes abandoned or left to fall into disrepute? Will providers of job evaluation schemes examine and, where necessary, modify them to ensure they are up to date and relevant? Simply sticking rigidly to what is already in place may not be enough to ensure their survival.
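The points-system arithmetic described above can be sketched as a weighted sum of factor ratings. The factor names below echo the know-how / problem-solving / accountability breakdown mentioned in the text, but the weights and the 1-5 rating scale are invented for illustration; they are not the actual Hay scheme values.

```python
# Hypothetical points-based job evaluation: each job is rated 1-5 on a set of
# factors, each factor carries a weight, and the weighted sum is the job's
# total points. Weights and scale here are illustrative assumptions only.

FACTOR_WEIGHTS = {
    "know_how": 40,         # education, experience and skills the job requires
    "problem_solving": 30,  # complexity of thinking the job demands
    "accountability": 30,   # impact of the job's decisions on the organisation
}

def evaluate_job(ratings):
    """Return total points for a job, given its 1-5 rating on each factor."""
    for factor, rating in ratings.items():
        if factor not in FACTOR_WEIGHTS:
            raise ValueError(f"unknown factor: {factor}")
        if not 1 <= rating <= 5:
            raise ValueError(f"rating for {factor} must be between 1 and 5")
    return sum(FACTOR_WEIGHTS[f] * r for f, r in ratings.items())

# Note: it is the jobs, not the people holding them, that are scored; the
# resulting totals only rank jobs relative to each other.
clerk = evaluate_job({"know_how": 2, "problem_solving": 1, "accountability": 1})
manager = evaluate_job({"know_how": 4, "problem_solving": 4, "accountability": 5})
print(clerk, manager)  # the clerical job scores fewer points than the managerial one
```

Higher educational requirements raise the know-how rating and therefore the total, matching the "higher requirements, higher points" rule in the text; translating point bands into pay grades would be a separate, later step.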

Explanation: Job evaluation is concerned with measuring the demands the job places on its holder.

Illustration: The Time Span of Discretion is an interesting and unusual method of job evaluation, developed by Elliott Jaques for the Glacier Metal Company. In this method, job pressure is assessed according to the length of time over which a manager's decisions commit the company. A machine operative, for example, is at any moment committing the company only for the period needed to make one product unit or component, whereas the manager who buys the machine is committing the company for ten years. Most factors that contribute to this job pressure are assessed, and the result is a numerical estimate of the total job pressure. The method can be used for all grades of personnel, even senior management. When evaluations are carried out on all hourly paid personnel, the technique's uses include establishing relative wage rates for different tasks.

b) Attitude and employee satisfaction

An attitude is a hypothetical construct that represents an individual's degree of like or dislike for an item. Attitudes are generally positive or negative views of a person, place, thing, or event; this is often referred to as the attitude object. People can also be conflicted or ambivalent toward an object, meaning that they simultaneously possess both positive and negative attitudes toward the item in question. Attitudes are judgments. They develop on the ABC model (affect, behavior, and cognition) (Van den Berg et al., 2006; Eagly & Chaiken, 1998). The affective response is an emotional response that expresses an individual's degree of preference for an entity. The behavioral intention is a verbal indication or typical behavioral tendency of an individual. The cognitive response is a cognitive evaluation of the entity that constitutes an individual's beliefs about the object. Most attitudes are the result of either direct experience or observational learning from the environment.

c) Performance measuring

d) Manpower planning
