


Emotional Intelligence in Applicant Selection for Care-Related Academic Programs

Leehu Zysberg, Anat Levy and Anna Zisberg Journal of Psychoeducational Assessment 2011 29: 27 originally published online 22 March 2010 DOI: 10.1177/0734282910365059 The online version of this article can be found at:


Version of Record: January 6, 2011. OnlineFirst Version of Record: March 22, 2010.

Downloaded from by dana martin on April 30, 2012

Emotional Intelligence in Applicant Selection for Care-Related Academic Programs

Leehu Zysberg1, Anat Levy1, and Anna Zisberg2

Journal of Psychoeducational Assessment 29(1) 27-38, © 2011 SAGE Publications. DOI: 10.1177/0734282910365059

Abstract

Two studies describe the development of the Audiovisual Test of Emotional Intelligence (AVEI), aimed at candidate selection in educational settings. Study I depicts the construction of the test and the preliminary examination of its psychometric properties in a sample of 92 college students. Item analysis allowed the modification of problem items, resulting in acceptable reliability (intraclass correlation = .67) and moderate to good discrimination indices. Study II examined the criterion-related validity of the AVEI based on a sample of 102 nursing students in a large university in northern Israel. The results suggest that the AVEI correlated with students' performance in field practice and in human relations training courses better than with any other relevant variable (e.g., GMA, GPA). Associations remained in the .45 to .60 range, even after controlling for factors such as academic ability, GPA, and gender. These results suggest that the AVEI may be a valid instrument in student selection for care-related programs.

Keywords
emotional intelligence, test development, applicant selection, care professions

Most programs in higher education use a selection process for accepting or rejecting applicants. Applicant selection serves at least two purposes. The first, in academic settings enjoying high rates of application, is to minimize dropout rates and underperformance by accepting those with the highest chances of graduating successfully (relative criterion; see Gregory, 2006). The second postulates that even if there is room for all applicants, the minimum requirements associated with satisfactory performance must be met by every student (absolute criterion). Selection systems also provide at least a measure of perceived justice by minimizing bias, discrimination, and other inequities that may result from circumstances where demand exceeds supply (Cascio, 1991). To achieve these goals, the selection system must be as reliable and as valid as possible.
Most notably, the predictive validity of selection tools and methods is of utmost importance in this regard (Gregory, 2006). For decades, the measures that have consistently provided moderate to high correlations with academic and job-related performance have been those involving General

1 Tel Hai College, Upper Galilee, Israel
2 University of Haifa, Haifa, Israel

Corresponding author: Leehu Zysberg, 15A Nurit Street #8, Haifa 34654, Israel. Email:


Mental Ability (GMA; see Bertua, Anderson, & Salgado, 2005; Schmidt & Hunter, 1998). Meta-analytic studies have provided evidence suggesting measures of GMA are associated with academic performance and job performance at the r = .50 to .60 level on average (Sackett & Lievens, 2007). That being said, GMA remains only a partial assessment of human potential for performance. Numerous attempts have been made to find alternative or complementary predictors, yielding inconsistent or disappointing results (e.g., Cascio & Aguinis, 2005).

Emotional Intelligence as a Complementary Predictor of Performance

Emotional intelligence (EI) has been proposed as a complementary measure of human potential (Bar-On, Parker, & Alexander, 2000; Goleman, 1995; Mayer, Salovey, & Caruso, 2000). There are two different views of EI: one sees EI as a personality component involving predispositions and tendencies to behave (Petrides & Furnham, 2001; Shulman & Hemenover, 2007); the other sees EI as an ability (Brackett & Mayer, 2003; Mayer et al., 2000). The most dominant model of ability EI was presented by Mayer, Salovey, and Caruso (2008). The model proposes a four-tier hierarchical structure of EI ranging from emotion perception to emotion management and modification. Regardless of the theoretical model used, some core aspects consistently reemerge in the often conflicted literature on EI: one's ability to identify emotions in self and others, to understand the dynamics leading to complex emotional experiences, and to manage and regulate emotions in self and others. Following this integrative working definition, EI may offer a theoretical complement to the traditional GMA model in predicting academic and professional performance (Mayer et al., 2008; Rode et al., 2007).

Measures of Emotional Intelligence

Echoing the two different theoretical approaches to EI, measures of EI fall into two distinct categories. The first category includes self-report measures and questionnaires, which are based on the personality model of EI. The advantages of self-report measures are that they are easy to administer and score and have high face validity. However, the two main drawbacks of these measures are (a) their high correlation with well-known personality measures, thus adding little to no unique explained variance, and (b) biases typical of self-report measures. Perhaps the most notable form of this tendency is the floor effect, which refers to the fact that people with low EI are not self-aware and are therefore incapable of reporting themselves as such.

The second category includes ability EI tests: such tests examine individuals' responses to standard stimuli of emotional character against a criterion of correctness. Like any ability test, they present test takers with problems requiring the best solution. Studies examining their construct validity as well as their incremental predictive validity yield a promising body of evidence (Bänziger, Grandjean, & Scherer, 2009; Mayer et al., 2008; Rode et al., 2007). The Mayer-Salovey-Caruso Emotional Intelligence Test (MSCEIT; Brackett & Mayer, 2003; Mayer et al., 2000; Mayer et al., 2008) is the most popular measure within the ability EI school. The test pertains to the four theoretical content areas of EI: emotion recognition, integration of emotion in thinking and reasoning processes, emotional complexity, and emotional regulation or management (Mayer et al., 2000; Roberts et al., 2006). The MSCEIT has shown good reliability and reasonable construct validity in various settings (Brackett & Mayer, 2003; Mayer, Roberts, & Barsade, 2007; Mayer et al., 2008). The test features 141 items divided into eight subcategories (two for each of the content areas). Replies are provided on Likert-type or semantic differential scales to assess levels of correctness for each presented response. The main critique against the current version of the MSCEIT focuses on two points. The first addresses the validity of the



correctness criterion. The second concerns the MSCEIT in intercultural contexts: some evidence suggests that the measure is vulnerable to intercultural variance (Wang, Law, Hackett, Wang, & Chen, 2005). Attempts have already been made to create a simplified, effective, and valid measure of ability EI based on performance rather than self-report (e.g., Bänziger et al., 2009; MacCann & Roberts, 2008). Promising as this direction may seem, at this point the main disadvantage is inconsistent evidence of these measures' ability to correlate with relevant external criteria, evidence that would attest to their predictive validity.

The Current Study

The potential contribution of the concept of EI in educational settings is by now widely recognized, but insufficiently empirically tested (Humphrey, Curran, Morris, Farrell, & Woods, 2007). The current research aims to add to the existing literature in two ways. First, we propose a new brief ability-based measure of EI, offering a shorter, user-friendly, computer-based format that is less mediated by verbal information and is easier to administer and grade. Second, we use the new proposed measure to examine the added value of EI in the selection of candidates for academic study programs in care-related professions (in this case, nursing) as a way of validating it vis-à-vis an external set of criteria. The assumption that care-related studies and practice require high levels of EI is not new to the literature and serves to identify a possible criterion for EI-related outcomes (Austin, Evans, Magnus, & O'Hanlon, 2007).

We chose to concentrate on the assessment of emotion recognition and analysis. Emotion perception, recognition, and analysis are common to all current models of EI (ability or nonability models). In the literature (see Mayer et al., 2007, for a review), emotion recognition and identification is considered the most rudimentary aspect of EI, without which other, higher emotional abilities may not be present. Our new measure is based on ideas already present in the field of emotion research. For example, the MSCEIT contains a section dedicated to the perception of emotions in various images (facial expressions and pictures of general subjects). The MSCEIT, however, is neither the first nor the only ability-like measure of emotion recognition. An early example of such measures is the Profile of Nonverbal Sensitivity (Rosenthal, Hall, DiMatteo, Rogers, & Archer, 1979), designed to assess attitude perception by means of audiovisual enactments. The Diagnostic Analysis of Nonverbal Accuracy, a more recent test, used audio and visual stimuli for affect recognition, as did newer tests such as the Emotion Recognition Index and the Multimodal Emotion Recognition Test (see Bänziger et al., 2009, for a review). None of these, however (except for the MSCEIT, of course), was specifically designed to assess EI as per the ability approach, and to date, none has been tested for validity vis-à-vis emotion-related performance criteria.

In the first study, item generation and test structuring were conducted using two samples: the first consisting of content matter experts and the second consisting of college students who took the preliminary version of the test for psychometric analysis purposes. In the second study, a sample of nursing students took the test and provided additional data that were examined against criteria for their academic performance and their professional performance in the internship, so as to test the added value of the new measure of EI.

In light of the above review, we hypothesized (a) that in Study I the new measure of EI would show acceptable levels of reliability and content validity and (b) that in Study II the new measure of EI would show significant incremental association with various indices of academic and professional


performance in a care-related academic program, beyond associations already accounted for by GMA measures.

Study I
Based on the ability model of EI, and on assessment methods established by tests such as the Profile of Nonverbal Sensitivity and the MSCEIT, our proposed test presents test takers with stimuli requiring the correct identification of an emotion. Much in line with the MSCEIT, criteria for correctness were set by a panel of subject matter experts. The test, however, deviates from existing tests in major aspects of its methodology. While developing the test, we aimed at a conservative item response scale format, using a simple multiple-choice structure with one correct answer for each item. As mentioned above, for the sake of brevity, we decided to focus on one of the four facets of EI, namely, emotion recognition, which is considered to be the most basic attribute of EI (Mayer et al., 2000). This practice of focusing on a representative aspect of a general concept is not uncommon in the assessment of abilities and intelligence (Schmidt & Hunter, 1998). All the test items are still images or video clips rather than written vignettes, to reduce the reliance on written language to a bare minimum. Moreover, we structured the items to fit either a primary or secondary emotion definition (see the Item Generation section for more details).

Content matter experts. We identified six faculty members, all with at least PhD-level training, from two leading academic institutions in northern Israel, all with expertise and published work on EI. Of the six, five agreed to participate in the study. There were two women and three men; four were assistant or associate professors and one was a full professor. All had obtained their PhD degrees in either psychology or education. Their ages ranged from 34 to 72 years. The one who did not participate cited shortage of time as the reason.

Test takers. Ninety-two students were recruited from a large community college in northern Israel, all from social science programs (e.g., psychology, social work, and education). They were recruited through an advertisement placed on campus asking for participants in a study designing a new psychological measure. There were no material incentives for participation. The majority (77%) of the participants were women, and less than one quarter (23%) were men. All participants were single and ranged in age from 19 to 30 years (mean = 23.89; SD = 1.86). The sample was evenly divided between freshmen and sophomores.

The Audiovisual Test of Emotional Intelligence (AVEI) was designed as a dedicated measure of the Emotion Perception branch of ability EI. Item generation. Item generation began by defining a list of primary emotions (e.g., fear, happiness) and secondary emotions (e.g., guilt, pride). The list was created by a group of six psychology majors in their senior year of college, following an extensive literature search on the definitions of the two categories of emotions defined by Damasio (1994; see also Toronchuk & Ellis, 2007; van Beek & Dubas, 2008). Primary emotions are those often found to be common to most mammals, representing basic survival responses of appetitive or aversive behavior. Secondary emotions involve either a complex set of different and often contradictory primary emotions or stem from a more sophisticated appraisal of a situation (Damasio, 1994; van Beek & Dubas, 2008). The list was not meant to be exhaustive but rather representative of emotions that people may experience in everyday life. The list was designed keeping an even number of positive and



negative emotions to avoid possible bias. The final list included six positive emotions (happiness, satisfaction, pride, love, caring, and comfort) and six negative emotions (fear, sadness, anger, frustration, doubt, and envy). Within each positive or negative category, half were defined as basic and the rest as complex. Items consisting of pictures or short video clips (each no more than 5 seconds long) were generated as scripts by three research assistants with training in both psychology and the visual arts (photography or cinematography). The first pool consisted of three different stills and three different video clips for each of the 12 emotions, making for a total of 72 items. These items were then subjected to content validity testing.

Content Validity
The content matter experts received the test items on a CD, together with a form on which they were requested to assess each of the items in two ways: first, to assign one of the emotions from a list provided for each item, and second, to rate the extent to which the item represented the emotion on a scale ranging from 1 = not at all to 5 = very much. Items were retained as part of the test if they received at least 80% agreement and an average relevance grade of 4.0 or more. Based on the experts' additional comments, changes were made to the existing items and some were excluded altogether.

AVEI's first working version. The first working version of the AVEI consisted of 30 computer-based items, including 12 video clips and 18 still pictures (more details on the item structure are given later in the article). In each item, participants were asked about a certain target person appearing in the picture/video and were required to identify the emotion being experienced by that specific person. Responses were provided on a multiple-choice scale with four options for each item (varying from item to item), of which only one was judged by the content experts to be correct (see description above). The test yielded one single score, calculated as the number of correct answers provided by the respondent. The total score represents a quantitative indicator of an individual's ability to identify, analyze, and name emotions, the equivalent of the first component of EI according to the ability model (Mayer et al., 2000).
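The expert-based retention rule described above (at least 80% agreement on the intended emotion and a mean relevance rating of 4.0 or more) can be sketched as follows. This is an illustrative sketch, not the authors' code; the function and variable names are our own.

```python
# Hedged sketch of the content-validity retention rule: an item is kept
# only if at least 80% of the experts assign it the intended emotion AND
# the mean relevance rating (1-5 scale) is 4.0 or higher.

def retain_item(expert_labels, relevance_ratings, intended_emotion,
                agreement_cutoff=0.80, relevance_cutoff=4.0):
    """Return True if the item passes both content-validity criteria."""
    agreement = sum(label == intended_emotion
                    for label in expert_labels) / len(expert_labels)
    mean_relevance = sum(relevance_ratings) / len(relevance_ratings)
    return agreement >= agreement_cutoff and mean_relevance >= relevance_cutoff
```

For example, an item that 4 of 5 experts label with the intended emotion (80%) and that averages 4.2 on relevance would be retained; an item with only 3 of 5 agreeing would be dropped regardless of its relevance ratings.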

Students signed an informed consent form before taking the AVEI. Students took the test in groups of 5 to 12 in a computer lab. Each student had a headset (for the soundtrack of the video clips) and his or her own computer. The test took about 16 to 18 minutes to complete on a PC platform. At the completion of the test, we thanked them and gave them a contact email for future questions, if they had any.

Descriptive statistics followed by item analyses (including item difficulty and discrimination indices, among others) were used to assess the fit of data to patterns expected by an ability measure of EI.

The results of the item analysis are provided in Tables 1 and 2. We first examined item discrimination levels (minimal acceptable values ranging from .10 to .25 or higher; see Sim & Rasiah, 2006), using them as a preliminary indication of item-level validity. Items failing to meet our criteria were omitted (overall, three items were excluded at this stage). Despite the item deletion, no emotion was omitted from the list, which still included all 12 emotions originally defined for the test.
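The item statistics referred to above are classical test-theory quantities: difficulty is the proportion of test takers answering an item correctly, and a simple discrimination index contrasts the proportion correct in a top-scoring group with that in a bottom-scoring group. A minimal sketch (our own illustrative code, not the authors' analysis; the 27% grouping fraction is a conventional default, not taken from the paper):

```python
# Classical item analysis for dichotomous (0/1) items:
# difficulty = proportion answering the item correctly;
# discrimination = proportion correct among top scorers minus
# proportion correct among bottom scorers.

def item_analysis(responses, item, group_frac=0.27):
    """responses: one list of 0/1 item scores per test taker."""
    n = len(responses)
    difficulty = sum(person[item] for person in responses) / n
    ranked = sorted(responses, key=sum, reverse=True)  # best scorers first
    k = max(1, int(n * group_frac))                    # size of each group
    upper = sum(person[item] for person in ranked[:k]) / k
    lower = sum(person[item] for person in ranked[-k:]) / k
    return difficulty, upper - lower
```

An item that everyone answers correctly has difficulty near 1.0 and discrimination near 0, which is the pattern the authors flag below as "too easy."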


Table 1. Correct Answer Distribution and Discrimination Coefficients in the First and Second Versions of the Audiovisual Test of Emotional Intelligence

First Version
Item   Correct Answer Proportion   Discrimination Coefficient
1      .73                         .33
2      .61                         .40
3      .93                         .08
4      .44                         .22
5      .88                         .28
6      .97                         .01
7      .80                         .35
8      .47                         .18
9      .79                         .40
10     .95                         .40
11     .81                         .26
12     .84                         .43
13     .83                         .50
14     .75                         .51
15     .22                         .22
16     .83                         .17
17     .73                         .23
18     .70                         .38
19     .65                         .27
20     .87                         .30
21     .29                         .36
22     .91                         .04
23     .61                         .30
24     .20                         .55
25     .51                         .09
26     .77                         .18
27     .33                         .26
28     .23                         .30
29     .63                         .14
30     .89                         .18

Second Version
Item   Correct Answer Proportion   Discrimination Coefficient
1      .58                         .21
2      .42                         .31
3      .59                         .55
4      .44                         .24
5      .88                         .20
6      .81                         .33
7      .84                         .32
8      .51                         .38
9      .76                         .34
10     .75                         .48
11     .36                         .21
12     .64                         .46
13     .17                         .19
14     .60                         .35
15     .62                         .15
16     .67                         .38
17     .90                         .13
18     .80                         .34
19     .86                         .18
20     .85                         .23
21     .35                         .36
22     .60                         .36
23     .81                         .44
24     .28                         .10
25     .36                         .08
26     .71                         .50
27     .77                         .31

Table 2. Descriptive Statistics and Distribution Indicators for the Audiovisual Test of Emotional Intelligence Total Score (N = 92)

Descriptive Index    Version 1      Last Version
Mean (SD)            18.03 (3.15)   16.75 (3.29)
Median               19             17
Minimum, maximum     11-22          9-23
Skewness             -1.19          -.37

The item difficulty indices, though generally supporting the differentiation between basic and complex emotion items, suggested that the items were generally too easy. We followed up with a subsample of our participants and questioned them about the items. Most of the responses from the 25 participants we interviewed suggested that the distractors in the easier items were too obvious.



We therefore redesigned the content of those items with a difficulty coefficient of .85 or more and revised our response scale from a 4-option to a 10-option multiple-choice scale, to make choosing the correct answer more challenging. We also tested for internal reliability, using intraclass correlations (ICCs) rather than the traditional alpha coefficient, due to the dichotomous nature of the items in the test. The ICC coefficient was an acceptable .67.
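The authors report an intraclass correlation for internal reliability. A closely related internal-consistency estimate for dichotomous (0/1) items is the Kuder-Richardson formula 20 (KR-20), the binary-item special case of Cronbach's alpha; we show it here only to illustrate the kind of computation involved, not as the authors' actual analysis:

```python
# KR-20 internal-consistency estimate for dichotomous items:
# rho = (k / (k - 1)) * (1 - sum(p_i * q_i) / var(total)),
# where p_i is the proportion correct on item i and q_i = 1 - p_i.

def kr20(scores):
    """scores: one list of 0/1 item scores per respondent."""
    n_items = len(scores[0])
    n = len(scores)
    p = [sum(r[i] for r in scores) / n for i in range(n_items)]  # item p-values
    sum_pq = sum(pi * (1 - pi) for pi in p)
    totals = [sum(r) for r in scores]
    mean_t = sum(totals) / n
    var_t = sum((t - mean_t) ** 2 for t in totals) / n  # population variance
    return (n_items / (n_items - 1)) * (1 - sum_pq / var_t)
```

When all items move in lockstep the estimate approaches 1; when item responses are unrelated it approaches 0.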

Study II
Study II used the revised version of the AVEI to examine (a) the validity (especially the predictive validity) of the AVEI and (b) the added value of EI in predicting academic and professional performance in care-related fields.

A group of 102 nursing students in a large university in northern Israel agreed to participate in a study examining a new psychological measure. Consenting participants were entered into a lottery from which five were randomly chosen to receive a modest monetary reward (the equivalent of US$50). This was done to encourage students to participate in this stage of the study, which required allowing the researchers access to their academic files. The majority (72.5%) of the participants were women and the remaining 27.5% were men. The participants ranged in age from 21 to 46 years (mean = 25.31; SD = 4.29), with 29% freshmen, 50% sophomores, and the rest juniors and seniors.

To test the criterion-related validity of the AVEI, we collected a set of indices reflecting various criteria, including the following.

Grade point average (GPA). The student's GPA at the time the study was conducted. This measure was used as an indicator of academic performance. The grades, according to the Israeli system of grading, range theoretically from 0 to 100.

Psychometric exam score (equivalent to SAT). This is a standardized score required for acceptance to most academic programs. Scores range from 200 to 800, with the average score being 550. This score is considered to be highly correlated with IQ scores and was therefore used as a proxy of scholastic intelligence (Nevo, 1997). The Israeli standard psychometric exam shows high and consistent levels of internal reliability and acceptable test-retest reliability, and ample evidence points to its criterion-related validity (e.g., a stable r = .45-.55 correlation with academic performance, across samples and time points; Kenneth-Cohen, 2001).

Clinical practice grade. Guided by nurse instructors/preceptors, field study courses are at the core of the nursing program and are graded according to a strict professional protocol. The grades were used in this study to reflect professional performance criteria. The universities participating in this study do not test these measures for reliability or validity; however, in our sample Cronbach's alpha was .89. The measure was used here as a proxy of professional performance assessment, a practice often mentioned in validity studies using work performance as a criterion (Schmidt & Hunter, 1998). Grades can theoretically range from 0 to 100.

Interpersonal skills workshop grade. As part of the nursing program, students are required to take a workshop in interpersonal skills. We used the final grade of this course as an assessment of the students' interpersonal abilities, a notion found to be highly associated with the concept of EI in the literature (Mayer et al., 2000). The grades are given by the course instructors.


Table 3. The Audiovisual Test of Emotional Intelligence Item Structure by Emotion Expressed (Final Version Only)

Emotion
Love
Shame
Frustration
Care
Satisfaction
Sadness
Pride
Anger
Happiness(a)
Fear(a)
Anger
Envy(a)

Note: Each emotion is represented by one still and one video.
a. Items with two still images.

Audiovisual Test of Emotional Intelligence. The version of the AVEI used for Study II consisted of 27 revised items in a computerized format, with a 10-option multiple-choice response scale for each item, of which only one response was correct. Of the total 27 items, 12 were video clips and 15 were still pictures. The test took 12 to 18 minutes to complete, and administration required computers equipped with headsets (for the audio in the video clips). Table 3 depicts the general structure of the items by emotion for this version of the AVEI.

Students were invited to take the AVEI in a computer lab after signing an informed consent form and a form allowing the researchers to gather data from their academic files. After completing the test, students were briefed regarding the purpose of the study. Data were then gathered from the students' academic records using automated computer software to minimize exposure of their personal data. After obtaining the final data files, any identifying numbers were excluded from the files, and analyses were conducted using SPSS 16.0.

Table 4 shows the descriptive statistics for the main variables in Study II. The indices suggest that, as expected on the theoretical level, the total AVEI scores and the psychometric scores were both normally distributed. The criteria scores, however, were negatively skewed, albeit to an extent not precluding parametric analysis. To test for internal consistency, we calculated the ICC coefficients, yielding a marginally acceptable value of .65.

Next, we examined the zero-order correlations between the study variables. Table 5 presents a summary of the findings in matrix form. In line with the literature, the table reveals moderate associations between the AVEI and measures that are traditionally considered to be proxies of cognitive mental abilities (i.e., psychometric score, GPA). In addition, the correlations found between the AVEI and the various performance criteria provide preliminary evidence of the test's criterion-related validity. To further examine the relationship between the AVEI and the various performance criteria, we calculated partial correlations between the AVEI and each of the performance criteria, controlling for the cognitive measures (i.e., psychometric score, GPA). In Table 5, we added the results of this analysis in parentheses where relevant. The results indicate that the association between the AVEI and the performance criteria in this study remained significant even when controlling for cognitive abilities, thus refuting one of the most prominent critiques of EI tests, namely, the claim that they assess intelligence rather than EI (Amelang & Steinmayr, 2006).
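The partial-correlation logic used above can be illustrated with the standard first-order formula (one covariate partialled out). The paper actually controls for two covariates at once (psychometric score and GPA), so this one-covariate sketch only conveys the idea; it is our own illustrative code, not the authors' SPSS analysis:

```python
# First-order partial correlation of x and y with covariate z removed:
# r_xy.z = (r_xy - r_xz * r_yz) / sqrt((1 - r_xz^2) * (1 - r_yz^2))

import math

def partial_corr(r_xy, r_xz, r_yz):
    """Correlation of x and y with the covariate z partialled out."""
    return (r_xy - r_xz * r_yz) / math.sqrt((1 - r_xz ** 2) * (1 - r_yz ** 2))
```

A covariate uncorrelated with both variables leaves the zero-order correlation unchanged; a covariate that fully accounts for the shared variance drives the partial correlation toward zero.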


Table 4. Descriptive Statistics for the Main Variables, Study II (N = 102)

Descriptive Index    AVEI           GPA            Psy             Clin           SSkill
Mean (SD)            16.75 (3.29)   82.56 (5.46)   573.00 (47.9)   91.75 (3.40)   90.93 (4.88)
Median               17             82.50          572             92             92
Minimum, maximum     9-23           69-97          346-707         78-97          70-98
Skewness             -.37           -.04           -.84            -1.43          -1.64

Note: AVEI = Audiovisual Test of Emotional Intelligence; GPA = Grade Point Average (academic achievement); Psy = Psychometric exam (equivalent to SAT); Clin = Clinical practice grade; SSkill = Interpersonal skill workshop grade.

Table 5. Zero-Order Pearson's Correlations Among the Study Variables (N = 102)

          AVEI          GPA      Psy     Clin
GPA       .26*
Psy       .30**         .32**
Clin      .39** (.31)   .58**    .24*
SSkill    .43** (.34)   .50**    .14     .39** (.16)

Note: AVEI = Audiovisual Test of Emotional Intelligence; GPA = Grade Point Average (academic achievement); Psy = Psychometric exam (equivalent to SAT); Clin = Clinical practice grade; SSkill = Interpersonal skill workshop grade. *p < .05. **p < .01.

Table 6. Multiple Regression Analysis, Using AVEI and Traditional Psychometric Scores to Predict Performance Criteria (N = 102)

Predictor            B      r      Partial Correlation   Significance
AVEI                 .79    .43    .41                   .001
Psychometric exam    .01    .14    .02                   .930
AVEI                 .44    .41    .41                   .001
Psychometric exam    .01    .24    .14                   .260

Notes: AVEI = Audiovisual Test of Emotional Intelligence. Data in top two rows relate to interpersonal skill workshop scores as a criterion. Data in bottom two rows relate to clinical practice scores as a criterion.

Finally, we used multiple regression analysis to assess the validity of a combined model, that is, one using cognitive abilities alongside EI. Using both the clinical performance grade and the interpersonal relations workshop grade as criteria, the AVEI emerged as the first and only significant predictor, with the cognitive predictor remaining out of the equation. Table 6 summarizes the main findings of this analysis.
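For a combined model with two standardized predictors, the regression weights follow directly from the zero-order correlations, which shows why a predictor's unique contribution can vanish once a correlated predictor is entered. A minimal sketch of this idea (illustrative only, not the authors' SPSS runs):

```python
# Standardized regression weights for a criterion y regressed on two
# standardized predictors (1 and 2), given the three correlations:
# beta_1 = (r_y1 - r_y2 * r_12) / (1 - r_12^2), and symmetrically for beta_2.

def std_betas(r_y1, r_y2, r_12):
    """Standardized betas of criterion y on predictors 1 and 2."""
    denom = 1 - r_12 ** 2
    beta1 = (r_y1 - r_y2 * r_12) / denom
    beta2 = (r_y2 - r_y1 * r_12) / denom
    return beta1, beta2
```

When the two predictors are uncorrelated, each beta simply equals its zero-order correlation; when one predictor's correlation with the criterion is fully mediated by the other, its beta drops to zero, mirroring the pattern in Table 6.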

We suggested that EI is a concept of added value in the assessment of potential in care-related settings and helping professions, such as psychology, nursing, social work, and education. We then proposed a new measure of EI, called the AVEI, based on the ability model of EI (Mayer et al., 2000) and on existing scales of emotion recognition. While testing its reliability and criterion-related validity, we hypothesized that measures of EI as an ability offer substantial added value in predicting both academic and professional performance in care-related settings, specifically in a sample of nursing students.


Generally, the first study suggests that the AVEI shows acceptable content validity and the item analysis suggests most items are within the acceptable range of values for difficulty and discrimination (a proxy of content validity), with a few exceptions. Though we discarded items not meeting minimum criteria for inclusion, we kept items with marginal indices for future studies (or until proven to be below the acceptable cut point). The results of the second study provide support for the AVEIs content- and criterion-related validity. In doing so, the results also point to the associations between the AVEI and three types of performance criteriaGPA, achievement in clinical practice, and interpersonal relations workshop gradesall a standard part of the academic program in our sample population. Moreover, preliminary evidence suggests that the AVEI predicted the latter two criteria better than did GPA or traditional psychometric exams. The association found between the AVEI and the relevant indices was robust and remained significant even when controlling for cognitive factors. The potential contribution of a new dedicated measure of EI is especially noteworthy in a budding field in which there are currently almost no alternatives to the MSCEIT. Having different measures assessing the same concept by different means and methods is a blessing in terms of content validity and application in diverse settings (see also MacCann & Roberts, 2008). The AVEI is predominantly aimed at candidate selection and was therefore designed with ease of administration and grading in mind, producing one single score, representing the first, most rudimentary aspect of emotion perception and identification, found to be highly loaded in EIs intelligence factor structure, though as mentioned before, results from various studies are still equivocal (e.g., Mayer et al., 2000; Roberts et al., 2006). 
How similar or dissimilar the AVEI score to the emotion perception score of the MSCEIT is yet to be determined, but given both measures structure and methodology, we expect future studies will show certain similarities. Two studies are currently underway evaluating the AVEIs associations with various measures of cognitive ability, ability EI, and personality EI. A couple of weaknesses are evident in this short preliminary series of studies. First, the samples used were from Israel, were relatively small, and are not necessarily representative, though the college and the university from which both samples were recruited can be considered representative of the academic arena in Israel. Additional studies on diverse samples, and preferably in a variety of cultures and sampling frames, are required before any definitive conclusions can be drawn regarding the AVEI and its application in educational selection settings. A second weakness is in regard to the content validity of the AVEI. Despite consistency of evidence from content matter experts, the question remains as to whether this provides enough evidence to claim that the AVEI is a valid test of EI. This question can be addressed in two ways, the first is theoretical and the second empirical. On the theoretical side is the model proposed by Mayer et al. (2008), strongly positioning the identification of emotion at the core of the notion of EI. Additional work by the same authors as well as competing groups promoting alternative views of EI validated the importance of emotion perception, identification, and analysis as a major factor of EI, on the conceptual level (Mayer et al., 2007). On the empirical side are the patterns of results emerging in both studies reported here. The moderate associations between the AVEI and cognitive measures are typical of findings derived from the budding field of EI research. 
The robust associations of the AVEI with emotion-related criteria and performance criteria also echo the definitions of EI provided in both the theoretical and empirical literature (Bar-On et al., 2000; Goleman, 1995; Mayer et al., 2007). Despite being just a first step in this direction, we believe that the evidence presented here provides a preliminary basis for future work toward achieving two important goals: first, presenting a relatively simple ability measure of EI for everyday work- and study-related applications and, second, positioning measures of EI at the core of candidate selection in educational and professional settings, especially those that are care centered.

Acknowledgments


The authors wish to thank Ms. Ifat Cohen and Mr. Elad Cohav for their involvement in an earlier version of the AVEI. The authors also wish to thank Prof. Hy Le Xuan, Ms. Sara Amamro, and Ms. Yuuri Awkagawa for translating items from the first version of the AVEI to English.

Declaration of Conflicting Interests

The authors declared no conflicts of interest with respect to the authorship and/or publication of this article.

Funding

The studies described in this manuscript were supported in part by the University of Haifa fund for encouraging original research.

References

Amelang, M., & Steinmayr, R. (2006). Is there a validity increment for tests of emotional intelligence in explaining the variance of performance criteria? Intelligence, 34, 459-468.

Austin, E. J., Evans, P., Magnus, B., & O'Hanlon, K. (2007). A preliminary study of empathy, emotional intelligence and examination performance in MBChB students. Medical Education, 41, 684-689.

Bänziger, T., Grandjean, D., & Scherer, K. R. (2009). Emotion recognition from expressions in face, voice, and body: The Multimodal Emotion Recognition Test (MERT). Emotion, 9, 691-704.

Bar-On, R., Parker, J. D. A., & Alexander, J. D. (2000). The handbook of emotional intelligence. San Francisco: Jossey-Bass.

Bertua, C., Anderson, N., & Salgado, J. F. (2005). The predictive validity of cognitive ability tests: A UK meta-analysis. Journal of Occupational and Organizational Psychology, 78, 387-409.

Brackett, M. A., & Mayer, J. D. (2003). Convergent, discriminant, and incremental validity of competing measures of emotional intelligence. Personality and Social Psychology Bulletin, 29, 1147-1158.

Cascio, W. F. (1991). Applied psychology in personnel management (4th ed.). Boston: Prentice Hall.

Cascio, W. F., & Aguinis, H. (2005). Test development and use: New twists on old questions. Human Resource Management, 44, 219-236.

Damasio, A. R. (1994). Descartes' error: Emotion, reason and the human brain. New York: Putnam.

Goleman, D. (1995). Emotional intelligence. New York: Bantam Books.

Gregory, R. J. (2006). Psychological testing: History, principles, and applications (5th ed.). New York: Allyn & Bacon.

Humphrey, N., Curran, A., Morris, E., Farrell, P., & Woods, K. (2007). Emotional intelligence and education: A critical review. Educational Psychology, 27, 235-254.

Kenneth-Cohen, T. (2001). The differential validity of the university selection system in Israel, by socioeconomic status. Jerusalem, Israel: The National Institute for Testing & Assessment.

MacCann, C., & Roberts, R. D. (2008). New paradigms for assessing emotional intelligence: Theory and data. Emotion, 8, 540-551.

Mayer, J. D., Roberts, R. D., & Barsade, S. G. (2007). Human abilities: Emotional intelligence. Annual Review of Psychology, 59, 507-536.

Mayer, J. D., Salovey, P., & Caruso, D. (2000). Models of emotional intelligence. In R. J. Sternberg (Ed.), Handbook of intelligence (pp. 396-421). New York: Cambridge University Press.

Mayer, J. D., Salovey, P., & Caruso, D. R. (2008). Emotional intelligence: New ability or eclectic traits? American Psychologist, 63, 503-517.

Nevo, B. (1997). Human intelligence. Tel Aviv, Israel: Open University Press.

Petrides, K. V., & Furnham, A. (2001). Trait emotional intelligence: Psychometric investigation with reference to established trait taxonomies. European Journal of Personality, 15, 425-448.

Roberts, R. D., Schulze, R., O'Brien, K., MacCann, C., Reid, J., & Maul, A. (2006). Exploring the validity of the Mayer-Salovey-Caruso Emotional Intelligence Test (MSCEIT) with established emotions measures. Emotion, 6, 663-669.

Rode, J. C., Mooney, C. H., Arthaud-Day, M. L., Near, J. P., Baldwin, T. T., Rubin, R. S., et al. (2007). Emotional intelligence and individual performance: Evidence of direct and moderated effects. Journal of Organizational Behavior, 28, 399-421.

Rosenthal, R., Hall, J., DiMatteo, M. R., Rogers, P. L., & Archer, D. (1979). Sensitivity to nonverbal communication: The PONS test. Baltimore: Johns Hopkins University Press.

Sackett, P. R., & Lievens, F. (2007). Personnel selection. Annual Review of Psychology, 59, 419-450.

Schmidt, F. L., & Hunter, J. E. (1998). The validity and utility of selection methods in personnel psychology. Psychological Bulletin, 124, 262-274.

Shulman, T., & Hemenover, S. (2007). Is dispositional emotional intelligence synonymous with personality? Self and Identity, 5, 147-171.

Sim, S., & Rasiah, R. I. (2006). Relationship between item difficulty and discrimination indices in true/false-type multiple choice questions of a para-clinical multidisciplinary paper. Annals, Academy of Medicine, Singapore, 35, 67-71.

Toronchuk, J., & Ellis, G. (2007). Criteria for basic emotions: Seeking disgust? Cognition and Emotion, 21, 1829-1832.

van Beek, Y., & Dubas, J. S. (2008). Age and gender differences in decoding basic and non-basic facial expressions in late childhood and early adolescence. Journal of Nonverbal Behavior, 32, 37-52.

Wang, H., Law, K. S., Hackett, R. D., Wang, D., & Chen, Z. X. (2005). Leader-member exchange as a mediator of the relationship between transformational leadership and followers' performance and organizational citizenship behavior. Academy of Management Journal, 48, 420-432.