
The relative performance of different methods for selecting creative marketing personnel


Niek Althuizen
ESSEC Business School, Avenue Bernard Hirsch, 95201 Cergy-Pontoise, France
e-mail: althuizen@essec.edu

Mark Lett (2012) 23:973–985. DOI 10.1007/s11002-012-9198-x
Published online: 21 July 2012
© Springer Science+Business Media, LLC 2012
Abstract Despite an increasing need for creativity in all corners of business, the
spotlight of most recruitment and selection procedures has not shifted accordingly.
Measures of creative ability that are to be used in practice should preferably be brief
and operationally valid in the small and relatively homogenous pools of subjects that
companies typically have to deal with. This article examines the performance of
different assessment methods of creative ability in two small-scale hiring contexts,
i.e., with prospective marketing employees (marketing students) and with current
employees of a creative marketing agency. With prospective marketing employees, a
combination of test ratings and student CV ratings of creative ability shows high
operational validity. Supervisory ratings and self-ratings of creative ability are reasonable alternatives to test ratings for senior and long-time employees, but should be
used with caution when junior employees and recent recruits are concerned.
Keywords Creative ability · Measurement · Recruitment · Selection · Marketing employees
1 Introduction
Creativity enables organizations to develop innovative strategies, new products, and
novel ways of working that are crucial for survival in highly competitive and dynamic
business environments (Andrews and Smith 1996; Woodman et al. 1993). A study by
IBM (2010) revealed that 60 % of the surveyed chief executive officers worldwide
consider creativity their top priority. A survey by Patterson et al. (2009) among UK businesses showed that 78 % view creativity and innovation as key to survival and
future success. One would expect that organizations have shifted the spotlight of their
recruitment and selection procedures accordingly. However, of the aforementioned
UK businesses, only 29 % listed creativity among their personnel selection criteria
and most appraisal processes still favor conscientiousness over innovativeness (Patterson et al. 2009).
Measures of creative ability do not enjoy the same reputation as general mental ability
and personality tests or other methods to gauge an applicant's aptitude for the job, such
as the evaluation of CVs, interviews, and assessment centers (see Robertson and Smith
2001). The marketing literature has also paid scant attention to the measurement of creative ability, and individual differences in creative ability are largely neglected as
an explanatory variable in marketing research (e.g., Goldenberg et al. 1999; Sellier
and Dahl 2011). According to a review of applied creativity research in business by
Kabanoff and Rossiter (1994), however, the largest factor contributing to the creative
output of the firm is the creative ability of the individual. The present article is the
first to explore the operational validity of different assessment methods of creative
ability in the typically small and relatively homogenous pools of subjects that
companies have to deal with (Scullen and Meyer 2012). An additional requirement
for use in marketing research and practice is that measures of creative ability should preferably be brief and easy to administer (Scratchley and Hakstian 2001).
Most studies that investigate the validity of selection instruments are based on the
(implicit) assumption that companies have an infinite pool of subjects to choose from.
From this theoretical perspective, it makes sense to correct the observed validity coefficients for range restriction and unreliability in the measures to get at the true population
validity coefficient (Schmitt 2007). However, companies have to select new recruits or
incumbents from a finite pool of subjects, with sample sizes of less than 20 being the rule
rather than the exception (Scullen and Meyer 2012). Moreover, the subjects are likely to be
similar on a number of background variables (e.g., job qualifications). Because of differences between theoretical and real-world hiring situations (Scullen and Meyer 2012), scholars have urged researchers to examine validity coefficients in small-scale hiring contexts (Heneman et al. 2000). Such studies usually fail to attract attention from researchers because of limited generalizability and reduced statistical power (Scherbaum 2005).
This article presents the results of two validation studies in small-scale hiring contexts.
Study 1 (n=20) examines the performance of three brief assessment methods of
creative ability, viz. test ratings, CV ratings, and self-ratings, in a laboratory setting
with prospective marketing employees (i.e., marketing students). Study 2 (n=24)
further investigates the convergent validity of the self-ratings and test ratings together
with supervisory ratings of creative ability in a field setting with current employees of
a marketing agency. Small-n studies are often criticized for being merely descriptive and atheoretical. However, small-n studies do rely, albeit often implicitly, on
interpretive theory to provide meaning to the facts that these studies generate
(Coppedge 2007). The studies presented in this article are grounded in the creative
cognition literature (e.g., Finke et al. 1992; Plucker and Renzulli 1999), which will be
briefly discussed next.
2 The construct and measurement of creative ability
Creativity can be defined as the generation of ideas or solutions that are both novel
and useful (Amabile 1983) and is generally considered to be the result of an interplay
between personal factors, such as knowledge, skills, abilities and motivation, and
contextual factors, such as creativity training and incentives (Amabile 1983; Kilgour
and Koslow 2009). Goldenberg et al. (1999), for example, trained subjects in the
use of creative templates derived from an analysis of patterns in previous
innovations in order to boost their creative performance. One of the templates
advises people to link previously unconnected attributes, e.g., making the color of a glass dependent on the temperature of its contents. The use of templates
influences the creative process by channeling thinking along predefined inventive routes (Goldenberg et al. 1999).
The interactionist perspective on creativity (e.g., Woodman et al. 1993) argues that
process- or context-related variables, such as providing subjects with templates
(Goldenberg et al. 1999) or adding task constraints (e.g., Sellier and Dahl 2011),
may interact with a person's innate creative ability. The use of creative templates
could, for example, boost the performance of noncreative individuals, but may prove
less effective for highly creative individuals. To avoid confounding the validity coefficients with such interaction effects, this article focuses exclusively on the relationship
between the creative ability of the person and creative outcomes, without interfering
in the creative process or context.
What, then, does a person's creative ability consist of? Creative tasks are typically
ill-defined with many parameters left unspecified, which opens up a vast solution
space (Finke et al. 1992). According to creative cognition theories, people have a
natural tendency to search for ideas within bounded areas of their knowledge
network, i.e., they follow a path of least resistance (Ward 1994). The first ideas that
come to mind are usually the most obvious ones, based on easily accessible knowledge structures (Rietzschel et al. 2007). For example, when asked to draw a new type
of alien, people tend to incorporate features of known earth animals (Marsh et al.
1999). Divergent thinking is needed to think outside the box and to generate more creative ideas (Guilford 1967; Torrance 1974). Divergent thinking comprises the
following cognitive subskills: fluency, flexibility, and originality. Fluency is the
ability to generate a large number of ideas in response to a problem. Originality is
the ability to come up with novel, unusual ideas. Flexibility refers to the ability to
produce ideas in different categories. Divergent thinking is necessary, but not sufficient. Turning a novel idea into something meaningful requires convergent thinking
(Kilgour and Koslow 2009). Elaboration, which is the ability to work out an idea and
embellish it with details (Torrance 1974), is a more convergent cognitive subskill
(Kris 1952).
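To make these subskill definitions concrete, the following sketch scores one respondent's idea list on fluency, flexibility, and originality. It is an illustration in the spirit of Torrance-style scoring, not the TTCT/ATTA procedure; the 5 % rarity cut-off and the category mapping are assumptions.

```python
from collections import Counter

def divergent_thinking_scores(ideas, idea_category, all_respondents_ideas):
    """Illustrative Torrance-style subskill scores for one respondent.

    ideas: list of this respondent's idea strings
    idea_category: dict mapping each idea to a semantic category label
    all_respondents_ideas: list of idea lists (one per respondent),
        used to estimate how rare, i.e., original, an idea is
    """
    fluency = len(ideas)                                   # number of ideas
    flexibility = len({idea_category[i] for i in ideas})   # distinct categories
    # Originality: ideas mentioned by fewer than 5 % of respondents
    # (a common convention, assumed here for illustration).
    counts = Counter(i for resp in all_respondents_ideas for i in set(resp))
    n = len(all_respondents_ideas)
    originality = sum(1 for i in ideas if counts[i] / n < 0.05)
    return {"fluency": fluency, "flexibility": flexibility,
            "originality": originality}
```

Elaboration, by contrast, requires judging how far each idea is worked out and embellished, and therefore resists such simple counting.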
For a comparative evaluation of different assessment methods of creative ability, it
is imperative to clearly define the construct of interest (Robertson and Smith 2001). A
good construct definition (see Rossiter 2002) includes the focal object (i.e., the
person) and the attribute on which the focal object is to be evaluated (i.e., creative
ability, comprising fluency, originality, flexibility, and elaboration). The rater entity,
i.e., the person who provides the object-on-attribute judgments, is an often neglected yet important aspect. For example, customer ratings of IBM's service quality do not have to produce the same estimates as manager ratings of IBM's service quality
(Rossiter 2002). For the validation studies that are presented next, measures were
sought that rate the same object on the same attribute, but deploy different assessment
methods or rater entities.
3 Study 1: Prospective marketing employees
This study involved prospective marketing employees, i.e., marketing students who
were close to graduation. They usually do not have a portfolio of past creative work.
Recruiters thus have to resort to other indicators of creative ability. Three brief
assessment methods with different rater entities were scrutinized: a creative ability
test scored by trained raters (test ratings), CVs rated by recruitment and selection
professionals (CV ratings), and a questionnaire completed by the individual (self-ratings). Cognitive ability tests have been found to be good, if not the best, predictors
of job performance across a wide range of occupations (e.g., Robertson and Smith
2001). CVs are a widely used method of selection that has received surprisingly little
attention from researchers (Robertson and Smith 2001). Finally, self-rated questionnaires can convey information about the person that may be difficult to observe
by external raters.
3.1 Study framework
Figure 1 depicts the framework for this study with the three predictor variables (left-
hand side), viz. test ratings, student CV ratings, and self-ratings, and two criterion
variables (right-hand side). The first criterion measure, viz. creative performance,
concerns the subject's performance on a real-life creative marketing task as rated by
marketing experts (work sample ratings). The second criterion measure, viz. early
career creativity, concerns an assessment of the subject's creative ability as rated by
recruitment and selection professionals based on early career CVs (i.e., 5 years after
the collection of the other measures displayed in Fig. 1, indicated by T2 and T1,
respectively). Criterion-related validity coefficients are commonly reported for individual methods (Robertson and Smith 2001). In practice, however, multiple methods are often used to assess the candidate's likely performance. It is therefore worthwhile
to also investigate the incremental validity of the methods, i.e., the extent to which
they provide non-overlapping information (Robertson and Smith 2001).
3.2 Hypotheses
The hypotheses will be largely based on the vast body of research on personnel
selection in general, which has been summarized in meta-analytical studies (e.g.,
Schmidt and Hunter 2004). Empirical studies on the relationship between assessment
methods of creative ability and creative performance in business are not as ubiquitous
as those for other cognitive abilities, such as general mental ability, with overall job performance as the criterion. In the absence of evidence specific to creative ability, these general findings will be used to formulate the hypotheses.
First, with work sample ratings as the criterion, one may expect higher operational
validity for the test ratings of creative ability than for the student CV ratings or the
self-ratings, since both the test ratings and the work sample ratings are derived from
the subject's actual performance on a creative task, one being general in nature and
the other one domain-specific. Cognitive ability tests, in general, have demonstrated
high validity coefficients (of about 0.5) with overall job performance as the criterion
(see Robertson and Smith 2001; Schmidt and Hunter 2004). As for the self-ratings,
there is some evidence that suggests that the validity coefficient with supervisory
ratings of creative performance as the criterion is about 0.3 (e.g., Tierney and Farmer
2004). Thus:
H1 Test ratings of creative ability have higher operational validity with creative
performance on a marketing task (work sample ratings) as the criterion than
student CV ratings and self-ratings.
For the second criterion measure, viz. the expert-rated creative ability based on
early career CVs, one may expect a different pattern. The student CV ratings are
likely to show high operational validity, as past behavior is usually a good predictor
of future behavior. Moreover, the information listed in the early career CVs and the
student CVs partially overlaps. Self-ratings of creative ability represent a belief in
one's ability to perform well on creative tasks (Tierney and Farmer 2004) or in one's aptitude for a creative job and may therefore steer career decisions. The extent to which a person's innate (test-rated) creative ability eventually translates into a
creative career, however, will depend on many other personal and contextual factors
(Amabile 1983). Work samples, such as a real-life marketing task, have generally
demonstrated high validity coefficients (of about 0.5) with overall job performance as
the criterion (see Robertson and Smith 2001). Thus:
H2 Student CV and self-ratings of creative ability and work sample ratings of
creative performance have higher operational validity with early career creativity (CV ratings) as the criterion than test ratings.
The question as to whether the three brief assessment methods of creative ability
are able to pick up unique, non-overlapping information that is relevant to the
criterion measures will be answered empirically.

[Fig. 1 Conceptual framework of study 1. Test ratings, student CV ratings, and self-ratings (all measured at T1 = 2005) are indicators of the latent construct Creative Ability (T1), which predicts Creative Performance (T1; indicator: work sample rating; R² = .48) and Early Career Creativity (T2 = 2010; indicator: early career CV rating; R² = .35). The path coefficients shown are .85, .69, .39, .90, and .56 (all p < .01) and .25 (p < .05).]

Personnel selection research has
shown that combining methods (e.g., cognitive ability tests and work samples) can
lead to higher validity coefficients (see Robertson and Smith 2001). But it is unclear
whether these findings also translate to the narrower concept of creative performance when the predictor measures are intended to assess the focal object on exactly the same attribute.
3.3 Participants and procedure
Twenty marketing students (age, 19–28; gender, 35 % female; education level,
70 % master level) from a Dutch university participated in return for a
monetary reward (30 Euros). The participants were asked to design a creative
marketing campaign for a well-known beer brand based on a real-life campaign
brief (including marketing objectives, the desired theme, and some practical
constraints). The participants first completed a creative ability questionnaire. In
the university's laboratory, they then collectively took a creative ability test, after which they worked on the task individually (maximum, 3 h). They were also asked to submit their student CVs and, 5 years later, their early career CVs. Three students could not be tracked down after 5 years.
3.4 Measures
Test ratings of creative ability (predictor) The Torrance Tests of Creative Thinking
(TTCT; Torrance 1974) that measure the four cognitive subskills of creative ability,
viz. fluency, flexibility, originality and elaboration, are by far the most commonly
used tests of creative ability (Plucker and Renzulli 1999, p. 39). A 15-min version of the TTCT is also available, namely the Abbreviated Torrance Test for Adults (ATTA; Goff and Torrance 2002), which is more suitable for use in practice (see
Althuizen et al. 2010). This test consists of one verbal and two figural tasks each with
a 3-min time limit. The verbal task asks respondents to imagine a hypothetical
situation (e.g., that you could walk on air) and to list as many problems as possible
that might occur. The other tasks ask for figural responses in the form of pictures
drawn by the testee, prompted by stimuli. Two trained raters independently scored all
test booklets using a detailed scoring manual. The cognitive subskill scores are
summed across tasks and then converted into normalized scores. The average across
subskills forms the subject's test rating of creative ability. The inter-rater reliability coefficient (intraclass correlation (ICC) requiring absolute agreement; see Shrout and Fleiss 1979) was 0.96. Hence, the scores of the two raters were averaged. The mean test rating of creative ability was 16.0 (SD = 1.3) on a scale from 11 (low) to 19 (high).
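As a worked illustration of this reliability check, the sketch below computes the two-way, absolute-agreement ICC for single raters (ICC(2,1) in the Shrout and Fleiss 1979 taxonomy) from the ANOVA mean squares. The rating matrix is made-up example data, not the study's scores.

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    ratings: (n_subjects, k_raters) array of scores."""
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape
    grand = x.mean()
    ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()    # between subjects
    ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()    # between raters
    ss_err = ((x - grand) ** 2).sum() - ss_rows - ss_cols  # residual
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical ATTA-style scores for five subjects by two raters.
print(round(icc_2_1([[15, 16], [17, 17], [14, 15], [18, 18], [16, 16]]), 2))
```

The consistency version used later for the CV and work sample ratings omits the rater variance term (MSC) from the denominator, so it ignores systematic leniency differences between raters.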
Self-ratings of creative ability (predictor) The Abedi Test of Creativity (ATC; see
Auzmendi et al. 1996) is a 56-item, multiple-choice questionnaire that measures
the same four cognitive subskills by means of self-ratings. An example of an
item for fluency is: "If you had to participate in a contest in which you were asked to come up with as many words as possible which began with the letter 'J', how would you do?" (1 = poorly, 2 = okay, 3 = very well). An example for originality is: "Do people think that you come up with unique ideas?" (1 = no, 2 = sometimes, 3 = often). Completing the ATC takes about 15 min, and scoring the answers does not require expert judgment. The average of the item scores, up to two decimal places, forms the subject's self-rating of creative ability (M = 2.37, SD = 0.20).
CV ratings of creative ability (predictor and criterion) The CVs contained information on education, work experience, activities, interests, skills, and hobbies. As it is difficult to assess cognitive subskills based on a CV only, the following two items were used: "Based on your reading of the CV, how would you rate the creative ability of this person?" (1 = minimal to 7 = substantial) and "How likely is it that this person receives an invitation for an interview for a creative marketing job?" (1 = not likely at all to 7 = very likely). Four recruitment and selection professionals agreed to rate the subjects' creative ability and were randomly assigned to either the student CVs or the early career CVs. After checking the reliability of the two items within judges (Cronbach's α > 0.88), the item scores were averaged into a single CV rating of creative ability. The inter-rater reliability coefficient (ICC requiring consistency; see Shrout and Fleiss 1979) was 0.69 for the student CVs and 0.82 for the early career CVs. Hence, the scores were averaged across raters (student CV ratings: M = 4.56, SD = 1.35; early career CV ratings: M = 3.94, SD = 1.27).
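For completeness, here is a minimal sketch of the within-judge reliability check, computing Cronbach's α from a respondents-by-items score matrix (the ratings below are hypothetical, not the study's data):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha. scores: (n_respondents, k_items) array."""
    x = np.asarray(scores, dtype=float)
    k = x.shape[1]
    item_vars = x.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_var = x.sum(axis=1).var(ddof=1)    # variance of the total score
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical two-item CV ratings from one judge for six subjects.
print(round(cronbach_alpha([[5, 6], [3, 3], [6, 6], [2, 3], [4, 5], [7, 6]]), 2))
```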
Work sample rating of creative performance (criterion) The campaign proposals were rated on two commonly used dimensions of creativity (see Amabile 1983; Rubera et al. 2010), viz. novelty (three items: uniqueness, originality, and surprise) and usefulness (four items: effectiveness, fit with the brand, appeal, and usefulness), on 11-point scales (0 = lowest to 10 = highest; cf. Kilgour and Koslow 2009). Two agency directors and three brand managers, of whom two had served as chairman for the annual Marketing Campaign Awards, agreed to independently rate all proposals. Following a dimensionality and reliability check within judges (all Cronbach's α > 0.83), the item scores were combined into single novelty and usefulness scores. To calculate a creative performance score, the mean of the novelty and usefulness scores was taken (Shalley et al. 2004). The inter-rater reliability coefficient (ICC requiring consistency) was 0.72. Hence, the creative performance scores were averaged across judges. The mean work sample rating of creative performance was 5.80 (SD = 0.68).
Other potential indicators of creative ability Recruiters commonly use grade point averages (GPAs) to screen college applicants. Permission was therefore obtained to access the students' grade records (after they had graduated). All course grades are reported on an 11-point decimal scale ranging from 0 (lowest) to 10 (highest). The records were incomplete for four participants. The overall GPAs were broken down into a bachelor GPA (M = 6.88, SD = 0.47) and a master GPA (M = 7.15, SD = 0.43), as partial measures of GPAs, such as in-major GPAs, have been found to be more predictively valid for job performance (McKinney et al. 2003). The grade for the thesis (M = 7.47, SD = 0.61) was also included, since writing a thesis arguably allows for more creative expression than taking an exam.
3.5 Results
Table 1 shows the bivariate correlation coefficients between the predictor and the
criterion measures, which is the most straightforward statistic for assessing the
operational validity of the assessment methods.
Performance of the predictor variables The correlation coefficient between the test ratings and the self-ratings is significant (r = 0.46, p = 0.044) but, in line with H1, the test ratings predict the students' creative performance on the marketing task (work sample ratings) more accurately (r = 0.59, p = 0.006 versus r = 0.36, p = 0.117). The operational validity of the test ratings is quite impressive and comparable to the average validity coefficient of about 0.5 for general mental ability tests with overall job performance as the criterion. However, contrary to H1, the coefficient for the student CV ratings is even higher (r = 0.66, p = 0.004), although not significantly different from the coefficient for the test ratings (p = 0.749, based on a Fisher r-to-z transformation). Surprisingly, the student CVs contain sufficient information for professionals to accurately discriminate between individuals with low and high test ratings of creative ability, despite the rather homogeneous sample. As expected (H2), student CV ratings (r = 0.54, p = 0.025) and work sample ratings (r = 0.57, p = 0.009) have higher operational validity with early career CV ratings as the criterion than the test ratings (r = 0.40, p = 0.109), but also higher than the self-ratings (r = 0.25, p = 0.338). GPAs are not significantly related to the criterion measures (see Table 1). Thus, they should not be used as indicators of creative ability (GPAs are more likely to reflect general mental ability), although the correlation between the master thesis grade and the test ratings is marginally significant (r = 0.42, p = 0.075).
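The comparison of the two validity coefficients can be sketched as follows, under the simplifying assumption of independent correlations. Comparing correlations that share the same sample and criterion strictly calls for a dependent-correlations test (e.g., Steiger's), so this p-value need not match the reported 0.749 exactly.

```python
from math import atanh, sqrt
from scipy.stats import norm

def fisher_z_compare(r1, n1, r2, n2):
    """Two-sided p-value for H0: rho1 == rho2 (independent-samples version)."""
    z1, z2 = atanh(r1), atanh(r2)           # Fisher r-to-z transform
    se = sqrt(1 / (n1 - 3) + 1 / (n2 - 3))  # SE of the difference in z-scores
    return 2 * (1 - norm.cdf(abs(z1 - z2) / se))

# Student CV ratings (r = 0.66) versus test ratings (r = 0.59), both n = 20.
print(round(fisher_z_compare(0.66, 20, 0.59, 20), 3))
```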
When combined, the test ratings, student CV ratings, and self-ratings are able to explain 48 % of the variance in the work sample ratings of creative performance
Table 1 Correlation coefficients between predictor and criterion measures in study 1

                                                      1      2      3      4      5      6      7
Creative ability measures (predictors)
1. Test ratings
2. Student CV ratings                                 0.67*
3. Self-ratings                                       0.46*  0.16
Other potential indicators
4. Bachelor GPA                                       0.15   0.04   0.33
5. Master GPA                                         0.15   0.14   0.05   0.40
6. Thesis grade                                       0.42** 0.39   0.24   0.34   0.64*
Criterion measures
7. Work sample ratings (creative performance)         0.59*  0.66*  0.36   0.14   0.13   0.09
8. Early career CV ratings (early career creativity)  0.40   0.54*  0.25   0.13   0.09   0.20   0.57*

N = 20. Missing values for student and early career CVs (n = 17), bachelor GPA (n = 18), master GPA (n = 16), and thesis grade (n = 19) are replaced by the mean value
*p < 0.05, **p < 0.10
based on a partial least squares analysis (smartPLS) of the model depicted in Fig. 1.
This is 7 % (for the student CV ratings) to 36 % (for the self-ratings) more than the
assessment methods can explain individually. Using a combination of test ratings and
student CV ratings seems sufficient, however, as they already explain 47 % of the
variance in the work sample ratings. In combination with the work sample ratings, the
three methods are able to explain 35 % of the variance in the early career CV ratings,
which is 6 % (student CV ratings) to 28 % (self-ratings) more than the methods can
explain individually. It should be noted that the work sample ratings alone already
explain 32 % of the variance in the early career CV ratings. However, obtaining work sample ratings is more cumbersome.
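The incremental validity figures above come from a PLS model of Fig. 1. A simpler way to gauge how much non-overlapping information the methods carry is to compare the R² of a regression on all predictors with the R² of each predictor alone, as sketched below with simulated data (the study's raw scores are not reproduced here).

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an ordinary least squares regression of y on X (with intercept)."""
    y = np.asarray(y, dtype=float)
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1 - (y - X @ beta).var() / y.var()

rng = np.random.default_rng(0)
test = rng.normal(0, 1, 20)                          # simulated test ratings
cv = 0.7 * test + rng.normal(0, 0.7, 20)             # correlated CV ratings
y = 0.5 * test + 0.4 * cv + rng.normal(0, 0.7, 20)   # simulated work sample ratings

print(round(r_squared(test.reshape(-1, 1), y), 2))          # one predictor alone
print(round(r_squared(np.column_stack([test, cv]), y), 2))  # both: incremental gain
```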
4 Study 2: Current employees of a marketing agency
Study 1 revealed that self-ratings of creative ability have lower operational validity
than test ratings and student CV ratings. Hunter and Hunter (1984) argue that the
validity of self-ratings drops drastically for individuals who have not received
sufficient feedback on the ability of interest, such as the marketing students in study 1.
Study 2 therefore involves current marketing employees and investigates the conver-
gent validity of test ratings, self-ratings, and supervisory ratings of creative ability.
One would expect that supervisors, who are responsible for recruiting and selecting
creative marketing personnel, are better able to assess their employees' creative
ability when they have had more time to observe them. Thus:
H3 Test ratings, self-ratings, and supervisory ratings of creative ability show higher
convergent validity for senior and long-time employees than for juniors and
recent recruits.
4.1 Participants, procedure and measures
Twenty-four employees of a Dutch marketing agency (annual gross income, five million euros), varying from art directors and designers to IT specialists (age, 20–37; 50 % female; tenure, from a few months up to 8 years), agreed to complete the self-rated ATC and the ATTA test. Five years later, information was collected on the
participants current jobs. The same raters as in study 1 independently scored the
ATTA test booklets. The inter-rater reliability (ICC requiring absolute agreement) for the test ratings was 0.99. Hence, the test ratings were averaged across raters (M = 16.5, SD = 1.5). The mean for the self-ratings of creative ability was 2.32 (SD = 0.20).
The CEO and the creative director agreed to independently rate their employees' overall creative ability as well as the four cognitive subskills, viz. fluency, flexibility, originality, and elaboration. For this purpose, a six-item measurement scale was devised based on the subskill definitions provided in the ATTA manual (Goff and Torrance 2002). A sample item for flexibility is: "How would you rate the ability of this person to approach a task from different points of view?" (1 = minimal to 7 = substantial). The inter-rater reliability coefficient (ICC requiring consistency) was 0.78 for the single-item supervisory ratings and 0.88 for the composite supervisory ratings (i.e., the average of the six subskill items). The scores were thus averaged across supervisors. The mean supervisory rating of creative ability was 4.6 (SD = 0.99) for the single-item measure and 4.3 (SD = 0.70) for the composite measure.
4.2 Results
Table 2 shows the correlation coefficients. The test ratings and self-ratings are highly correlated (r = 0.70, p < 0.001), but they are only moderately correlated with the single-item supervisory ratings (test ratings: r = 0.34, p = 0.107; self-ratings: r = 0.48, p = 0.017) and the composite supervisory ratings (test ratings: r = 0.31, p = 0.141; self-ratings: r = 0.36, p = 0.087). The remainder of Table 2 shows the correlation coefficients for employees who had been with the company for less than 2 years (recent recruits, n = 10) versus 5 years or more (long-time employees, n = 9) and for juniors (25 years and younger, n = 9) versus seniors (30 years and older, n = 8). The differences are as expected (H3), but the magnitude is striking. Overall, the three assessment methods
Table 2 Means, standard deviations, and correlation coefficients for the creative ability measures in study 2

Variables                              Mean   SD     1      2      3
All employees (n = 24)
1. Test ratings (ATTA)                 16.5   1.5
2. Self-ratings (ATC)                  2.32   0.20   0.70*
3. Supervisory ratings (composite)     4.3    0.70   0.31   0.36**
4. Supervisory ratings (single item)   4.6    0.99   0.34   0.48*  0.77*
Recent recruits (≤2 years; n = 10)
1. Test ratings (ATTA)                 16.9   1.1
2. Self-ratings (ATC)                  2.41   0.11   0.35
3. Supervisory ratings (composite)     4.3    0.74   0.15   0.39
4. Supervisory ratings (single item)   4.8    1.2    0.02   0.61** 0.83*
Long-time employees (≥5 years; n = 9)
1. Test ratings (ATTA)                 16.3   2.0
2. Self-ratings (ATC)                  2.29   0.22   0.82*
3. Supervisory ratings (composite)     4.4    0.71   0.58   0.64*
4. Supervisory ratings (single item)   4.5    0.87   0.79*  0.70*  0.69*
Juniors (≤25 years; n = 9)
1. Test ratings (ATTA)                 16.7   1.2
2. Self-ratings (ATC)                  2.36   0.15   0.48
3. Supervisory ratings (composite)     4.1    0.62   0.01   0.01
4. Supervisory ratings (single item)   4.6    0.98   0.02   0.49   0.74*
Seniors (≥30 years; n = 8)
1. Test ratings (ATTA)                 16.2   2.2
2. Self-ratings (ATC)                  2.34   0.19   0.96*
3. Supervisory ratings (composite)     4.4    0.75   0.61   0.65**
4. Supervisory ratings (single item)   4.6    0.83   0.69** 0.60   0.66**

*p < 0.05, **p < 0.10
are highly convergent for seniors and long-time employees, but not for juniors and
recent recruits (see Table 2).
If the test ratings are operationally valid, high scores should be overrepresented among art directors and senior designers (n = 6), as they are hired to perform creative tasks (Kilgour and Koslow 2009). Sixty-seven percent (four out of six) of the art directors and designers scored a six or a seven on the test, which is significantly higher than the 16 % in Goff and Torrance's (2002) sample of the general population (χ²(1) = 11.123, p < 0.001). The other two scored a 4, which is below the level of creativity required for the job (as assessed by the two supervisors). Five years later, they had left the agency. Five employees had a higher test rating of creativity than strictly needed for the job. Five years later, two of them had been promoted and two had started their own company, which is arguably a creative endeavor.
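This overrepresentation check can be sketched as a one-sample chi-square (goodness-of-fit) test against the 16 % benchmark. Depending on how the expected proportion is rounded, the statistic comes out near, though not necessarily identical to, the reported 11.123.

```python
from scipy.stats import chisquare

n = 6                    # art directors and senior designers
observed = [4, 2]        # high scorers (a six or a seven) vs. the rest
p0 = 0.16                # share of high scorers in the norm sample
stat, p = chisquare(observed, f_exp=[n * p0, n * (1 - p0)])
print(round(stat, 3), round(p, 4))  # chi-square with 1 df and its p-value
```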
5 Conclusions and discussion
In a small-scale hiring context with prospective marketing employees, the combination
of test ratings and student CV ratings of creative ability demonstrated high operational
validity with creative performance and early career creativity as the criterion measures.
Self-ratings should be approached with more caution. Self-ratings of creative ability are
often used to investigate interaction effects between the employee and the work
environment (see Shalley et al. 2004). This article shows that self-ratings as well as
supervisory ratings may only provide reasonable and easy-to-administer alternatives
to test ratings when more senior and long-time employees are concerned, but not for
juniors and recent recruits. For studies on employee creativity, it may thus be
worthwhile to check whether the results hold for different groups of employees.
The use of self-ratings is even more questionable in high-stakes hiring contexts
(Hunter and Hunter 1984). The unexplained variance in the criterion measures in
study 1 indicates that creative ability alone is not a guarantee for creative performance
or early career creativity. Other selection instruments, such as personality tests and
assessment centers, may still prove valuable. Last but not least, establishing a fit
between the creative demands of a job and the employee's creative ability may lead to
more job satisfaction and less stress (Livingstone et al. 1997).
"Because sampling errors are explicitly included in the calculation of statistical tests, there should not be a bias against statistically significant results obtained from properly selected small samples" (Sawyer and Peter 1983, p. 124). One would expect that large
effect sizes obtained with small samples can be replicated easily (Sawyer and Peter
1983). Additional studies in other small-scale hiring contexts are nonetheless desirable to further strengthen confidence in the reported findings. A priori, however, there is no reason to expect that the operational validity of the investigated assessment methods will differ for other jobs for which creativity is an important component (see Scherbaum 2005). To stress the importance of finding good selection instruments, Hunter and Hunter (1984, pp. 72–73) argue that any instrument of lower validity would incur very high costs, because even minute differences in validity translate into large dollar amounts. Creativity is important as it can provide a competitive edge (1) to companies in an increasingly global marketplace and (2) to employees facing the automation of relatively structured tasks by computers, robotics, and other technologies.
Acknowledgments The author would like to thank the editor and two anonymous reviewers for their valuable comments and suggestions. Timothy Heath, Ayşe Öncüler, John Rossiter, and Berend Wierenga are thanked for their feedback and comments on earlier versions of the manuscript. The author is also grateful to Ed van Eunen, René Hendriks, Tom Wilms, Pieter Brusman, Eric Welles, Sebastiaan Bongers, Natasja Deelen, Jeanette Bisschop, Ragnhild Savenije, and Willem Royaards for their contributions to this research project.
References

Althuizen, N., Wierenga, B., & Rossiter, J. R. (2010). The validity of two brief measures of creative ability. Creativity Research Journal, 22, 53–61.
Amabile, T. M. (1983). The social psychology of creativity. New York: Springer.
Andrews, J., & Smith, D. C. (1996). In search of the marketing imagination: factors affecting the creativity of marketing programs for mature products. Journal of Marketing Research, 33, 174–187.
Auzmendi, E., Villa, A., & Abedi, J. (1996). Reliability and validity of a newly constructed multiple-choice creativity instrument. Creativity Research Journal, 9, 89–95.
Coppedge, M. (2007). Theory building and hypothesis testing: large- vs. small-N research on democratization. In G. Munck (Ed.), Regimes and democracy in Latin America: theories and findings. Oxford: Oxford University Press.
Finke, R. A., Ward, T., & Smith, S. (1992). Creative cognition: theory, research, and applications. Cambridge: MIT Press.
Goff, K., & Torrance, E. P. (2002). Abbreviated Torrance test for adults manual. Bensenville: Scholastic Testing Service.
Goldenberg, J., Mazursky, D., & Solomon, S. (1999). Toward identifying the inventive templates of new products: a channeled ideation approach. Journal of Marketing Research, 36, 200–210.
Guilford, J. P. (1967). The nature of human intelligence. New York: McGraw-Hill.
Heneman, R. L., Tansky, J. W., & Camp, S. M. (2000). Human resource practice in small and medium-sized enterprises. Entrepreneurship Theory and Practice, 25, 11–16.
Hunter, J. E., & Hunter, R. F. (1984). Validity and utility of alternative predictors of job performance. Psychological Bulletin, 96, 72–98.
IBM. (2010). Capitalizing on complexity: insights from the global chief executive officer study. Somers: IBM.
Kabanoff, B., & Rossiter, J. R. (1994). Recent developments in applied creativity. In C. L. Cooper & I. T. Robertson (Eds.), International review of industrial and organizational psychology (pp. 283–324). London: Wiley.
Kilgour, M., & Koslow, S. (2009). Why and how do creative thinking techniques work? Trading off originality and appropriateness to make more creative advertising. Journal of the Academy of Marketing Science, 37, 298–309.
Kris, E. (1952). Psychoanalytic explorations in art. New York: International Universities Press.
Livingstone, L. P., Nelson, D. L., & Barr, S. H. (1997). Person-environment fit and creativity: an examination of supply-value and demand-ability versions of fit. Journal of Management, 23, 119–146.
Marsh, R. L., Ward, T. B., & Landau, J. (1999). The inadvertent use of prior knowledge in a generative cognitive task. Memory and Cognition, 27, 94–105.
McKinney, A. P., Carlson, K. D., Mecham, R. L., D'Angelo, N. C., & Connerley, M. L. (2003). Recruiters' use of GPA in initial screening decisions: higher GPAs don't always make the cut. Personnel Psychology, 56, 823–845.
Patterson, F., Kerrin, M., Gatto-Roissard, G., & Coan, P. (2009). Everyday innovation: how to enhance innovative working in employees and organisations. London: NESTA.
Plucker, J. A., & Renzulli, J. S. (1999). Psychometric approaches to the study of human creativity. In R. J. Sternberg (Ed.), Handbook of creativity (pp. 35–61). Cambridge: Cambridge University Press.
Rietzschel, E. F., Nijstad, B. A., & Stroebe, W. (2007). Relative accessibility of domain knowledge and creativity: the effects of knowledge activation on the quantity and originality of generated ideas. Journal of Experimental Social Psychology, 43, 933–946.
Robertson, I. T., & Smith, M. (2001). Personnel selection. Journal of Occupational and Organizational Psychology, 74, 441–472.
Rossiter, J. R. (2002). The C-OAR-SE procedure for scale development in marketing. International Journal of Research in Marketing, 19, 305–335.
Rubera, G., Ordanini, A., & Mazursky, D. (2010). Toward a contingency view of new product creativity: assessing the interactive effects of consumers. Marketing Letters, 21, 191–206.
Sawyer, A., & Peter, J. P. (1983). The significance of statistical significance tests in marketing research. Journal of Marketing Research, 20, 122–133.
Scherbaum, C. A. (2005). Synthetic validity: past, present and future. Personnel Psychology, 58, 481–515.
Schmidt, F. L., & Hunter, J. E. (2004). General mental ability in the world of work: occupational attainment and job performance. Journal of Personality and Social Psychology, 86, 162–173.
Schmitt, N. (2007). The value of personnel selection: reflections on some remarkable claims. Academy of Management Perspectives, 21, 19–23.
Scratchley, L. S., & Hakstian, A. R. (2001). The measurement and prediction of managerial creativity. Creativity Research Journal, 13, 367–384.
Scullen, S. E., & Meyer, B. C. (2012). More applicants or more applications per applicant? A big question when pools are small. Journal of Management. doi:10.1177/0149206312438774.
Sellier, A.-L., & Dahl, D. W. (2011). Focus! Creative success is enjoyed through restricted choice. Journal of Marketing Research, 48, 996–1007.
Shalley, C. E., Zhou, J., & Oldham, G. R. (2004). The effects of personal and contextual characteristics on creativity: where should we go from here? Journal of Management, 30, 933–958.
Shrout, P. E., & Fleiss, J. L. (1979). Intraclass correlations: uses in assessing rater reliability. Psychological Bulletin, 86, 420–428.
Tierney, P., & Farmer, S. M. (2004). The Pygmalion process and employee creativity. Journal of Management, 30, 413–432.
Torrance, E. P. (1974). The Torrance tests of creative thinking: norms-technical manual. Lexington: Personnel Press.
Ward, T. B. (1994). Structured imagination: the role of category structure in exemplar generation. Cognitive Psychology, 27, 1–40.
Woodman, R. W., Sawyer, J. E., & Griffin, R. W. (1993). Toward a theory of organizational creativity. Academy of Management Review, 18, 293–321.