
Raven's Advanced Progressive Matrices

International Technical Manual

Copyright 2011 NCS Pearson, Inc. All rights reserved.


No part of this publication may be reproduced or transmitted in any form or by any means,
electronic or mechanical, including photocopy, recording, or any information storage and retrieval system,
without permission in writing from the copyright owner.
Pearson, TalentLens, Watson-Glaser Critical Thinking Appraisal, and Raven's Progressive Matrices
are trademarks in the U.S. and/or other countries of Pearson Education, Inc., or its affiliate(s).
Portions of this work were previously published.
Produced in the United Kingdom

Contents
Chapter 1
Introduction .......................................................................................................1
Development of the 23-Item Form....................................................................................................2
Internal Consistency Reliability...........................................................................................................3
Content Validity.....................................................................................................................................3
Convergent Validity ..............................................................................................................................3
Criterion-Related Validity...................................................................................................................4
Equivalency Information ......................................................................................................................5
Global Applicability...............................................................................................................................7
Development of Raven's APM International Versions ..................................................................7

Chapter 2
Australia/New Zealand (English) ...................................................................9
Translation/Adaptation Process.................................................................................................................9
Sampling Procedure ......................................................................................................................................9
Item/Test Difficulty .......................................................................................................................................11
Distribution of Scores ................................................................................................................................11
Evidence of Reliability .................................................................................................................................12

Chapter 3
France (French)........................................................................................................13
Translation/Adaptation Process................................................................................................................13
Sampling Procedure ....................................................................................................................................13
Item/Test Difficulty .....................................................................................................................................14
Distribution of Scores ...............................................................................................................................15
Evidence of Reliability ................................................................................................................................15

Chapter 4
India (English) .........................................................................................................17
Translation/Adaptation Process.......................................................................................................17
Sampling Procedure ............................................................................................................................17
Item/Test Difficulty .............................................................................................................................18
Distribution of Scores .......................................................................................................................19
Evidence of Reliability ........................................................................................................................19

Chapter 5
The Netherlands (Dutch) ...............................................................................21
Translation/Adaptation Process...........................................................................................................21
Sampling Procedure ................................................................................................................................21
Item/Test Difficulty .................................................................................................................................23
Distribution of Scores ...........................................................................................................................23
Evidence of Reliability ............................................................................................................24

Chapter 6
The UK (English) .................................................................................................25
Translation/Adaptation Process...........................................................................................................25
Sampling Procedure ................................................................................................................................25
Item/Test Difficulty .................................................................................................................................27
Distribution of Scores ...........................................................................................................................28
Evidence of Reliability ............................................................................................................................28

Chapter 7
The US (English) ................................................................................................29
Sampling Procedure ..............................................................................................................................29
Item/Test Difficulty ...............................................................................................................................30
Distribution of Scores .........................................................................................................................31
Evidence of Reliability ..........................................................................................................................31

Appendix A
Best Practices in Administering and Interpreting the APM .................................33
Administrator's Responsibilities ........................................................................................33
Assessment Conditions .......................................................................................................................33
Answering Questions ..........................................................................................................................34
Administration.......................................................................................................................................34
Understanding the Scores Reported ...............................................................................................35
Maintaining Security of Results and Materials ................................................................................35
Sources of Additional Best Practice Information...........................................................................35
Instructions for Administering the APM Online............................................................................37
APM Short Test Administration Instructions: Paper and Pen ..................................................41

References ..........................................................................................................43
Tables
1.1 Descriptive Statistics of the APM by Test Version and Administration Order....................6
1.2 Reliability Estimates by APM Test Version and Administration Order .................................6
1.3 Descriptive Statistics and Internal Consistency Reliability Estimates for Raven's
APM Across Countries........................................................................................................................7
2.1 Demographic Information for the Australia/New Zealand Sample .......................................10
2.2 Raven's APM Item Analysis Information for the Australia/New Zealand Sample ................11
2.3 Distribution of APM Scores in the Australia/New Zealand Sample ......................................11
2.4 Internal Consistency Reliability Estimates in the Australia/New Zealand Sample..............12
3.1 Demographic Information for the France Sample .....................................................................13
3.2 Raven's APM Item Analysis Information for the France Sample..............................................14
3.3 Distribution of APM Scores in the France Sample ....................................................................15
3.4 Internal Consistency Reliability Estimates in the France Sample ...........................................15
4.1 Demographic Information for the India Sample.........................................................................17
4.2 Raven's APM Item Analysis Information for the India Sample .................................................18
4.3 Distribution of APM Scores in the India Sample .......................................................................19
4.4 Internal Consistency Reliability Estimates in the India Sample...............................................19
5.1 Demographic Information for the Netherlands Sample ..........................................................22
5.2 Raven's APM Item Analysis Information for the Netherlands Sample ..................................23
5.3 Distribution of APM Scores in the Netherlands Sample .........................................................23
5.4 Internal Consistency Reliability Estimates in the Netherlands Sample ................................24
6.1 Demographic Information for the UK Sample ..........................................................................26
6.2 Raven's APM Item Analysis Information for the UK Sample...................................................27
6.3 Distribution of APM Scores in the UK Sample .........................................................................28
6.4 Internal Consistency Reliability Estimates in the UK Sample.................................................28
7.1 Demographic Information for the US Sample ...........................................................................29
7.2 Raven's APM Item Analysis Information for the US Sample....................................................30
7.3 Distribution of APM Scores in the US Sample ..........................................................................31
7.4 Internal Consistency Reliability Estimates in the US Sample..................................................31

Chapter 1
Introduction

The Raven's Progressive Matrices have been used in many countries for decades as a measure of
problem-solving and reasoning ability (Raven, Raven, & Court, 1998a). The various versions of the
Raven's Progressive Matrices have been studied in over 45 countries on samples totalling over 240,000
participants (Brouwers, Van de Vijver, & Van Hemert, 2009).
This manual describes the adaptation/translation of the latest 23-item version of the Raven's Advanced
Progressive Matrices (APM) for the US, Australia/New Zealand, France, India, the Netherlands and the
UK. From an international perspective, several enhancements were made to facilitate cross-country
score comparisons, and to standardise the testing experience for administrators and participants. These
enhancements include:

Use of a uniform test format and common test content across countries

Uniform scoring and reporting of scores across countries

Availability of local manager norms for each country, based on a common definition of manager
across countries

Implementation of a common set of items and administration time across countries (i.e., 23
items; 40 minutes).

Description of the Raven's Advanced Progressive Matrices


The Raven's Advanced Progressive Matrices (APM) is a nonverbal assessment tool designed to measure
an individual's ability to perceive and think clearly, make meaning out of confusion, and formulate new
concepts when faced with novel information. The APM score indicates a candidate's potential for
success in such positions as executive, director, general manager, or equivalent high-level technical or
professional positions in an organisation. These categories of positions typically require high levels of
clear and accurate thinking, problem identification, holistic situation assessment, and evaluation of
tentative solutions for consistency with all available information. Each item in the APM comprises a
pattern of diagrammatic puzzles with one piece missing. The candidate's task is to choose the correct
missing piece from a series of possible answers.

Development of the Current 23-Item Form


The current revision of the APM was undertaken to provide customers with a shorter version of the
assessment that maintains the essential nature of the construct being measured and the psychometric
features of the assessment. The APM is a power assessment rather than a speeded assessment, even
though it has a time limit. Speeded assessments are typically composed of relatively easy items and rely
on the number of correct responses within restrictive time limits to differentiate performance among
candidates. In contrast, the APM items have a wide range of difficulty and a relatively generous time
limit, which makes it a power assessment. The 42-minute administration time for the current APM (40
minutes for 23 operational items in Part 1; 2 minutes for 2 experimental items in Part 2) maintains the
APM as an assessment of cognitive reasoning power rather than speed.
N.B. The paper-and-pencil format does not contain the Part 2 experimental items.

Classical Test Theory (CTT) and Item Response Theory (IRT) methodologies were used in the analyses of the
APM data for item selection. Specifically, for each of the 36 items in the previous APM version, the
following indices were examined to select items: item difficulty index (p value), corrected item-total
correlation, IRT item discrimination (a) parameter, and IRT item difficulty (b) parameter. Because the
APM was designed to differentiate among individuals with high mental ability, less discriminating items
were dropped from the current version of the APM. For the current APM revision, data were used
from 929 applicants and employees in a number of positions across various occupations. These
individuals took the APM within the period May 2006 through to October 2007. Five hundred and nine
of these individuals provided responses about their current position levels (e.g., Executive, Director,
Manager, and Professional/Individual Contributor). See the Appendix of the separate document, APM
Development 2007, for more details regarding the composition of the sample.
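The CTT indices named above (the p value and the corrected item-total correlation) can be sketched in a few lines. This is an illustrative computation over a hypothetical 0/1-scored response matrix, not Pearson's analysis code:

```python
import numpy as np

def ctt_item_stats(responses):
    """Classical test theory item statistics for a 0/1-scored response matrix.

    responses: shape (n_examinees, n_items); 1 = correct, 0 = incorrect.
    Returns the item difficulty index (p value, i.e., proportion correct)
    and the corrected item-total correlation for each item.
    """
    responses = np.asarray(responses, dtype=float)
    total = responses.sum(axis=1)
    p_values = responses.mean(axis=0)
    corrected = np.empty(responses.shape[1])
    for j in range(responses.shape[1]):
        rest_score = total - responses[:, j]   # total score excluding item j
        corrected[j] = np.corrcoef(responses[:, j], rest_score)[0, 1]
    return p_values, corrected
```

Items with p values near 1 are easy for the group tested; items with low corrected item-total correlations discriminate poorly and, as described above, are the candidates for removal.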

Internal Consistency Reliability


The internal consistency reliability estimate (split-half) for the APM total raw score was .85 in the US
standardisation sample (n=929). This reliability estimate for the 23-item version of the APM indicates
that the total raw score on the APM possesses good internal consistency reliability. Internal consistency
reliability estimates for each country-specific Manager norm group in the global data-collection effort are
summarised in each country-specific chapter within this manual.
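A split-half estimate of this kind is typically obtained by correlating scores on two half-tests and stepping the result up with the Spearman-Brown formula. A minimal sketch using an odd-even split (the manual does not state which split was used, so the split is an assumption):

```python
import numpy as np

def split_half_reliability(responses):
    """Odd-even split-half reliability with the Spearman-Brown correction.

    responses: shape (n_examinees, n_items), scored 0/1.
    """
    responses = np.asarray(responses, dtype=float)
    odd_half = responses[:, 0::2].sum(axis=1)    # items 1, 3, 5, ...
    even_half = responses[:, 1::2].sum(axis=1)   # items 2, 4, 6, ...
    r_half = np.corrcoef(odd_half, even_half)[0, 1]
    return 2 * r_half / (1 + r_half)             # step up to full-test length
```

The Spearman-Brown step is needed because the raw half-test correlation estimates the reliability of a test half as long as the full 23-item form.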

Content Validity
In an employment setting, evidence of content validity exists when an assessment includes a
representative sample of tasks, behaviours, knowledge, skills, abilities, or other characteristics necessary
to perform the job. Evidence of the content-related validity of the APM should be established by
demonstrating that the jobs for which the APM is to be used require the problem solving skills
measured by the assessment. Such evidence is typically documented through a thorough job analysis.

Convergent Validity
Evidence of convergent validity is provided when scores on an assessment relate to scores on other
assessments that claim to measure similar traits or constructs. Years of previous studies on the APM
support its convergent validity (Raven, Raven, & Court, 1998b). In a sample of 149 college applicants,
APM scores correlated .56 with math scores on the American College Test (Koenig, Frey, & Detterman,
2007). Furthermore, in a study using 104 university students, Frey and Detterman (2004) reported that
scores from the APM correlated .48 with scores on the Scholastic Assessment Test (SAT).
Evidence of convergent validity for the current version of the APM is supported by two findings. First, in
the standardisation sample of 929 individuals, scores on the current APM correlated .98 with scores on
the previous APM. Second, in a subset of 41 individuals from the standardisation sample, the revised
APM scores correlated .54 with scores on the Watson-Glaser Critical Thinking Appraisal – Short
Form (Watson & Glaser, 2006).

Criterion-Related Validity
Criterion-related validity addresses the inference that individuals who score better on an assessment
will be more successful on some criterion of interest (e.g., job performance). Criterion-related validity
for general mental ability tests like the APM is supported by validity generalisation. The principle of
validity generalisation refers to the extent that inferences from accumulated evidence of criterion-related
validity from previous research can be generalised to a new situation. There is abundant
evidence that measures of general mental ability, such as the APM, are significant predictors of overall
performance across jobs. For example, in its Principles for the Validation and Use of
Personnel Selection Procedures, SIOP (2003) notes that validity generalisation is well established for
cognitive ability tests. Schmidt and Hunter (2004) provide evidence that general mental ability "predicts
both occupational level attained and performance within one's chosen occupation and does so better
than any other ability, trait, or disposition and better than job experience" (p. 162). Prien, Schippmann,
and Prien (2003) observe that decades of research present "incontrovertible evidence supporting the
use of cognitive ability across situations and occupations with varying job requirements" (p. 55). Many
other studies provide evidence of the relationship between general mental ability and job performance
(e.g., Kolz, McFarland, & Silverman, 1998; Kuncel, Hezlett, & Ones, 2004; Ree & Carretta, 1998; Salgado
et al., 2003; Schmidt & Hunter, 1998; Schmidt & Hunter, 2004).

In addition to inferences based on validity generalisation, studies using the APM over the past 70 years
provide evidence of its criterion-related validity. For example, in a validation study of assessment
centres, Chan (1996) reported that scores on the Raven's Progressive Matrices correlated with ratings
of participants on initiative/creativity (r = .28, p < .05). Another group of researchers (Gonzalez,
Thomas, & Vanyukov, 2005) reported a positive relationship between scores on the Raven's APM and
performance in decision-making tasks. Fay and Frese (2001) found that APM scores were "consistently
and positively associated with an increase of personal initiative over time" (p. 120). Recently, Pearson
(2010) conducted a study of 106 internal applicants for management positions in which APM scores
were positively correlated with trained assessor ratings of thinking, influencing, and achieving. In
addition, manager applicants scoring in the top 30% of APM scores were 2–3 times more likely to
receive above-average ratings for the Case Study/Presentation Exercise, Thinking Ability, and
Influencing Ability than applicants in the bottom 30% of APM scores.

The APM Manual and Occupational User's Guide (Raven, 1994; Raven, Raven, & Court, 1998b) provide
additional information indicating that the APM validly predicts the ability to attain and retain jobs that
require high levels of general mental ability. The validity information presented in this manual is not
intended to serve as a substitute for locally obtained validity data. Local validity studies, together with
locally derived norms, provide a sound basis for determining the most appropriate use of the APM.
Therefore, users of the APM should study the validity of the assessment at their own location or
organisation.

Equivalency Information
Occasionally, customers inquire about the equivalence of online versus paper administration of the APM.
Studies of the effect of the medium of test administration have generally supported the equivalence of
paper and computerised versions of non-speeded cognitive ability tests (Mead & Drasgow, 1993). To
ensure that these findings held true for the Ravens Advanced Progressive Matrices, Pearson TalentLens
conducted an equivalency study using paper-and-pencil and computer-administered versions of the test.
The psychometric properties of the two forms were compared to determine whether the mode of
administration impacted scores and whether decision consistency could be assured across modes of
administration.


In this study, a counter-balanced design was employed using a sample of 133 adult participants from a
variety of occupations. Approximately half of the group (n = 68) completed the paper form followed by
the online version, while the other participants (n = 65) completed the tests in the reverse order. The
interval between test sessions ranged from 13 to 91 days (M = 22.9, SD = 12.1). Table 1.1 presents means,
standard deviations, and correlations obtained from an analysis of the resulting data. Analyses of the test
modes revealed that there was no significant difference in the examinees' raw scores on the APM
between the paper (M = 11.9, SD = 4.9) and online versions (M = 11.7, SD = 4.8), t(132) = -0.95, p = .34. The
total scores from the different versions were highly correlated (r = .78).
Table 1.1
Descriptive Statistics of the APM by Test Version and Administration Order

                                    APM Test Version
                                Paper           Online
Administration Order     N     M     SD      M     SD      r
Paper First             68   12.4   4.8    13.0   5.5    .85
Online First            65   11.5   4.8    10.3   3.7    .73
Total                  133   11.9   4.8    11.7   4.9    .78

Table 1.2 displays the reliability estimates of the paper and online versions for the different test
administration groups. These estimates fall within the acceptable range regardless of test mode or
administration order, providing additional support for equivalence.
Table 1.2
Reliability Estimates by APM Test Version and Administration Order

                                 APM Test Version
                              Paper               Online
Administration Order    rsplit   ralpha     rsplit   ralpha
Paper First              .88      .83        .88      .87
Online First             .86      .82        .75      .70
Total                    .86      .83        .85      .83

Global Applicability
The nonverbal aspect of the APM minimises the impact of language skills on performance on the
assessment. The fact that the Raven's shows less influence of cultural factors than more verbally laden
assessments has made it very appealing as a global measure of cognitive ability. The global exposure of the
Raven's abstract reasoning format has several important advantages for inclusion in a global selection
strategy. Specifically, its familiarity increases the likelihood of local management support, it promotes
positive participant reactions, and it helps ensure that scores aren't unduly influenced by language or
culture (see Ryan & Tippins, 2009, for more information on implementing a consistent and effective
global selection system). The following chapters provide important information to help multinational
companies incorporate the APM into a global selection system and make informed comparisons of
applicants' performance on the APM across countries and cultures.

Development of Raven's Advanced Progressive Matrices (APM) International Versions


Development for each country-specific version of the APM followed a uniform process focused on
adapting and translating the test instructions in a way that ensured consistent measurement of the
construct across countries. The international versions are based on the same 23-item set, including the
practice items, matrix stimuli, and response options. Table 1.3 presents a summary of results by country,
including the number of managers tested, characteristics of the score distribution, and total score
reliability.
Table 1.3
Descriptive Statistics and Internal Consistency Reliability Estimates for Raven's APM Across Countries

Country                   N     Mean     SD    Skewness   Kurtosis   ralpha   rsplit
Australia/New Zealand   128    11.95   4.17      0.02      -0.39      .77      .78
France                  106    14.33   4.09     -0.34      -0.09      .74      .79
India                   100     9.51   4.22      0.19      -0.80      .79      .82
Netherlands             103    13.01   4.53     -0.10      -0.64      .81      .83
UK                      101    12.38   4.72      0.04      -0.71      .83      .85
US                      175    12.23   4.14     -0.03      -0.13      .77      .81

Results showed that internal consistency reliability estimates across countries were adequate (rsplit =
.78–.85), and that sample homogeneity and differences in prior exposure to cognitive ability testing may
account for observed differences in raw score means across countries. Detailed information on the
collection and analyses of country-specific norms is provided throughout the remainder of this manual.


Chapter 2
Australia/New Zealand (English)
Translation/Adaptation Process
Instructions for the APM were reviewed and adapted by a team of test-development experts from the
Sydney, Australia Pearson TalentLens office.

Sampling Procedure
The Sydney, Australia office of Pearson TalentLens recruited and administered the online version of the
APM to 128 manager-level examinees across various industries. These individuals took the APM under
timed (40 minutes) and proctored (i.e., supervised) conditions within the period November 2009
through to September 2010. Table 2.1 provides the demographic data for the final sample of N = 128.


Table 2.1
Demographic Information for the Australia/New Zealand Sample

                                         N      Percent
Total                                   128     100.0
Education Level
  Year 11 or equivalent                   2       1.6
  Year 12 or equivalent                   4       3.1
  Certificate III / IV                    2       1.6
  Diploma                                 3       2.3
  Advanced Diploma                        2       1.6
  Bachelor                               28      21.9
  Graduate Certificate                    1       0.8
  Graduate/Postgraduate Diploma          20      15.6
  Master                                 47      36.7
  Doctorate                               3       2.3
  Other                                   4       3.1
  Not Reported                           12       9.4
Sex
  Female                                 61      47.7
  Male                                   55      43.0
  Not Reported                           12       9.4
Age
  21-24                                   1       0.8
  25-29                                   9       7.0
  30-34                                  19      14.8
  35-39                                  25      19.5
  40-49                                  28      21.9
  50-59                                  29      22.7
  60-69                                   5       3.9
  Not Reported                           12       9.4
Years in Occupation
  <1 year                                18      14.1
  1 year to less than 2 years             6       4.7
  3 years to less than 4 years           27      21.1
  5 years to less than 7 years           17      13.3
  8 years to less than 10 years          14      10.9
  11 years to less than 15 years         14      10.9
  >15 years                              18      14.1
  Not Reported                           14      10.9

Item/Test Difficulty
Classical Test Theory (CTT) and Item Response Theory (IRT) methodologies were used in the analysis
of the APM data collected in Australia and New Zealand. Specifically, for each of the 23 items in the
APM, the following indices were examined: IRT item difficulty (b) parameter, item-ability (theta)
correlation, item discrimination (a) parameter, item difficulty index (p value), and item-total correlation.
Results are presented in Table 2.2.
Table 2.2
Raven's APM Item Analysis Information for the Australia/New Zealand Sample

APM Item   Item Difficulty (b)   Item-Ability        Discrimination (a)   Item Difficulty Index   Item-Total
Number     Parameter (IRT)       Correlation (IRT)   Parameter (IRT)      (p value; CTT)          Correlation (CTT)
 1               -3.38                0.25                 1.03                  0.95                  0.22
 2               -3.05                0.37                 1.11                  0.94                  0.33
 3               -1.82                0.43                 1.12                  0.83                  0.37
 4               -1.51                0.40                 1.05                  0.80                  0.34
 5               -1.24                0.30                 0.85                  0.76                  0.22
 6               -0.55                0.44                 1.07                  0.64                  0.36
 7               -1.24                0.44                 1.10                  0.76                  0.36
 8                0.33                0.41                 0.91                  0.46                  0.31
 9                0.05                0.46                 1.12                  0.52                  0.38
10               -0.14                0.46                 1.13                  0.56                  0.38
11                0.05                0.41                 0.91                  0.52                  0.31
12                0.36                0.31                 0.53                  0.46                  0.21
13                0.48                0.43                 1.00                  0.43                  0.33
14                0.68                0.42                 0.99                  0.39                  0.33
15                1.15                0.39                 0.93                  0.31                  0.28
16                1.01                0.35                 0.80                  0.33                  0.24
17                0.80                0.49                 1.21                  0.37                  0.42
18                0.17                0.42                 0.96                  0.50                  0.32
19                1.06                0.37                 0.86                  0.32                  0.27
20                1.28                0.41                 0.99                  0.28                  0.32
21                0.64                0.42                 0.98                  0.40                  0.33
22                1.24                0.42                 1.00                  0.29                  0.31
23                3.63                0.43                 1.09                  0.05                  0.31
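The a and b parameters in Table 2.2 are interpretable through an item response function. Assuming the two-parameter logistic (2PL) model (the manual reports a and b parameters but does not name the specific IRT model, so 2PL is an assumption here), the probability of a correct response is P(theta) = 1 / (1 + exp(-a(theta - b))):

```python
import math

def p_correct_2pl(theta, a, b):
    """2PL item response function: probability that an examinee with
    ability theta answers correctly an item with discrimination a and
    difficulty b (both on the theta scale)."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# An examinee of average ability (theta = 0) on the easiest Australia/NZ
# item (a = 1.03, b = -3.38) versus the hardest (a = 1.09, b = 3.63):
print(round(p_correct_2pl(0.0, 1.03, -3.38), 2))   # very likely correct
print(round(p_correct_2pl(0.0, 1.09, 3.63), 2))    # very unlikely correct
```

Under this reading, b is the ability level at which an examinee has a 50% chance of success, and a governs how sharply that probability rises with ability, which is why low-a items were dropped during the revision.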

Distribution of Scores
Characteristics of the distribution of APM raw scores for the Australia/New Zealand sample are
provided in Table 2.3.
Table 2.3
Distribution of APM Scores in the Australia/New Zealand Sample

  N     Minimum   Maximum     M      SD    Skewness   Kurtosis
 128       3        23      12.0    4.2      0.02      -0.39

Evidence of Reliability
Split-half (rsplit), Cronbachs alpha (ralpha), and standard error of measurement (SEM) were calculated using
the Australia/New Zealand Sample. Results are presented in Table 2.4. Internal consistency reliability
estimates were consistent with the values found in the US standardisation sample and confirm that the
APM demonstrates adequate reliability in the Australia/New Zealand sample.

Table 2.4
Internal Consistency Reliability Estimates in the Australia/New Zealand Sample

                         N     rsplit   ralpha    SEM
Raven's APM
Total Score             128     .78      .77     1.96

Note. SEM was calculated based on split-half reliability.
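The SEM reported for each country follows the standard formula SEM = SD * sqrt(1 - r), computed here from the split-half reliability as the table note states. A quick check (using the rounded SD of 4.2 from Table 2.3, so the result differs from the tabled 1.96 only by rounding):

```python
import math

def standard_error_of_measurement(sd, reliability):
    """SEM = SD * sqrt(1 - r): the typical size of the error component
    of an observed score, in raw-score units."""
    return sd * math.sqrt(1.0 - reliability)

# Australia/New Zealand sample: SD = 4.2 (Table 2.3), rsplit = .78 (Table 2.4)
print(round(standard_error_of_measurement(4.2, 0.78), 2))   # 1.97, vs. the tabled 1.96
```

A band of roughly +/- 1 SEM around an observed raw score gives an approximate 68% confidence range for the examinee's true score, which is the practical use of this statistic when interpreting individual results.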

Chapter 3
France (French)
Translation/Adaptation Process
Instructions for the APM were translated by a team of test-development experts from the Paris, France
office of Pearson TalentLens (Les Éditions du Centre de Psychologie Appliquée).

Sampling Procedure
The Paris, France office of Pearson TalentLens recruited and administered the online version of the APM
to 100 manager-level examinees across various industries. These individuals took the APM under timed
(40 minutes) and proctored (i.e., supervised) conditions within the period November 2009 through to
March 2010. Table 3.1 provides the demographic data for the final sample of N = 100.
Table 3.1
Demographic Information for the France Sample

                                     N      Percent
Total                               100     100.0
Education Level
  11 years (1ère, CAP-BEP)            1       1.0
  13-14 years (Bac +1 and 2)          7       7.0
  15-16 years (Bac +2 and 3)         11      11.0
  17-18 years (Bac +4 and 5)         74      74.0
  More than 18 years (Doctorat)       4       4.0
  Not Reported                        3       3.0
Sex
  Female                             51      51.0
  Male                               49      49.0
Age
  21-24                              13      13.0
  25-29                              25      25.0
  30-34                              17      17.0
  35-39                              10      10.0
  40-49                              18      18.0
  50-59                              15      15.0
  60-69                               1       1.0
  Not Reported                        1       1.0

Item/Test Difficulty
Classical Test Theory (CTT) and Item Response Theory (IRT) methodologies were used in the analysis
of the APM data collected in France. Specifically, for each of the 23 items in the APM, the following
indices were examined: IRT item difficulty (b) parameter, item-ability (theta) correlation, item
discrimination (a) parameter, item difficulty index (p value), and item-total correlation. Results are
presented in Table 3.2.
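The CTT indices reported in these tables can be computed directly from a scored response matrix. The sketch below uses randomly generated illustrative data (the actual APM responses are not reproduced here) to show how an item difficulty index and a corrected item-total correlation might be derived:

```python
import numpy as np

# Hypothetical 0/1 scored responses: rows = examinees, columns = items.
rng = np.random.default_rng(0)
responses = (rng.random((100, 23)) < 0.6).astype(int)

# Item difficulty index (p value): proportion answering each item correctly.
p_values = responses.mean(axis=0)

# Corrected item-total correlation: each item against the total score with
# that item removed, which avoids part-whole inflation. The manual does not
# state whether its item-total correlations were corrected this way.
totals = responses.sum(axis=1)
item_total = np.array([
    np.corrcoef(responses[:, j], totals - responses[:, j])[0, 1]
    for j in range(responses.shape[1])
])

print(p_values.round(2))
print(item_total.round(2))
```

With real data, items answered correctly by nearly everyone (such as Item 1 here, p = .97) necessarily show attenuated item-total correlations because of their restricted variance.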
Table 3.2 Raven's APM Item Analysis Information for the France Sample

APM Item   Item Difficulty (b)   Item-Ability        Discrimination (a)   Item Difficulty Index   Item-Total
Number     Parameter (IRT)       Correlation (IRT)   Parameter (IRT)      (p value; CTT)          Correlation (CTT)
 1              -3.38                 0.05                 0.90                 0.97                   0.00
 2*               --                   --                   --                   --                     --
 3              -1.04                 0.34                 0.97                 0.80                   0.25
 4              -1.97                 0.32                 1.03                 0.90                   0.26
 5              -0.76                 0.39                 1.03                 0.76                   0.32
 6              -0.45                 0.15                 0.45                 0.71                   0.02
 7              -0.70                 0.32                 0.90                 0.75                   0.23
 8              -0.70                 0.45                 1.13                 0.75                   0.37
 9              -0.07                 0.44                 1.10                 0.64                   0.34
10              -0.63                 0.49                 1.21                 0.74                   0.43
11               0.34                 0.38                 0.88                 0.56                   0.26
12              -0.17                 0.48                 1.21                 0.66                   0.40
13              -0.07                 0.25                 0.55                 0.64                   0.13
14               0.49                 0.50                 1.32                 0.53                   0.41
15               0.98                 0.40                 0.92                 0.43                   0.27
16               1.18                 0.42                 0.99                 0.39                   0.30
17               0.04                 0.52                 1.35                 0.62                   0.44
18               0.34                 0.57                 1.56                 0.56                   0.50
19               0.78                 0.24                 0.31                 0.47                   0.13
20               0.93                 0.45                 1.08                 0.44                   0.32
21               0.88                 0.43                 1.03                 0.45                   0.32
22               0.93                 0.48                 1.20                 0.44                   0.38
23               3.06                 0.32                 0.98                 0.12                   0.24

*100 percent of the sample obtained a perfect score on Item 2, making the IRT and CTT values inestimable.

Distribution of Scores
Characteristics of the distribution of APM raw scores for the France sample are provided in Table 3.3.
Table 3.3 Distribution of APM Scores in the France Sample

  N     Minimum   Maximum     M      SD    Skewness   Kurtosis
 100       3        22       14.3    3.9    -0.31      -0.16
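The summary statistics in these distribution tables follow from standard moment formulas. A minimal sketch (using population moments and excess kurtosis; the manual does not state which bias correction, if any, was applied, so small discrepancies against the tabled values are to be expected):

```python
import numpy as np

def describe(scores):
    """Return M, SD, skewness, and excess kurtosis for a set of raw scores."""
    x = np.asarray(scores, dtype=float)
    m = x.mean()
    sd = x.std()                      # population (uncorrected) SD
    z = (x - m) / sd                  # standardised scores
    skew = np.mean(z ** 3)            # third standardised moment
    kurt = np.mean(z ** 4) - 3.0      # excess kurtosis (normal curve = 0)
    return m, sd, skew, kurt

m, sd, skew, kurt = describe([3, 9, 12, 14, 15, 17, 20, 22])
print(round(m, 1), round(sd, 1))
```

Negative skewness, as in the France sample (-0.31), indicates a longer lower tail: more examinees scored above the mean than below it.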

Evidence of Reliability
Split-half (rsplit), Cronbach's alpha (ralpha), and standard error of measurement (SEM) were calculated using
the France sample. Results are presented in Table 3.4. Internal consistency reliability estimates were
consistent with the values found in the US standardisation sample and confirm that the APM
demonstrates adequate reliability in the France sample.
Table 3.4 Internal Consistency Reliability Estimates in the France Sample

                              N     rsplit   ralpha    SEM
Raven's APM Total Score      100     .79      .74     1.80

Note. SEM was calculated based on split-half reliability.
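The relationship between the reliability estimates and the SEM can be illustrated with a short sketch. The odd-even split is an assumption here (the manual does not state which split was used), but the SEM formula is the standard one:

```python
import numpy as np

def split_half_reliability(responses):
    """Odd-even split-half reliability with the Spearman-Brown step-up.

    responses: (examinees x items) 0/1 matrix. The correlation between the
    two half-test scores is stepped up to estimate full-length reliability.
    """
    odd = responses[:, 0::2].sum(axis=1)
    even = responses[:, 1::2].sum(axis=1)
    r_half = np.corrcoef(odd, even)[0, 1]
    return 2 * r_half / (1 + r_half)

def sem(sd, reliability):
    """Standard error of measurement: SD * sqrt(1 - reliability)."""
    return sd * np.sqrt(1 - reliability)

# Check against the France sample values (SD = 3.9, rsplit = .79):
print(round(sem(3.9, 0.79), 2))  # 1.79, close to the tabled 1.80
```

The small gap between 1.79 and the tabled 1.80 reflects rounding of the SD to one decimal place before publication.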


Chapter 4
India (English)

Translation/Adaptation Process
Instructions for the APM were reviewed and adapted by a team of test-development experts from the
Pearson TalentLens Bangalore, India office.

Sampling Procedure
The Bangalore, India office of Pearson TalentLens recruited and administered the online version of the
APM to 100 manager-level examinees across various industries. These individuals took the APM under
timed (40-minute) and proctored (i.e. supervised) conditions within the period February 2010 through
to April 2010. Table 4.1 provides the demographic data for the final sample of N = 100.
Table 4.1 Demographic Information for the India Sample

                       N     Percent
Total                 100     100.0
Education Level
  10th                  1       1.0
  12th                  3       3.0
  Bachelor's           49      49.0
  Master's             39      39.0
  Doctoral              2       2.0
  Other                 6       6.0
Sex
  Female               22      22.0
  Male                 78      78.0
Age
  20-24                 2       2.0
  25-29                27      27.0
  30-34                33      33.0
  35-39                17      17.0
  40-44                10      10.0
  45-49                 4       4.0
  Not Reported          7       7.0

Item/Test Difficulty
Classical Test Theory (CTT) and Item Response Theory (IRT) methodologies were used in the analysis
of the APM data collected in India. Specifically, for each of the 23 items in the APM, the following indices
were examined: IRT item difficulty (b) parameter, item-ability (theta) correlation, item discrimination (a)
parameter, item difficulty index (p value), and item-total correlation. Results are presented in Table 4.2.
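The IRT columns in these tables come from a two-parameter logistic (2PL) model. As an illustration of how the b and a parameters govern the probability of a correct response (the plain logistic form is assumed below; the manual does not state whether the optional 1.7 scaling constant was used):

```python
import math

def p_correct(theta, a, b):
    """2PL item response function: probability that an examinee of ability
    theta answers correctly an item with discrimination a and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Easiest and hardest India items from Table 4.2, for an average
# examinee (theta = 0):
print(round(p_correct(0.0, 1.05, -2.85), 2))  # item 1: 0.95
print(round(p_correct(0.0, 0.90, 3.94), 2))   # item 23: 0.03
```

An examinee whose ability equals an item's difficulty (theta = b) has exactly a .50 chance of answering it correctly; larger a values make that transition from incorrect to correct steeper around b.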
Table 4.2 Raven's APM Item Analysis Information for the India Sample

APM Item   Item Difficulty (b)   Item-Ability        Discrimination (a)   Item Difficulty Index   Item-Total
Number     Parameter (IRT)       Correlation (IRT)   Parameter (IRT)      (p value; CTT)          Correlation (CTT)
 1              -2.85                 0.39                 1.05                 0.86                   0.30
 2              -2.27                 0.53                 1.21                 0.79                   0.43
 3              -1.85                 0.54                 1.21                 0.73                   0.46
 4              -2.05                 0.43                 1.01                 0.76                   0.31
 5              -1.49                 0.53                 1.17                 0.67                   0.43
 6              -1.43                 0.52                 1.13                 0.66                   0.42
 7              -1.21                 0.46                 0.96                 0.62                   0.34
 8               0.12                 0.40                 0.83                 0.37                   0.31
 9              -0.57                 0.57                 1.29                 0.50                   0.48
10              -0.57                 0.52                 1.14                 0.50                   0.43
11               0.64                 0.55                 1.26                 0.28                   0.48
12               0.01                 0.38                 0.73                 0.39                   0.27
13               0.52                 0.43                 0.98                 0.30                   0.35
14               1.42                 0.29                 0.88                 0.17                   0.22
15               1.34                 0.33                 0.93                 0.18                   0.25
16               1.59                 0.23                 0.83                 0.15                   0.15
17               0.17                 0.40                 0.82                 0.36                   0.30
18               0.76                 0.30                 0.74                 0.26                   0.20
19               0.52                 0.51                 1.16                 0.30                   0.43
20               0.83                 0.45                 1.07                 0.25                   0.37
21               1.42                 0.32                 0.94                 0.17                   0.26
22               1.03                 0.35                 0.91                 0.22                   0.26
23               3.94                 0.01                 0.90                 0.02                  -0.02

Distribution of Scores
Characteristics of the distribution of APM raw scores for the India sample are provided in Table 4.3.
Table 4.3 Distribution of APM Scores in the India Sample

  N     Minimum   Maximum     M      SD    Skewness   Kurtosis
 100       2        19        9.5    4.2     0.19      -0.80

Evidence of Reliability
Split-half (rsplit), Cronbach's alpha (ralpha), and standard error of measurement (SEM) were calculated using
the India sample. Results are presented in Table 4.4. Internal consistency reliability estimates were
consistent with the values found in the US standardisation sample and confirm that the APM
demonstrates adequate reliability in the India sample.

Table 4.4 Internal Consistency Reliability Estimates in the India Sample

                              N     rsplit   ralpha    SEM
Raven's APM Total Score      100     .82      .79     1.79

Note. SEM was calculated based on split-half reliability.


Chapter 5
Netherlands (Dutch)

Translation/Adaptation Process
Instructions for the APM were translated into Dutch by an independent translator, contracted by the
Amsterdam, Netherlands office of Pearson TalentLens. Two Dutch-speaking test-development experts
from the Pearson TalentLens Amsterdam office reviewed the translation and refined the final translation.

Sampling Procedure
The Amsterdam, Netherlands office of Pearson TalentLens recruited and administered the online
version of the Dutch APM to 138 manager-level examinees across various industries. These individuals
took the APM under timed (40-minute) and proctored (i.e. supervised) conditions within the period
September through to October 2009. Thirty-five participants were eliminated from the sample after a
review of self-reported job titles revealed that they did not qualify as managers. Table 5.1 provides the
demographic data for the final sample of N = 103.


Table 5.1 Demographic Information for the Netherlands Sample

                             N     Percent
Total                       103     100.0
Education Level
  <12 (mbo niveau 4)          4       3.9
  12 (havo)                   2       1.9
  13 (vwo)                    4       3.9
  14 (hbo)                   50      48.5
  15 (wo)                    31      30.1
  Not Reported               12      11.7
Sex
  Female                     33      32.0
  Male                       59      57.3
  Not Reported               11      10.7
Age
  21-24                       1       1.0
  25-29                       7       6.8
  30-34                       6       5.8
  35-39                      23      22.3
  40-49                      44      42.7
  50-59                      11      10.7
  Not Reported               11      10.7
Years in Occupation
  1-2 years                   1       1.0
  2-4 years                   5       4.8
  4-7 years                   2       1.9
  7-10 years                  9       8.7
  10-15 years                19      18.5
  15+ years                  56      54.4
  Not Reported               11      10.7

Item/Test Difficulty
Classical Test Theory (CTT) and Item Response Theory (IRT) methodologies were used in the analysis
of the APM data collected in the Netherlands. Specifically, for each of the 23 items in the APM, the
following indices were examined: IRT item difficulty (b) parameter, item-ability (theta) correlation, item
discrimination (a) parameter, item difficulty index (p value), and item-total correlation. Results are
presented in Table 5.2.
Table 5.2 Raven's APM Item Analysis Information for the Netherlands Sample

APM Item   Item Difficulty (b)   Item-Ability        Discrimination (a)   Item Difficulty Index   Item-Total
Number     Parameter (IRT)       Correlation (IRT)   Parameter (IRT)      (p value; CTT)          Correlation (CTT)
 1              -2.95                 0.29                 1.02                 0.94                   0.24
 2              -2.76                 0.38                 1.09                 0.93                   0.31
 3              -1.77                 0.50                 1.19                 0.85                   0.44
 4              -1.59                 0.40                 1.03                 0.83                   0.34
 5              -1.13                 0.24                 0.68                 0.78                   0.16
 6              -0.46                 0.47                 1.06                 0.67                   0.39
 7              -1.00                 0.33                 0.82                 0.76                   0.23
 8               0.32                 0.43                 0.87                 0.52                   0.34
 9               0.17                 0.50                 1.12                 0.55                   0.42
10              -0.58                 0.51                 1.19                 0.69                   0.44
11               0.17                 0.53                 1.23                 0.55                   0.46
12               0.67                 0.44                 0.89                 0.46                   0.35
13               0.17                 0.48                 1.03                 0.55                   0.39
14               0.37                 0.48                 1.04                 0.51                   0.39
15               1.42                 0.47                 1.04                 0.32                   0.37
16               0.88                 0.36                 0.66                 0.42                   0.26
17               0.98                 0.50                 1.11                 0.40                   0.41
18              -0.14                 0.51                 1.15                 0.61                   0.43
19               0.77                 0.59                 1.38                 0.44                   0.52
20               1.30                 0.40                 0.84                 0.34                   0.30
21               0.98                 0.45                 0.94                 0.40                   0.38
22               1.19                 0.40                 0.84                 0.36                   0.30
23               2.99                 0.28                 0.92                 0.12                   0.20

Distribution of Scores
Characteristics of the distribution of APM raw scores for the Netherlands sample are provided in Table
5.3.
Table 5.3 Distribution of APM Scores in the Netherlands Sample

  N     Minimum   Maximum     M      SD    Skewness   Kurtosis
 103       2        22       13.0    4.5    -0.10      -0.64

Evidence of Reliability
Split-half (rsplit), Cronbach's alpha (ralpha), and standard error of measurement (SEM) were calculated using
the Netherlands sample. Results are presented in Table 5.4. Internal consistency reliability estimates
were consistent with the values found in the US standardisation sample and confirm that the APM
demonstrates adequate reliability in the Netherlands sample.

Table 5.4 Internal Consistency Reliability Estimates in the Netherlands Sample

                              N     rsplit   ralpha    SEM
Raven's APM Total Score      103     .83      .81     1.87

Note. SEM was calculated based on split-half reliability.

Chapter 6
UK (English)
Translation/Adaptation Process
Instructions for the APM were reviewed and adapted by a team of test-development experts from the
Pearson TalentLens London, UK office.

Sampling Procedure
The London, UK office of Pearson TalentLens recruited and administered the online version of the APM
to 101 manager-level examinees across various industries. These individuals took the APM under timed
(40-minute) and proctored (i.e. supervised) conditions within the period June 2009 through to March
2010. Table 6.1 provides the demographic data for the final sample of N = 101.


Table 6.1 Demographic Information for the UK Sample

                                             N     Percent
Total                                       101     100.0
Education Level
  GCSE equivalent                             3      2.97
  A-level, Scottish Highers or equivalent     1      0.99
  Higher Education Certificate or Diploma    12     11.88
  BA                                         29     28.71
  BSc                                        16     15.84
  BEd                                         1      0.99
  LLB                                         3      2.97
  Master's Degree                            26     25.74
  Doctoral Degree                             2      1.98
  Other                                       7      6.93
  Not Reported                                1      0.99
Sex
  Female                                     46     45.54
  Male                                       46     45.54
  Not Reported                                9      8.91
Age
  16-19                                       4      3.96
  20-24                                       1      0.99
  25-29                                      10      9.90
  30-34                                      28     27.72
  35-39                                      31     30.69
  40-44                                      12     11.88
  45-49                                       5      4.95
  50-54                                       2      1.98
  55-59                                       2      1.98
  Not Reported                                6      5.94
Years in Occupation
  <1 year                                     8      7.92
  1-2 years                                  18     17.82
  3-4 years                                  17     16.83
  5-7 years                                  19     18.81
  8-10 years                                 16     15.84
  11-15 years                                15     14.85
  16-20 years                                 6      5.94
  26+ years                                   2      1.98

Item/Test Difficulty
Classical Test Theory (CTT) and Item Response Theory (IRT) methodologies were used in the analysis
of the APM data collected in the UK. Specifically, for each of the 23 items in the APM, the following
indices were examined: IRT item difficulty (b) parameter, item-ability (theta) correlation, item
discrimination (a) parameter, item difficulty index (p value), and item-total correlation. Results are
presented in Table 6.2.
Table 6.2 Raven's APM Item Analysis Information for the UK Sample

APM Item   Item Difficulty (b)   Item-Ability        Discrimination (a)   Item Difficulty Index   Item-Total
Number     Parameter (IRT)       Correlation (IRT)   Parameter (IRT)      (p value; CTT)          Correlation (CTT)
 1              -3.12                 0.26                 1.03                 0.94                   0.22
 2              -2.94                 0.33                 1.09                 0.93                   0.31
 3              -2.27                 0.36                 1.06                 0.88                   0.32
 4              -1.06                 0.40                 0.93                 0.73                   0.34
 5              -1.47                 0.32                 0.86                 0.79                   0.25
 6              -1.06                 0.36                 0.83                 0.73                   0.29
 7              -0.93                 0.38                 0.85                 0.71                   0.31
 8               0.03                 0.55                 1.25                 0.54                   0.50
 9              -0.40                 0.58                 1.37                 0.62                   0.54
10              -0.08                 0.42                 0.78                 0.56                   0.33
11               0.45                 0.53                 1.17                 0.46                   0.47
12               0.51                 0.39                 0.61                 0.45                   0.31
13               0.51                 0.54                 1.18                 0.45                   0.46
14               0.56                 0.58                 1.34                 0.44                   0.52
15               1.00                 0.54                 1.16                 0.36                   0.46
16               1.42                 0.48                 0.99                 0.29                   0.39
17               0.45                 0.52                 1.14                 0.46                   0.45
18               0.24                 0.52                 1.13                 0.50                   0.46
19               0.67                 0.45                 0.86                 0.42                   0.38
20               1.35                 0.37                 0.68                 0.30                   0.26
21               1.35                 0.42                 0.82                 0.30                   0.30
22               1.06                 0.52                 1.11                 0.35                   0.44
23               3.72                 0.39                 0.99                 0.06                   0.29

Distribution of Scores
Characteristics of the distribution of APM raw scores for the UK sample are provided in Table 6.3.
Table 6.3 Distribution of APM Scores in the UK Sample

  N     Minimum   Maximum     M      SD    Skewness   Kurtosis
 101       4        23       12.4    4.7     0.04      -0.71

Evidence of Reliability
Split-half (rsplit), Cronbach's alpha (ralpha), and standard error of measurement (SEM) were calculated using
the UK sample. Results are presented in Table 6.4. Internal consistency reliability estimates were
consistent with the values found in the US standardisation sample and confirm that the APM
demonstrates adequate reliability in the UK sample.

Table 6.4 Internal Consistency Reliability Estimates in the UK Sample

                              N     rsplit   ralpha    SEM
Raven's APM Total Score      101     .85      .83     1.83

Note. SEM was calculated based on split-half reliability.

Chapter 7
US (English)
Sampling Procedure
The US sample consisted of 175 manager-level examinees across various industries. These individuals
took the online APM under timed (40-minute) and proctored (i.e. supervised) conditions within the
period April through to August 2008. Table 7.1 provides the demographic data for the final sample of
N = 175.
Table 7.1 Demographic Information for the US Sample

                             N     Percent
Total                       175     100.0
Education Level
  1-2 Years of College        1       0.6
  Associate's                 1       0.6
  3-4 Years of College        4       2.3
  Bachelor's                 65      37.1
  Master's                   68      38.9
  Doctorate                   6       3.4
Sex
  Female                     39      22.3
  Male                      107      61.1
  Not Reported               29      16.6
Age
  21-24                       2       1.1
  25-29                      17       9.7
  30-34                      23      13.1
  35-39                      28      16.0
  40-49                      53      30.3
  50-59                      20      11.4
  60-69                       2       1.1
  Not Reported               30      17.1
Years in Occupation
  1-2 years                   7       4.0
  2-4 years                  19      10.9
  4-7 years                  32      18.3
  7-10 years                 19      10.9
  10-15 years                16       9.1
  15+ years                  19      10.9
  Not Reported               33      18.9

Item/Test Difficulty
Classical Test Theory (CTT) and Item Response Theory (IRT) methodologies were used in the analysis
of the APM data collected in the US. Specifically, for each of the 23 items in the APM, the following
indices were examined: IRT item difficulty (b) parameter, item-ability (theta) correlation, item
discrimination (a) parameter, item difficulty index (p value), and item-total correlation. Results are
presented in Table 7.2.
Table 7.2 Raven's APM Item Analysis Information for the US Sample

APM Item   Item Difficulty (b)   Item-Ability        Discrimination (a)   Item Difficulty Index   Item-Total
Number     Parameter (IRT)       Correlation (IRT)   Parameter (IRT)      (p value; CTT)          Correlation (CTT)
 1              -3.11                 0.31                 1.06                 0.94                   0.26
 2              -2.47                 0.39                 1.10                 0.90                   0.34
 3              -1.90                 0.47                 1.17                 0.85                   0.42
 4              -1.05                 0.41                 1.03                 0.74                   0.33
 5              -1.52                 0.29                 0.89                 0.80                   0.20
 6              -0.84                 0.31                 0.79                 0.70                   0.21
 7              -1.39                 0.36                 0.98                 0.79                   0.29
 8               0.04                 0.51                 1.32                 0.53                   0.43
 9              -0.61                 0.50                 1.25                 0.66                   0.44
10              -0.46                 0.43                 1.04                 0.63                   0.35
11               0.04                 0.45                 1.09                 0.53                   0.36
12               0.18                 0.31                 0.53                 0.51                   0.21
13               0.64                 0.42                 0.98                 0.41                   0.33
14               0.61                 0.48                 1.21                 0.42                   0.40
15               1.19                 0.42                 1.00                 0.31                   0.31
16               1.29                 0.42                 0.99                 0.29                   0.31
17               0.73                 0.42                 0.98                 0.40                   0.34
18               0.53                 0.38                 0.80                 0.44                   0.27
19               1.23                 0.48                 1.16                 0.30                   0.40
20               1.13                 0.33                 0.74                 0.32                   0.23
21               1.10                 0.43                 1.03                 0.33                   0.33
22               1.39                 0.32                 0.78                 0.28                   0.19
23               3.24                 0.36                 1.00                 0.07                   0.23

Distribution of Scores
Characteristics of the distribution of APM raw scores for the US manager sample are provided in Table
7.3.
Table 7.3 Distribution of APM Scores in the US Sample

  N     Minimum   Maximum     M      SD    Skewness   Kurtosis
 175       2        23       12.2    4.1    -0.03      -0.13

Evidence of Reliability
Split-half (rsplit), Cronbach's alpha (ralpha), and standard error of measurement (SEM) were calculated using
the US sample. Results are presented in Table 7.4. Internal consistency reliability estimates confirm that
the APM demonstrates adequate reliability in the US sample.

Table 7.4 Internal Consistency Reliability Estimates in the US Sample

                              N     rsplit   ralpha    SEM
Raven's APM Total Score      175     .81      .77     1.80

Note. SEM was calculated based on split-half reliability.


Appendix A
Best Practices in Administering and Interpreting the APM
Administrator's Responsibilities
The best way for administrators to prepare for the assessment is to take it themselves, complying with
all directions. The administrator should ensure that the organisation's assessment process complies with
professional standards and practices, including Human Resources policies. Before candidates take the
assessment, the administrator should explain the nature of the assessment, why it is being used, the
conditions under which the candidates will be assessed, and the nature of any feedback they will receive,
as determined by organisational policy.
Though not required for job applicants in all countries, we recommend obtaining informed consent from
the candidate before the assessment is taken. An informed consent form is a written statement
explaining the type of assessment instrument to be administered, the purpose of the evaluation, and
who will have access to the data. The candidate's signature confirms that he or she has been informed of
these specifics. Administering the APM takes about one hour in total, including giving directions to
candidates, answering questions about the assessment procedures, and actual assessment time.
Assessment Conditions
The following conditions are suggested for improving score accuracy and maintaining the cooperation of
the candidates: good lighting; comfortable seating; adequate desk or table space; comfortable positioning
of the computer screen, keyboard and mouse, when administering online; a pleasant and professional
attitude on the part of the administrator; and freedom from noise and other distractions.


Answering Questions
Though the instructions for completing the assessment online are presented on-screen, it is important
to develop and maintain rapport with candidates. The administrator is responsible for ensuring that
candidates understand all requirements and interact with the assessment interface appropriately.
Candidates may ask questions about the assessment before they begin. Clarification of what is required
of candidates and confirmation that they understand these requirements is appropriate. See the section
in Appendix A, "Instructions for Administering the APM Online", for an appropriate script when starting
the assessment. Paper-and-pencil format "APM Short Test Administration Instructions" are also
provided in Appendix A.
If any candidates have routine questions after the assessment has started, try to answer the questions
without disturbing the other candidates. However, if candidates have questions about the interpretation
of an item, they should be encouraged to respond to the item as they best understand it.
Administration
Both the online and paper-and-pencil formats require supervised administration and begin with a set of
four practice items with an answer and explanation. Although the practice set is untimed, allow up to
three minutes for its completion. Online test takers have 40 minutes to complete all 23 items in Part 1.
Part 1 automatically times out at the end of 40 minutes. Test takers then have 2 minutes to complete
the 2 items in Part 2, which also times out automatically at the end of 2 minutes. During each part of
the assessment, test takers have the option of skipping items and returning to them later if time
remains. If test takers finish Part 1 of the assessment before the 40-minute time limit has expired, they
may review their answers or move on to Part 2. Please note that the Part 2 experimental items are not
included in the paper-and-pencil format.
If a test takers computer develops technical problems during the assessment, the administrator should
move the candidate to another suitable computer location if possible and log back into the system as

before. If the technical problems cannot be solved by moving to another computer location, the
administrator should contact Pearson's TalentLens Technical Support for assistance.
At the end of the assessment session, thank each candidate for his or her participation and check the
computer station to ensure that the assessment is closed. Note that scoring will not occur and the
assessment will stay in "In Progress" status until the candidate has completed the assessment.
Understanding the Scores Reported
The online interpretive report includes a total raw score as well as a percentile score corresponding to
the total raw score. The percentile score is a standardised score that indicates the standing of the
participant relative to individuals in the norm group: the proportion of the norm group who scored at
or below the participant's level. For example, if a participant's APM score is at the 75th percentile of a
given norm group, the participant scored higher than or equal to 75% of the people in the norm group.
In comparison to the norm group, a score above the 90th percentile is considered well above average, a
score above the 70th above average, a score above the 30th average, and a score above the 10th below
average. Scores at the 10th percentile or lower are considered well below average.
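The descriptive bands described above can be expressed as a simple lookup. Boundary handling follows the thresholds as stated in this manual (strictly greater than each cut-off, with scores at or below the 10th percentile in the lowest band):

```python
def score_band(percentile):
    """Map an APM percentile score to the descriptive band used in
    interpretation: >90 well above average, >70 above average,
    >30 average, >10 below average, otherwise well below average."""
    if percentile > 90:
        return "well above average"
    if percentile > 70:
        return "above average"
    if percentile > 30:
        return "average"
    if percentile > 10:
        return "below average"
    return "well below average"

print(score_band(75))  # above average
```

A participant at exactly the 90th percentile therefore falls in the "above average" band; only scores strictly above the 90th are labelled "well above average".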
Maintaining Security of Results and Materials
APM scores are confidential and should be stored in a secure location accessible only to authorised
individuals. It is unethical, as well as poor assessment practice, to allow access to assessment scores by
individuals who do not have a legitimate need for the information. The security of assessment materials
(e.g. access to online assessments) and protection of copyright must also be maintained by authorised
individuals.


Sources of Additional Best Practice Information

Governmental and professional regulations cover the use of all personnel selection procedures in most
countries. Relevant source documents that the user may wish to consult are the International
Guidelines for Test Use (International Test Commission, 2000), the Code of Good Practice for
Psychological Testing (British Psychological Society, 2012), Psychological Testing: A User's Guide
(British Psychological Society, 2012) and Data Protection and Privacy Issues Relating to Psychological
Testing in Employment-Related Settings (British Psychological Society, 2012).
For local statutes and legal proceedings that influence an organisation's equal employment opportunity
obligations, the user is referred to their local governing authority that monitors employment selection
practices.


Instructions for Administering the APM Online


Once your PPU survey/assessment is created, copy the PPU URL to the browser of the computer(s) to
be used for supervised testing.
If testing is to be anonymous, test takers will be taken to the beginning of the survey/assessment. If test
takers are required to self-register for the survey/assessment, they will first see a self-registration page.


Supervised Test Administrator Introduction:


After a short welcome introduction say:
To sign on, please enter your name and your email address in the boxes provided, and then
click Submit.
When all candidates have signed on say:
The onscreen directions will take you through the entire process, which begins with a
welcome page and some general questions about you. At the end of the test there are a
few more general questions.
While all candidates are completing the general questions ask:
Do you have any questions before you click next to start the assessment? (Ensure all
candidates have completed the onscreen general questions and are ready to begin the
test)


All surveys/assessments begin with a generic welcome instruction.

Pre-assessment general questions: all fields are required.


Back-end general questions are all optional. All data are anonymous and strictly confidential.



APM Short Test Administration Instructions: Paper and Pencil

If more than 10 people are to be tested in a group, it is necessary to have one assistant for every 10-15 people.

The test administrator needs: these instructions, a stopwatch, and one copy of the Test Booklet and Record Form.

Each person taking the test needs: two pencils, an eraser, and one copy of the Test Booklet and Record Form.

After your informal introduction to the test session:


DO

Hold up the Test Booklet and a Record Form as you:

SAY

For this session there is one Test Booklet and a Record Form on which to record your answers. No marks should
be made on the Test Booklets.
Please look at the Advanced Progressive Matrices Record Form. Please complete your name, today's date, your job
title, the industry you are currently employed in, years in position and highest level of education you have
completed.
To monitor fairness in testing, equal opportunities information is also requested. Completion of the equal
opportunities section at the bottom of this page is optional. All of the groups presented here are protected by UK
equality and discrimination laws. A "prefer not to say" option is available for each question if you do not wish to
disclose this information. Choosing this option will have no impact on your score or your standing in the selection
process.

DO

Allow time for candidate/s to complete the details on the cover of the record form.

SAY

This is a test of observation and clear thinking. The first part of the test contains 4 practice items followed by
explanations of the answers to them. This is intended to show you how the test works, or, if you have seen tests of
this sort before, to remind you how they work.

DO

Hold up a Test Booklet

SAY

Please open your Test Booklet to the first page and read the instructions but do not turn the page yet.

DO

Allow time for the candidates to read the instructions, then:

SAY

Now turn the page. You will see that this is Practice Item 1.
Now turn over your Record Form.
You will see that under the heading Practice Items there is a column of numbers 1, 2, 3, and 4. This is where the
answers go.

DO

Hold up a Record Form and point to Column 1.

SAY

Now look back at your Test Booklet. The top part of Practice Item 1 is a pattern with a piece missing. Look at the
pattern, think what the piece needed to complete the pattern correctly both along and down must be like. Then find
the right piece out of the eight options shown below. Please look at this practice item and try to solve the problem.
Record your answer by placing a single line across the number of the answer you think is correct in the appropriate
box on the Record Form.

DO

Pause for approximately 1 minute to allow candidates to look at the first item.

SAY

Now turn the page and we will read through the answer together.
Number 8 is the correct answer because it is the only piece that correctly completes the pattern going across the
row and down the column.

Numbers 1, 2, and 6 complete the pattern of one solid line going down the column, but do not complete the
pattern of three dotted lines going across the row.

Numbers 4 and 5 correctly complete the pattern of three dotted lines going across the row, but do not complete
the pattern of one solid line going down the column.
Numbers 3 and 7 do not complete the pattern of three dotted lines going across the row, and do not complete the
pattern of one solid line going down the column.
Number 8 is the only answer that works going across the row and down the column.

DO

Tester and assistants check that everyone has correctly marked 8 for Practice Item 1 on their Record Form.

SAY

This is a Practice Set. It is not important to complete all the items. The important thing is to notice how the
problems develop and to learn how to solve them. Continue with the rest of the Practice Items on your own.
Record your answers by making a single horizontal mark through the number of the answer you think is correct on
your Record Form. Then review the explanations to the answers.
If you make a mistake or want to change your answer, put a cross, or X, through your incorrect answer, and then
put a single line through the correct answer. Do not try to erase the incorrect answer.
Now complete Practice Items 2-4 by yourselves. You will have up to 3 minutes for this. When you have finished do
not turn the page until you are instructed to do so.

DO

Allow 3 minutes unless it is clear that test takers have all finished before this.
Tester and assistants check that everyone is putting their answers in the correct column.

SAY

Please stop now. Shortly we will start the real test. The items in it are similar to those you have just completed
except that there are more of them, and they get progressively more difficult.
As a general guideline, the correct response will always fit across each row and down each column. You can look
along each row and down each column to help you determine the missing piece. The correct option will match the
pattern going across the row and down the column.
As you did when completing the Practice Items, record your answers to the test by making a single horizontal mark
through the number of the answer you think is correct on your Record Form under Test Items.

DO

Hold up a Record Form and show where the Test Items begin.

SAY

Please do not mark the Test Booklet. You will be allowed 40 minutes to complete the test. Remember it is
accurate work that counts. Attempt each item in turn. Do your best to find the correct option before you go on to
the next item. If you get stuck, move on and come back to the item later. Remember, however, that you may find
the next item harder and it may take you longer to check your answers carefully. Are there any questions?

DO

Pause briefly.

SAY

Now turn the page in your Test Booklet to the first Test item.

DO

Pause briefly. Check that everyone is ready to start.

SAY

Begin now.

DO

Start timing 40 minutes.

At the end of 40 minutes:


SAY

Everyone stop working, please. Close your Test Booklet. Please check that you have put your name, as well as
today's date, on the Record Form.

DO
Check that they have put their names on their Record Forms, and filled in the additional details.
At the end of the session thank everyone for their time and tell them what use will be made of their data. If appropriate,
explain what arrangements will be made for them to find out their results. It would also be constructive to reassure test
takers regarding the confidentiality of test scores.


References
American Educational Research Association, American Psychological Association, & National Council of
Measurement in Education. (1999). Standards for educational and psychological testing. Washington, DC:
Author.
Brouwers, S. A., Van de Vijver, F. J. R., & Van Hemert, D. A. (2009). Variation in Ravens Progressive
Matrices scores across time and place. Learning and Individual Differences, 19, 330338.
Chan, D. (1996). Criterion and construct validation of an assessment centre.
Journal of Occupational and Organizational Psychology, 69, 167181.
Fay, D., & Frese, M. (2001). The concept of Personal Initiative: An overview of validity studies.
Human Performance, 14(1), 97124.
Frey, M. C., & Detterman, D. K. (2004). Scholastic assessment or g? The relationship between the
Scholastic Assessment Test and general cognitive ability. Psychological Science, 15(6), 373–378.
Gonzalez, C., Thomas, R. P., & Vanyukov, P. (2005). The relationship between cognitive ability and
dynamic decision making. Intelligence, 33, 169–186.
International Test Commission. (2000). International guidelines for test use. Retrieved
from www.intestcom.org/itc_projects.htm
Koenig, K. A., Frey, M. C., & Detterman, D. K. (2008). ACT and general cognitive ability.
Intelligence, 36, 153–160. doi:10.1016/j.intell.2007.03.005
Kolz, A. R., McFarland, L. A., & Silverman, S. B. (1998). Cognitive ability and job experience as predictors
of work performance. The Journal of Psychology, 132(5), 539–548.
Kuncel, N. R., Hezlett, S. A., & Ones, D. S. (2004). Academic performance, career potential, creativity,
and job performance: Can one construct predict them all?
Journal of Personality and Social Psychology, 86(1), 148–161.
Mead, A. D., & Drasgow, F. (1993). Equivalence of computerized and paper-and-pencil cognitive ability
tests: A meta-analysis. Psychological Bulletin, 114(3), 449–458.
Pearson. (2010). Management assessment process: Preliminary report.
Unpublished manuscript.
Prien, E. P., Schippmann, J. S., & Prien, K. O. (2003).
Individual assessment as practiced in industry and consulting. Mahwah, NJ: Erlbaum.
Raven, J. (1994). Occupational user's guide: Raven's Advanced Progressive Matrices & Mill Hill Vocabulary Scale.
Oxford, UK: Oxford Psychologists Press Ltd.
Raven, J., Raven, J. C., & Court, J. H. (1998). Raven manual: Section 4, Advanced Progressive Matrices, 1998
edition. Oxford, UK: Oxford Psychologists Press Ltd.


Ree, M. J., & Carretta, T. R. (1998). General cognitive ability and occupational performance. In C. L.
Cooper & I. T. Robertson (Eds.), International review of industrial and organizational psychology
(Vol. 13, pp. 159–184). Chichester, UK: Wiley.
Ryan, A. M., & Tippins, N. (2009). Designing and implementing global selection systems.
Chichester, UK: Wiley-Blackwell.
Salgado, J. F., Anderson, N., Moscoso, S., Bertua, C., & De Fruyt, F. (2003). International validity
generalisation of GMA and cognitive abilities: A European community meta-analysis.
Personnel Psychology, 56, 573–605.
Schmidt, F. L., & Hunter, J. E. (1998). The validity and utility of selection methods in personnel
psychology: Practical and theoretical implications of 85 years of research findings.
Psychological Bulletin, 124, 262–274.
Schmidt, F. L., & Hunter, J. (2004). General mental ability in the world of work: Occupational attainment
and job performance. Journal of Personality and Social Psychology, 86(1), 162–173.
Society for Industrial and Organizational Psychology. (2003). Principles for the validation and use of
personnel selection procedures (4th ed.). Bowling Green, OH: Author.
U.S. Department of Labor. (1999). Testing and assessment: An employer's guide to good practices.
Washington, DC: Author.
Watson, G., & Glaser, E. M. (2006). Watson-Glaser Critical Thinking Appraisal–Short Form manual. San
Antonio, TX: Pearson.
