The Journal of General Psychology, 2005, 132(1), 41–64

Dynamic Assessment of Intelligence
Is a Better Reply to Adaptive Behavior
and Cognitive Plasticity

ROSA ANGELA FABIO


Department of Psychology
Catholic University of Milan, Italy

Address correspondence to Rosa Angela Fabio, Department of Psychology, Catholic University, Largo Gemelli, 1, 20100, Milan, Italy; rosangelafabio@tiscalinet.it (e-mail).

ABSTRACT. In the present study, the author conducted 3 experiments to examine the dynam-
ic testing of potential intelligence. She investigated the relationship between dynamic mea-
sures and other factors such as (a) static measures of intelligence (Raven’s Colored Progres-
sive Matrices Test [J. C. Raven, J. H. Court, & J. Raven, 1979] and the D48 [J. D. Black,
1961]) and (b) codifying speed, codifying accuracy, and school performance. The participants
were kindergarten children (n = 150), primary school children (n = 287), and teenaged stu-
dents (n = 198) who were all trained to master problem solving tests with dynamic measures
of intelligence. The results showed that the dynamic measures predicted codifying speed, codifying
accuracy, and school performance more accurately than the static measures did.
Key words: cognitive modifiability, dynamic measures, intelligence, potential

INTELLIGENCE OR ADAPTIVE BEHAVIOR is one of the prominent areas of
focus in psychological research. Intelligence is linked to cognitive plasticity,
which is the ability of cognitive processes to be influenced by an ongoing activi-
ty. Traditionally, measures of general intelligence that are obtained by testing con-
sist of either a ratio of mental age to chronological age or of a score of deviation
from an expected test performance by age. Such measures are referred to as stat-
ic assessment procedures, and they emphasize previously acquired knowledge in
terms of intelligence or achievement scores; they do not incorporate modifications
aimed at increasing levels of performance into their procedure (Brown & French,
1979; Carlson & Wiedl, 1992; Feuerstein, Rand, & Hoffman, 1979).
Dynamic testing involves purposeful teaching within the testing situation. Its
purpose is to distinguish between a child’s apparent level of development as mea-
sured by a standardized test and the child’s level of potential development. In
dynamic testing models, a pretest–intervene–retest procedure is used to measure
the breadth of the zone of proximal development. The procedure is based on the
assumption that the best way to help a child learn is to explore the teaching strate-
gies to which that child is most responsive (Berk, 2001).
The theoretical foundations of the present study are derived from the socio-
cultural theory of Vygotskij (1978)—in particular, the zone of proximal devel-
opment (ZPD) concept—and Feuerstein’s mediated learning experience (MLE;
Feuerstein et al., 1979). Vygotskij believed that a great deal of development
was mediated by social interaction. The ZPD refers to the distance between the
level of performance a child can reach unaided and the level of participation a
child can accomplish when guided by someone more knowledgeable in that
domain. Therefore, it refers to a range of tasks that the child cannot yet handle
alone but can do with the help of more skilled partners (Vygotskij). As Cam-
pione, Brown, Ferrara, and Bryant (1984) have pointed out, the breadth of the
zone varies across individuals and across domains of learning within an indi-
vidual. For one child in a particular domain, the zone may be narrow, indicat-
ing that the child is not yet ready to master tasks beyond his or her unaided
performance. For another child in the same domain or the same child in a dif-
ferent domain, the zone may be broader, suggesting that the child can perform
at a higher level than the current performance indicates with the help of a more
expert partner. Other researchers have found that the zone tends to be broader
or narrower in a great variety of tasks (Fabio, 1999, 2001; Fabio & Mancuso,
1995; Vygotskij).
Another way in which Vygotskij (1978) used the zone of proximal develop-
ment was to test the child’s readiness, or intellectual maturity, in a specific domain
(Brown & Campione, 1984). As such, he was using it as an individual-difference
metric. He argued that if we measure the IQ of two children with the same chrono-
logical and mental age (8 years), then we cannot make assumptions on the course
of their future mental development and school performance. On the contrary, if
we consider their IQ measures as starting points and not as definite (static) mea-
surements and we give them new problems, then they will be able to handle prob-
lems above their starting level. If the children solve problems with adult assis-
tance, and one child attains an IQ of a mental age of 9 years and the second child
an IQ of a mental age of 12 years, then the difference between 9 and 8 or between
12 and 8 is the zone of proximal development. “It is the distance between the
actual development level as determined by independent problem solving and the
level of potential development as determined through problem solving under
guidance of persons who know more” (Vygotskij, pp. 85–86).
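To make the arithmetic of this example explicit, the following minimal Python sketch (with illustrative names that are not part of Vygotskij’s or the present author’s formalism) computes the breadth of the zone for the two hypothetical 8-year-old children:

    # Minimal sketch: ZPD breadth as assisted level minus unaided level (in years).
    def zpd_breadth(unaided_mental_age, assisted_mental_age):
        return assisted_mental_age - unaided_mental_age

    print(zpd_breadth(8, 9))   # first child: a zone of 1 year
    print(zpd_breadth(8, 12))  # second child: a zone of 4 years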
The other theoretical foundation of dynamic measures is the mediated learn-
ing experience, which describes a special quality of interaction between a learn-
er and a person, whom we shall call a mediator. The function of a mediator is to
observe how the learner approaches problem solving. The problem at hand is a
ruse for the mediator to be able to observe the learner’s thinking process. For this
process to be successful, at least three important qualities have to characterize the
interaction (Feuerstein, Rand, Hoffman, & Miller, 1980):
1. Intentionality and reciprocity. The mediator and the learner help one another.
The mediator helps the learner understand how he or she (i.e., the learner) is using
his or her brain and how to advance with solving the problem, and the learner
allows the mediator to observe and thereby understand his or her thinking process.
In other words, only the learner knows how the thinking proceeds; the mediator
is more like a fellow explorer.
2. Mediation of meaning. The mediator (a) interprets on behalf of the learner the
meaning of what the learner has accomplished, (b) mediates feelings of accom-
plishment, and (c) makes the learner think not just about the solution of the prob-
lem but also about how the solution was attained and how the general rules can
be applied to future learning situations.
3. Transcendence. This means bridging existing experience and the lessons
learned with new situations.
Evidence on the effectiveness of dynamic testing confirms that it has con-
siderable value for predicting children’s learning potential. First, both age and
ability (as measured by static intelligence tests) are positively related to the
breadth of children’s zone of proximal development as determined by dynamic
testing. Moreover, these assessment methods are not tied to a child’s sociocultural
background. They may be particularly important for low-socioeconomic status
and ethnic minority children, who may demonstrate a large difference between
actual and potential development (Missiuna & Samuels, 1989). Those who per-
form poorly when left on their own often perform substantially better when given
appropriate instructional intervention. They perform poorly on static measures but
better on dynamic measures. Second, dynamic testing reduces the possibility that
a child who can profit from instruction mediation will be denied opportunities to
learn because of a poor score on a static assessment. Third, use of the dynamic
assessment has been motivated by the inadequacy of conventional static tests to
provide accurate information about the individual’s learning ability, change
processes, and the modifiability or plasticity of cognitive processes.
In the present study, I propose measures of dynamic assessment of intelli-
gence based on the zone of proximal development concept and the mediated
learning experience theory. The rationale is that flexible access to information and
transfer can be used as an index of plasticity and modifiability.
Indexes of dynamic assessment measure one’s problem solving ability in two
phases: (a) during the learning phase, when the participant faces new problem
solving, and (b) during the transfer phase, when the participant applies what has
been learned in the first phase to new, more complex problem solving. This type
of measure entails (a) presenting the learners with problem solving exercises that
are more difficult than their learning ability, (b) supplying them with gradual and
balanced assistance and progressively disclosing the solution of the problem, and
(c) determining the amount of aid the learner needs to be able to solve the prob-
lem. The amount of aid is inversely proportional to the modifiability index.
I had four aims in this study. The first was to evaluate whether the modifiabil-
ity index was specific in the various contexts or whether it could be considered as
a general ability to learn and transfer. To achieve this aim, I administered a dynam-
ic assessment of intelligence and checked the indexes that resulted from the facto-
rial analysis. Because the tests propose several items outlining various types of log-
ical processes, I had to determine whether there was coherence within the test, that
is, whether it was possible to find similar results in the modifiability indexes of the
various items. If this coherence was present, then one could consider the test result
to be a general index of the capability to learn and refer to new rules quickly and
efficiently. If this coherence was not present and the results of the various items of
the test were different in the specific areas, then one could interpret the test result
as an indication of a specific disability in the different areas.
The second aim was to examine the relationship between dynamic testing and
static testing. One expects a relationship between learning modifiability (low,
medium, or high) and IQ results across three intelligence levels (below standard,
standard, and above standard), but a relationship that is significant yet only partial.
In fact, the breadth of learning modifiability bears some influence on static
intelligence (highly modifiable individuals learn more and can therefore perform
better even in traditional problem solving); at the same time, because the dynamic
procedure is independent of the effect of previous learning, it supplies a cleaner
measure of modifiability itself (the intensity of the graduated aids given during
problem solving; Day & Hall, 1988; Fabio, 1998).
The third aim of the study was to assess the validity of the dynamic measures
in relation to various indexes, namely their correlations with cognitive variables
such as the selective attention level attained (resulting from the Deux Barrages
Test; Zazzo, 1959), the global attention level (evaluated by the teacher), and
school performance in language and mathematics. These correlations are important
because they are measures of the external validity of dynamic testing.
The fourth aim of the study was to present the validity of the dynamic test
assessment in relation to an individual’s sociocultural background level on such
measures. On the one hand, because static tests measure the individual’s knowl-
edge at the time of testing, they supply information on the level of knowledge
attained by the individual up to that moment. Therefore, such test assessments
can be affected by an individual’s sociocultural background. On the other hand,
because cognitive modifiability indexes measure the potential intelligence of the
individual, the influence of an individual’s sociocultural background may not be
significant at all (Tzuriel & Kaufman, 1999).
I addressed these four general points in three experiments. In Experiment 1,
I examined kindergarten children; in Experiment 2, primary school children; and
in Experiment 3, teenaged students.

EXPERIMENT 1

The results of a study by Tzuriel (2000) showed that it is necessary to per-
form dynamic test assessments on kindergarten children because education poli-
cies have an influence on their future performance. In Experiment 1, I analyzed
dynamic indexes in children aged 4 and 5 years in relation to (a) internal consis-
tency indexes, (b) their correlation with the static test, (c) their external validity
bound to the correlation with attention indexes, and (d) the influence of their
sociocultural background.

Method
Participants
There were 150 children, 82 boys and 68 girls, who participated in Experi-
ment 1 (mean age = 5 years, 3 months; range = 4 years, 10 months to 5 years, 11
months). The children were from schools in northern and southern Italy, and they
were all normal. Mentally handicapped children and children with serious emo-
tional problems were not included in the sample. Parental consent was given for
the children’s participation.

Instruments

Dynamic test. The items in the dynamic test were selected from a balanced and
standardized work (Fabio & Mancuso, 1995), which had met satisfactory criteria
of statistical reliability. The dynamic test items are problem solving tasks that are
not solved spontaneously by the child, but during the course of the learning ses-
sion; in a pretest study, they cannot be solved by 99% of participants of the same
chronological age of the sample. The test in its final form consists of 16 items,
of which 8 are related to the learning stage and 8 to the transfer stage. The latter
contain the same problem solving rules as those in the learning stage, as well as
one new rule that must interact with the other items for the child to be able to find
the solution. The items were: (a) conservation of length concept, (b) conservation
of liquid surface concept, (c) simultaneity, (d) conservation of substance and
weight concept, (e) class inclusion, (f) transitivity, (g) sequentiality, and (h) biu-
nivocal (one-to-one) correspondence.
Figure 1 shows the test structure: a three-dimensional, cylindrical picture in
whose sections are placed the 16 items of the two stages of the test. The concen-
tric circles represent the step-by-step aids supplied to the child for problem solv-
ing. There are five identified steps to attain the solution of the problem. They
range from the most peripheral (generic attention) to the most central (execution),
where the child is taught how to solve the problem:
1. General attention advice (i.e., encouraging the child to attain a higher attention
level);
2. Selective attention aid (i.e., guiding the child’s analytical attention, preventing
him or her from making mistakes in task analysis);
3. Aid in the presentation of the partial rule (containing a description of some of
the rules leading to item solution);
4. Aid in the presentation of the global rule (containing all the rules on which the
solution of the item is based); and
5. Execution aid, whereby the mediator teaches the child the strategy to attain the
solution and helps him to use it operatively.

FIGURE 1. A graphic representation of the kindergarten children’s dynamic test
structure. Points 1–5 (general attention, selective attention, partial rule, global rule,
execution) are the cues that are required to attain the solution to the problem; the
sections of the cylinder contain the eight learning-phase and eight transfer-phase items.

An example of learning Item 1 and transfer Item 1 would be the acknowl-
edgment of length as an invariable value. In the test item sheet, there were two
parallel lines of equal length. The child was told that they were the same length.
Then the child was shown a second sheet on which the lines were not aligned. In
the transfer item, the child was shown a third sheet on which there were three
nonaligned strokes (length and color were still the same). Each time, the child
had to say whether or not the strokes were the same length.

Attention test. One of the series of the Wechsler Intelligence Scale for Children
(WISC–R; Code Book A; Wechsler, 1974) was given to the child to test atten-
tion. It was composed of a sheet with three matrices at the top representing various
drawings that were featured on the rest of the sheet. The participant had to identi-
fy and mark the drawings that resembled the matrices. The time limit for task per-
formance was 2 min.

Static test: Raven’s Colored Progressive Matrices (CPM; Raven, Court, &
Raven, 1979). The CPM consists of 36 problems divided into three sets (A,
AB, B), each containing 12 problems. The participant had to complete a matrix
by choosing the missing part from six alternatives given at the bottom of the
page, then had to induce a relation on the completed part of the matrix and
apply that relation to the incomplete part. The Raven items were based most-
ly on problems that required completion of a Gestalt and accurate perception
of symmetry.

Procedure

All the children were examined in two stages: In the first stage, they were
given a group test to measure selective attention level (WISC Code Book A, Wech-
sler, 1974). The second stage required individual testing: Raven’s CPM test (Raven
et al., 1979) was administered to measure static intelligence and the modifiability
test was administered to measure dynamic intelligence. The teachers were request-
ed to mark down on a form the attention level as well as the sociocultural level of
the child’s family.

Group testing: Attentional level. I distributed the WISC Code Book A (Wechsler,
1974) to the children. They had to mark the figures that looked like those in the
matrix. After practicing for a short time, the children were instructed to start the
task. After 2 min, the mediator told them to complete the task. In this test, the
items were marked correct, omitted, or mistaken.

Individual testing. After the group testing, each child was examined in a separate
room with Raven’s CPM for static intelligence level (Raven et al., 1979). In a third
stage, each child was tested individually for dynamic intelligence. The mediator
showed the first problem to the child, who was then asked to solve it. The child
was reminded that he could ask for help but that he should try to perform the task
with as little aid as possible. The aids were given one by one following a standard
sequence. For example, in transfer Item 1 the sequence was:

1. Pay attention, look well at the three lines;


2. Observe the length of the three lines;
3. Place the lines as if they were aligned;
4. Place them on the same line, the starting points are different; and
5. Examples.

After the first item had been correctly solved, the same procedure was fol-
lowed for all the other items: learning items first and then the transfer items. The
mediator marked down how many aids the child had requested before getting the
correct solution and attributed relevant scores. The scores were established so that
when the child solved the task with one aid only, he was assigned 5 points; with
two aids, 4 points; with three aids, 3 points; with four aids, 2 points; and with 5
aids, 1 point. By summing up the scores in the learning stage, we have the learn-
ing modifiability index (LMI); by summing up the scores in the transfer stage,
we have the transfer modifiability index (TMI); and the sum of the two indexes
gives the general modifiability index (GMI; ranging from 16 to 80).
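As a concrete illustration of this scoring rule, the short Python sketch below converts the number of aids requested on each item into points and sums them into the three indexes. The function and variable names, as well as the example aid counts, are illustrative assumptions and are not part of the original test materials.

    # Illustrative scoring sketch: 1 aid -> 5 points, 2 -> 4, 3 -> 3, 4 -> 2, 5 aids -> 1 point.
    def item_score(aids_requested):
        """Convert the number of graded aids used on one item into points."""
        if not 1 <= aids_requested <= 5:
            raise ValueError("each item allows between 1 and 5 aids")
        return 6 - aids_requested

    def modifiability_indexes(learning_aids, transfer_aids):
        """Return (LMI, TMI, GMI) from per-item aid counts for the two phases."""
        lmi = sum(item_score(a) for a in learning_aids)
        tmi = sum(item_score(a) for a in transfer_aids)
        return lmi, tmi, lmi + tmi

    # Hypothetical kindergarten protocol: 8 learning items and 8 transfer items.
    lmi, tmi, gmi = modifiability_indexes([1, 2, 1, 3, 2, 1, 4, 2],
                                          [2, 3, 2, 4, 3, 2, 5, 3])
    print(lmi, tmi, gmi)  # for 16 items the GMI ranges from 16 (all 5 aids) to 80 (all 1 aid)

The inverse relation between the amount of aid and modifiability noted above is captured here by the 6 − aids mapping.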
The teachers were asked to evaluate the children’s attention level and their
sociocultural background. The level of global attention was evaluated on a Likert-
type scale from 1 to 5 (1 = low, 2 = medium-low, 3 = medium, 4 = medium-high,
5 = high). It was defined as global attention because it was based on the global
teacher judgment. Finally, for the definition of the child’s sociocultural back-
ground (low, medium, high), the teachers were advised to take into account tradi-
tional parameters: the father’s and mother’s levels of education, the number of
members in the family, the number of rooms in the child’s home, whether dialect
was spoken in the family, and the parents’ jobs.

Results and Conclusion

The analysis of the results takes into account the items relevant to the study,
which have been pointed out before.

Internal Consistency

The internal consistency (Cronbach’s alpha) on the 16 dynamic test items
showed an alpha level equal to .57. In addition, the factor analysis, applied to the
16 items related to the learning and transfer stages, showed that Factor 1, extracted
with the principal component analysis (PCA) method, accounted for 43.25% of the
variance. In addition, each of the 16 items seemed to be highly correlated to Com-
ponent 1 (eigenvalue: 6.61), which suggested that one single measure of modifia-
bility was the hidden point that all items have in common.
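A hedged sketch of how these two indexes can be computed from an items-by-participants score matrix is given below. It uses the standard Cronbach’s alpha formula and the eigenvalues of the item correlation matrix; the data are random placeholders, and nothing here is assumed about the software actually used in the study.

    # Minimal sketch (assumed layout: one row per child, one column per item).
    import numpy as np

    def cronbach_alpha(scores):
        """Cronbach's alpha for an (n_children, n_items) score matrix."""
        k = scores.shape[1]
        item_vars = scores.var(axis=0, ddof=1)
        total_var = scores.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    def first_component(scores):
        """Eigenvalue of the first principal component of the item correlations
        and the proportion of variance it accounts for."""
        corr = np.corrcoef(scores, rowvar=False)
        eigenvalues = np.linalg.eigvalsh(corr)[::-1]  # descending order
        return eigenvalues[0], eigenvalues[0] / eigenvalues.sum()

    rng = np.random.default_rng(0)
    demo = rng.integers(1, 6, size=(150, 16)).astype(float)  # placeholder data only
    print(cronbach_alpha(demo), first_component(demo))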

Correlation Between Dynamic Testing and Static Testing

The correlation coefficient between the scores achieved through static testing
(Raven’s CPM; Raven et al., 1979) and the dynamic indexes was equal to r(150) =
.58, p < .01. To illustrate this relationship further, Figure 2 shows the specific
relation between Raven’s CPM and the cognitive modifiability indexes.
The children who achieved the worst scores (13%) in the Raven indexes and
those who had the best scores (13%) were taken into account for the definition
of the categories relevant to static testing. Twenty children achieved the worst
scores and were defined as children with low static measures. Those who
achieved the average scores (110 children) were defined as children with medium
static measures. Finally, those with the best scores (20 children) were defined
as children with high static measures. Likewise, the children who had achieved
the worst scores (13%) and those with the best scores (13%) in the modifiability
indexes were taken into account when defining low and high modifiability.

FIGURE 2. Relation between static and dynamic testing in kindergarten children,
expressed as percentages.

The results suggested that the children with low static measures lacked high
modifiability indexes, whereas the majority of the children with high static measures
had high modifiability indexes and only 2% of them showed low modifiability.
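The 13% cutoffs used to build Figure 2 (and, later, Figures 4 and 6) can be reproduced with a sketch such as the following, which labels each child as low, medium, or high on a measure and then cross-tabulates the static and dynamic groups. The variable names and the simulated scores are placeholders, not the study’s data.

    # Sketch of the worst-13% / middle / best-13% grouping and the cross-tabulation.
    import numpy as np

    def three_way_split(scores):
        """Label each score 'low' (worst 13%), 'high' (best 13%), or 'medium'."""
        low_cut, high_cut = np.percentile(scores, [13, 87])
        labels = np.full(scores.shape, "medium", dtype=object)
        labels[scores <= low_cut] = "low"
        labels[scores >= high_cut] = "high"
        return labels

    rng = np.random.default_rng(1)
    raven = rng.normal(size=150)               # placeholder static scores
    gmi = 0.6 * raven + rng.normal(size=150)   # placeholder dynamic scores
    static_groups, dynamic_groups = three_way_split(raven), three_way_split(gmi)
    for s in ("low", "medium", "high"):
        row = [round(100 * np.mean(dynamic_groups[static_groups == s] == d))
               for d in ("low", "medium", "high")]
        print(s, row)  # row percentages, as plotted in Figure 2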

Correlation Between Dynamic Testing and Cognitive Variables

Selective attention. In evaluating the attention level, we took into account three
parameters: the total number of correct answers, the total number of omitted
answers, and the total number of wrong answers given in the 2-min time frame.
The correlation between the general modifiability index and the correct answers
was positive and highly significant, r(150) = .34, p < .01. With regard to omitted
answers, the correlation was negative and highly significant, r(150) = −.288, p < .01.
The correlation between static indexes (Raven’s CPM; Raven et al., 1979) and
selective attention was positive and highly significant in regard to correct answers,
r(150) = .31, p < .01, whereas it was not significant in regard to omissions, r(150)
= .012, p = .13. We found no correlation between wrong answers and either dynamic
or static measures. Therefore, dynamic testing seemed better matched to the selective
attention indexes than static testing because it was correlated not only with speed of
performance (correct answers) but also with accuracy of performance (it was
inversely correlated with omitted answers).
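These indexes are ordinary Pearson correlations; a minimal sketch with scipy.stats.pearsonr, using placeholder data and assumed variable names, is shown below.

    # Sketch: correlating the general modifiability index with attention scores.
    import numpy as np
    from scipy.stats import pearsonr

    rng = np.random.default_rng(3)
    gmi = rng.normal(50, 10, size=150)                  # placeholder GMI values
    correct = 0.3 * gmi + rng.normal(0, 10, size=150)   # placeholder correct answers
    omitted = -0.3 * gmi + rng.normal(0, 10, size=150)  # placeholder omitted answers

    for name, values in [("correct", correct), ("omitted", omitted)]:
        r, p = pearsonr(gmi, values)
        print(f"GMI vs {name}: r = {r:.2f}, p = {p:.3f}")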

Global attention. The correlation between attention as assessed by the teacher and
the general modifiability index was positive and quite significant, r(150) = .291,
p < .01. In addition, the correlation between static indexes (Raven’s CPM; Raven
et al., 1979) and global attention was positive, r(150) = .28, p < .01.

Influence of Sociocultural Background on Dynamic Testing

We performed an analysis of variance to check the influence of sociocultural
background on the dynamic testing results. The data referred to a very small sample
because not all the teachers agreed to supply information on their pupils’
sociocultural background. On the basis of the available data, the effect of
sociocultural background on the dynamic measures was not significant, F(2, 54) =
1.38, p = .09, whereas for Raven’s CPM (Raven et al., 1979) there was a significant
effect of sociocultural background, F(2, 54) = 6.9, p < .01. The lack of a significant
sociocultural effect on the dynamic measures has fundamental theoretical importance:
It indicates that dynamic testing neither penalizes children of different social levels
nor discriminates against them with regard to their sociocultural background.
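The kind of one-way analysis of variance reported here can be sketched as follows, under the assumption that the teacher-rated background has three levels; scipy.stats.f_oneway is used, and the scores are simulated placeholders rather than the study’s records (three groups of 19 children give the same degrees of freedom, F(2, 54), as reported above).

    # Illustrative one-way ANOVA: a dependent measure (e.g., GMI) by sociocultural level.
    import numpy as np
    from scipy.stats import f_oneway

    rng = np.random.default_rng(2)
    low = rng.normal(50, 10, size=19)     # placeholder scores, low background
    medium = rng.normal(51, 10, size=19)  # placeholder scores, medium background
    high = rng.normal(52, 10, size=19)    # placeholder scores, high background

    f_stat, p_value = f_oneway(low, medium, high)
    print(f"F(2, {3 * 19 - 3}) = {f_stat:.2f}, p = {p_value:.3f}")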

EXPERIMENT 2

In this experiment, I analyzed dynamic indexes as related to internal consis-
tency indexes, the correlation with static testing, the external validity linked with
the correlation with indexes of attention and school performance, and the influ-
ence of sociocultural background in the primary school.

Method

Participants

There were 287 children, 130 boys and 157 girls (mean age = 8 years, 2
months; range = 6 to 11 years), in Experiment 2. The children came from schools
in northern Italy, and all of them were normal. Parental consent was given for the
children’s participation.

Instruments

Dynamic test. The items included in the dynamic testing were the result of a stan-
dardization work (Fabio, 2003), which had met satisfactory criteria of statistical
reliability. These dynamic testing items are problem solving items that the child does
not solve spontaneously, but during the course of the learning session. The test in
its final issue contains 14 items; 7 items are related to the learning phase and 7 to
the transfer phase. The latter contain the same rules for problem solving as the
items related to the learning phase, plus a new rule that must interact with the oth-
ers for the child to be able to find the solution. The items were as follows: (a) com-
pletion of a series of letters, (b) completion of a series of numbers, (c) completion
of geometrical figures, (d) perceptive difference, (e) mental image superimposi-
tion, (f) chain of words, and (g) simultaneous coordination of information.
Figure 3 shows the test structure: a three-dimensional cylindrical figure in
which the sections are the 14 items of the two testing stages. The concentric cir-
cles represent the graded suggestions supplied to the child to achieve the solution
of the problem. There are five stages, ranging from the most peripheral stage (gen-
eral attention) to the most central stage (execution), in which the child is taught
operatively how to solve the problem as described heretofore.

FIGURE 3. A graphic representation of elementary school children’s dynamic test
structure. Points 1–5 (general attention, selective attention, partial rule, global rule,
execution) are the cues that are required to attain the solution to the problem.

An example of learning Item 2 and transfer Item 2 is as follows: The item is
concerned with inductive reasoning in completing a series of numbers. In both
stages of the test, the child is given a sheet with some numbers that are in groups
of three and are placed one after the other according to a logical criterion. In the
learning stage, the series of three numbers follows this rule: The first and second
figures increase by 1 unit, and the third figure decreases by 1 unit. In the trans-
fer stage, the rule is: The first figure decreases by 2 units, the second figure
increases by 1, and the third figure increases by 2. In both stages, the child has
to say which numbers will be in the following series of three (a short worked
sketch of this transfer rule follows the aid sequence below). For example, the
aid sequence in transfer Item 2 was:
1. Pay attention to the relationship between the numbers;
2. The relationship between the numbers refers to the sequence of the series of
three numbers;
3. Inside each series of three numbers, the first number decreases by 2 units; look
at the other two numbers;
4. Look at the other two numbers: The second one increases by 1 unit and the
third increases by 2 units; and
5. Example.
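For clarity, the transfer rule for Item 2 can be written as a small function that maps one series of three onto the next; the sketch below reflects one plausible reading of the description above, with invented starting numbers.

    # One reading of the transfer rule: first number -2, second +1, third +2.
    def next_triplet(triplet):
        first, second, third = triplet
        return (first - 2, second + 1, third + 2)

    print(next_triplet((20, 5, 7)))  # -> (18, 6, 9); the starting numbers are invented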

Attention test. The children were given the Deux Barrages Test (Zazzo, 1959) to
measure their attention level. The test material consists of a matrix (50 × 30 cm),
at the top of which there are three prototypes (three small squares with a line on
the edge that has a precise orientation); on the rest of the matrix, there are squares
with lines on their edges going in all directions, including those of the prototypes
at the top of the matrix. The child must identify all the squares with the lines that
have the same orientation as those in the prototypes.

Static testing. See Raven’s CPM (Raven et al., 1979) description in Experiment 1.

Procedure

All the children were examined in two stages: First, by group testing for pro-
totype identification (Deux Barrages Test; Zazzo, 1959) to measure their level of
attention; and second, individually by using Raven et al.’s (1979) matrices test to
determine their static intelligence and the modifiability test to measure their
dynamic intelligence. The teachers were invited to mark down on a form the
attention level, the school performance level of the children, as well as the socio-
cultural background of the children’s families.

Group testing: Identification of prototypes. The Deux Barrages Test (Zazzo,
1959) was given to the children. They were required to point out all the figures
that looked like the prototypes. After a short period of practice, the children were
instructed to start the test. After each 1-min period, the mediator would tell them
to mark how far they had progressed, and then they continued with what remained
of the test. The test lasted 20 min. The test scores were measured as correct, omit-
ted, or mistaken.

Individual testing. After the group test had been completed, each child was
examined in a separate room with Raven’s CPM (Raven et al., 1979) for static
intelligence. In the third phase, the dynamic test items were presented to each
child individually. For this test, the teachers were advised to give the children a
card on which there was a series of alphabet letters and a series of numbers from
0 to 9, which they could use, if necessary, to achieve the solution of some of the
test items. This test was performed as described under the same heading in
Experiment 1.
Finally, the teachers had to evaluate the attention level of the pupils, their
school performance, and their sociocultural background. They measured atten-
tion level on a Likert-type scale of 1 to 5 (1 = low, 2 = medium-low, 3 = medium,
4 = medium-high, 5 = high). I also collected each child’s language and mathe-
matics performance grades. These were then transformed into scores from 1 to 3
(1 = below standard, 2 = standard, 3 = above standard). Finally, with regard to
the sociocultural level of the children (3 levels: low, medium, high), the teachers
were asked to take into account the following parameters: the father’s and moth-
er’s education levels, the number of family members, the number of rooms in the
child’s home, whether dialect was spoken in the family, and the parents’ jobs.

Results

Internal Consistency

The internal consistency coefficient (Cronbach’s alpha) on the 14 items of
the dynamic test showed an alpha level equal to .86. In addition, the factor analy-
sis, applied to the 14 items of the learning and transfer phases, showed that Fac-
tor 1, attained through the PCA method, had a percentage of explained variance
equal to 39.48%: Each one of the 14 items seemed to be highly correlated to Com-
ponent 1 (eigenvalue: 6.61), which suggested that one single modifiability mea-
sure was the hidden feature common to all the items.

Correlation Between Dynamic Testing and Static Testing

The correlation coefficient between the scores obtained with static testing
(Raven’s CPM; Raven et al., 1979) and dynamic indexes was equal to r(287) =
.48, p < .01. Figure 4 shows the specific relation between Raven’s CPM results
and cognitive modifiability indexes.
For the definition of static testing categories, I took into account the children
who had achieved the worst (13%) and the best (13%) scores, respectively, in
Raven’s CPM (Raven et al., 1979). In the end, 37 children had the worst scores
and were defined as children with low static measures, 213 children achieved
standard scores and were defined as children with medium static measures, and
37 children achieved the best scores and were defined as children with high sta-
tic measures. In the same way, to define low and high modifiability indexes, we
took into account those children who had achieved the worst (13%) and the best
(13%) performance in the modifiability indexes. The number of individuals in
each category was the same. These results showed that children with low static
measures lacked high modifiability indexes, whereas most of the children with
high static measures showed high modifiability indexes, with only 6% of them
showing low modifiability indexes (see Figure 4).

FIGURE 4. Relation between static and dynamic testing in elementary school
children, expressed as percentages.

Correlation Between Dynamic Testing and Cognitive Variables

Selective attention. In evaluating selective attention results (Deux Barrages Test;
Zazzo, 1959), I took into account three parameters: (a) the total number of cor-
rect answers, (b) the total number of omitted answers, and (c) the total number
of wrong answers given in the 20-min time frame. Some of the participants were
absent on the days on which the data were collected, so the numbers of degrees
of freedom refer to a limited portion of the total sample.
The correlation between the general modifiability index and the correct
answers was positive and highly significant, r(211) = .49, p < .01, the correlation
with the omitted answers was negative and highly significant, r(211) = −.21, p <
.01, and the correlation with the mistakes was also negative and highly signifi-
cant, r(211) = −.198, p < .01. The correlation between static indexes (Raven’s
CPM; Raven et al., 1979) and selective attention was positive and highly signif-
icant with the correct answers parameter, r(211) = .34, p < .01, but not signifi-
cant with mistakes, r(211) = .105, p = .07, or omissions, r(211) = .07, p = .13.
Therefore, the dynamic index test, rather than the static index test, proved to be
more relevant to selective attention parameters because it was correlated not only
to execution speed (correct answers), but also to execution accuracy (in fact, it
was inversely correlated to the wrong answers and the omitted answers).

Global attention. The correlation between the attention level as evaluated by the
teacher and the general modifiability index was positive and highly significant,
r(211) = .31, p < .01, as was the correlation between static indexes (Raven’s CPM;
Raven et al., 1979) and global attention, r(211) = .27, p < .01.

School performance. With regard to school performance, there was a correlation
between the evaluation of language (humanistic) performance and the potential
intelligence measure, r(211) = .27, p < .01. The correlation became stronger
when mathematical performance was taken into account, r(211) = .36, p < .01.
The correlations between static testing and the language and mathematics
performances, although significant, were weaker, r(211) = .16, p < .01, and
r(211) = .18, p < .01, respectively.

Influence of Sociocultural Background and Gender on Dynamic Testing

An analysis of variance was performed to check the influence of the two vari-
ables on the dynamic testing results. The data referred to a smaller sample than
the total sample of the study because not all the teachers would provide the data
related to their pupils’ sociocultural backgrounds. For the data that were
available, sociocultural background was not significant, F(2, 121) = 0.85, p = .43;
for Raven’s CPM (Raven et al., 1979), however, the influence of sociocultural
background was significant, F(2, 121) = 19.72, p < .01.

EXPERIMENT 3

In this experiment, I analyzed the dynamic indexes as related to internal con-
sistency, the correlation with static testing, the external validity linked to the correla-
tion with attention indexes and school performance, and the influence of the
sociocultural background in teenagers.

Method

Participants

There were 198 teenager participants (mean age = 18 years, 7 months; range =
17 years, 10 months to 18 years, 11 months), with an equal number of boys and
girls. They attended different kinds of Italian high schools and technical schools
(40% were from southern Italy, and 60% from northern Italy).

Instruments

Dynamic test. The items in the dynamic test came from a standardization work
(Fabio, 1999), which had met satisfactory criteria as far as statistical reliability was
concerned. As in Experiments 1 and 2, the dynamic testing items were problem
solving items that the participant did not solve spontaneously but during the course
of the learning session. The test in its final form consisted of 12 items, 6 items
related to the learning stage and 6 to the transfer stage (see Figure 5). As in Exper-
iments 1 and 2, the same rules for problem solving in the learning stage applied
to the transfer stage, plus one new rule that had to interact with the other items to
allow the examinee to achieve the solution. The items were as follows: (a) deduc-
tive reasoning, conditional type, (b) deductive reasoning with cryptoarithmetic
problems, (c) inductive reasoning in completing a series of letters, (d) inductive
reasoning in completing a series of numbers, (e) problem solving of a graphic–per-
ceptive mathematic type, and (f) problem solving of a graphic–perceptive type on
the clock.
An example of learning Item 1 and transfer Item 1 consisted of the following
sentence, If the book is there, then the triangle is there too, followed by four ques-
tions:
1. There is the book, what follows?
2. The book is not there, what follows?
3. There is the triangle, what follows?
4. The triangle is not there, what follows?
In this case, the possible answers were: (a) It’s there, (b) It’s not there, or (c)
No conclusion can be made. The sequence of the five suggestions leading to the
solution of the conditional assertion was:
1. Pay attention to the relation contained in the sentence.
2. The relation tells you only that if the book is there, then the triangle is also
there—not the other way round.
3. There might be other objects as well—for example, a table or a square—and
the triangle may be there as well.
4. It could be that the book is not there, but nonetheless the triangle may or may
not be there.
5. Because the relation implies that if the book is there, then the triangle is there,
other objects might also be present, among them the triangle.
The answers to this problem are: (a) The triangle is there, (b) Nothing can
be said about the triangle, (c) Nothing can be said about the book, or (d) The book
is not there.
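The logic of this item can be made explicit by enumerating the situations that are compatible with the premise “if the book is there, then the triangle is there.” The small Python sketch below is only an illustration of that logic, not of the scoring procedure used in the test.

    # Which conclusions follow from "if book then triangle"?
    from itertools import product

    def conclusions(book=None, triangle=None):
        """Possible (book, triangle) situations given the conditional rule and an
        optional observation about one of the two objects."""
        worlds = [(b, t) for b, t in product([True, False], repeat=2)
                  if not (b and not t)]          # rule: book implies triangle
        if book is not None:
            worlds = [w for w in worlds if w[0] == book]
        if triangle is not None:
            worlds = [w for w in worlds if w[1] == triangle]
        return worlds

    print(conclusions(book=True))       # only (True, True): the triangle is there
    print(conclusions(book=False))      # the triangle may or may not be there
    print(conclusions(triangle=True))   # the book may or may not be there
    print(conclusions(triangle=False))  # only (False, False): the book is not there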

FIGURE 5. A graphic representation of teenagers’ dynamic test structure. Points
1–5 (general attention, selective attention, partial rule, global rule, execution) are
the cues that are required to attain the solution to the problem.

Attention test. The Deux Barrages Test (Zazzo, 1959) was given to the partici-
pants to measure their attention level. It consists of a matrix (50 cm × 30 cm);
three prototypes are shown at the top (three small squares with a line on the edge
having a definite orientation), and on the rest of the matrix, there are squares with
lines on their edges going in all directions, including those of the prototypes at
the top of the matrix. The participant had to identify all the squares with lines that
had the same orientation as those in the prototypes.

Static intelligence test. The static intelligence test D48 (Black, 1961) was given
to the participants as a group. The total number of correct answers was tallied out
of a maximum score of 48.

Procedure

All the students were examined following the same procedures as those used
in Experiments 1 and 2. Only the static testing procedure was changed, from indi-
vidual to group testing.

Results

Internal Consistency

The internal consistency (Cronbach’s alpha) on the 12 items of the dynamic
test showed an alpha level equal to .86. In addition, the factor analysis applied to
the 12 items related to the learning stage and the transfer stage showed that Fac-
tor 1, obtained through the PCA method, had an explained variance percentage
equal to 56.2% (Factor 1, eigenvalue = 6.82). Each of the 12 items proved to be
highly correlated to Component 1. This could point to one single measure of mod-
ifiability as a hidden feature common to all the items, a tendency that was even
stronger in this older group of students.

Correlation Between Dynamic Testing and Static Testing

The correlation coefficient between the scores obtained with static testing
(D48; Black, 1961) and the dynamic indexes was r(190) = .48, p < .01. Figure 6
shows the specific relation between the results of the D48 (Black) and the cognitive
modifiability indexes. The participants were divided into three categories on the
basis of the results achieved in the D48 (Black) static testing, taking into
account the 13% of the examinees who had achieved the best scores (defined as
students with high static measures), those with standard scores (defined as stu-
dents with medium static measures), and the 13% of the examinees who had
achieved the worst scores (defined as students with low static measures). There
were thus 25 students with low static measures, 148 with medium static measures,
and 25 with high static measures. The participants were also divided into three
categories in relation to the dynamic indexes. The general modifiability index
ranged from 12 to 60. In this case, to define the three categories of low, medium,
or high modifiability, a value of 1 (low modifiability) was assigned to scores from
12–27 (13% of the worst performances), a value of 2 (medium modifiability) was
assigned to scores from 28–48, and a value of 3 (high modifiability) was assigned
to scores from 49–60 (13% of the best performances). Figure 6 shows the gener-
al modifiability indexes as related to the static indexes (D48). The three groups
(students with low, medium, or high static measures) were quite different from each
other. Most participants who had low static measures showed low modifiability
indexes, and none of them showed high modifiability indexes. In contrast, 36% of
the students with high static measures showed high modifiability indexes, and only
4% had low modifiability indexes.

FIGURE 6. Relation between static and dynamic testing in teenagers, expressed
as percentages.

Correlation Between Dynamic Testing and Cognitive Variables

Selective attention. In evaluating the selective attention level (Deux Barrages Test;
Zazzo, 1959), three parameters were taken into account: (a) total number of cor-
rect answers, (b) total number of omitted answers, and (c) total number of mis-
taken answers within the 20-min time frame. The correlation between the gener-
al modifiability index and the correct answers was positive and highly significant,
r(190) = .451, p < .01, the correlation with the omitted answers was negative and
highly significant, r(190) = −.32, p < .01, and the correlation with the mistakes
made was also negative and highly significant, r(190) = −.25, p < .01. The corre-
lation between the static indexes (D48; Black, 1961) and selective attention was
positive and highly significant with the correct answers parameter, r(190) = .34,
p < .01, but it was not significant with mistakes made, r(190) = .105, p = .12, or
with the omitted answers, r(190) = .07, p = .11. Therefore, the dynamic index
proved to be more relevant to the selective attention parameters than the static
index did, because it was correlated not only with execution speed (correct answers)
but also with execution accuracy.

School performance. The correlation coefficient between the GMI and the aver-
age marks in mathematics was quite significant, r(190) = .33, p < .01, whereas
the correlation coefficient between the average marks in language and the GMI
was insignificant, r(190) = .023, p = .34. The marks achieved in language were
not correlated with the various indexes of mental ability, whereas the marks
achieved in mathematics were highly correlated with both the D48 (Black, 1961),
r(190) = .32, p < .01, and the speed index, r(190) = .33, p < .01.

Influence of Sociocultural Background and Gender on Dynamic Testing

The number of examinees whose sociocultural background was stated was
too small to allow this parameter to be measured.

Discussion

The first aim of this study was to evaluate the existence of internal consis-
tency. The internal consistency indexes, which are measured with Cronbach’s
alpha, showed very high internal consistency. On the basis of such data, which
were confirmed in the three age groups, one could assume that there is a single
latent ability in the performance of the various tasks. The explanatory hypothe-
sis put forward in this article is in agreement with Vygotskij’s (1978) zone of
proximal development idea. That is, it is possible that a small number of cogni-
tive processes present in the general modifiability index can be generalized in a
large number of contexts. These cognitive processes can be identified with the
concepts of modifiability or of propensity to adaptation. However, these data
do not agree with the results obtained by Brown (1990). In her study, Brown com-
pared the modifiability indexes in a letter-series task with those from Raven’s Pro-
gressive Matrices Test (Raven et al., 1979), and her results showed that although
learning capability was correlated across the two tasks, transfer capability was
not; transfer appeared to be task specific (Campione, Brown, Ferrara, Jones, &
Steinberg, 1985).
The second aim of this study was to identify the correlation between dynam-
ic and static testing. Dynamic testing proved to be correlated to static testing,
although such correlation was only partial. It is possible that the two measure-
ments together might supply a more accurate measurement of the individual’s
level of intelligence. As a matter of fact, static testing can be seen as a product of
the individual’s past learning level, whereas dynamic testing can supply infor-
mation on an individual’s modifiability; that is, on his or her propensity to change.
In this sense, one would expect the predictive validity of dynamic testing to be
greater than that of static testing. Unfortunately, the data available in this study
were not enough to verify this hypothesis. By combining static and dynamic test-
ing, the following profiles emerge: (a) individuals with low static indexes and a
low or medium modifiability level, (b) individuals with medium static indexes
and a low, medium, or high modifiability level, and (c) individuals with high sta-
tic indexes and a medium or high modifiability level.
There are practically no individuals with low static indexes and a high
modifiability level, or with high static indexes and a low modifiability level; if
there are any, their number is negligible. A likely explanation for the partial
correlation between static and dynamic indexes of intelligence is that the product
measured by the static index already incorporates a dynamic component; that is,
the individual’s capacity for change. By this, we do not mean that the dynamic
index, although it is a more refined and suitable measurement of potential
intelligence, is a pure measure; it may include more fundamental cognitive vari-
ables such as arousal level or memory ability, as well as noncognitive variables
such as the individual’s motivation or attitude.
The third aim of this study was to determine the correlations between dynam-
ic testing and some cognitive variables, such as selective attention level, global
attention level (as evaluated by the teacher), and school performances in language
and mathematics. These indexes of correlation with cognitive variables as listed
are important because they measure the validity of dynamic testing. Individuals
with high modifiability indexes show higher global attention levels, their codifi-
cation systems are faster (positive correlation between correct answers in selec-
tive attention and GMI) and more accurate (negative correlation with omitted
answers), and they show a higher level of school performance. Also, the data
related to static indexes seem to be correlated with the aforementioned cognitive
variables, but to a lesser extent, and they are not correlated with the accuracy of
the codification.
The correlations with these cognitive variables are indexes of the external
validity of dynamic testing. From this study, it would seem that dynamic indexes,
although partially correlated with static indexes, represent a qualitative step forward
compared with static indexes: They measure a concept that is closer to the
definition of intelligence as adaptability or modifiability.
The fourth aim of this study was to identify the influence of the sociocultur-
al background variable on dynamic testing. The sociocultural background (even
though it was evaluated on a partial sample only) was not significant in relation
to dynamic measures, but it was significant in relation to static measures. This is
in agreement with the results of previous studies on dynamic indexes, and it once
again stresses the fact that dynamic indexes measure something different from an
individual’s acquired culture: They measure the propensity to change (Tzuriel,
1997; Tzuriel & Kaufman, 1999). In conclusion, the internal and external validity
of the indexes of dynamic assessment of intelligence make dynamic testing
preferable to static testing, because it better measures intelligence understood as
adaptability and modifiability.

REFERENCES

Berk, L. E. (2001). Development through the lifespan. Elmsford, NY: Pearson.


Black, J. D. (1961). The D48 Test. Palo Alto, CA: Consulting Psychologists Press.
Brown, A. L. (1990). Domain-specific principles affect learning and transfer in children.
Cognitive Science, 14, 107–133.
Brown, A. L., & Campione, J. C. (1984). Three faces of transfer: Implication for early
competence, individual differences and instruction. In M. Lamb, A. Brown, & B. Rogoff
(Eds.), Advances in developmental psychology (Vol. 3, pp. 143–192). Hillsdale, NJ: Erl-
baum.
Brown, A. L., & French, L. A. (1979). The zone of potential development: Implication for
intelligence testing in the year 2000. Intelligence, 3, 255–277.
Campione, J. C., Brown, A. L., Ferrara, R. A., & Bryant, N. R. (1984). The zone of prox-
imal development: Implications for individual differences and learning. In B. Rogoff &
J. V. Wertsch (Eds.), New directions for child development (No. 23, pp. 77–91). San
Francisco: Jossey-Bass.
Campione, J. C., Brown, A. L., Ferrara, R. A., Jones, R. S., & Steinberg, E. (1985). Break-
downs in flexible use of information: Intelligence-related differences in transfer fol-
lowing equivalent learning performance. Intelligence, 9, 297–315.
Carlson, J., & Wiedl, K. H. (1992). The dynamic assessment of intelligence. In H. C. Hay-
wood & D. Tzuriel (Eds.), Interactive assessment (pp. 167–186). New York: Springer-
Verlag.
Day, J. D., & Hall, L. K. (1988). Intelligence-related differences in learning and transfer
and enhancement of transfer among mentally retarded persons. American Journal of
Mental Deficiency, 93, 125–137.
Fabio, R. A. (1998). Gli indici dinamici nella misura delle abilità cognitive [Dynamic
indexes in the measurement of cognitive ability]. Età Evolutiva, 59, 69–78.
Fabio, R. A. (1999). Costruzione di un test di misura dell’intelligenza potenziale [Con-
struction of a test for the measurement of potential intelligence]. Giornale Italiano di
Psicologia, 1, 125–146.
Fabio, R. A. (2001). L’intelligenza potenziale: Strumenti di misura e di riabilitazione
[Potential intelligence: Instruments of measurement and enrichment]. Milan: Franco
Angeli.
Fabio, R. A. (2003). La modificabilità cognitiva nella scuola elementare [Cognitive mod-
ifiability in elementary schools]. Età evolutiva, 75, 5–18.
Fabio, R. A., & Mancuso, G. (1995). La valutazione degli indici dinamici in bambini in
età prescolare [An evaluation of dynamic indexes in preschool children]. Psicologia e
Scuola, 76, 46–57.
Feuerstein, R., Rand, Y., & Hoffman, M. B. (1979). The dynamic assessment of retarded
performers: The learning potential assessment device-theory, instruments, and tech-
niques. Baltimore: University Park Press.
Feuerstein, R., Rand, Y., Hoffman, M. B., & Miller, R. (1980). Instrumental enrichment.
Baltimore, MD: University Park Press.
Missiuna, C., & Samuels, M. (1989). Dynamic assessment: Review and critique. Special
Services in the Schools, 5, 1–12.
Raven, J. C., Court, J. H., & Raven, J. (1979). Manual for Raven’s Progressive Matrices
and vocabulary scales. London: H. K. Lewis.
Tzuriel, D. (1997). A novel dynamic assessment approach for young children: Major
dimension and current research. Educational and Child Psychology, 14, 83–108.
Tzuriel, D. (2000). Dynamic assessment of young children: Educational and intervention
perspectives. Educational Psychology Review, 12, 385–435.
Tzuriel, D., & Kaufman, R. (1999). Mediated learning and cognitive modifiability:
Dynamic assessment of Ethiopian immigrant children to Israel. Journal of Cross-Cul-
tural Psychology, 30, 359–380.


Vygotskij, L. S. (1978). Mind in society: The development of higher psychological
processes. Cambridge, MA: Harvard University Press.
Wechsler, D. (1974). Manual for the Wechsler Intelligence Scale for Children–Revised.
San Antonio, TX: The Psychological Corporation.
Zazzo, R. (1959). Le test des deux barrages: Manuel pour l’examen psychologique de l’en-
fant [The Deux Barrages Test: Manual for the psychological examination of children].
Neuchâtel, Switzerland: Delachaux & Niestlé.

Manuscript received May 22, 2002
Revision accepted for publication April 2, 2003
