College students and adults; tests 1960-1963; Manual, 1965; 18 scales plus recommended composite scores: Arithmetic, Assembly, Components, Coordination, Electronics, Ingenuity, Inspection, Judgment and Comprehension, Mathematics and Reasoning, Memory, Patterns, Planning, Precision, Scales, Tables, and Vocabulary, plus the composite measures: general ability, verbal ability, and quantitative ability. 3 hours 37 minutes. Flanagan; Science Research Associates.
According to the test author, FIT was ". . . designed specifically for use with adults in personnel selection programs for a wide variety of jobs" (Manual). In some respects the battery seems particularly well suited for this purpose. The 18 tests represent a wide sampling of human abilities, abilities which, on the face of it at least, seem to be important in the real jobs of real people. And although Professor Flanagan has not used factor analytic procedures (or the like) in test construction, he has been much more careful than most test constructors.
The Manual tells us that the FIT tests "like the FACT series . . . are based on
the identified job elements . . . ," but the Manual is not at all clear in indicating
the source research in which these job elements were identified. It appears that this
research is essentially that upon which the FACT was based and that this was
mainly the research conducted under Flanagan's direction during WW II. This
being the impression, one wonders if the job-elements thus far identified are im-
portant elements of today's occupations.
In a sense, of course, this argument is specious, for the tests refer to fairly
general abilities, not specific elements of jobs that would have changed since WW
II. In this respect the tests assess attributes like the primary mental abilities isolated
by means of factor analyses; indeed, it is evident that the FACT and FIT batteries
contain measures of many of the ability factors so far identified, as described in
comprehensive reviews of replicated findings (French, Ekstrom & Price, 1963;
Guilford & Merrifield, 1960). The FIT Manual makes no reference to this line of
research, however. But the principal virtue of the job-element approach would
seem to be that it promises to identify features of job performance not identified
by other means and, in particular, features of today's jobs as they are performed
today. One might expect, for example, that this approach would lead to the identi-
fication of interesting new abilities related to performance in computer program-
ming, a very important occupation in today's world but one which has come into
being only in the last 15 or 20 years (i.e., since WW II). Yet there is no reference in
the FIT Manual to research showing which, if any, elements of such jobs have
been identified. In fact, the FIT Manual does not clearly direct the reader to the
sections of the FACT Manuals wherein the relationships between FACT scores
and job performances are indicated (Your FACT Scores-and What They Mean,
1953; Interpreting Test Scores, 1956; Technical Report, 1959) although tables
showing the comparability between FIT tests and FACT tests are provided.
A closely related point has to do with the norms and practical validities (i.e.,
relevancies) available for use with FIT. These are of two kinds: (1) those derived by
administering the FIT tests to persons in different academic programs and deter-
mining percentile norms and relationships between test scores and academic per-
formance in these groupings, and (2) those obtained by "equating" scores on a given
FIT test with scores on a comparable FACT test and using this as a basis for treat-
ing the information available on the FACT as applicable to the FIT. In either case,
as concerns the author's intent to provide a battery for use with adult personnel selection programs, the information presently available leaves much to be desired.
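The second of these two approaches, "equating" a FIT score with a comparable FACT score, can be illustrated with a minimal linear-equating sketch: a score on one scale is mapped onto the other by matching z-scores in a common reference group, after which the FACT norms and validities are read off for the equated score. The function name and all summary statistics below are invented for illustration; the Manual does not specify the equating method in this detail.

```python
# Hypothetical sketch of linear score equating, as one common way the
# FIT-to-FACT "equating" described above might be carried out.
# All numbers are invented for illustration.

def linear_equate(x, mean_x, sd_x, mean_y, sd_y):
    """Map a score x from scale X onto scale Y by matching z-scores."""
    z = (x - mean_x) / sd_x
    return mean_y + z * sd_y

# Invented summary statistics for a FIT test and its FACT counterpart
# in a common reference group.
fit_mean, fit_sd = 24.0, 6.0
fact_mean, fact_sd = 30.0, 8.0

fit_score = 27.0
fact_equivalent = linear_equate(fit_score, fit_mean, fit_sd, fact_mean, fact_sd)
print(fact_equivalent)  # 34.0
```

The weakness the review goes on to document is that this mapping is only as trustworthy as the correlation between the two tests being equated.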
Means for each of the groups and for each test are provided. The differences in patterns among them are interpreted as indicating differences between the vocational groups, but analyses to indicate the significance (or insignificance) of these differences are not given. Correlations and step-wise multiple correlation and regression equations are given for fall and spring grades in four of the programs. The samples in a few of these analyses are small, and some of the reported regression coefficients are very likely unstable and misleading. Nevertheless, the results, overall, indicate that some of the tests have relevance for predictions of academic performance. But, of course, it does not follow from these results that the tests have relevance for predictions of job success, even in fields seemingly related to the college programs, much less for a "wide variety of jobs."
The FIT tests are, in general, about half as long as the FACT tests. In the FIT battery there is one test, Electronics, not found in FACT, whereas FACT contains two tests not found in FIT. As concerns the intended use for personnel selection, the main difference between FIT and FACT is that the difficulty levels of the former have been increased.
The correlations between corresponding FIT and FACT tests are in some cases
very low. For example, FIT Inspection correlates only .28 with FACT Inspection.
This could result because of a difference in difficulty level for the two tests. But
even so, it makes use of the results from equating FIT and FACT scores a dubious
procedure. Also, some FIT tests correlate rather high with non-corresponding FACT
tests or with other FIT tests-i.e., high relative to the correlation between corres-
ponding FIT and FACT tests. For example, although FIT Planning correlates .38
with FACT Planning, it correlates .43 with FACT Ingenuity. An explanation of this pattern is not offered in the Manual.
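The anomaly just noted is essentially a failure of a convergent/discriminant check: a test should correlate more highly with its corresponding test than with non-corresponding ones. A minimal sketch of such a check follows; the two coefficients quoted in the review (.38 and .43) are used, and the remaining entries are invented for illustration.

```python
# Flag any FIT test whose correlation with a NON-corresponding FACT
# test exceeds its correlation with the corresponding FACT test.
# Entries marked "invented" are illustrative only.

fit_fact_r = {
    # (FIT test, FACT test): correlation
    ("Planning", "Planning"): .38,    # corresponding pair (from the review)
    ("Planning", "Ingenuity"): .43,   # non-corresponding (from the review)
    ("Ingenuity", "Ingenuity"): .55,  # invented
    ("Ingenuity", "Planning"): .35,   # invented
}

def flag_discriminant_failures(r):
    """Return (FIT, FACT) pairs where a non-corresponding correlation
    exceeds the corresponding FIT-FACT correlation."""
    failures = []
    for (fit, fact), value in r.items():
        if fit != fact and value > r[(fit, fit)]:
            failures.append((fit, fact))
    return failures

print(flag_discriminant_failures(fit_fact_r))  # [('Planning', 'Ingenuity')]
```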
If tests are intended for use with ". . . adults in personnel selection programs for a wide variety of jobs," it would seem desirable to have data indicating relationships between test performances and such variables as age and speediness. This is not to say that a test is necessarily invalid if it involves speediness to some extent or if it discriminates against older persons. In some tests 'speediness' is an essential aspect of the attribute measured (Inspection, for example), and it is to be expected that older persons will perform more poorly in some kinds of tests. But to use tests in personnel selection for a wide variety of jobs, one should know about these matters, for it is certain that speediness is not essential to performance in some jobs where an ability like that measured in a speeded test would seem to be involved and, likewise, there are situations where one would want to make an adjustment to remove age differences found on the test but not relevant in predicting job success. Yet the Manual provides no information of the kinds here specified.
The reliabilities of the FIT tests were estimated indirectly using several kinds of
information. No test-retest data were gathered and, since there is (as yet) no parallel
form for FIT, equivalency coefficients could not be obtained. Since the tests are
speeded, it is, as is noted in the Manual, ". . . inappropriate to compute the usual
Spearman-Brown and Kuder-Richardson estimates of the reliability coefficients"
(Manual, p. 12). Lacking these avenues of approach to estimates of reliability, Pro-
fessor Flanagan has utilized the correlations between corresponding FIT and FACT
tests, the inferences one can draw from the similarity in pattern of correlations
which corresponding FIT and FACT tests have with other measures, and the in-
ference one can draw from the multiple correlation which a test has with other tests.
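One standard version of the last of these inferences is Guttman's bound: the squared multiple correlation (SMC) of a test with the other tests in a battery is a lower bound on that test's reliability. The review does not state that this particular bound was used, so the following is only a sketch of the underlying logic, with an invented three-test correlation matrix.

```python
# Guttman-style lower bound on reliability: the squared multiple
# correlation (SMC) of a test with the other tests cannot exceed the
# test's reliability.  The correlations below are invented.

# correlations of test A with tests B and C, and of B with C (invented)
r_ab, r_ac, r_bc = 0.40, 0.30, 0.20

# SMC of A regressed on B and C, from the standard two-predictor formula
smc_a = (r_ab**2 + r_ac**2 - 2 * r_ab * r_ac * r_bc) / (1 - r_bc**2)

print(round(smc_a, 3))  # 0.21 -- so test A's reliability is at least about .21
```

Note that the bound involves the squared multiple correlation, not the multiple correlation itself, so inferences from the unsquared values of .30 to .68 quoted below are looser still.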
The correlations between corresponding FIT and FACT tests range from .28 to
.79. Since the FIT tests are, in general, at a higher level of difficulty than the FACT
tests, these coefficients are almost certainly underestimates of equivalency relia-
bilities. The multiple correlations for the various tests with other tests vary from .30
to .68. These are somewhat inflated by least-squares capitalization on chance varia-
tion, but their lowness also reflects the fact that the tests were designed to be fairly
independent. Hence, again, the suggestion is that the reliabilities almost certainly fall within the range from about .30 to .80. The patterns of correlations with other tests for corresponding FIT and FACT tests are in many cases strikingly similar, and the correlations between two FIT tests are often nearly the same size as the correlations between the corresponding FACT tests. Taken together, this evidence suggests that the reliabilities are surely non-zero, almost certainly all above .3, some perhaps appreciably lower than for corresponding FACT tests, and perhaps mostly ranging from about .50 to .85. It would seem, therefore, that the reliabilities are adequate for many uses for which the test is intended (viz., institutional decisions). One should, of course, be cautious about using the test for purposes for which it was not intended, viz., for individual decisions.
JOHN L. HORN
University of Denver