their research (e.g., “investigation of the nomological network of construct x”). Scientific research and debates on the concept of the nomological network as such have typically been motivated to clarify the concept of construct validity and the practice of construct validation.

If constructs are defined by their position in a nomological net, the availability of such a lawful network of relations deduced from theory is a precondition for construct validation of measures and theories. Psychological theories often lack this theoretical precision. This has led to a dissociation between construct validity qua theory and the practice of construct validation (Brennan 2013), a weakening of the theory-testing part of construct validity (Colliver et al. 2012), and a renewed discussion of the concept of validity as such (Borsboom et al. 2004; Embretson 2007; Newton and Shaw 2013; Special Issue on Validity of the Journal of Educational Measurement, 2013, 50/1).

Nomological Nets and Construct Validation

The nomological network idea provides no framework for addressing practical validation issues. Nevertheless, it helps to refine the construct validation process. Specifically, Cronbach (1988) contrasted programs of strong and weak construct validation. Strong programs are based on fully developed formal theories (i.e., nomological nets) and deductive theory testing, while weak programs are based on less developed theories that – put to the extreme – would allow interpreting any relation as validation evidence (“anything goes”). Strong and weak programs combine in construct validation in “an iterative process in which tests of partially developed theories provide information that leads to theory refinement and elaboration, which in turn provides a sounder basis for subsequent construct and theory validation research” (Strauss and Smith 2009, p. 9; see already Cronbach and Meehl 1955, for a discussion of these top-down and bottom-up processes in construct validation). In doing so, construct validation becomes an open-ended process in which validity is an overall evaluative judgment of the degree to which theoretical arguments and empirical findings support the plausibility and appropriateness of interpretations and uses of test scores (Messick 1995). Kane (2001, 2013) offered a pragmatic argument-based approach to construct validation that should avoid the extremes of the strong and weak programs, thus fitting better with actual research practice. In the argument-based approach, construct validity is established through theoretical and empirical evidence for a specific and clearly proposed use or interpretation of a measure instead of rigorous theory or nomological network testing (cf. Kane 2013).

In construct validation studies, convergent and discriminant relations are typically reported as correlations; researchers rarely refer to logical arguments or experimental results. Correlations can be estimated within a latent variable framework, e.g., by confirmatory factor analysis or structural equation modeling. When using such confirmatory methods, the nomological network idea guides psychological research in differential and personality psychology by pinpointing the importance of theory in the formulation of hypotheses about convergent and discriminant construct relations, by linking constructs to observations, by distinguishing between latent relations and observed relations, by distinguishing between conceptual and empirical overlap, or by distinguishing between theoretically and operationally defined constructs. There are various tools available to visually display network relations (e.g., Epskamp et al. 2012), as well as methods to evaluate construct validity based on convergent and discriminant construct relations (Westen and Rosenthal 2003).

Challenges

However, despite more than 50 years of research on nomological nets and construct validity, many open questions regarding theory and application of the nomological network idea remain:
First, convergent validity arguments are frequently based on correlations. High correlations of two or more measures of the same construct are interpreted in support of the convergent validity of a measure. However, no consensus has yet been reached on what constitutes a high enough correlation or how to deal with inconsistent correlational findings. Correlations are also influenced by the psychometric properties of a measure, the sample, or the method of assessment (e.g., tests, self-report). Further, for construct validation, there is the need to differentiate between the level of observations and the level of constructs. These aspects are not consistently taken into account in current validation studies (Schweizer 2012). Taken together, these issues leave the idea of convergent validity a vague and somewhat indeterminate concept (Schweizer 2012).

Second, when measures of different constructs are not meaningfully correlated, this is typically interpreted as supporting the discriminant validity of these measures. However, many validation studies lack a clear theoretical rationale for selecting constructs for discriminant relations (Ziegler et al. 2013). Frequently, theoretically unrelated constructs are chosen. However, to strongly support the construct validity of measures of (new) constructs, it is most informative to investigate relations between different but closely related constructs (Shaffer et al. 2015; Ziegler et al. 2013). And again, there is the need to differentiate between the level of observations and the level of constructs (see Shaffer et al. 2015, for a guideline for conducting a discriminant validation study that takes these aspects into account).

Third, the nomological net relates theoretical constructs to observations assessed with a certain method. Methods refer to key factors that define the measurement process. That is, so-called method factors (e.g., rater response styles, characteristics of the item wording, high- vs. low-stakes measurement contexts) may introduce systematic variance over and above variance attributable to the target construct. Method factors may threaten the construct validity of a measure, particularly because method variance has been estimated to make up between 18 and 32 percent of the total item variance (Podsakoff et al. 2012). Further, the nature of method variance remains elusive, as theories explaining the phenomena producing method variance are scarce (Ziegler et al. 2013). Podsakoff et al. (2012) present an overview of procedural and statistical approaches that may help to minimize the impact of method variance.

Fourth, Embretson (1983) differentiates two components of construct validity: nomothetic span and construct representation. While nomothetic span comprises convergent and discriminant relations of a measure, construct representation refers to a cognitive theory that explains response behavior for that measure. Nomological nets include laws that relate constructs to observations, that is, construct representation; however, most studies that use the nomological network idea focus on convergent and discriminant relations or nomothetic span and neglect construct representation. But if we lack a theory of response behavior, that is, if we cannot explain our data, an important precondition for interpreting nomothetic span is missing. Borsboom et al. (2004) therefore argue for a shift to an attribute-based view of measurement that assigns validity to a measure only if theoretical and empirical arguments support the assumption that an attribute causes the measurement outcomes. In this respect, rational or theory-based item and test construction, as well as scaling and scoring of test behavior, become of paramount importance (Brennan 2013).

Conclusion

The idea of the nomological net was introduced to guide construct validation. To this end, the network in its strong form specifies the laws that explain to what extent and why theoretical constructs are related with each other and with corresponding measures. In its strong form, the network also informs on the circumstances (i.e., moderator variables) when these relations can or
cannot be observed. Given its iterative nature, the nomological network idea underscores that theory development hinges on both clear construct definitions (see Podsakoff et al. 2016, for guidelines) and the development of excellent measures.

References

American Psychological Association. (1954). Technical recommendations for psychological tests and diagnostic techniques. Psychological Bulletin, 51(2, Suppl.).
Borsboom, D., Mellenbergh, G. J., & van Heerden, J. (2004). The concept of validity. Psychological Review, 111, 1061–1071. doi:10.1037/0033-295X.111.4.1061.
Brennan, R. L. (2013). Commentary on “Validating the Interpretations and Uses of Test Scores”. Journal of Educational Measurement (Special Issue: Validity), 50, 74–83. doi:10.1111/jedm.12001.
Campbell, D. T., & Fiske, D. W. (1959). Convergent and discriminant validation by the multitrait-multimethod matrix. Psychological Bulletin, 56, 81–105. doi:10.1037/h0046016.
Colliver, J. A., Conlee, M. J., & Verhulst, S. J. (2012). From test validity to construct validity … and back? Medical Education, 46, 366–371. doi:10.1111/j.1365-2923.2011.04194.x.
Cronbach, L. J. (1988). Five perspectives on the validity argument. In H. Wainer & H. I. Braun (Eds.), Test validity (pp. 3–17). Hillsdale: Erlbaum.
Cronbach, L. J., & Meehl, P. E. (1955). Construct validity in psychological tests. Psychological Bulletin, 52, 281–302. doi:10.1037/h0040957.
Embretson, S. E. (1983). Construct validity: Construct representation versus nomothetic span. Psychological Bulletin, 93(1), 179–197.
Embretson, S. E. (2007). Construct validity: A universal validity system or just another test evaluation procedure? Educational Researcher, 36, 449–455. doi:10.3102/0013189X07311600.
Epskamp, S., Cramer, A. O. J., Waldorp, L. J., Schmittmann, V. D., & Borsboom, D. (2012). qgraph: Network visualizations of relationships in psychometric data. Journal of Statistical Software, 48, 1–18. doi:10.18637/jss.v048.i04.
Kane, M. (2001). Current concerns in validity theory. Journal of Educational Measurement, 38, 319–342. doi:10.1111/j.1745-3984.2001.tb01130.x.
Kane, M. (2013). Validating the interpretations and uses of test scores. Journal of Educational Measurement, 50, 1–73. doi:10.1111/jedm.12000.
Messick, S. (1995). Validity of psychological assessment: Validation of inferences from persons’ responses and performances as scientific inquiry into score meaning. American Psychologist, 50, 741–749. doi:10.1037/0003-066X.50.9.741.
Newton, P. E., & Shaw, S. D. (2013). Standards for talking and thinking about validity. Psychological Methods, 18, 301–319. doi:10.1037/a0032969.
Podsakoff, P. M., MacKenzie, S. B., & Podsakoff, N. P. (2012). Sources of method bias in social science research and recommendations on how to control it. Annual Review of Psychology, 63, 539–569. doi:10.1146/annurev-psych-120710-100452.
Podsakoff, P. M., MacKenzie, S. B., & Podsakoff, N. P. (2016). Recommendations for creating better concept definitions in the organizational, behavioral, and social sciences. Organizational Research Methods. doi:10.1177/1094428115624965. Published online before print.
Schweizer, K. (2012). On issues of validity and especially on the misery of convergent validity. European Journal of Psychological Assessment, 28, 249–254. doi:10.1027/1015-5759/a000156.
Shaffer, J. A., DeGeest, D., & Li, A. (2015). Tackling the problem of construct proliferation: A guide to assessing the discriminant validity of conceptually related constructs. Organizational Research Methods. doi:10.1177/1094428115598239. Published online before print.
Strauss, M. E., & Smith, G. T. (2009). Construct validity: Advances in theory and methodology. Annual Review of Clinical Psychology, 5, 1–25. doi:10.1146/annurev.clinpsy.032408.153639.
Westen, D., & Rosenthal, R. (2003). Quantifying construct validity: Two simple measures. Journal of Personality and Social Psychology, 84, 608–618. doi:10.1037/0022-3514.84.3.608.
Ziegler, M., Booth, T., & Bensch, D. (2013). Getting entangled in the nomological net. European Journal of Psychological Assessment, 29, 157–161. doi:10.1027/1015-5759/a000173.