
Handbook of Item Response Theory, Three-Volume Set - First Edition, Van der Linden

Visit to download the full and correct content document:
https://textbookfull.com/product/handbook-of-item-response-theory-three-volume-set-first-edition-van-der-linden/

More digital products (PDF, EPUB, MOBI) are available for instant download; you may also be interested in:

Handbook of Item Response Theory, Volume Three: Applications, 1st Edition, Wim J. Van Der Linden (Editor)
https://textbookfull.com/product/handbook-of-item-response-theory-volume-three-applications-1st-edition-wim-j-van-der-linden-editor/

Remote Sensing Handbook - Three Volume Set, First Edition, Prasad S. Thenkabail
https://textbookfull.com/product/remote-sensing-handbook-three-volume-set-first-edition-prasad-s-thenkabail/

CRC Handbook of Thermodynamic Data of Polymer Solutions, Three Volume Set, First Edition, Wohlfarth
https://textbookfull.com/product/crc-handbook-of-thermodynamic-data-of-polymer-solutions-three-volume-set-first-edition-wohlfarth/

The Basics of Item Response Theory Using R, 1st Edition, Frank B. Baker
https://textbookfull.com/product/the-basics-of-item-response-theory-using-r-1st-edition-frank-b-baker/

Lev Vygotsky, First Paperback Edition, René Van Der Veer
https://textbookfull.com/product/lev-vygotsky-first-paperback-edition-rene-van-der-veer/

Present Imperfect: Contemporary South African Writing, First Edition, Van Der Vlies
https://textbookfull.com/product/present-imperfect-contemporary-south-african-writing-first-edition-van-der-vlies/

Illustrated Encyclopedia of Applied and Engineering Physics, Three-Volume Set, 1st Edition, Robert Splinter (Author)
https://textbookfull.com/product/illustrated-encyclopedia-of-applied-and-engineering-physics-three-volume-set-1st-edition-robert-splinter-author/

Biota: Grow, Gather, Cook - Loucas
https://textbookfull.com/product/biota-grow-2c-gather-2c-cook-loucas/

Narratives of Technology, 1st Edition, J. M. Van Der Laan (Auth.)
https://textbookfull.com/product/narratives-of-technology-1st-edition-j-m-van-der-laan-auth/
Handbook of
Item Response Theory
VOLUME ONE
Models
Handbook of Item Response Theory, Three-Volume Set

Handbook of Item Response Theory, Volume One: Models

Handbook of Item Response Theory, Volume Two: Statistical Tools

Handbook of Item Response Theory, Volume Three: Applications


Chapman & Hall/CRC
Statistics in the Social and Behavioral Sciences Series

Series Editors
Jeff Gill, Washington University, USA
Steven Heeringa, University of Michigan, USA
Wim J. van der Linden, Pacific Metrics, USA
J. Scott Long, Indiana University, USA
Tom Snijders, Oxford University, UK, and University of Groningen, NL

Aims and scope

Large and complex datasets are becoming prevalent in the social and behavioral
sciences, and statistical methods are crucial for the analysis and interpretation of such
data. This series aims to capture new developments in statistical methodology with
particular relevance to applications in the social and behavioral sciences. It seeks to
promote appropriate use of statistical, econometric and psychometric methods in
these applied sciences by publishing a broad range of reference works, textbooks and
handbooks.

The scope of the series is wide, including applications of statistical methodology in


sociology, psychology, economics, education, marketing research, political science,
criminology, public policy, demography, survey methodology and official statistics. The
titles included in the series are designed to appeal to applied statisticians, as well as
students, researchers and practitioners from the above disciplines. The inclusion of real
examples and case studies is therefore essential.
Published Titles

Analyzing Spatial Models of Choice and Judgment with R


David A. Armstrong II, Ryan Bakker, Royce Carroll, Christopher Hare, Keith T. Poole, and Howard Rosenthal
Analysis of Multivariate Social Science Data, Second Edition
David J. Bartholomew, Fiona Steele, Irini Moustaki, and Jane I. Galbraith
Latent Markov Models for Longitudinal Data
Francesco Bartolucci, Alessio Farcomeni, and Fulvia Pennoni
Statistical Test Theory for the Behavioral Sciences
Dato N. M. de Gruijter and Leo J. Th. van der Kamp
Multivariable Modeling and Multivariate Analysis for the Behavioral Sciences
Brian S. Everitt
Multilevel Modeling Using R
W. Holmes Finch, Jocelyn E. Bolin, and Ken Kelley
Bayesian Methods: A Social and Behavioral Sciences Approach, Third Edition
Jeff Gill
Multiple Correspondence Analysis and Related Methods
Michael Greenacre and Jörg Blasius
Applied Survey Data Analysis
Steven G. Heeringa, Brady T. West, and Patricia A. Berglund
Informative Hypotheses: Theory and Practice for Behavioral and Social Scientists
Herbert Hoijtink
Generalized Structured Component Analysis: A Component-Based Approach to Structural Equation
Modeling
Heungsun Hwang and Yoshio Takane
Statistical Studies of Income, Poverty and Inequality in Europe: Computing and Graphics in R Using
EU-SILC
Nicholas T. Longford
Foundations of Factor Analysis, Second Edition
Stanley A. Mulaik
Linear Causal Modeling with Structural Equations
Stanley A. Mulaik
Age–Period–Cohort Models: Approaches and Analyses with Aggregate Data
Robert M. O’Brien
Handbook of International Large-Scale Assessment: Background, Technical Issues, and Methods of
Data Analysis
Leslie Rutkowski, Matthias von Davier, and David Rutkowski
Generalized Linear Models for Categorical and Continuous Limited Dependent Variables
Michael Smithson and Edgar C. Merkle
Incomplete Categorical Data Design: Non-Randomized Response Techniques for Sensitive Questions in
Surveys
Guo-Liang Tian and Man-Lai Tang
Handbook of Item Response Theory, Volume One: Models
Wim J. van der Linden
Handbook of Item Response Theory, Volume Two: Statistical Tools
Wim J. van der Linden
Handbook of Item Response Theory, Volume Three: Applications
Wim J. van der Linden
Computerized Multistage Testing: Theory and Applications
Duanli Yan, Alina A. von Davier, and Charles Lewis
Chapman & Hall/CRC
Statistics in the Social and Behavioral Sciences Series

Handbook of
Item Response Theory
VOLUME ONE
Models

Edited by
Wim J. van der Linden
Pacific Metrics
Monterey, California
CRC Press
Taylor & Francis Group
6000 Broken Sound Parkway NW, Suite 300
Boca Raton, FL 33487-2742
© 2016 by Taylor & Francis Group, LLC
CRC Press is an imprint of Taylor & Francis Group, an Informa business

No claim to original U.S. Government works

Printed on acid-free paper


Version Date: 20160421

International Standard Book Number-13: 978-1-4665-1431-7 (Hardback)

This book contains information obtained from authentic and highly regarded sources. Reasonable efforts have been
made to publish reliable data and information, but the author and publisher cannot assume responsibility for the
validity of all materials or the consequences of their use. The authors and publishers have attempted to trace the
copyright holders of all material reproduced in this publication and apologize to copyright holders if permission to
publish in this form has not been obtained. If any copyright material has not been acknowledged please write and
let us know so we may rectify in any future reprint.

Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or
utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including
photocopying, microfilming, and recording, or in any information storage or retrieval system, without written
permission from the publishers.

For permission to photocopy or use material electronically from this work, please access www.copyright.com (http://
www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923,
978-750-8400. CCC is a not-for-profit organization that provides licenses and registration for a variety of users. For
organizations that have been granted a photocopy license by the CCC, a separate system of payment has been arranged.

Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for
identification and explanation without intent to infringe.

Library of Congress Cataloging‑in‑Publication Data

Names: Linden, Wim J. van der, editor.


Title: Handbook of item response theory / Wim J. van der Linden, [editor].
Description: Boca Raton, FL : CRC Press, 2015- | Includes bibliographical
references and index.
Identifiers: LCCN 2015034163 | ISBN 9781466514324 (alk. paper : vol. 2)
Subjects: LCSH: Item response theory. | Psychometrics. |
Psychology--Mathematical models.
Classification: LCC BF39.2.I84 H35 2015 | DDC 150.28/7--dc23
LC record available at http://lccn.loc.gov/2015034163

Visit the Taylor & Francis Web site at


http://www.taylorandfrancis.com
and the CRC Press Web site at
http://www.crcpress.com
Contents

Contents for Statistical Tools ..........................................................................................................xi


Contents for Applications ............................................................................................................ xiii
Preface .......................................................................................................................................... xvii
Contributors ................................................................................................................................. xxi

1. Introduction .............................................................................................................................1
Wim J. van der Linden

Section I Dichotomous Models

2. Unidimensional Logistic Response Models ................................................................... 13


Wim J. van der Linden

3. Rasch Model .......................................................................................................................... 31


Matthias von Davier

Section II Nominal and Ordinal Models

4. Nominal Categories Models............................................................................................... 51


David Thissen and Li Cai

5. Rasch Rating-Scale Model .................................................................................................. 75


David Andrich

6. Graded Response Models ................................................................................................... 95


Fumiko Samejima

7. Partial Credit Model .......................................................................................................... 109


Geoff N. Masters

8. Generalized Partial Credit Model................................................................................... 127


Eiji Muraki and Mari Muraki

9. Sequential Models for Ordered Responses .................................................................. 139


Gerhard Tutz

10. Models for Continuous Responses ................................................................................. 153


Gideon J. Mellenbergh

vii

Section III Multidimensional and Multicomponent Models

11. Normal-Ogive Multidimensional Models .................................................................... 167


Hariharan Swaminathan and H. Jane Rogers

12. Logistic Multidimensional Models ................................................................................ 189


Mark D. Reckase

13. Linear Logistic Models ...................................................................................................... 211


Rianne Janssen

14. Multicomponent Models ...................................................................................................225


Susan E. Embretson

Section IV Models for Response Times

15. Poisson and Gamma Models for Reading Speed and Error ...................................... 245
Margo G. H. Jansen

16. Lognormal Response-Time Model.................................................................................. 261


Wim J. van der Linden

17. Diffusion-Based Response-Time Models...................................................................... 283


Francis Tuerlinckx, Dylan Molenaar, and Han L. J. van der Maas

Section V Nonparametric Models

18. Mokken Models .................................................................................................................. 303


Klaas Sijtsma and Ivo W. Molenaar

19. Bayesian Nonparametric Response Models .................................................................. 323


George Karabatsos

20. Functional Approaches to Modeling Response Data.................................................. 337


James O. Ramsay

Section VI Models for Nonmonotone Items

21. Hyperbolic Cosine Model for Unfolding Responses .................................................. 353


David Andrich

22. Generalized Graded Unfolding Model .......................................................................... 369


James S. Roberts

Section VII Hierarchical Response Models

23. Logistic Mixture-Distribution Response Models........................................................ 393


Matthias von Davier and Jürgen Rost

24. Multilevel Response Models with Covariates and Multiple Groups ...................... 407
Jean-Paul Fox and Cees A. W. Glas

25. Two-Tier Item Factor Analysis Modeling ...................................................................... 421


Li Cai

26. Item-Family Models ........................................................................................................... 437


Cees A. W. Glas, Wim J. van der Linden, and Hanneke Geerlings

27. Hierarchical Rater Models ................................................................................................ 449


Jodi M. Casabianca, Brian W. Junker, and Richard J. Patz

28. Randomized Response Models for Sensitive Measurements ................................... 467


Jean-Paul Fox

29. Joint Hierarchical Modeling of Responses and Response Times ............................ 481
Wim J. van der Linden and Jean-Paul Fox

Section VIII Generalized Modeling Approaches

30. Generalized Linear Latent and Mixed Modeling ........................................................ 503


Sophia Rabe-Hesketh and Anders Skrondal

31. Multidimensional, Multilevel, and Multi-Timepoint Item Response Modeling ...... 527
Bengt Muthén and Tihomir Asparouhov

32. Mixed-Coefficients Multinomial Logit Models ........................................................... 541


Raymond J. Adams, Mark R. Wilson, and Margaret L. Wu

33. Explanatory Response Models......................................................................................... 565


Paul De Boeck and Mark R. Wilson

Index ............................................................................................................................................. 581


Contents for Statistical Tools

Section I Basic Tools

1. Logit, Probit, and Other Response Functions


James H. Albert

2. Discrete Distributions
Jodi M. Casabianca and Brian W. Junker

3. Multivariate Normal Distribution


Jodi M. Casabianca and Brian W. Junker

4. Exponential Family Distributions Relevant to IRT


Shelby J. Haberman

5. Loglinear Models for Observed-Score Distributions


Tim Moses

6. Distributions of Sums of Nonidentical Random Variables


Wim J. van der Linden

7. Information Theory and Its Application to Testing


Hua-Hua Chang, Chun Wang, and Zhiliang Ying

Section II Modeling Issues

8. Identification of Item Response Theory Models


Ernesto San Martín

9. Models With Nuisance and Incidental Parameters


Shelby J. Haberman

10. Missing Responses in Item Response Modeling


Robert J. Mislevy

Section III Parameter Estimation

11. Maximum-Likelihood Estimation


Cees A. W. Glas

12. Expectation Maximization Algorithm and Extensions


Murray Aitkin


13. Bayesian Estimation


Matthew S. Johnson and Sandip Sinharay

14. Variational Approximation Methods


Frank Rijmen, Minjeong Jeon, and Sophia Rabe-Hesketh

15. Markov Chain Monte Carlo for Item Response Models


Brian W. Junker, Richard J. Patz, and Nathan M. VanHoudnos

16. Statistical Optimal Design Theory


Heinz Holling and Rainer Schwabe

Section IV Model Fit and Comparison

17. Frequentist Model-Fit Tests


Cees A. W. Glas

18. Information Criteria


Allan S. Cohen and Sun-Joo Cho

19. Bayesian Model Fit and Model Comparison


Sandip Sinharay

20. Model Fit with Residual Analyses


Craig S. Wells and Ronald K. Hambleton

Index
Contents for Applications

Section I Item Calibration and Analysis

1. Item-Calibration Designs
Martijn P. F. Berger

2. Parameter Linking
Wim J. van der Linden and Michelle D. Barrett

3. Dimensionality Analysis
Robert D. Gibbons and Li Cai

4. Differential Item Functioning


Dani Gamerman, Flávio B. Gonçalves, and Tufi M. Soares

5. Calibrating Technology-Enhanced Items


Richard M. Luecht

Section II Person Fit and Scoring

6. Person Fit
Cees A. W. Glas and Naveed Khalid

7. Score Reporting and Interpretation


Ronald K. Hambleton and April L. Zenisky

8. IRT Observed-Score Equating


Wim J. van der Linden

Section III Test Design

9. Optimal Test Design


Wim J. van der Linden

10. Adaptive Testing


Wim J. van der Linden

11. Standard Setting


Daniel Lewis and Jennifer Lord-Bessen

12. Test Speededness and Time Limits


Wim J. van der Linden


13. Item and Test Security


Wim J. van der Linden

Section IV Areas of Application

14. Large-Scale Group-Score Assessments


John Mazzeo

15. Psychological Testing


Paul De Boeck

16. Cognitive Diagnostic Assessment


Chun Wang and Hua-Hua Chang

17. Health Measurement


Richard C. Gershon, Ron D. Hays, and Michael Kallen

18. Marketing Research


Martijn G. de Jong and Ulf Böckenholt

19. Measuring Change Using Rasch Models


Gerhard H. Fischer

Section V Computer Programs

20. IRT Packages in R


Thomas Rusch, Patrick Mair, and Reinhold Hatzinger

21. Bayesian Inference Using Gibbs Sampling (BUGS) for IRT Models
Matthew S. Johnson

22. BILOG-MG
Michele F. Zimowski

23. PARSCALE
Eiji Muraki

24. IRTPRO
Li Cai

25. Xcalibre 4
Nathan A. Thompson and Jieun Lee

26. EQSIRT
Peter M. Bentler, Eric Wu, and Patrick Mair

27. ACER ConQuest


Raymond J. Adams, Margaret L. Wu, and Mark R. Wilson

28. Mplus
Bengt Muthén and Linda Muthén

29. GLLAMM
Sophia Rabe-Hesketh and Anders Skrondal

30. Latent GOLD


Jeroen K. Vermunt

31. WinGen
Kyung (Chris) T. Han

32. Firestar
Seung W. Choi

33. jMetrik
J. Patrick Meyer

Index
Preface

Item response theory (IRT) has its origins in pioneering work by Louis Thurstone in the
1920s, a handful of authors such as Lawley, Mosier, and Richardson in the 1940s, and more
decisive work by Alan Birnbaum, Frederic Lord, and George Rasch in the 1950s and 1960s.
The major breakthrough it presents is the solution to one of the fundamental flaws inherent
in classical test theory—its systematic confounding of what we measure with the test
items used to measure it.
Test administrations are observational studies in which test takers receive a set of items
and we observe their responses. The responses are the joint effects of both the properties
of the items and the abilities of the test takers. As in any other observational study, it
would be a methodological error to attribute the effects to only one of these underlying
causal factors. Nevertheless, it seems as if we are forced to do so. If new items are field
tested, the interest is exclusively in their properties, and any confounding with the
abilities of the largely arbitrary selection of test takers used in the study would bias our
inferences about them. Likewise, if examinees are tested, the interest is in their abilities
only, and we do not want their scores to be biased by the incidental properties of the
items. Classical test theory does create such biases. For instance, it treats the p-values of
the items as their difficulty parameters, but these values depend equally on the abilities
of the sample of test takers used in the field test. In spite of the terminology, the same
holds for its item-discrimination parameters and its definition of test reliability. On the
other hand, the number-correct scores for which classical test theory is typically used are
as indicative of the difficulty of the test as of the abilities of the test takers. In fact, the
tradition of indexing such parameters and scores by the items or test takers only
systematically hides this confounding.
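The dependence of classical p-values on the ability distribution of the calibration sample is easy to demonstrate by simulation. The sketch below is not from the handbook; the logistic response function, the single item with difficulty 0, and the two stylized samples with abilities of -1 and +1 are made-up illustrative assumptions. It shows the same item receiving very different classical "difficulty" estimates in two samples:

```python
import math
import random

def irf(theta, b):
    """Logistic item response function: probability of a correct
    response for a test taker with ability theta on an item with
    difficulty b (illustrative choice of model, not the handbook's)."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def classical_p_value(thetas, b, rng):
    """Classical item 'difficulty': the proportion of correct
    responses observed in a given sample of test takers."""
    correct = sum(rng.random() < irf(t, b) for t in thetas)
    return correct / len(thetas)

rng = random.Random(2016)
b = 0.0  # one fixed item

# Two hypothetical calibration samples that differ only in ability.
low_sample = [-1.0] * 20000
high_sample = [1.0] * 20000

p_low = classical_p_value(low_sample, b, rng)
p_high = classical_p_value(high_sample, b, rng)

# The same item looks "hard" in one sample and "easy" in the other,
# even though its difficulty parameter b never changed.
print(f"p-value in low-ability sample:  {p_low:.3f}")
print(f"p-value in high-ability sample: {p_high:.3f}")
```

Nothing about the item differs between the two runs; only the sample does, which is exactly the confounding described above.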
IRT solves the problem by recognizing each response as the outcome of a distinct
probability experiment that has to be modeled with separate parameters for the item and
test taker effects. Consequently, its item parameters allow us to correct for item effects
when we estimate the abilities. Likewise, the presence of the ability parameters allows us
to correct for their effects when estimating the item parameters. One of the best
introductions to this change of paradigm is Rasch (1960, Chapter 1), which is mandatory
reading for anyone with an interest in the subject. The chapter places the new paradigm
in the wider context of the research tradition found in the behavioral and social sciences,
with its persistent interest in vaguely defined "populations" of subjects who, except for
some random noise, are treated as exchangeable, as well as its use of such statistical
techniques as correlation coefficients, analysis of variance, and hypothesis testing that
assume "random sampling" from them.
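The simplest instance of such a separate parameterization is the Rasch model for dichotomous responses (Volume One, Chapter 3), which writes the probability of test taker p answering item i correctly as

```latex
\Pr\{U_{pi} = 1 \mid \theta_p, b_i\}
  = \frac{\exp(\theta_p - b_i)}{1 + \exp(\theta_p - b_i)},
```

with a person parameter \(\theta_p\) for ability and an item parameter \(b_i\) for difficulty, so that each set of parameters can be estimated while correcting for the effects of the other. (The notation here is generic and chosen for illustration; the handbook's chapters use their own common notation.)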
The developments since the original conceptualization of IRT have remained rapid.
When Ron Hambleton and I edited an earlier handbook of item response theory (van
der Linden and Hambleton, 1997), we had the impression that its 28 chapters pretty
much summarized what could be said about the subject. But now, nearly two decades
later, three volumes with roughly the same number of chapters each appear to be
necessary. And I still feel I have to apologize to all the researchers and practitioners
whose original contributions to the vast literature on IRT are not included in this new
handbook. Not only have the original models for dichotomous responses been
supplemented with numerous models for different response formats and response
processes; it is now also clear, for instance, that models for response times on test items
require the same type of parameterization to account for both the item and test taker
effects. Another major development has been the recognition of the need for deeper
parameterization due to the multilevel or hierarchical structure of response data. This
development has opened up the possibility of accounting for explanatory covariates,
group structures with an impact on the item or ability parameters, mixtures of response
processes, higher-level relationships between responses and response times, and special
structures of the item domain, for instance, due to the use of rule-based item generation.
Meanwhile, it has also become clear how to embed IRT in the wider development of
generalized latent variable modeling. And as a result of all these extensions and new
insights, we are now keener in our choice of treating model parameters as fixed or
random. Volume 1 of this handbook covers most of these developments. Each of its
chapters basically reviews one model. However, all chapters have the common format of
an introductory section with some history of the model and a motivation of its relevance;
they then continue with sections that present the model more formally, treat the
estimation of its parameters, show how to evaluate its fit to empirical data, and illustrate
the use of the model through an empirical example. The last section discusses further
applications and remaining research issues.
As with any other type of probabilistic modeling, IRT depends heavily on the use of
statistical tools for the treatment of its models and their applications. Nevertheless,
systematic introductions and reviews with an emphasis on their relevance to IRT are
hardly found in the statistical literature. Volume 2 is intended to fill this void. Its
chapters cover such topics as commonly used probability distributions in IRT, the issue
of models with both intentional and nuisance parameters, the use of information criteria,
methods for dealing with missing data, model identification issues, and several topics in
parameter estimation and model fit and comparison. It is especially in these last two
areas that recent developments have been overwhelming. For instance, when the
previous handbook of IRT was produced, Bayesian approaches had already gained some
ground but were certainly not common. But thanks to the computational success of
Markov chain Monte Carlo methods, these approaches have now become standard,
especially for the more complex models in the second half of Volume 1.
The chapters of Volume 3 review several applications of IRT to the daily practice of
testing. Although each of the chosen topics in the areas of item calibration and analysis,
person fit and scoring, and test design has ample resources in the larger literature on test
theory, the current chapters exclusively highlight the contributions that IRT has brought
to them. This volume also offers chapters with reviews of how IRT has advanced such
areas as large-scale educational assessments, psychological testing, cognitive diagnosis,
health measurement, marketing research, and the more general area of the measurement
of change. The volume concludes with an extensive review of computer software
programs available for running any of the models and applications in Volumes 1 and 3.
I expect this Handbook of Item Response Theory to serve as a daily resource of
information to researchers and practitioners in the field of IRT, as well as a textbook to
novices. To serve them better, all chapters are self-contained. But their common core of
notation and extensive cross-referencing allows readers of one chapter to consult others
for background information without too much interruption.
I am grateful to all my authors for their belief in this project and the time they have
spent on their chapters. It has been a true privilege to work with each of them. The
same holds for Ron Hambleton, who was willing to serve as my sparring partner during
the conception of the plan for this handbook. John Kimmel, executive editor for
statistics at Chapman & Hall/CRC, has been a permanent source of helpful information
during the production of this book. I thank him for his support as well.

Wim J. van der Linden


Monterey, CA

References
Rasch, G. 1960. Probabilistic Models for Some Intelligence and Attainment Tests. Copenhagen: Danish
Institute for Educational Research.
van der Linden, W. J., and Hambleton, R. K. (Eds.) 1997. Handbook of Modern Item Response Theory.
New York: Springer.
Contributors

Raymond J. Adams is an honorary professorial fellow of the University of Melbourne, and
head of ACER's global educational monitoring center. Dr. Adams specializes in
psychometrics, educational statistics, large-scale testing, and international comparative
studies. He earned his PhD from the University of Chicago in 1989. He led the OECD PISA
programme from its inception until 2013. Dr. Adams has published widely on the technical
aspects of educational measurement and his item response modeling software packages
are amongst the most widely used in educational and psychological measurement. He has
served as chair of the technical advisory committee for the International Association for
the Evaluation of Educational Achievement and as head of measurement at the Australian
Council for Educational Research.

David Andrich is the Chapple professor of education at the University of Western Australia.
He earned his PhD from the University of Chicago in 1973. In 1990, he was elected as a
fellow of the Academy of Social Sciences of Australia for his contributions to measurement
in the social sciences. He has contributed to the development of both single-peaked
(unfolding) and monotonic (cumulative) response models, and is especially known for his
contributions to Rasch measurement theory.

Tihomir Asparouhov earned his PhD in mathematics at the California Institute of
Technology. He is a senior statistician and programmer at Muthén & Muthén, which
develops and distributes the Mplus program. He has published on complex survey
analysis, multilevel modeling, survival analysis, structural equation modeling, and
Bayesian analysis.

Li Cai is a professor of education and psychology at UCLA, where he also serves as
codirector of the National Center for Research on Evaluation, Standards, and Student
Testing (CRESST). His methodological research agenda involves the development,
integration, and evaluation of innovative latent variable models that have wide-ranging
applications in educational, psychological, and health-related domains of study. A
component of this agenda is statistical computing, particularly as related to item response
theory (IRT) and multilevel modeling. He has also collaborated with substantive
researchers at UCLA and elsewhere on projects examining measurement issues in
educational games and simulations, mental health statistics, substance abuse treatment,
and patient-reported outcomes.

Jodi M. Casabianca is an assistant professor of quantitative methods in the Department of
Educational Psychology in the College of Education at the University of Texas, Austin. She
earned a BA in statistics and psychology and an MS in applied and mathematical statistics
from Rutgers University, and an MA and PhD in psychometrics from Fordham University.
Her research interests are in psychometrics and educational measurement, specifically
measurement models for rated data, measurement and computational methodology for
large-scale testing and assessment, and evaluation of teaching quality and teacher
professional development programs.


Paul De Boeck earned his PhD from the KU Leuven (Belgium) in 1977, with a dissertation
on personality inventory responding. He has held positions at the KU Leuven as a
professor of psychological assessment and at the University of Amsterdam (Netherlands)
as a professor of psychological methods from 2009 to 2012. Since 2012, he has been a
professor of quantitative psychology at the Ohio State University. He is a past section
editor of ARCS Psychometrika and a past president of the Psychometric Society
(2007–2008). His main research interests are explanatory item response models and their
applications in the domains of psychology and educational measurement.

Susan E. Embretson earned her PhD in psychology at the University of Minnesota in 1973.
She was on the faculty of the University of Kansas for many years and has been a professor
of psychology at the Georgia Institute of Technology since 2004. She has served as
president of the Psychometric Society (1999), the Society of Multivariate Psychology (1998),
and American Psychological Association, Division 5 (1991). Embretson has received career
achievement awards from NCME, AERA, and APA. Her current research interests include
cognitive psychometric models and methods, educational test design, the measurement of
change, and automatic item generation.

Jean-Paul Fox is a professor in the Department of Research Methodology, Measurement


and Data Analysis, University of Twente, the Netherlands. His research is in the area of
Bayesian item response modeling and he is the author of the monograph Bayesian Item
Response Modeling (Springer, 2010). He is known for his work on multilevel IRT modeling,
especially the integration of multilevel survey design and psychometric modeling. In 2001,
he received the Psychometric Society Dissertation Award for his work on multilevel IRT
modeling. He has also received two personal grants from the Netherlands Organisation
for Scientific Research to develop psychometric models for large-scale survey research.

Hanneke Geerlings earned her PhD in psychometrics from the University of Twente, the
Netherlands, where she is currently an assistant professor. Her PhD thesis
was on multilevel response models for rule-based item generation.

Cees A. W. Glas is chair of the Department of Research Methodology, Measurement
and Data Analysis at the Faculty of Behavioral, Management and Social Sciences of the
University of Twente in the Netherlands. He has participated in numerous research projects,
including projects for the Law School Admission Council and the OECD international
educational survey PISA. His published articles, book chapters, and supervised theses cover
such topics as testing of fit to IRT models, Bayesian estimation of multidimensional and
multilevel IRT models using MCMC, modeling with nonignorable missing data, concur-
rent modeling of item response and textual input, and the application of computerized
adaptive testing in the context of health assessment and organizational psychology.

Margo G. H. Jansen earned her PhD in psychology from the University of Groningen in
1977 with a dissertation on applying Bayesian statistical methods in educational
measurement. She held a position at the Central Institute for Test Development (Cito) until
1979 and has since been an associate professor at the University of Groningen. Her current
research interests are in educational measurement and linguistics.

Rianne Janssen earned her PhD on componential IRT models from the KU Leuven
(Belgium) in 1994. She has been an associate professor at the same university since 1996.
Her research interests are in nearly every aspect of educational measurement. She is
currently responsible for the national assessments of educational progress in Flanders
(Belgium).

Brian W. Junker is a professor of statistics and associate dean for academic affairs in
the Dietrich College of Humanities and Social Sciences at Carnegie Mellon University.
Dr. Junker has broad interests in psychometrics, education research, and applied statistics,
ranging from nonparametric and Bayesian item response theory to Markov chain Monte
Carlo and other computing and estimation methods, rating protocols for teacher
quality, educational data mining, social network analysis, and mixed membership
modeling. He earned a BA in mathematics from the University of Minnesota, and an MS in
mathematics and PhD in statistics from the University of Illinois.

George Karabatsos earned his PhD from the University of Chicago in 1998, with spe-
cialties in psychometric methods and applied statistics. He has been a professor at the
University of Illinois at Chicago since 2002. He received a New Investigator Award from the
Society for Mathematical Psychology in 2002, and is an associate editor of Psychometrika.
His current research focuses on the development and use of Bayesian nonparametrics,
especially the advancement of regression and psychometrics.

Geoff N. Masters is chief executive officer and a member of the Board of the Australian
Council for Educational Research (ACER)—roles he has held since 1998. He is an adjunct
professor in the Queensland Brain Institute. He has a PhD in educational measurement
from the University of Chicago and has published widely in the fields of educational
assessment and research. Professor Masters has served on a range of bodies, including
terms as president of the Australian College of Educators; founding president of the Asia–
Pacific Educational Research Association; member of the Business Council of Australia’s
Education, Skills, and Innovation Taskforce; member of the Australian National Commission
for UNESCO; and member of the International Baccalaureate Research Committee.

Gideon J. Mellenbergh earned his PhD in psychology from the University of Amsterdam
in the Netherlands. He is professor emeritus of psychological methods at the University of
Amsterdam, a former director of the Interuniversity Graduate School of Psychometrics and
Sociometrics (IOPS), and emeritus member of the Royal Netherlands Academy of Arts and
Sciences (KNAW). His research interests are in the areas of test construction, psychometric
decision making, measurement invariance, and the analysis of psychometric concepts.

Dylan Molenaar earned his PhD in psychology from the University of Amsterdam in
the Netherlands in 2012 (cum laude). His dissertation research was funded by a personal
grant from the Netherlands Organization for Scientific Research and he was awarded the
Psychometric Dissertation Award in 2013. As a postdoc he studied item response theory
models for responses and response times. In addition, he has been a visiting scholar at Ohio
State University. Currently, he is an assistant professor at the University of Amsterdam.
His research interests include item response theory, factor analysis, response time model-
ing, intelligence, and behavior genetics.

Ivo W. Molenaar (PhD, University of Amsterdam) is professor emeritus of statistics
and measurement, University of Groningen, The Netherlands. He is a past president
of the Psychometric Society, a former editor of Psychometrika, and past president of
The Netherlands Society for Statistics and Operations Research (VvS). His research is
in measurement models for abilities and attitudes (Rasch models and Mokken models),
Bayesian methods (prior elicitation, robustness of model choice), and behavior studies of
the users of statistical software. Together with Gerard H. Fischer, he coedited a monograph
on Rasch models in 1995.

Eiji Muraki is a professor emeritus, Tohoku University, School of Educational Informatics,
and a research advisor for the Japan Institute for Educational Measurement. He earned a
PhD in measurement, evaluation, and statistical analysis from the University of Chicago
and has developed several psychometric software programs, including PARSCALE,
RESGEN, BILOG-MG (formerly BIMAIN), and TESTFACT. His research interests in psy-
chometrics have been in polytomous and multidimensional response models, item param-
eter drift, and the method of marginal maximum-likelihood estimation. Recently, he has
added computer-based testing, web-based education, and instructional design to his
research agenda.

Mari Muraki is an education data consultant and was a Code for America Fellow in 2015.
She currently builds technology to help schools use their student data efficiently. Previously,
she led the Stanford University Center for Education Policy Analysis data warehouse and
district data partnerships across the United States. She earned a BA in mathematics and
statistics from the University of Chicago and an MS in statistics, measurement, assessment
and research technology from the University of Pennsylvania.

Bengt Muthén obtained his PhD in statistics at the University of Uppsala, Sweden, and is
professor emeritus at UCLA. He was president of the Psychometric Society from 1988
to 1989 and the recipient of the Psychometric Society’s Lifetime Achievement Award in
2011. He has published extensively on latent variable modeling and many of his proce-
dures are implemented in Mplus.

Richard J. Patz is chief measurement officer at ACT, with responsibilities for research
and development. His research interests include statistical methods, assessment design,
and management of judgmental processes in education and assessment. He served as
president of the National Council on Measurement in Education from 2015 to 2016. He
earned a BA in mathematics from Grinnell College, and an MS and a PhD in statistics from
Carnegie Mellon University.

Sophia Rabe-Hesketh is a professor of education and biostatistics at the University of
California, Berkeley. Her previous positions include professor of social statistics at the
Institute of Education, University of London. Her research interests include multilevel,
longitudinal, and latent variable modeling and missing data. Rabe-Hesketh has published
over 100 peer-reviewed articles in Psychometrika, Biometrika, and the Journal of Econometrics,
among others. Her six coauthored books include Generalized Latent Variable Modeling and Multilevel
and Longitudinal Modeling Using Stata. She has been elected to the National Academy of
Education, is a fellow of the American Statistical Association, and was president of the
Psychometric Society.

James O. Ramsay is professor emeritus of psychology and an associate member in the
Department of Mathematics and Statistics at McGill University. He earned a PhD from
Princeton University in quantitative psychology in 1966. He served as chair of the
department from 1986 to 1989. Dr. Ramsay has contributed research on various topics in
psychometrics, including multidimensional scaling and test theory. His current research
focus is on functional data analysis, and involves developing methods for analyzing sam-
ples of curves and images. He has been the president of the Psychometric Society and
the Statistical Society of Canada. He received the Gold Medal of the Statistical Society
of Canada in 1998 and the Award for Technical or Scientific Contributions to the Field of
Educational Measurement of the National Council on Measurement in Education in 2003,
and was made an honorary member of the Statistical Society of Canada in 2012.

Mark D. Reckase is a university distinguished professor emeritus at Michigan State
University in East Lansing, Michigan. He earned his PhD in psychology from Syracuse
University in Syracuse, New York. His professional interests are in the areas of advanced
item response theory models, the design of educational tests, setting standards of per-
formance on educational tests, computerized adaptive testing, and statistical methods
for evaluating the quality of teaching. He is the author of the book Multidimensional Item
Response Theory. He has been the president of the National Council on Measurement in
Education and the vice president of division D of the American Educational Research
Association (AERA). He has received the E. F. Lindquist Award from AERA for contribu-
tions to educational measurement.

James S. Roberts earned his PhD in experimental psychology from the University of
South Carolina in 1995 with a specialty in quantitative psychology. He subsequently com-
pleted a postdoctoral fellowship in the division of statistics and psychometric research at
Educational Testing Service. He is currently an associate professor of psychology at the
Georgia Institute of Technology and has previously held faculty positions at the Medical
University of South Carolina and the University of Maryland. His research interests focus
on the development and application of new model-based measurement methodology in
education and the social sciences.

H. Jane Rogers earned her bachelor's and master's degrees at the University of New England in
Australia and her PhD in psychology at the University of Massachusetts Amherst. Her
research interests are applications of item response theory, assessment of differential item
functioning, and educational statistics. She is the coauthor of a book on item response the-
ory and has published papers on a wide range of psychometric issues. She has consulted
on psychometric issues for numerous organizations and agencies as well as on projects
funded by the Educational Testing Service, the Law School Admission Council, the Florida Bar,
and the National Center for Education Statistics.

Jürgen Rost earned his PhD from the University of Kiel in Germany in 1980. He became
a professor at its Institute of Science Education and led its methodology department until
2005. He has authored more than 50 papers published in peer-reviewed journals on Rasch
models and latent class models. He developed his mixture-distribution Rasch model
in 1990. He edited two volumes on latent class and latent trait models. In addition, he
authored a textbook on test theory and is the founding editor of Methods of Psychological
Research, the first online open-access journal on research methodology.

Fumiko Samejima earned her PhD in psychology from Keio University, Japan, in 1965.
She has held academic positions at the University of New Brunswick, Canada, and the
University of Tennessee. Although her research is wide-ranging, she is best known for her
pioneering work in polytomous item response modeling. Dr. Samejima is a past president
of the Psychometric Society.

Klaas Sijtsma earned his PhD from the University of Groningen in the Netherlands in
1988, with a dissertation on the topic of nonparametric item response theory. Since 1981,
he has held positions at the University of Groningen, Vrije Universiteit in Amsterdam,
and Utrecht University, and has been a professor of methods of psychological research
at Tilburg University since 1997. He is a past president of the Psychometric Society. His
research interests encompass all topics with respect to the measurement of individual dif-
ferences in psychology. Together with Ivo W. Molenaar, he published a monograph on
nonparametric item response theory, in particular Mokken models, in 2002.

Anders Skrondal earned his PhD in statistics from the University of Oslo for which he
was awarded the Psychometric Society Dissertation Prize. He is currently a senior scientist
at the Norwegian Institute of Public Health; adjunct professor at the Centre for Educational
Measurement, University of Oslo; and adjunct professor at the Graduate School of Education,
University of California, Berkeley. Previous positions include professor of statistics and
director of the Methodology Institute, London School of Economics, and adjunct professor
of biostatistics at the University of Oslo. His coauthored books include Generalized Latent
Variable Modeling and The Cambridge Dictionary of Statistics. His research interests span top-
ics in psychometrics, biostatistics, social statistics, and econometrics. Skrondal is currently
president-elect of the Psychometric Society.

Hariharan Swaminathan earned his BS (honors) in mathematics from Dalhousie University,
Halifax, Canada, and his MS in mathematics and his MEd and PhD from the University of
Toronto, specializing in psychometrics, statistics, and educational measurement/evaluation. He is
currently a professor of education at the University of Connecticut. His research interests
are in the areas of Bayesian statistics, psychometrics, item response theory, and multi-
variate analysis. He has more than 300 papers, chapters, technical reports, and conference
presentations to his credit. Professor Swaminathan is the coauthor of two books (with
Hambleton and Rogers) on item response theory and a fellow of the American Educational
Research Association. He has received outstanding teacher and mentoring awards from
both the University of Massachusetts and the American Psychological Association as well
as the Governor’s award for outstanding contribution to the state of Connecticut by a natu-
ralized citizen for his work with its department of education.

David Thissen is a professor of psychology at the University of North Carolina at
Chapel Hill and the L.L. Thurstone Psychometric Laboratory. He earned his PhD from
the University of Chicago in 1976. He was previously at the University of Kansas. His
research interests include statistical models and estimation in item response theory, test
scoring, test linking, and graphical data analysis. He published Test Scoring (with Howard
Wainer) in 2001. He is a past president of the Psychometric Society, and has received a
career achievement award from the NCME.

Francis Tuerlinckx earned his PhD in psychology from the University of Leuven in
Belgium in 2000. He held a research position at the Department of Statistics of Columbia
University, New York. Since 2004, he has been a professor of quantitative psychology at the
KU Leuven, Belgium. His research interests are item response theory, response time modeling
in experimental psychology and measurement, Bayesian statistics, and time series analysis.

Gerhard Tutz is a professor of statistics at the Ludwig Maximilian University (LMU),
Munich, and a former director of the Department of Statistics at LMU. His research
interests include categorical data, latent trait models, survival analysis, multivariate
statistics, and regularization methods. He has authored several books, including Regression
for Categorical Data and Multivariate Statistical Modelling Based on Generalized Linear Models
(with Ludwig Fahrmeir).

Wim J. van der Linden is a distinguished scientist and director of research innovation,
Pacific Metrics Corporation, Monterey, California, and professor emeritus of measurement
and data analysis, University of Twente, the Netherlands. He earned his PhD in psychomet-
rics from the University of Amsterdam in 1981. His research interests include test theory,
computerized adaptive testing, optimal test assembly, parameter linking, test equating,
and response-time modeling, as well as decision theory and its application to problems
of educational decision making. He is a past president of the Psychometric Society and
the National Council on Measurement in Education and has received career achievement
awards from NCME, ATP, and the Psychometric Society, as well as the E. F. Lindquist
award from AERA.

Han L. J. van der Maas earned his PhD in developmental psychology in 1993 (cum
laude), with dissertation research on methods for the analysis of phase transitions in
cognitive development. After a five-year KNAW fellowship, he joined the faculty of the
Developmental Group of the University of Amsterdam, first as an associate professor and
in 2003 as a professor. In 2005, he became professor and chair of the Psychological Methods
Group at the University of Amsterdam. Since 2008, he has also been director of the Graduate
School of Psychology at the University of Amsterdam. His current research includes
network models of general intelligence, new psychometric methods, and adaptive learning
systems for education.

Matthias von Davier currently holds the position of senior research director, global assess-
ment, at Educational Testing Service, Princeton, New Jersey, USA. He earned his PhD
from the University of Kiel in Germany in 1996, specializing in psychometric methods. He
serves as the editor of the British Journal of Mathematical and Statistical Psychology and is a
coeditor of Large-scale Assessments in Education. He received the ETS Research
Scientist Award in 2006 and the Bradley Hanson Award for contributions to Educational
Measurement in 2012. His research interests are item response theory, including extended
Rasch models and mixture distribution models for item response data, latent structure
models, diagnostic classification models, computational statistics, and developing advanced
psychometric methods for international large-scale surveys of educational outcomes.

Mark R. Wilson is a professor of education at the University of California, Berkeley. He
earned his PhD in psychometrics from the University of Chicago in 1984. His research
interests include item response modeling, especially extensions of Rasch models; test and
instrument design based on cognitive models; philosophy of measurement; and the
development of psychometric models for use in formative assessment contexts. In recent years,
he was elected president of the Psychometric Society, became a member of the U.S.
National Academy of Education, and published three books: Constructing Measures:
An Item Response Modeling Approach, Explanatory Item Response Models: A Generalized Linear
and Nonlinear Approach (with Paul De Boeck), and Towards Coherence between Classroom
Assessment and Accountability.