
Rasch model

The Rasch model, named after Georg Rasch, is a psychometric model for analyzing categorical data, such as answers to questions on a reading assessment or questionnaire responses, as a function of the trade-off between (a) the respondent's abilities, attitudes or personality traits and (b) the item difficulty.[1] For example, they may be used to estimate a student's reading ability, or the extremity of a person's attitude to capital punishment from responses on a questionnaire. In addition to psychometrics and educational research, the Rasch model and its extensions are used in other areas, including the health profession[2] and market research[3] because of their general applicability.[4]

The mathematical theory underlying Rasch models is a special case of item response theory and, more generally, a special case of a generalized linear model. However, there are important differences in the interpretation of the model parameters and its philosophical implications[5] that separate proponents of the Rasch model from the item response modeling tradition. A central aspect of this divide relates to the role of specific objectivity,[6] a defining property of the Rasch model according to Georg Rasch, as a requirement for successful measurement.

1 Overview

1.1 The Rasch model for measurement

In the Rasch model, the probability of a specified response (e.g. right/wrong answer) is modeled as a function of person and item parameters. Specifically, in the original Rasch model, the probability of a correct response is modeled as a logistic function of the difference between the person and item parameter. The mathematical form of the model is provided later in this article. In most contexts, the parameters of the model characterize the proficiency of the respondents and the difficulty of the items as locations on a continuous latent variable. For example, in educational tests, item parameters represent the difficulty of items while person parameters represent the ability or attainment level of people who are assessed. The higher a person's ability relative to the difficulty of an item, the higher the probability of a correct response on that item. When a person's location on the latent trait is equal to the difficulty of the item, there is by definition a 0.5 probability of a correct response in the Rasch model.

A Rasch model is a model in one sense in that it represents the structure which data should exhibit in order to obtain measurements from the data; i.e. it provides a criterion for successful measurement. Beyond data, Rasch's equations model relationships we expect to obtain in the real world. For instance, education is intended to prepare children for the entire range of challenges they will face in life, and not just those that appear in textbooks or on tests. By requiring measures to remain the same (invariant) across different tests measuring the same thing, Rasch models make it possible to test the hypothesis that the particular challenges posed in a curriculum and on a test coherently represent the infinite population of all possible challenges in that domain. A Rasch model is therefore a model in the sense of an ideal or standard that provides a heuristic fiction serving as a useful organizing principle even when it is never actually observed in practice.

The perspective or paradigm underpinning the Rasch model is distinct from the perspective underpinning statistical modelling. Models are most often used with the intention of describing a set of data. Parameters are modified and accepted or rejected based on how well they fit the data. In contrast, when the Rasch model is employed, the objective is to obtain data which fit the model (Andrich, 2004; Wright, 1984, 1999). The rationale for this perspective is that the Rasch model embodies requirements which must be met in order to obtain measurement, in the sense that measurement is generally understood in the physical sciences.

A useful analogy for understanding this rationale is to consider objects measured on a weighing scale. Suppose the weight of an object A is measured as being substantially greater than the weight of an object B on one occasion, then immediately afterward the weight of object B is measured as being substantially greater than the weight of object A. A property we require of measurements is that the resulting comparison between objects should be the same, or invariant, irrespective of other factors. This key requirement is embodied within the formal structure of the Rasch model. Consequently, the Rasch model is not altered to suit data. Instead, the method of assessment should be changed so that this requirement is met, in the same way that a weighing scale should be rectified if it gives different comparisons between objects upon separate measurements of the objects.

Data analysed using the model are usually responses to conventional items on tests, such as educational tests with right/wrong answers. However, the model is a general one, and can be applied wherever discrete data are obtained with the intention of measuring a quantitative attribute or trait.
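The logistic relationship described in this overview can be sketched in a few lines of Python (an illustrative sketch, not part of the article; the function name is our own):

```python
import math

def rasch_probability(beta: float, delta: float) -> float:
    """Probability of a correct response under the dichotomous Rasch
    model, for person ability beta and item difficulty delta (both
    locations, in logits, on the same latent continuum)."""
    return math.exp(beta - delta) / (1.0 + math.exp(beta - delta))

# By definition, when ability equals item difficulty the probability is 0.5:
print(rasch_probability(1.0, 1.0))            # 0.5
# A person one logit above the item's difficulty succeeds about 73% of the time:
print(round(rasch_probability(2.0, 1.0), 3))  # 0.731
```

Note that only the difference between the person and item parameters enters the formula, which is what allows person and item locations to be expressed on a single scale.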

1.2 Scaling

Figure 1: Test characteristic curve showing the relationship between total score on a test and person location estimate

When all test-takers have an opportunity to attempt all items on a single test, each total score on the test maps to a unique estimate of ability and the greater the total, the greater the ability estimate. Total scores do not have a linear relationship with ability estimates. Rather, the relationship is non-linear as shown in Figure 1. The total score is shown on the vertical axis, while the corresponding person location estimate is shown on the horizontal axis. For the particular test on which the test characteristic curve (TCC) shown in Figure 1 is based, the relationship is approximately linear throughout the range of total scores from about 10 to 33. The shape of the TCC is generally somewhat sigmoid as in this example. However, the precise relationship between total scores and person location estimates depends on the distribution of items on the test. The TCC is steeper in ranges on the continuum in which there are a number of items, such as in the range on either side of 0 in Figures 1 and 2.

In applying the Rasch model, item locations are often scaled first, based on methods such as those described below. This part of the process of scaling is often referred to as item calibration. In educational tests, the smaller the proportion of correct responses, the higher the difficulty of an item and hence the higher the item's scale location. Once item locations are scaled, the person locations are measured on the scale. As a result, person and item locations are estimated on a single scale as shown in Figure 2.

Figure 2: Graph showing histograms of person distribution (top) and item distribution (bottom) on a scale

1.3 Interpreting scale locations

For dichotomous data such as right/wrong answers, by definition, the location of an item on a scale corresponds with the person location at which there is a 0.5 probability of a correct response to the question. In general, the probability of a person responding correctly to a question with difficulty lower than that person's location is greater than 0.5, while the probability of responding correctly to a question with difficulty greater than the person's location is less than 0.5. The Item Characteristic Curve (ICC) or Item Response Function (IRF) shows the probability of a correct response as a function of the ability of persons. A single ICC is shown and explained in more detail in relation to Figure 4 in this article (see also the item response function). The leftmost ICCs in Figure 3 are the easiest items, the rightmost items in the same figure are the most difficult items.

Figure 3: ICCs for a number of items. ICCs are coloured to highlight the change in the probability of a successful response for a person with ability location at the vertical line. The person is likely to respond correctly to the easiest items (with locations to the left and higher curves) and unlikely to respond correctly to difficult items (locations to the right and lower curves).

When responses of a person are listed according to item difficulty, from lowest to highest, the most likely pattern is a Guttman pattern or vector; i.e. {1,1,...,1,0,0,0,...,0}. However, while this pattern is the most probable given the structure of the Rasch model, the model requires only probabilistic Guttman response patterns; that is, patterns which tend toward the Guttman pattern. It is unusual for responses to conform strictly to the pattern because there are many possible patterns. It is unnecessary for responses to conform strictly to the pattern in order for data to fit the Rasch model.

Each ability estimate has an associated standard error of measurement, which quantifies the degree of uncertainty associated with the ability estimate. Item estimates also have standard errors. Generally, the standard errors of item estimates are considerably smaller than the standard errors of person estimates because there are usually more response data for an item than for a person. That is, the number of people attempting a given item is usually greater than the number of items attempted by a given person. Standard errors of person estimates are smaller where the slope of the ICC is steeper, which is generally through the middle range of scores on a test. Thus, there is greater precision in this range since the steeper the slope, the greater the distinction between any two points on the line.

Statistical and graphical tests are used to evaluate the correspondence of data with the model. Certain tests are global, while others focus on specific items or people. Certain tests of fit provide information about which items can be used to increase the reliability of a test by omitting or correcting problems with poor items. In Rasch Measurement the person separation index is used instead of reliability indices. However, the person separation index is analogous to a reliability index. The separation index is a summary of the genuine separation as a ratio to separation including measurement error. As mentioned earlier, the level of measurement error is not uniform across the range of a test, but is generally larger for more extreme scores (low and high).

2 Features of the Rasch model

The class of models is named after Georg Rasch, a Danish mathematician and statistician who advanced the epistemological case for the models based on their congruence with a core requirement of measurement in physics; namely the requirement of invariant comparison. This is the defining feature of the class of models, as is elaborated upon in the following section. The Rasch model for dichotomous data has a close conceptual relationship to the law of comparative judgment (LCJ), a model formulated and used extensively by L. L. Thurstone,[7][8] and therefore also to the Thurstone scale.[9]

Prior to introducing the measurement model he is best known for, Rasch had applied the Poisson distribution to reading data as a measurement model, hypothesizing that in the relevant empirical context, the number of errors made by a given individual was governed by the ratio of the text difficulty to the person's reading ability. Rasch referred to this model as the multiplicative Poisson model. Rasch's model for dichotomous data, i.e. where responses are classifiable into two categories, is his most widely known and used model, and is the main focus here. This model has the form of a simple logistic function.

The brief outline above highlights certain distinctive and interrelated features of Rasch's perspective on social measurement, which are as follows:

1. He was concerned principally with the measurement of individuals, rather than with distributions among populations.

2. He was concerned with establishing a basis for meeting a priori requirements for measurement deduced from physics and, consequently, did not invoke any assumptions about the distribution of levels of a trait in a population.

3. Rasch's approach explicitly recognizes that it is a scientific hypothesis that a given trait is both quantitative and measurable, as operationalized in a particular experimental context.

Thus, congruent with the perspective articulated by Thomas Kuhn in his 1961 paper The function of measurement in modern physical science, measurement was regarded both as being founded in theory, and as being instrumental to detecting quantitative anomalies incongruent with hypotheses related to a broader theoretical framework.[10] This perspective is in contrast to that generally prevailing in the social sciences, in which data such as test scores are directly treated as measurements without requiring a theoretical foundation for measurement. Although this contrast exists, Rasch's perspective is actually complementary to the use of statistical analysis or modelling that requires interval-level measurements, because the purpose of applying a Rasch model is to obtain such measurements. Applications of Rasch models are described in a wide variety of sources, including Alagumalai, Curtis & Hungi (2005), Bezruczko (2005), Bond & Fox (2007), Fisher & Wright (1994), Masters & Keeves (1999), and the Journal of Applied Measurement.

2.1 Invariant comparison and sufficiency

The Rasch model for dichotomous data is often regarded as an item response theory (IRT) model with one item parameter. However, rather than being a particular IRT model, proponents of the model[11] regard it as a model that possesses a property which distinguishes it from other IRT models. Specifically, the defining property of Rasch models is their formal or mathematical embodiment of the principle of invariant comparison. Rasch summarised the principle of invariant comparison as follows:

"The comparison between two stimuli should be independent of which particular individuals were instrumental for the comparison; and it should also be independent of which other stimuli within the considered class were or might also have been compared. Symmetrically, a comparison between two individuals should be independent of which particular stimuli within the class considered were instrumental for the comparison; and it should also be independent of which other individuals were also compared, on the same or some other occasion."[12]

Rasch models embody this principle because their formal structure permits algebraic separation of the person

and item parameters, in the sense that the person parameter can be eliminated during the process of statistical estimation of item parameters. This result is achieved through the use of conditional maximum likelihood estimation, in which the response space is partitioned according to person total scores. The consequence is that the raw score for an item or person is the sufficient statistic for the item or person parameter. That is to say, the person total score contains all information available within the specified context about the individual, and the item total score contains all information with respect to the item, with regard to the relevant latent trait. The Rasch model requires a specific structure in the response data, namely a probabilistic Guttman structure.

In somewhat more familiar terms, Rasch models provide a basis and justification for obtaining person locations on a continuum from total scores on assessments. Although it is not uncommon to treat total scores directly as measurements, they are actually counts of discrete observations rather than measurements. Each observation represents the observable outcome of a comparison between a person and item. Such outcomes are directly analogous to the observation of the rotation of a balance scale in one direction or another. This observation would indicate that one or other object has a greater mass, but counts of such observations cannot be treated directly as measurements.

Rasch pointed out that the principle of invariant comparison is characteristic of measurement in physics using, by way of example, a two-way experimental frame of reference in which each instrument exerts a mechanical force upon solid bodies to produce acceleration. Rasch[1]:112-3 stated of this context: "Generally: If for any two objects we find a certain ratio of their accelerations produced by one instrument, then the same ratio will be found for any other of the instruments." It is readily shown that Newton's second law entails that such ratios are inversely proportional to the ratios of the masses of the bodies.

3 The mathematical form of the Rasch model for dichotomous data

Let X_ni = x ∈ {0, 1} be a dichotomous random variable where, for example, x = 1 denotes a correct response and x = 0 an incorrect response to a given assessment item. In the Rasch model for dichotomous data, the probability of the outcome X_ni = 1 is given by:

Pr{X_ni = 1} = e^(β_n − δ_i) / (1 + e^(β_n − δ_i)),

where β_n is the ability of person n and δ_i is the difficulty of item i. Thus, in the case of a dichotomous attainment item, Pr{X_ni = 1} is the probability of success upon interaction between the relevant person and assessment item. It is readily shown that the log odds, or logit, of a correct response by a person to an item, based on the model, is equal to β_n − δ_i. Given two examinees with different ability parameters β_1 and β_2 and an arbitrary item with difficulty δ_i, compute the difference in logits for these two examinees by (β_1 − δ_i) − (β_2 − δ_i). This difference becomes β_1 − β_2. Conversely, it can be shown that the log odds of a correct response by the same person to one item, conditional on a correct response to one of two items, is equal to the difference between the item locations. For example,

log-odds{X_n1 = 1 | r_n = 1} = δ_2 − δ_1,

where r_n is the total score of person n over the two items, which implies a correct response to one or other of the items.[1][13][14] Hence, the conditional log odds does not involve the person parameter β_n, which can therefore be eliminated by conditioning on the total score r_n = 1. That is, by partitioning the responses according to raw scores and calculating the log odds of a correct response, an estimate of δ_2 − δ_1 is obtained without involvement of β_n. More generally, a number of item parameters can be estimated iteratively through application of a process such as Conditional Maximum Likelihood estimation (see Rasch model estimation). While more involved, the same fundamental principle applies in such estimations.

Figure 4: ICC for the Rasch model showing the comparison between observed and expected proportions correct for five Class Intervals of persons

The ICC of the Rasch model for dichotomous data is shown in Figure 4. The grey line maps a person with a location of approximately 0.2 on the latent continuum to the probability of the discrete outcome X_ni = 1 for items with different locations on the latent continuum. The location of an item is, by definition, that location at which the probability that X_ni = 1 is equal to 0.5. In Figure 4, the black circles represent the actual or observed proportions of persons within Class Intervals for which the outcome was observed. For example, in the case of an assessment item used in the context of educational psychology, these could represent the proportions of persons who answered the item correctly. Persons are ordered by the estimates of their locations on the latent continuum and classified into Class Intervals on this basis in

order to graphically inspect the accordance of observations with the model. There is a close conformity of the data with the model. In addition to graphical inspection of data, a range of statistical tests of fit are used to evaluate whether departures of observations from the model can be attributed to random effects alone, as required, or whether there are systematic departures from the model.

4 The polytomous form of the Rasch model

Main article: Polytomous Rasch model

The polytomous Rasch model, which is a generalisation of the dichotomous model, can be applied in contexts in which successive integer scores represent categories of increasing level or magnitude of a latent trait, such as increasing ability, motor function, endorsement of a statement, and so forth. The polytomous response model is, for example, applicable to the use of Likert scales, grading in educational assessment, and scoring of performances by judges.

5 Other considerations

A criticism of the Rasch model is that it is overly restrictive or prescriptive because it does not permit each item to have a different discrimination. A criticism specific to the use of multiple choice items in educational assessment is that there is no provision in the model for guessing because the left asymptote always approaches a zero probability in the Rasch model. These variations are available in models such as the two and three parameter logistic models.[15] However, the specification of uniform discrimination and zero left asymptote are necessary properties of the model in order to sustain sufficiency of the simple, unweighted raw score.

Verhelst & Glas (1995) derive Conditional Maximum Likelihood (CML) equations for a model they refer to as the One Parameter Logistic Model (OPLM). In algebraic form it appears to be identical with the 2PL model, but OPLM contains preset discrimination indexes rather than 2PL's estimated discrimination parameters. As noted by these authors, though, the problem one faces in estimation with estimated discrimination parameters is that the discriminations are unknown, meaning that the weighted raw score "is not a mere statistic, and hence it is impossible to use CML as an estimation method" (Verhelst & Glas, 1995, p. 217). That is, sufficiency of the weighted "score" in the 2PL cannot be used according to the way in which a sufficient statistic is defined. If the weights are imputed instead of being estimated, as in OPLM, conditional estimation is possible and some of the properties of the Rasch model are retained (Verhelst, Glas & Verstralen, 1995; Verhelst & Glas, 1995). In OPLM, the values of the discrimination index are restricted to between 1 and 15. A limitation of this approach is that in practice, values of discrimination indexes must be preset as a starting point. This means some type of estimation of discrimination is involved when the purpose is to avoid doing so.

The Rasch model for dichotomous data inherently entails a single discrimination parameter which, as noted by Rasch,[1]:121 constitutes an arbitrary choice of the unit in terms of which magnitudes of the latent trait are expressed or estimated. However, the Rasch model requires that the discrimination is uniform across interactions between persons and items within a specified frame of reference (i.e. the assessment context given conditions for assessment).

Application of the models provides diagnostic information regarding how well the criterion is met. Application of the models can also provide information about how well items or questions on assessments work to measure the ability or trait. Prominent advocates of Rasch models include Benjamin Drake Wright, David Andrich and Erling Andersen.

6 See also

Mokken scale
Guttman scale

7 Further reading

Alagumalai, S., Curtis, D.D. & Hungi, N. (2005). Applied Rasch Measurement: A book of exemplars. Springer-Kluwer.

Andrich, D. (1978a). A rating formulation for ordered response categories. Psychometrika, 43, 357.

Andrich, D. (1988). Rasch models for measurement. Beverly Hills: Sage Publications.

Andrich, D. (2004). Controversy and the Rasch model: a characteristic of incompatible paradigms? Medical Care, 42, 1-16.

Baker, F. (2001). The Basics of Item Response Theory. ERIC Clearinghouse on Assessment and Evaluation, University of Maryland, College Park, MD. Available free with software included.

Bezruczko, N. (Ed.). (2005). Rasch measurement in health sciences. Maple Grove, MN: JAM Press.

Bond, T.G. & Fox, C.M. (2007). Applying the Rasch Model: Fundamental measurement in the human sciences. 2nd Edn (includes Rasch software on CD-ROM). Lawrence Erlbaum.

Fischer, G.H. & Molenaar, I.W. (1995). Rasch models: foundations, recent developments and applications. New York: Springer-Verlag.

Fisher, W. P., Jr., & Wright, B. D. (Eds.). (1994). Applications of probabilistic conjoint measurement. International Journal of Educational Research, 21(6), 557-664.

Goldstein, H. & Blinkhorn, S. (1977). Monitoring Educational Standards: an inappropriate model. Bulletin of the British Psychological Society, 30, 309-311.

Goldstein, H. & Blinkhorn, S. (1982). The Rasch Model Still Does Not Fit. British Educational Research Journal, 8, 167-170.

Hambleton, R.K. & Jones, R.W. (1993). Comparison of classical test theory and item response theory. Educational Measurement: Issues and Practice, 12(3), 38-47. Available in the ITEMS Series from the National Council on Measurement in Education.

Harris, D. (1989). Comparison of 1-, 2-, and 3-parameter IRT models. Educational Measurement: Issues and Practice, 8, 35-41. Available in the ITEMS Series from the National Council on Measurement in Education.

Kuhn, T.S. (1961). The function of measurement in modern physical science. Isis, 52, 161-193.

Linacre, J. M. (1999). Understanding Rasch measurement: Estimation methods for Rasch measures. Journal of Outcome Measurement, 3(4), 382-405.

Masters, G. N., & Keeves, J. P. (Eds.). (1999). Advances in measurement in educational research and assessment. New York: Pergamon.

Verhelst, N.D. and Glas, C.A.W. (1995). The one parameter logistic model. In G.H. Fischer and I.W. Molenaar (Eds.), Rasch Models: Foundations, recent developments, and applications (pp. 215-238). New York: Springer Verlag.

Verhelst, N.D., Glas, C.A.W. and Verstralen, H.H.F.M. (1995). One parameter logistic model (OPLM). Arnhem: CITO.

von Davier, M., & Carstensen, C. H. (2007). Multivariate and Mixture Distribution Rasch Models: Extensions and Applications. New York: Springer.

Wright, B. D. (1984). Despair and hope for educational measurement. Contemporary Education Review, 3(1), 281-288.

Wright, B. D. (1999). Fundamental measurement for psychology. In S. E. Embretson & S. L. Hershberger (Eds.), The new rules of measurement: What every educator and psychologist should know (pp. 65-104). Hillsdale, New Jersey: Lawrence Erlbaum Associates.

Wright, B.D., & Stone, M.H. (1979). Best Test Design. Chicago, IL: MESA Press.

Wu, M. & Adams, R. (2007). Applying the Rasch model to psycho-social measurement: A practical approach. Melbourne, Australia: Educational Measurement Solutions. Available free from Educational Measurement Solutions.

8 References

[1] Rasch, G. (1960/1980). Probabilistic models for some intelligence and attainment tests. (Copenhagen, Danish Institute for Educational Research), expanded edition (1980) with foreword and afterword by B.D. Wright. Chicago: The University of Chicago Press.

[2] Bezruczko, N. (2005). Rasch measurement in health sciences. Maple Grove, MN: JAM Press.

[3] Bechtel, G. G. (1985). Generalizing the Rasch model for consumer rating scales. Marketing Science, 4(1), 62-73.

[4] Wright, B. D. (1977). Solving measurement problems with the Rasch model. Journal of Educational Measurement, 14(2), 97-116.

[5] Linacre, J.M. (2005). Rasch dichotomous model vs. One-parameter Logistic Model. Rasch Measurement Transactions, 19:3, 1032.

[6] Rasch, G. (1977). On Specific Objectivity: An attempt at formalizing the request for generality and validity of scientific statements. The Danish Yearbook of Philosophy, 14, 58-93.

[7] Thurstone, L. L. (1927). A law of comparative judgment. Psychological Review, 34(4), 273.

[8] Thurstone and sensory scaling: Then and now. (1994). Psychological Review, 101(2), 271-277. doi:10.1037/0033-295X.101.2.271

[9] Andrich, D. (1978b). Relationships between the Thurstone and Rasch approaches to item scaling. Applied Psychological Measurement, 2, 449-460.

[10] Kuhn, Thomas S. (1961). The function of measurement in modern physical science. Isis, 52, 161-193.

[11] Bond, T.G. & Fox, C.M. (2007). Applying the Rasch Model: Fundamental measurement in the human sciences. 2nd Edn (includes Rasch software on CD-ROM). Lawrence Erlbaum. Page 265.

[12] Rasch, G. (1961). On general laws and the meaning of measurement in psychology, pp. 321-334 in Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, IV. Berkeley, California: University of California Press. Available free from Project Euclid.

[13] Andersen, E.B. (1977). Sufficient statistics and latent trait models. Psychometrika, 42, 69-81.

[14] Andrich, D. (2010). Sufficiency and conditional estimation of person parameters in the polytomous Rasch model. Psychometrika, 75(2), 292-308.

[15] Birnbaum, A. (1968). Some latent trait models and their use in inferring an examinee's ability. In Lord, F.M. & Novick, M.R. (Eds.), Statistical theories of mental test scores. Reading, MA: Addison-Wesley.

9 External links

Institute for Objective Measurement Online
Pearson Psychometrics Laboratory, with information about Rasch models
Journal of Applied Measurement
Journal of Outcome Measurement (all issues available for free downloading)
Berkeley Evaluation & Assessment Research Center (ConstructMap software)
Directory of Rasch Software (freeware and paid)
IRT Modeling Lab at U. Illinois Urbana Champ.
National Council on Measurement in Education
Rasch analysis
Rasch Measurement Transactions
The Standards for Educational and Psychological Testing
The Trouble with Rasch

10 Text and image sources, contributors, and licenses

10.1 Text

Rasch model. Contributors: Mjb, Michael Hardy, Egil, Den fjättrade ankan~enwiki, Amead, Niteowlneils, Jeremykemp, Bender235, RoyBoy, Ricky81682, Btyner, Mathbot, Physchim62, The Rambling Man, Nesbit, Salsb, Sanguinity, Holon, Crasshopper, Laminado, Johndburger, C mon, A bit iy, SmackBot, Commander Keane bot, Hongooi, Wissons, RichardF, CmdrObot, CBM, D4g0thur, OrenBochman, Mack2, Zapp645, Hubbardaie, Tonyfaull, JaGa, Gjhernandezp, Robroot, TinJack, DavidMorgan1950, Mangotree, Cycologist, StAnselm, Melcombe, ClueBot, Winsteps, SchreiberBike, Addbot, Yobot, AnomieBOT, Pmetric, H2otto, Shadowjams, FrescoBot, Klmackenzie, WeijiBaikeBianji, WPFisherJr, Wholehearted, MathewTownsend, Mark viking, Dti21, Akorpak and Anonymous: 41

10.2 Images

File:ICCs_prog.png. License: Cc-by-sa-3.0. Contributors: ? Original artist: ?
File:PersItm.PNG. License: Cc-by-sa-3.0. Contributors: ? Original artist: ?
File:RaschICC.gif. License: CC-BY-SA-3.0. Contributors: ? Original artist: ?
File:TCC.PNG. License: Cc-by-sa-3.0. Contributors: ? Original artist: ?
File:Text_document_with_red_question_mark.svg. License: Public domain. Contributors: Created by bdesham with Inkscape; based upon Text-x-generic.svg from the Tango project. Original artist: Benjamin D. Esham (bdesham)

10.3 Content license

Creative Commons Attribution-Share Alike 3.0