
ASSIGNMENT (SEPTEMBER 17, 2022)

1. WHAT IS GENERALIZABILITY THEORY ALL ABOUT AND WHAT DOES IT SEEK TO INFORM US
ABOUT LANGUAGE ASSESSMENT?

In generalizability theory, sources of variation are referred to as facets. Facets are similar to the
"factors" used in analysis of variance and may include persons, raters, items/forms, time, and
settings, among other possibilities. These facets are potential sources of error, and the purpose of
generalizability theory is to quantify the amount of error caused by each facet and by interactions
among facets. The usefulness of the data gained from a generalizability study (G-study) depends
crucially on the design of the study. The researcher must therefore carefully consider the ways in
which he or she hopes to generalize any specific results. Is it important to generalize from one
setting to a larger number of settings? From one rater to a larger number of raters? From one set
of items to a larger set of items? The answers to these questions will vary from one researcher to
the next and will drive the design of a G-study in different ways.
In addition to deciding which facets to examine, the researcher must determine which facet will
serve as the object of measurement (i.e., the systematic source of variance) for the purpose of
analysis. The remaining facets of interest are then treated as sources of measurement error. In
most cases, the object of measurement will be the person to whom a number or score is assigned;
in other cases, it may be a group of performers such as a team or a classroom. Ideally, nearly all
of the measured variance will be attributable to the object of measurement (i.e., individual
differences), with only a negligible amount of variance attributed to the remaining facets
(e.g., rater, time, setting).

Generalizability theory is a highly useful theory that informs reliability, validity, elements of study
design, and data analysis for examining, determining, and designing the reliability of various
observations or ratings. Using G-theory we can design generalizability studies (G-studies) to
better understand the composition of assessment scores and to help predict the reliability of the
same data collected under different conditions.
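
To make the logic of a G-study concrete, here is a minimal sketch of how variance components might be estimated for a fully crossed persons × raters design. The score matrix and all numbers are hypothetical, and the ANOVA-based estimation shown is only one standard way of partitioning the variance:

```python
import numpy as np

# Hypothetical score matrix: 6 persons (rows) rated by 3 raters (columns).
scores = np.array([
    [4.0, 5.0, 4.0],
    [2.0, 3.0, 2.0],
    [5.0, 5.0, 4.0],
    [3.0, 3.0, 3.0],
    [1.0, 2.0, 1.0],
    [4.0, 4.0, 5.0],
])
n_p, n_r = scores.shape

grand = scores.mean()
person_means = scores.mean(axis=1)   # marginal means over raters
rater_means = scores.mean(axis=0)    # marginal means over persons

# ANOVA mean squares for a fully crossed p x r design (one observation per cell).
ms_p = n_r * np.sum((person_means - grand) ** 2) / (n_p - 1)
ms_r = n_p * np.sum((rater_means - grand) ** 2) / (n_r - 1)
residual = scores - person_means[:, None] - rater_means[None, :] + grand
ms_pr_e = np.sum(residual ** 2) / ((n_p - 1) * (n_r - 1))

# Variance components via expected mean squares (negative estimates set to zero).
var_pr_e = ms_pr_e                        # person x rater interaction, confounded with error
var_p = max((ms_p - ms_pr_e) / n_r, 0.0)  # object of measurement (persons)
var_r = max((ms_r - ms_pr_e) / n_p, 0.0)  # rater facet

# Generalizability (relative) and dependability (absolute) coefficients for this design.
g_rel = var_p / (var_p + var_pr_e / n_r)
phi_abs = var_p / (var_p + (var_r + var_pr_e) / n_r)

print(f"sigma2_p = {var_p:.3f}, sigma2_r = {var_r:.3f}, sigma2_pr,e = {var_pr_e:.3f}")
print(f"G (relative) = {g_rel:.3f}, Phi (absolute) = {phi_abs:.3f}")
```

Here the person variance component plays the role of the object of measurement, while the rater and residual components quantify the error facets discussed above.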

https://journals.sagepub.com/doi/abs/10.1177/02655322211070840
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6699529/
2. What are the usual language assessment tools that fall into the category of those needing
G-theory analysis?

Generalizability theory provides a flexible framework for examining the dependability of
behavioral and educational measurements. The utility of such measurements depends on the
degree to which the measurement sample allows us to generalize accurately to the behavior of the
same person in a wider set of situations. Perhaps generalizability theory's most significant
extension of classical test theory's treatment of reliability is the ability to make both relative
interpretations of measurements (comparisons between individuals) and absolute interpretations
(comparisons of an individual's performance against a fixed standard, regardless of how others
perform).
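
For a fully crossed persons × raters (p × r) design, these two kinds of interpretation correspond to two standard coefficients. The notation below is conventional in G-theory (σ²_p for person variance, σ²_r for rater variance, σ²_{pr,e} for the person-by-rater interaction confounded with error, and n'_r for the number of raters in the intended measurement procedure):

```latex
% Relative (norm-referenced) generalizability coefficient
E\rho^{2} = \frac{\sigma^{2}_{p}}{\sigma^{2}_{p} + \sigma^{2}_{pr,e}/n'_{r}}
% Absolute (criterion-referenced) dependability coefficient
\Phi = \frac{\sigma^{2}_{p}}{\sigma^{2}_{p} + \left(\sigma^{2}_{r} + \sigma^{2}_{pr,e}\right)/n'_{r}}
```

Only variance that would change examinees' rank ordering enters the relative error term, whereas the absolute coefficient also counts rater leniency or severity differences as error.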
Generalizability studies provide a better understanding of the composition of an assessment
score. In G-theory we first define the universe of scores and the facets we wish to generalize from
and to. In a G-study, the facets being considered are specified in advance as fixed or random. We
then conduct G-studies to estimate variance components and calculate G-coefficients. Each
calculated G-coefficient evaluates the reliability of a given aspect of the measurement tool, for
example, interrater reliability. In D-studies we can evaluate the impact of changing a facet's
designation, such as from fixed to random, or of changing the number of conditions sampled for a
facet. We can use these calculations to make predictions about performance in a similar
assessment situation.
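
As a sketch of the D-study step described above, the following projects the relative and absolute coefficients for different numbers of raters, assuming hypothetical variance components carried over from an earlier G-study (the values of var_p, var_r, and var_pr_e below are illustrative only):

```python
# D-study projection: how would reliability change with more (or fewer) raters?
# Hypothetical variance components from a persons x raters G-study.
var_p, var_r, var_pr_e = 0.80, 0.10, 0.40   # persons, raters, interaction/error

for n_raters in (1, 2, 3, 5, 10):
    g_rel = var_p / (var_p + var_pr_e / n_raters)              # relative (rank-order) decisions
    phi_abs = var_p / (var_p + (var_r + var_pr_e) / n_raters)  # absolute (criterion) decisions
    print(f"{n_raters:2d} raters: G = {g_rel:.3f}, Phi = {phi_abs:.3f}")
```

This is the sense in which G-theory lets us predict performance in a similar assessment situation before the new data are collected.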

3. What statistical tools work with the theory and what are their limitations?

Statistics is a branch of science that deals with the collection, organization, and analysis of data
and the drawing of inferences from samples to the whole population. Statistics requires
substantial evidence in the form of data and is designed to uncover the patterns inherent in that
data; it is, above all, a methodology. Its main limitations begin with cost: collecting and storing
sufficient evidence is expensive, and if the data are not reliable and valid the resulting statistics
become meaningless. Handling large amounts of data requires computers, so knowledge of
statistics must be paired with some exposure to computer technology. Statistical methods are
themselves a mathematical science, so they are difficult to understand without a proper grasp of
the underlying mathematical concepts. Another major drawback is that statistics cannot predict
the future exactly, even though its predictions can be useful. Further limitations to consider are
that statistics is not suited to the study of qualitative phenomena; it is a study of groups, not of
individuals; statistical laws are not exact (unlike the laws of the physical and natural sciences,
they are only approximations); homogeneity of data is an essential requirement; statistical results
are based on averages; and statistics can be misused (the use of statistical tools by inexperienced
and untrained persons can lead to very fallacious conclusions).

Statistics can be misused in countless ways. Scientific (statistical) methods are designed to help
truth seekers, and the process of seeking is research. When someone pretends to already know
the truth and presents biased claims, that is exactly how statistics are misused: such people first
set up their conclusion as the truth and then manufacture data to match it. Because most people
lack statistical understanding, they are easily convinced.
