A Validation of the Dyslexia Adult Screening Test (DAST) in a Post Secondary Population
Dr. Harrison has worked as the Learning Disabilities Specialist at Queen’s University
Student Counseling Service for the past 14 years. She is also an Adjunct Professor in the
Task Force team and the Promoting Early Intervention Steering Committee.
Eva Nichols was the Executive Director of the Learning Disabilities Association of
Ontario from 1984 to 1994 and again in 1996/97. She has written books, manuals and articles
related to learning disabilities in both children and adults. Since 1997 she has acted as researcher
Abstract
In Ontario, Canada, there is a demand for psychometrically robust screening tools capable of
efficiently identifying students with specific learning disabilities (SLD), such as dyslexia. The
present study investigated the ability of the DAST (R. I. Nicolson & A. J. Fawcett, 1997) to
discriminate between 117 post-secondary students with carefully diagnosed SLDs and 121
comparison students. Results indicated that the DAST correctly identified only 74% of the
students with SLDs as “highly at risk” for dyslexia. Although employing the cutoff for “mildly at
risk” correctly identified 85% of the students with SLDs, this also increased the percentage of
students with no major history of learning problems identified as “at risk” for dyslexia from 16%
to 26%. These findings suggest that the DAST in its present form is limited in its ability to accurately discriminate between post-secondary students with and without SLDs.
Although dyslexia has been traditionally defined as a discrepancy between reading ability
and intelligence in individuals receiving adequate reading instruction, it is now established that
dyslexia is a neurological disorder with a genetic origin that has lifelong persistence, of which
reading deficiency is merely one manifestation (Ramus et al., 2003). Other associated difficulties include deficits in areas such as phonological and auditory processing, and in some cases problems with motor coordination (Rack, 1997). This broader
definition of dyslexia is consistent with what are more commonly referred to as specific learning disabilities (SLDs): a disorder in some, but not all, areas of learning that affects one or more of the basic psychological processes involved in learning. These difficulties may manifest themselves as impairments in the individual's ability to listen, think, speak, read, write, spell, or do mathematics (Lokerson, 2000). Because these learning difficulties are viewed as stemming from the same underlying processes, including the phonological and motor skills implicated in dyslexia, the use of instruments originally developed to assess dyslexia may reasonably be extended to the identification of SLDs more broadly.
In Ontario, Canada, students with disabilities in reading and related SLDs represent the single largest group of persons identified and served in the post-secondary sector (Inter-university Disability Issues Association, personal communication, 2004). It has been estimated
that 12% of children in the Canadian school system have learning disabilities (Learning Disabilities Association of Canada), although only about 4% of the school-aged population is formally identified as having learning disabilities.
This represents about 50% of the total population of students with identified special needs,
including those who are gifted. Despite the large demand for diagnostic assessment, 85% of
these students do not receive their first formal diagnosis until they reach college or university
(Learning Opportunities Task Force, personal communication, 2002). Because it is simply not
feasible to offer comprehensive cognitive assessments to every individual who presents with a learning difficulty, an empirically valid instrument that can be used as a widespread screening tool to efficiently and accurately identify SLDs would be extremely valuable, so that more extensive and costly assessments can be preferentially provided to those individuals who need them most.
Until recently, there has been a dearth of psychometrically sound, commercially available
screening instruments for identifying potential dyslexia or SLDs in post-secondary students. One
problem is that many available instruments have not been validated for their ability to
discriminate between students with dyslexia and students with confounding conditions, such as
inadequate education, medical problems, drug or alcohol abuse, and post-traumatic stress
disorder. A second problem is that a number of these tests, such as those developed by Boder and
Jarrico (1982), Guerin, Griffin, Gottfried, & Christenson (1993), Payne (1998), and Weisel
(1998), either do not have a strong theoretical foundation, or are based on narrow definitions of
processing difficulties. Thus, these instruments do not adequately sample the domains of impairment relevant to a broad conception of SLD.
In 1997 Nicolson and Fawcett developed the Dyslexia Adult Screening Test (DAST)
based on the cerebellar hypothesis of dyslexia, which is consistent with a broad definition of
dyslexia and with SLD (Nicolson, Fawcett, & Dean, 2001). Of the 11 subtests of the DAST, nine
subtests were designed to tap areas of impairment, including a test of cerebellar functioning, and
two subtests were designed to assess areas of relative strength among individuals with dyslexia,
namely, Nonverbal Reasoning and Semantic Fluency. In their initial study, Nicolson and Fawcett
reported that the DAST had a hit rate of 94% using a sample of 15 dyslexic students and a false
alarm rate of 0% using a control group of 150 students. An instrument with this magnitude of
discriminative ability would be highly desirable as a screen for the presence of SLDs.
Nevertheless, several aspects of the original validation warrant closer examination. First, future studies should clearly describe the samples utilized. For instance, the normative
sample for the DAST, which was based on 1100 adults from nationwide testing in Sheffield,
Kent, Leeds, London, County Durham, and Bristol (Nicolson & Fawcett, 1998), does not specify
how many of these participants were dyslexic, or how dyslexia was diagnosed. In addition, the
manual offers the option of using norms derived from a student population, although the exact
composition of this sample is not indicated. The reader is left to assume that similar criteria as
those reported in Nicolson and Fawcett (1997) were utilized, such as poor performance on the
Arithmetic, Coding, Information, and Digit Span (ACID) subtests from the Wechsler Adult
Intelligence Scale, low scores on a spelling test and a nonsense passage reading test, and a prior
history of suspected or confirmed dyslexia. Second, larger sample sizes are needed to validate
the hit rates and false alarm rates obtained by Nicolson and Fawcett (1997). The inclusion of
only 15 individuals with dyslexia in the latter study raises the question of how effective the DAST would prove to be with larger and more diverse samples.
In Ontario, Canada, the government established the Learning Opportunities Task Force (LOTF) to research transition and post-secondary educational issues for students with SLD. The LOTF funded eight pilot projects in 10 institutions involving over 1200 students over a five-year period.
Students who wished to participate in these programs were required to meet very strict diagnostic
criteria (see Appendix). This project provided an ideal opportunity to validate the ability of the
DAST to correctly identify large numbers of students with well-documented cases of SLD, as
well as its ability to distinguish this group from a comparison group recruited from a general
population of post-secondary students. Thus, the purpose of the present study was to validate the
accuracy and utility of the DAST using 117 students with dyslexia and other specific learning
disabilities enrolled in the LOTF funded and supported pilot programs, and 121 students
recruited from the general post-secondary population, and to simultaneously address limitations
in previous research. It was hypothesized that a high hit rate combined with a low false alarm
rate would be obtained, that the pattern of incidence rates for each of the subtests would be
similar to those obtained by Nicolson and Fawcett (1997), and that the DAST would demonstrate
convergent and discriminant validity, respectively, through strong correlations with self-rated
reading skill and reading enjoyment and near-zero correlations with standardized, individually
administered measures of overall intelligence. Thus, it was anticipated that the DAST would be
able to fill an important gap in screening assessments at the post-secondary level. The presence
of a relatively fast and user-friendly instrument would have the potential of assisting referral
sources to better manage limited assessment resources, and to reduce the number of non-disabled students referred for comprehensive diagnostic assessments.
Method
Participants
One hundred seventeen students (58 men and 59 women) with diagnosed SLDs were
recruited from six post-secondary institutions where LOTF pilot projects were underway.
Students enrolled in programs for individuals with SLDs were invited to participate. These students completed a demographic questionnaire and the DAST during the data collection period, and gave researchers
permission to access their intelligence quotient (IQ) and aptitude scores, which had been
previously collected to determine SLD status according to LOTF eligibility criteria (see
Appendix).
In addition, 121 students from the general post-secondary population (30 men and 91
women) were recruited to form a comparison group from the same six institutions as the SLD
students. Comparison participants were canvassed from: (1) students in first-year psychology classes, who received course credit for their involvement, (2) work-study employees in the Disability Services
offices, and (3) poster advertisements placed on the campuses where LOTF pilot projects were
underway. The comparison students ranged in age from 17 to 54 years (M = 23.80, SD = 7.50).
There were substantially more women than men in the comparison group. However, there was no statistically significant sex difference on the DAST index of risk for dyslexia, t(119) = 0.13, p > .05. Although time did not permit the administration of individual intelligence
tests to participants in the comparison group, these students had all met the acceptance criteria
for enrollment at the same schools as did the SLD students, and none were at risk of being asked to withdraw for academic reasons. In addition to the demographic questionnaire and the DAST, the students in the comparison group were surveyed regarding their academic history and subjective areas of academic difficulty, with specific questions about their reading skills and self-estimated reading ability. All participants were treated in accordance with the ethical standards governing research with human participants.
Materials
Checklist for Identified Learning Disability Students. A checklist was developed by the researchers to record participant information about services obtained, to specify the type of learning disability
observed, as well as to note any other disabilities or conditions that might be impacting learning.
Information regarding supports included indications (i.e., yes or no) of whether the student met
LOTF eligibility criteria, participated in the LOTF pilot project, was registered with the on-
campus Special Needs Office (SNO) or Office of Students with Disability (OSD), or received
support services for a specific learning disability. The diagnosis obtained during the student’s
last assessment, and intelligence scores from previous assessments were also recorded, if
available. Types of specific learning disability included: visual, auditory, motor, organizational,
conceptual, non-verbal, math, or other. All types of specific learning disability experienced by each participant were recorded.
Demographic Questionnaire. A brief questionnaire was used to collect information relevant to learning from all participants. Information on age, sex, and first
language, and services received from the SNO or OSD during college or university, or during
elementary or high school (if any) was obtained. Students were also asked to indicate any
academic areas of difficulty. Finally, participants were asked to rate their self-appraised reading
skills on a scale ranging from 1 (excellent) to 5 (very poor), and their enjoyment of reading on a comparable 5-point scale.
Dyslexia Adult Screening Test (DAST). The DAST was developed to assess dyslexia in
adults (Nicolson & Fawcett, 1997). The DAST has 11 subtests, 9 of which tap skills in domains
relevant to dyslexia, such as phonological awareness, auditory memory, and motor coordination.
These subtests include Rapid Naming, One Minute Reading, Postural Stability, Phonemic
Segmentation, Two Minute Spelling, Backwards Digit Span, Nonsense Passage Reading, One
Minute Writing, and Verbal Fluency. In addition, two subtests, Nonverbal Reasoning and
Semantic Fluency, are designed to tap areas of relative strength among individuals with dyslexia.
Reliability and validity have been demonstrated in dyslexic adult and student samples (Nicolson & Fawcett, 1997).
The scoring of the DAST utilizes an At Risk Quotient (ARQ). For each of the 11 subtests
participants’ raw scores are first converted to visual codes using scoring keys based on age. For
instance, a triple minus (---) indicates that performance falls at least 3 standard deviations below
the mean for that particular subtest, which is equivalent to a z-score of -3 or lower. Once visual
codes are recorded for each subtest, they are translated into indices, such that (---) = 3, (--) = 2,
(-) = 1, (0) = 0, and (+) = 0. The indices are then summed and divided by the number of subtests
to obtain the ARQ. An ARQ of 1 indicates that an individual is highly at risk for dyslexia, and an
ARQ of .7 indicates that an individual is mildly at risk for dyslexia. Although the DAST manual
states that the ARQ is obtained by dividing by 11, Nicolson and Fawcett (1997, p. 79) divided by
9, although only 8 subtests are presented in their original validation study. No explanation is
provided for this departure from the manualized scoring procedure. However, the present study conforms to the method described in the manual, and divides the subtest total score by the total number of subtests (i.e., 11).
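To make the manualized scoring procedure concrete, it can be sketched in a few lines of Python. The code-to-index mapping and the cutoffs follow the description above; the example profile of visual codes is hypothetical.

```python
# Sketch of the manualized DAST scoring procedure. The visual-code-to-index
# mapping and the ARQ cutoffs follow the description in the text; the
# example profile below is hypothetical.

CODE_TO_INDEX = {"---": 3, "--": 2, "-": 1, "0": 0, "+": 0}

def at_risk_quotient(visual_codes):
    """Sum the risk indices and divide by the number of subtests
    (11 in the full battery, per the manual)."""
    indices = [CODE_TO_INDEX[code] for code in visual_codes]
    return sum(indices) / len(visual_codes)

def risk_category(arq):
    if arq >= 1.0:
        return "highly at risk"
    if arq >= 0.7:
        return "mildly at risk"
    return "not at risk"

# Hypothetical visual codes across the 11 subtests:
codes = ["--", "-", "0", "---", "-", "0", "+", "-", "--", "0", "+"]
arq = at_risk_quotient(codes)  # (2+1+0+3+1+0+0+1+2+0+0) / 11 = 10/11
print(round(arq, 2), risk_category(arq))
```

Because a triple minus contributes three index points on its own, a single severely impaired subtest can move an otherwise unremarkable profile a substantial way toward the mild-risk cutoff.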
Procedure
Prior to data collection, assessors from each of the institutions offering LOTF pilot
programs participated in a full day training workshop to learn and practice the DAST. They were
provided with a DAST kit and a summary of the guidelines for test administration, as well as the
opportunity to clarify any questions. Inter-rater agreement for the cerebellar (Postural Stability)
subtest was discussed, and objective criteria for rating each level of response were outlined for
the assessors.
Data collection took place during the 2002-2003 academic year. Participants in the SLD
group were recruited after their initial diagnostic assessments were completed, to ensure that
their DAST results did not bias the final diagnosis. For the participants in the comparison group,
the most important inclusion criterion was the absence of any previous or current diagnosis of
SLD.
After recruitment, participants in both the SLD and comparison groups were given
consent forms and the Demographic Questionnaire. Then participants were administered the
DAST by trained assessors. After completing the tests, each participant was given a debriefing
sheet and provided with an opportunity to raise any questions or concerns. Finally, participants
were presented with information sheets containing contact numbers to which future questions
could be addressed.
Results
Using an ARQ cutoff of 1, the DAST correctly identified 74% of the students who met
the stringent diagnostic criteria for a specific learning disability (SLD) as highly at risk for
dyslexia. Thus, 26% of students with well-documented cases of SLD were misidentified as not
being at risk. In the comparison group, the DAST correctly identified 84% of students as not at
risk, but also misidentified 16% of students as being highly at risk for dyslexia, even though these students had no major history of learning problems. Furthermore, it was not simply the case that the DAST misidentified borderline cases,
whose scores were close to 1.00. The 17 students with SLD who were misidentified as not at risk
for dyslexia had ARQ scores ranging from 0 (no risk) to 0.64. As well, the 19 students in the
comparison group who were misidentified as being at risk, had ARQ scores ranging from 1.00 to
2.73. When an ARQ cutoff of .7 was used as an indicator of mild risk for dyslexia, the hit rate
increased to 85%, but the false alarm rate rose correspondingly to 26%.
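The trade-off between hit rate and false alarm rate at the two cutoffs can be illustrated with a short sketch; the ARQ score lists below are made-up stand-ins for the two groups, not the study's data.

```python
# Illustrative computation of hit and false alarm rates at two ARQ cutoffs.
# The score lists are hypothetical stand-ins, not the study's data.

def rates(sld_arqs, comparison_arqs, cutoff):
    """Hit rate: proportion of SLD students at or above the cutoff.
    False alarm rate: proportion of comparison students at or above it."""
    hit = sum(a >= cutoff for a in sld_arqs) / len(sld_arqs)
    fa = sum(a >= cutoff for a in comparison_arqs) / len(comparison_arqs)
    return hit, fa

sld = [1.4, 1.1, 0.9, 0.6, 1.8, 0.8, 1.2, 2.0]        # hypothetical SLD group
comparison = [0.2, 0.0, 0.5, 1.1, 0.3, 0.6, 0.1, 0.4]  # hypothetical comparison group

for cutoff in (1.0, 0.7):
    hit, fa = rates(sld, comparison, cutoff)
    print(f"cutoff {cutoff}: hit rate {hit:.0%}, false alarm rate {fa:.0%}")
```

Lowering the cutoff from 1.0 to 0.7 can only move both rates in the same direction: every additional true case captured is bought at the price of admitting more false alarms, which is exactly the pattern reported above.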
The extent to which each of the 11 DAST subtests could discriminate between students in
the SLD and comparison groups was also investigated. Table 1 displays the percentage of
individuals in each group, whose performance on the subtest in question indicated a high risk for
dyslexia. The subtests with the greatest ability to discriminate between the two diagnostic groups
were Nonsense Passage Reading, Two Minute Spelling, Phonemic Segmentation, One Minute
____________________
Insert Table 1 about here
____________________
Interestingly, the cerebellar subtest, Postural Stability, and the two subtests hypothesized
by Nicolson and Fawcett (1997) to measure relative strengths in dyslexic adults, Nonverbal
Reasoning and Semantic Fluency, could not effectively distinguish between SLD and
comparison students. Thus, we investigated whether the ability of the DAST to discriminate
between SLD and comparison students would improve if these three subtests were eliminated
from the battery. In addition, cases of nonverbal learning disability were excluded (6 SLD
students and 1 comparison student), because this particular type of disability may have little
overlap with dyslexia. Mean ARQ scores (calculated by dividing total risk scores by eight),
standard deviations, and percentages of individuals with performance indicative of high risk for
dyslexia for both groups are shown in Table 2. The results were almost identical to those found
for the full DAST. Using an ARQ cutoff of 1, the 8-subtest version of the DAST correctly
identified 77% of the SLD students as highly at risk for dyslexia. However, 17% of the
comparison students were misidentified as being highly at risk. When an ARQ cutoff of .7 was
used as an indicator of mild risk for dyslexia, the hit rate of the 8-subtest version increased to
____________________
Insert Table 2 about here
____________________
The relationships of the full 11-subtest DAST with other constructs, such as overall intelligence and reading, were also explored. In the SLD group a moderate and negative
association between the DAST ARQ scores and Wechsler Full Scale IQ was found, controlling
for both reading skill and reading enjoyment, partial r = -.41, p < .001, which suggests that
individuals with lower IQ scores are more likely to be identified as dyslexic regardless of their actual level of reading proficiency. In the narrowest sense of the term, dyslexia has been defined as a
disability involving core deficits in reading skills (Lerner, 1993). In the comparison group, 9.1%
of students self-reported the presence of past or current reading problems (see Table 2). We
examined the possibility that this subset of individuals reporting reading difficulties was
responsible for the inflation of ARQ scores in the comparison group. However, findings revealed
no significant association between ARQ scores and reported reading problems, r = .08, p > .05,
with only two individuals from the comparison group scoring above the cutoff for highly at risk.
Furthermore, ARQ scores correlated with neither self-rated reading skill (r = -.07, p > .05), nor
reading enjoyment (r = -.11, p > .05) in the comparison group. Indeed, 11% of students who
reported being good readers and 12% of students who reported enjoying reading in the
comparison group obtained ARQ scores above the cutoff indicative of a high risk for dyslexia.
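The partial correlation reported above, between ARQ and Full Scale IQ controlling for reading skill and reading enjoyment, can be computed with the residual method. The sketch below uses NumPy on randomly generated placeholder data; the sample size, rating scales, and built-in negative IQ-ARQ link are assumptions for illustration, not the study's data.

```python
# Partial correlation via the residual method: correlate the parts of x and y
# not linearly explained by the covariates. Data below are randomly generated
# placeholders, not the study's data.
import numpy as np

def partial_corr(x, y, covariates):
    """Regress x and y on the covariates (plus an intercept) by least
    squares, then correlate the two residual vectors."""
    Z = np.column_stack([np.ones(len(x))] + covariates)
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(0)
n = 117
iq = rng.normal(100, 15, n)
skill = rng.integers(1, 6, n).astype(float)    # 1 = excellent ... 5 = very poor
enjoy = rng.integers(1, 6, n).astype(float)
arq = 2.0 - 0.012 * iq + rng.normal(0, 0.3, n)  # built-in negative IQ link

print(round(partial_corr(arq, iq, [skill, enjoy]), 2))  # negative, as in the study
```

A negative partial r that survives controlling for the reading ratings is precisely the pattern that raises discriminant-validity concerns: the risk index tracks general ability rather than reading impairment alone.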
____________________
Insert Table 3 about here
____________________
Discussion
The hypotheses of the present study were partially supported. First, the DAST was found
to have a relatively high hit rate among students with specific learning disabilities (SLD).
However, the false alarm rate was substantially higher than the 0% rate reported by Nicolson and
Fawcett (1997). Second, the percentage of students whose performance indicated high risk for
dyslexia on various subtests was found to be relatively consistent with the incidence rates
obtained by Nicolson and Fawcett (1997) in the SLD group, but not in the comparison group. For
instance, while the latter authors indicated that among the non-dyslexic students, only 5%
obtained scores indicative of high risk for dyslexia on the Phonemic Segmentation subtest, and
8% scored above the cutoff for high risk on the Postural Stability subtest, we found that an
astounding 40% and 52% of comparison students, respectively, obtained scores indicative of high risk on these two subtests.
Moreover, removal of the subtests found to have limited ability to discriminate between
SLD and comparison students (i.e., Postural Stability) and those that were developed primarily to
measure relative strengths of dyslexics (i.e., Nonverbal Reasoning and Semantic Fluency) had
very little effect on the hit rate over the 11-subtest version. Given the importance of accurate
diagnosis for the provision of access to limited resources for those people who need them most, a
hit rate of at least 90% is desirable for greater utility in clinical settings. In order to achieve this,
we found that an ARQ cutoff of .55 would need to be employed. However, doing so would lead
to the identification of 33% of students in the comparison group as at risk for dyslexia.
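The cutoff analysis described above can be sketched as a simple search over candidate cutoffs: find the highest cutoff that reaches a target hit rate and report the false alarm rate it implies. The ARQ values below are hypothetical, and the 90% target mirrors the figure discussed in the text.

```python
# Sketch of a cutoff search: the highest cutoff reaching a target hit rate,
# and the false alarm rate that choice implies. Scores are hypothetical.

def choose_cutoff(sld_arqs, comparison_arqs, target_hit=0.90):
    """Scan observed SLD scores from highest to lowest; return the first
    (i.e., highest) cutoff whose hit rate reaches the target, with the
    resulting hit and false alarm rates."""
    for cutoff in sorted(set(sld_arqs), reverse=True):
        hit = sum(a >= cutoff for a in sld_arqs) / len(sld_arqs)
        if hit >= target_hit:
            fa = sum(a >= cutoff for a in comparison_arqs) / len(comparison_arqs)
            return cutoff, hit, fa
    return None

# Hypothetical ARQ scores for the two groups:
sld = [0.3, 0.5, 0.6, 0.8, 1.0, 1.2, 1.5, 1.6, 1.8, 2.0]
comparison = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.9, 1.1]

cutoff, hit, fa = choose_cutoff(sld, comparison)
print(f"cutoff {cutoff}: hit rate {hit:.0%}, false alarm rate {fa:.0%}")
```

When the two score distributions overlap heavily, as in this toy example, pushing the hit rate to 90% forces the cutoff deep into the comparison group's range, which is the same trade-off the study observed at a cutoff of .55.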
Finally, convergent and discriminant validity was not supported. The negative correlation
between DAST ARQ and Wechsler Full Scale IQ scores, even when the effects of reading skills
and reading enjoyment were partialed out, is cause for concern regarding the discriminant
validity of the DAST. This finding suggests that individuals with lower general intelligence are
more likely to be identified as dyslexic with the DAST regardless of their level of reading
proficiency. Arguably, screening instruments for learning disability should not be confounded
with IQ. However, further investigation of the correlation between ARQ and IQ scores in non-
dyslexic groups other than post-secondary students is needed before any final conclusions can be drawn.
The lack of association between ARQ scores and self-rated reading skill also raises
questions regarding the convergent validity of the DAST. The fact that substantial numbers of
students who subjectively rated themselves as having good reading skills, and who were
successful in gaining admission to post-secondary programs, were identified as highly at risk for
dyslexia implies that the relationship between ARQ scores and actual levels of impairment in reading is questionable.
Several limitations of the present study are noted. It is possible that the use of self-ratings as criterion variables made them susceptible to response bias and social desirability effects (Cohen & Swerdlik, 1999). Therefore, future studies should more carefully assess the actual reading skills of SLD and comparison participants using identical measures. As well, the DAST was developed to
assess the construct of dyslexia as defined by the cerebellar deficit hypothesis (Nicolson et al.,
2001). The DAST's performance in samples of students with SLD and in samples of students with dyslexia should therefore overlap only to the extent that the definitions of the two constructs coincide. Finally, because time restrictions did not allow for the assessment of aptitude-achievement discrepancies with standardized instruments in the comparison group, it is possible that some of these students may actually have
had undiagnosed learning disabilities. It is difficult to imagine, however, that between 17% and 27% of these post-secondary students had previously undiagnosed SLDs. In the future, the same
rigorous set of diagnostic criteria used to determine SLD status should be applied to the
comparison group.
In summary, the present study found that the DAST has only a moderate ability to distinguish students with actual, well-documented SLDs from those without any substantial history of learning problems. It also demonstrated that the DAST may have unacceptably high false alarm rates in certain
populations. Because a primary purpose of screening tools is to reduce the number of individuals
referred unnecessarily for time-intensive, costly individual assessments, false alarm rates as high
as those associated with the DAST would threaten to render it impractical for use in large-scale
screening. Reconsideration of the normative and cutoff scores and the subtest composition may
facilitate critical adjustments needed to increase the hit rate and reduce the false alarm rate of the
DAST. The use of a larger sample of dyslexic participants in any recalibration would likely
result in more stable incidence rates, hit rates, and false alarm rates across different populations.
No recalibration in order to improve the scoring criteria of the DAST was attempted in the present study. Despite its
limitations, the relatively high hit rates attainable by the DAST suggest that this instrument has
great promise as a tool that can be both efficient and cost-effective given further refinement and
validation.
References
Boder, E., & Jarrico, S. (1982). The Boder Test of Reading-Spelling Patterns. New York: Grune
& Stratton.
Cohen, R. J., & Swerdlik, M. E. (1999). Psychological testing and assessment: An introduction to tests and measurement. Mountain View, CA: Mayfield.
Guerin, D. W., Griffin, J. R., Gottfried, A. W., & Christenson, G. N. (1993). Concurrent validity
Learning Opportunities Task Force (2002a). Checklist for Identified Learning Disability Students. Unpublished manuscript.
Lerner, J. (1993). Learning disabilities: Theories, diagnosis, and teaching strategies. Boston, MA: Houghton Mifflin.
Lokerson, J. (2000). Focus on adolescent services: Learning disabilities. Retrieved May 30,
Nichols, E., Harrison, A., McCloskey, L., & Weintraub, L. (2002). Learning Opportunities Task
http://www.lotf.ca/english/about/reports/AppendixC.pdf
Nicolson, R. I., & Fawcett, A. J. (1997). Development of objective procedures for screening and assessment of dyslexic students in higher education. Journal of Research in Reading, 20, 77-83.
Nicolson, R. I., & Fawcett, A. J. (1998). The Dyslexia Adult Screening Test. London, England:
Psychological Corporation.
Nicolson, R. I., Fawcett, A. J., & Dean, P. (2001). Developmental dyslexia: The cerebellar deficit hypothesis. Trends in Neurosciences, 24, 508-511.
Payne, N. (1998). The rationale, components, and usefulness of informal assessment of adults with learning disabilities. In S. A. Vogel & S. Reder (Eds.), Learning disabilities, literacy, and adult education. Baltimore: Paul H. Brookes Publishing.
Rack, J. (1997). Issues in the assessment of developmental dyslexia in adults: Theoretical and
Ramus, F., Rosen, S., Dakin, S. C., Day, B. L., Castellote, J. M., White, S., & Frith, U. (2003). Theories of developmental dyslexia: Insights from a multiple case study of dyslexic adults. Brain, 126, 841-865.
Weisel, L. P. (1998). PowerPath to adult basic learning. In S. A. Vogel, & S. Reder (Eds.),
Learning disabilities, literacy, and adult education (pp. 133-154). Baltimore: Paul H.
Brookes Publishing.
Wolff, U. (2003). A technique for group screening of dyslexia among adults. Annals of Dyslexia,
53, 324-339.
Table 1
______________________________________________________________________________
                          SLD Group                 Comparison Group
________________________ ________________________
DAST Subtest Mean (SD) Incidence (%) Mean (SD) Incidence (%)
______________________________________________________________________________
______________________________________________________________________________
Note. Mean ARQ scores are calculated according to the manualized procedure. A score of 1.0 or
more indicates 'high risk' performance on the subtest in question. Incidence is based on the percentage of individuals in each group obtaining such scores.
Table 2
______________________________________________________________________________
                          SLD Group                 Comparison Group
________________________ ________________________
DAST Subtest Mean (SD) Incidence (%) Mean (SD) Incidence (%)
______________________________________________________________________________
______________________________________________________________________________
Note. Mean ARQ scores are calculated by dividing the total risk score by eight. A score of 1.0 or
more indicates 'high risk' performance on the subtest in question. Incidence is based on the percentage of individuals in each group obtaining such scores.
Table 3
______________________________________________________________________________
____________________ _________________________
______________________________________________________________________________
______________________________________________________________________________
Appendix
All of the following criteria must be met for a diagnosis of a learning disability to be made.
A. Evidence of average or above-average abilities in thinking and reasoning, and deficits in one or more of the specific psychological processes related to learning.
B. Academic achievement that is unexpectedly low relative to the individual's thinking and reasoning abilities.
C. Evidence that learning difficulties are logically related to observed deficits in specific
psychological processes.
D. While co-morbid disorders can exist, evidence that identified learning difficulties cannot be primarily attributed to those co-morbid disorders or other conditions.
Note. This document has been adapted, with approval, from the Learning Opportunities Task Force eligibility criteria.