

Running head: VALIDATION OF THE DAST

A Validation of the Dyslexia Adult Screening Test (DAST) in a Post-Secondary Population

Allyson G. Harrison, Queen’s University

Eva Nichols, Learning Opportunities Task Force

Accepted for publication, Journal of Research in Reading, 2005.



About the Authors

Dr. Harrison has worked as the Learning Disabilities Specialist at Queen’s University

Student Counseling Service for the past 14 years. She is also an Adjunct Professor in the

Department of Psychology at Queen’s University, and a member of the Learning Opportunities

Task Force team and the Promoting Early Intervention Steering Committee.

Eva Nichols was the Executive Director of the Learning Disabilities Association of

Ontario from 1984 to 1994 and again in 1996/97. She has written books, manuals and articles

related to learning disabilities in both children and adults. Since 1997 she has acted as researcher

and senior consultant to the Learning Opportunities Task Force.

Address for Correspondence: Dr. Allyson Harrison, Department of Psychology, Queen’s

University, Kingston, Ontario, K7L 3N6 Canada. Electronic mail: harrisna@post.queensu.ca.



Abstract

In Ontario, Canada, there is a demand for psychometrically robust screening tools capable of

efficiently identifying students with specific learning disabilities (SLD), such as dyslexia. The

present study investigated the ability of the DAST (R. I. Nicolson & A. J. Fawcett, 1997) to

discriminate between 117 post-secondary students with carefully diagnosed SLDs and 121

comparison students. Results indicated that the DAST correctly identified only 74% of the

students with SLDs as “highly at risk” for dyslexia. Although employing the cutoff for “mildly at

risk” correctly identified 85% of the students with SLDs, this also increased the percentage of

students with no major history of learning problems identified as “at risk” for dyslexia from 16%

to 26%. These findings suggest that the DAST in its present form is limited in its ability to

screen for SLDs. Implications for future research are discussed.



A Validation of the Dyslexia Adult Screening Test (DAST) in a Post-Secondary Population

Although dyslexia has been traditionally defined as a discrepancy between reading ability

and intelligence in individuals receiving adequate reading instruction, it is now established that

dyslexia is a neurological disorder with a genetic origin that has lifelong persistence, of which

reading deficiency is merely one manifestation (Ramus et al., 2003). Other associated difficulties

in dyslexia typically include deficits in short-term auditory memory and phonological

processing, and in some cases problems with motor coordination (Rack, 1997). This broader

definition of dyslexia is consistent with what are more commonly referred to as specific learning

disabilities in North American legislation. A specific learning disability (SLD) is defined as a

disorder in some, but not all, areas of learning that affects one or more of the basic psychological

processes involved in understanding or using language (Lokerson, 2000). Difficulties may

manifest themselves as impairments in the individual’s ability to listen, think, speak, read, write,

spell, or do mathematics (Lokerson, 2000). Because these learning difficulties are viewed as

consequences of the same neurological dysfunctions in memory, phonological processing, and

motor skills implicated in dyslexia, the use of instruments originally developed to assess dyslexia

to identify individuals with SLDs merits investigation.

In Ontario, Canada, students with disabilities in reading and related SLDs represent the

single largest group of persons identified and served in the post-secondary sector (Inter-

university Disability Issue Association, personal communication, 2004). It has been estimated

that 12% of children in the Canadian school system have learning disabilities (Learning

Disabilities Association of Canada, 2001). However, in Ontario, Canada’s largest province,

about 4% of the school-aged population is formally identified as having learning disabilities.

This represents about 50% of the total population of students with identified special needs,

including those who are gifted. Despite the large demand for diagnostic assessment, 85% of

these students do not receive their first formal diagnosis until they reach college or university

(Learning Opportunities Task Force, personal communication, 2002). Because it is simply not

feasible to offer comprehensive cognitive assessments to every individual who presents with a

learning difficulty, an empirically valid instrument that can be used as a widespread screening

tool to efficiently and accurately identify SLDs would be extremely valuable, so that more

extensive and costly assessments can be preferentially provided to those individuals who need

them most.

Until recently, there has been a dearth of psychometrically sound, commercially available

screening instruments for identifying potential dyslexia or SLDs in post-secondary students. One

problem is that many available instruments have not been validated for their ability to

discriminate between students with dyslexia and students with confounding conditions, such as

inadequate education, medical problems, drug or alcohol abuse, and post-traumatic stress

disorder. A second problem is that a number of these tests, such as those developed by Boder and

Jarrico (1982), Guerin, Griffin, Gottfried, and Christenson (1993), Payne (1998), and Weisel

(1998), either do not have a strong theoretical foundation, or are based on narrow definitions of

dyslexia that do not simultaneously consider phonological, memory, and perceptual-motor

processing difficulties. Thus, these instruments do not adequately sample the domains of

behavior relevant to dyslexia or SLD (Wolff, 2003).

In 1997 Nicolson and Fawcett developed the Dyslexia Adult Screening Test (DAST)

based on the cerebellar hypothesis of dyslexia, which is consistent with a broad definition of

dyslexia and with SLD (Nicolson, Fawcett, & Dean, 2001). Of the 11 subtests of the DAST, nine

subtests were designed to tap areas of impairment, including a test of cerebellar functioning, and

two subtests were designed to assess areas of relative strength among individuals with dyslexia,

namely, Nonverbal Reasoning and Semantic Fluency. In their initial study, Nicolson and Fawcett

reported that the DAST had a hit rate of 94% using a sample of 15 dyslexic students and a false

alarm rate of 0% using a control group of 150 students. An instrument with this magnitude of

discriminative ability would be highly desirable as a screen for the presence of SLDs.

However, several limitations of previous research on the DAST remain to be addressed.

First, future studies should clearly describe the samples utilized. For instance, the description of the normative sample for the DAST, which was based on 1,100 adults from nationwide testing in Sheffield, Kent, Leeds, London, County Durham, and Bristol (Nicolson & Fawcett, 1998), does not specify how many of these participants were dyslexic or how dyslexia was diagnosed. In addition, the

manual offers the option of using norms derived from a student population, although the exact

composition of this sample is not indicated. The reader is left to assume that criteria similar to those reported in Nicolson and Fawcett (1997) were utilized, such as poor performance on the

Arithmetic, Coding, Information, and Digit Span (ACID) subtests from the Wechsler Adult

Intelligence Scale, low scores on a spelling test and a nonsense passage reading test, and a prior

history of suspected or confirmed dyslexia. Second, larger sample sizes are needed to validate

the hit rates and false alarm rates obtained by Nicolson and Fawcett (1997). The inclusion of

only 15 individuals with dyslexia in the latter study raises the question of how effective the DAST

would be at accurately discriminating between larger groups of individuals with carefully

diagnosed SLD and individuals with no reported learning impairment.

In Ontario, Canada, the government established the Learning Opportunities Task Force (LOTF) to research transition and post-secondary educational issues for students with SLD. The LOTF funded eight pilot projects at 10 institutions involving over 1,200 students over a five-year period.

Students who wished to participate in these programs were required to meet very strict diagnostic

criteria (see Appendix). This project provided an ideal opportunity to validate the ability of the

DAST to correctly identify large numbers of students with well-documented cases of SLD, as

well as its ability to distinguish this group from a comparison group recruited from a general

population of post-secondary students. Thus, the purpose of the present study was to validate the

accuracy and utility of the DAST using 117 students with dyslexia and other specific learning

disabilities enrolled in the LOTF funded and supported pilot programs, and 121 students

recruited from the general post-secondary population, and to simultaneously address limitations

in previous research. It was hypothesized that a high hit rate combined with a low false alarm

rate would be obtained, that the pattern of incidence rates for each of the subtests would be

similar to those obtained by Nicolson and Fawcett (1997), and that the DAST would demonstrate

convergent and discriminant validity, respectively, through strong correlations with self-rated

reading skill and reading enjoyment and near-zero correlations with standardized, individually

administered measures of overall intelligence. Thus, it was anticipated that the DAST would be

able to fill an important gap in screening assessments at the post-secondary level. The availability of a relatively fast and user-friendly instrument could assist referral sources in better managing limited assessment resources and reduce the number of non-disabled students referred for more time-intensive, costly assessments.

Method

Participants

One hundred seventeen students (58 men and 59 women) with diagnosed SLDs were

recruited from six post-secondary institutions where LOTF pilot projects were underway.

Students enrolled in programs for individuals with SLDs were invited to participate. These

participants ranged in age from 18 to 41 years (M = 22.75, SD = 4.77). They completed a

demographic questionnaire and the DAST during the data collection period, and gave researchers

permission to access their intelligence quotient (IQ) and aptitude scores, which had been

previously collected to determine SLD status according to LOTF eligibility criteria (see

Appendix).

In addition, 121 students from the general post-secondary population (30 men and 91

women) were recruited to form a comparison group from the same six institutions as the SLD

students. Comparison participants were recruited through: (1) first-year psychology classes, whose students received credit for their involvement; (2) work-study employment in the Disability Services offices; and (3) poster advertisements placed on the campuses where LOTF pilot projects were underway. The comparison students ranged in age from 17 to 54 years (M = 23.80, SD = 7.50).

There were substantially more women than men in the comparison group. However, there was no statistically significant sex difference in the DAST index indicating risk for dyslexia, t(119) = .13, p > .05. Although time did not permit the administration of individual intelligence

tests to participants in the comparison group, these students had all met the acceptance criteria

for enrollment at the same schools as did the SLD students, and none were at risk for being asked

to withdraw due to poor academic performance. In addition to completing a demographic

questionnaire and the DAST, the students in the comparison group were surveyed regarding their academic history and subjective areas of academic difficulty, with specific questions about their reading skills and subjective estimates of their reading ability. All participants were treated in accordance with the ethical guidelines established by the American Psychological Association.



Materials

Checklist for Identified Learning Disability Students. A checklist was developed by the

LOTF team (2002a) to be completed by learning strategists at participating institutions to collect participant information about services obtained, specify the type of learning disability observed, and note any other disabilities or conditions that might be affecting learning.

Information regarding supports included indications (i.e., yes or no) of whether the student met

LOTF eligibility criteria, participated in the LOTF pilot project, was registered with the on-

campus Special Needs Office (SNO) or Office of Students with Disability (OSD), or received

support services for a specific learning disability. The diagnosis obtained during the student’s

last assessment, and intelligence scores from previous assessments were also recorded, if

available. Types of specific learning disability included: visual, auditory, motor, organizational,

conceptual, non-verbal, math, or other. All types of specific learning disability experienced by

each participant were recorded.

Demographic Questionnaire. A questionnaire was created by the LOTF team (2002b) to

collect information relevant to learning from all participants. Information was obtained on age, sex, first language, and any services received from the SNO or OSD during college or university, or during elementary or high school. Students were also asked to indicate any

academic areas of difficulty. Finally, participants were asked to rate their self-appraised reading

skills on a scale ranging from 1 (excellent) to 5 (very poor), and their enjoyment of reading on a

scale ranging from 1 (very much) to 5 (not at all).

Dyslexia Adult Screening Test (DAST). The DAST was developed to assess dyslexia in

adults (Nicolson & Fawcett, 1997). The DAST has 11 subtests, 9 of which tap skills in domains

relevant to dyslexia, such as phonological awareness, auditory memory, and motor coordination.

These subtests include Rapid Naming, One Minute Reading, Postural Stability, Phonemic

Segmentation, Two Minute Spelling, Backwards Digit Span, Nonsense Passage Reading, One

Minute Writing, and Verbal Fluency. In addition, two subtests, Nonverbal Reasoning and

Semantic Fluency, are designed to tap areas of relative strength among individuals with dyslexia.

The DAST is administered individually, and requires approximately 30 minutes to complete.

Reliability and validity have been demonstrated in dyslexic adult and student samples (Nicolson & Fawcett, 1998).

The scoring of the DAST utilizes an At Risk Quotient (ARQ). For each of the 11 subtests,

participants’ raw scores are first converted to visual codes using scoring keys based on age. For

instance, a triple minus (---) indicates that performance falls at least 3 standard deviations below

the mean for that particular subtest, which is equivalent to a z-score of -3 or lower. Once visual

codes are recorded for each subtest, they are translated into indices, such that (---) = 3, (--) = 2,

(-) = 1, (0) = 0, and (+) = 0. The indices are then summed and divided by the number of subtests

to obtain the ARQ. An ARQ of 1 or higher indicates that an individual is highly at risk for dyslexia, and an ARQ of .7 or higher indicates that an individual is at least mildly at risk. Although the DAST manual

states that the ARQ is obtained by dividing by 11, Nicolson and Fawcett (1997, p. 79) divided by

9, even though only eight subtests are presented in their original validation study. No explanation is

provided for this departure from the manualized scoring procedure. However, the present study

conforms to the method described in the manual, and divides the subtest total score by the total

number of subtests involved in the analyses.
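To make the scoring procedure concrete, the following minimal sketch (in Python, with illustrative names; it is not part of the DAST materials) implements the index translation and averaging described above, assuming the visual codes have already been obtained from the age-based scoring keys.

```python
# Minimal sketch of the manualized ARQ computation (illustrative names;
# assumes visual codes were already derived from the age-based scoring keys).
CODE_TO_INDEX = {"---": 3, "--": 2, "-": 1, "0": 0, "+": 0}

def at_risk_quotient(visual_codes):
    """Translate per-subtest visual codes to indices and average them."""
    indices = [CODE_TO_INDEX[code] for code in visual_codes]
    return sum(indices) / len(indices)

def risk_category(arq):
    """Apply the manual's cutoffs: 1.0 (highly at risk) and .7 (mildly at risk)."""
    if arq >= 1.0:
        return "highly at risk"
    if arq >= 0.7:
        return "mildly at risk"
    return "not at risk"

# Example: visual codes for all 11 subtests.
codes = ["--", "-", "0", "---", "-", "0", "+", "-", "0", "--", "-"]
arq = at_risk_quotient(codes)           # (2+1+0+3+1+0+0+1+0+2+1) / 11 = 1.0
print(f"ARQ = {arq:.2f}: {risk_category(arq)}")  # ARQ = 1.00: highly at risk
```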

Procedure

Prior to data collection, assessors from each of the institutions offering LOTF pilot

programs participated in a full-day training workshop to learn and practice the DAST. They were

provided with a DAST kit and a summary of the guidelines for test administration, as well as the

opportunity to clarify any questions. Inter-rater agreement for the cerebellar (Postural Stability)

subtest was discussed, and objective criteria for rating each level of response were outlined for

the assessors.

Data collection took place during the 2002-2003 academic year. Participants in the SLD

group were recruited after their initial diagnostic assessments were completed, to ensure that

their DAST results did not bias the final diagnosis. For the participants in the comparison group,

the most important inclusion criterion was the absence of any previous or current diagnosis of

SLD.

After recruitment, participants in both the SLD and comparison groups were given

consent forms and the Demographic Questionnaire. Then participants were administered the

DAST by trained assessors. After completing the tests, each participant was given a debriefing

sheet and provided with an opportunity to raise any questions or concerns. Finally, participants

were presented with information sheets containing contact numbers to which future questions

could be addressed.

Results

Using an ARQ cutoff of 1, the DAST correctly identified 74% of the students who met

the stringent diagnostic criteria for a specific learning disability (SLD) as highly at risk for

dyslexia. Thus, 26% of students with well-documented cases of SLD were misidentified as not

being at risk. In the comparison group, the DAST correctly identified 84% of students as not at

risk, but also misidentified 16% of students as being highly at risk for dyslexia, even though the

overwhelming majority of these students reported no history of any considerable learning

problems. Furthermore, it was not simply the case that the DAST misidentified borderline cases

whose scores were close to 1.00. The 17 students with SLD who were misidentified as not at risk

for dyslexia had ARQ scores ranging from 0 (no risk) to 0.64. As well, the 19 students in the

comparison group who were misidentified as being at risk had ARQ scores ranging from 1.00 to

2.73. When an ARQ cutoff of .7 was used as an indicator of mild risk for dyslexia, the hit rate

increased to 85%, but the false alarm rate rose correspondingly to 26%.
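As a worked check of this arithmetic, the sketch below (Python; the counts are reconstructed from the reported group sizes and percentages, so they are approximate) shows how the hit and false alarm rates follow from the classification counts.

```python
# Worked check of the reported classification rates (counts reconstructed
# from the group sizes and percentages in the text; approximate).
def classification_rates(hits, n_sld, false_alarms, n_comparison):
    """Return (hit rate, false alarm rate) as percentages."""
    return 100 * hits / n_sld, 100 * false_alarms / n_comparison

# ARQ cutoff of 1: roughly 87 of 117 SLD students flagged as highly at risk,
# and 19 of 121 comparison students flagged despite no reported SLD.
hit_rate, fa_rate = classification_rates(87, 117, 19, 121)
print(f"hit rate = {hit_rate:.0f}%, false alarm rate = {fa_rate:.0f}%")
# -> hit rate = 74%, false alarm rate = 16%
```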

The extent to which each of the 11 DAST subtests could discriminate between students in

the SLD and comparison groups was also investigated. Table 1 displays the percentage of

individuals in each group whose performance on the subtest in question indicated a high risk for

dyslexia. The subtests with the greatest ability to discriminate between the two diagnostic groups

were Nonsense Passage Reading, Two Minute Spelling, Phonemic Segmentation, One Minute

Reading, and One Minute Writing.

____________________

Insert Table 1 about here.

____________________

Interestingly, the cerebellar subtest, Postural Stability, and the two subtests hypothesized

by Nicolson and Fawcett (1997) to measure relative strengths in dyslexic adults, Nonverbal

Reasoning and Semantic Fluency, could not effectively distinguish between SLD and

comparison students. Thus, we investigated whether the ability of the DAST to discriminate

between SLD and comparison students would improve if these three subtests were eliminated

from the battery. In addition, cases of nonverbal learning disability were excluded (6 SLD

students and 1 comparison student), because this particular type of disability may have little

overlap with dyslexia. Mean ARQ scores (calculated by dividing total risk scores by eight),

standard deviations, and percentages of individuals with performance indicative of high risk for

dyslexia for both groups are shown in Table 2. The results were almost identical to those found

for the full DAST. Using an ARQ cutoff of 1, the 8-subtest version of the DAST correctly

identified 77% of the SLD students as highly at risk for dyslexia. However, 17% of the

comparison students were misidentified as being highly at risk. When an ARQ cutoff of .7 was

used as an indicator of mild risk for dyslexia, the hit rate of the 8-subtest version increased to

88%, but the false alarm rate rose correspondingly to 27%.
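Continuing the earlier scoring sketch, the reduced battery simply drops the three subtests and divides the summed risk indices by eight (illustrative names and values; this is not the authors’ analysis code).

```python
# Sketch of the reduced 8-subtest ARQ: drop three subtests, divide by eight
# (illustrative risk indices; not the study's actual data).
DROPPED = {"Postural Stability", "Nonverbal Reasoning", "Semantic Fluency"}

def arq_reduced(indices_by_subtest):
    """Average the risk indices over the retained eight subtests."""
    kept = [v for k, v in indices_by_subtest.items() if k not in DROPPED]
    return sum(kept) / len(kept)

indices = {"Rapid Naming": 1, "One Minute Reading": 2, "Postural Stability": 1,
           "Phonemic Segmentation": 3, "Two Minute Spelling": 2,
           "Backwards Digit Span": 1, "Nonsense Passage Reading": 2,
           "One Minute Writing": 2, "Verbal Fluency": 0,
           "Nonverbal Reasoning": 1, "Semantic Fluency": 0}
print(arq_reduced(indices))  # 13 / 8 = 1.625 -> highly at risk
```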

____________________

Insert Table 2 about here.

_____________________

The relationships of the full 11-subtest DAST with other constructs, such as overall intelligence and reading, were also explored. In the SLD group, a moderate negative

association between the DAST ARQ scores and Wechsler Full Scale IQ was found, controlling

for both reading skill and reading enjoyment, partial r = -.41, p < .001, which suggests that

individuals with lower IQ scores are more likely to be identified as dyslexic regardless of their actual

level of reading proficiency. In the narrowest sense of the term, dyslexia has been defined as a

disability involving core deficits in reading skills (Lerner, 1993). In the comparison group, 9.1%

of students self-reported the presence of past or current reading problems (see Table 3). We

examined the possibility that this subset of individuals reporting reading difficulties was

responsible for the inflation of ARQ scores in the comparison group. However, findings revealed

no significant association between ARQ scores and reported reading problems, r = .08, p > .05,

with only two of the students who reported reading problems scoring above the cutoff for highly at risk.

Furthermore, ARQ scores correlated with neither self-rated reading skill (r = -.07, p > .05) nor

reading enjoyment (r = -.11, p > .05) in the comparison group. Indeed, 11% of students who

reported being good readers and 12% of students who reported enjoying reading in the

comparison group obtained ARQ scores above the cutoff indicative of a high risk for dyslexia.
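The partial correlation reported for the SLD group can be computed by correlating residuals, as in the following sketch (Python with NumPy; the variable names are illustrative, and this is not the authors’ analysis code).

```python
# Sketch of a partial correlation: correlate the residuals of x and y after
# regressing out the control variables (illustrative; not the study's code).
import numpy as np

def partial_corr(x, y, controls):
    x, y = np.asarray(x, float), np.asarray(y, float)
    # Design matrix: intercept plus the control variables.
    Z = np.column_stack([np.ones(len(x))] + [np.asarray(c, float) for c in controls])
    def resid(v):
        beta, *_ = np.linalg.lstsq(Z, v, rcond=None)
        return v - Z @ beta
    return np.corrcoef(resid(x), resid(y))[0, 1]

# e.g., partial_corr(arq_scores, full_scale_iq, [reading_skill, reading_enjoyment])
# The study reports partial r = -.41 between ARQ and Full Scale IQ in the SLD group.
```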

_____________________

Insert Table 3 about here.

_____________________

Discussion

The hypotheses of the present study were partially supported. First, the DAST was found

to have a relatively high hit rate among students with specific learning disabilities (SLD).

However, the false alarm rate was substantially higher than the 0% rate reported by Nicolson and

Fawcett (1997). Second, the percentage of students whose performance indicated high risk for

dyslexia on various subtests was found to be relatively consistent with the incidence rates

obtained by Nicolson and Fawcett (1997) in the SLD group, but not in the comparison group. For

instance, while the latter authors indicated that among the non-dyslexic students, only 5%

obtained scores indicative of high risk for dyslexia on the Phonemic Segmentation subtest, and

8% scored above the cutoff for high risk on the Postural Stability subtest, we found that an

astounding 40% and 52% of comparison students, respectively, obtained ARQ scores indicative

of high risk on these subtests.

Moreover, removal of the subtests found to have limited ability to discriminate between

SLD and comparison students (i.e., Postural Stability) and those that were developed primarily to

measure relative strengths of dyslexics (i.e., Nonverbal Reasoning and Semantic Fluency) had

very little effect on the hit rate relative to the 11-subtest version. Given the importance of accurate

diagnosis for the provision of access to limited resources for those people who need them most, a

hit rate of at least 90% is desirable for greater utility in clinical settings. In order to achieve this,

we found that an ARQ cutoff of .55 would need to be employed. However, doing so would lead

to the identification of 33% of students in the comparison group as at risk for dyslexia.
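The trade-off described here amounts to sweeping candidate cutoffs across the two groups’ ARQ distributions, as in this minimal sketch (Python with NumPy; array names are illustrative).

```python
# Sketch of the cutoff trade-off: hit and false alarm rates at candidate
# ARQ cutoffs (array names illustrative; not the study's analysis code).
import numpy as np

def sweep_cutoffs(sld_arq, comparison_arq, cutoffs):
    sld = np.asarray(sld_arq, float)
    comp = np.asarray(comparison_arq, float)
    for c in cutoffs:
        hit = (sld >= c).mean()    # proportion of SLD students flagged
        fa = (comp >= c).mean()    # proportion of comparison students flagged
        print(f"cutoff {c:.2f}: hit rate {hit:.0%}, false alarm rate {fa:.0%}")

# e.g., sweep_cutoffs(sld_arq, comparison_arq, [1.00, 0.70, 0.55])
# The study reports roughly 74%/16% at 1.00, 85%/26% at .70, and 90%/33% at .55.
```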

Finally, convergent and discriminant validity were not supported. The negative correlation

between DAST ARQ and Wechsler Full Scale IQ scores, even when the effects of reading skills

and reading enjoyment were partialed out, is cause for concern regarding the discriminant

validity of the DAST. This finding suggests that individuals with lower general intelligence are

more likely to be identified as dyslexic with the DAST regardless of their level of reading

proficiency. Arguably, screening instruments for learning disability should not be confounded

with IQ. However, further investigation of the correlation between ARQ and IQ scores in non-

dyslexic groups other than post-secondary students is needed before any final conclusions can be

drawn about the exact meaning of this relationship.

The absence of an association between ARQ scores and self-rated reading skill also raises questions regarding the convergent validity of the DAST. The fact that substantial numbers of

students who subjectively rated themselves as having good reading skills, and who were

successful in gaining admission to post-secondary programs, were identified as highly at risk for

dyslexia implies that the relationship between ARQ scores and actual levels of impairment in

reading may not be exceptionally strong.

Several limitations of the present study are noted. It is possible that the use of self-ratings as criterion variables was susceptible to response bias and social desirability effects (Cohen & Swerdlik, 1999). Thus, future studies should more carefully assess the actual reading skills of

SLD and comparison participants using identical measures. As well, the DAST was developed to

assess the construct of dyslexia as defined by the cerebellar deficit hypothesis (Nicolson et al.,

2001). Its performance in samples of students with SLD and in samples of students with dyslexia should therefore overlap only to the extent that the definitions of the two constructs are mutually co-extensive. Finally, because time restrictions did not allow for the assessment of discrepancies

between IQ scores and academic achievement using individually administered, standardized

instruments in the comparison group, it is possible that some of these students may actually have

had undiagnosed learning disabilities. It is difficult to imagine, however, that between 17% and 27% of

these post-secondary students had previously undiagnosed SLD. In future studies, however, the same

rigorous set of diagnostic criteria used to determine SLD status should be applied to the

comparison group.

In conclusion, the DAST as presently constructed appears to be limited in its ability to

distinguish students with actual, well-documented SLDs from those without any substantial

history of learning problems. In particular, the use of a heterogeneous comparison sample

demonstrated that the DAST may have unacceptably high false alarm rates in certain

populations. Because a primary purpose of screening tools is to reduce the number of individuals

referred unnecessarily for time-intensive, costly individual assessments, false alarm rates as high

as those associated with the DAST would threaten to render it impractical for use in large-scale

screening. Reconsideration of the normative and cutoff scores and the subtest composition may

facilitate critical adjustments needed to increase the hit rate and reduce the false alarm rate of the

DAST. The use of a larger sample of dyslexic participants in any recalibration would likely

result in more stable incidence rates, hit rates, and false alarm rates across different populations.

A preliminary exploration of the implications of recalculating normative scores and cutoffs in

order to improve the scoring criteria of the DAST was attempted in the present study. Despite its

limitations, the relatively high hit rates attainable by the DAST suggest that this instrument has

great promise as a tool that can be both efficient and cost-effective given further refinement and

validation.

References

Boder, E., & Jarrico, S. (1982). The Boder Test of Reading-Spelling Patterns. New York: Grune

& Stratton.

Cohen, R. J., & Swerdlik, M. E. (1999). Psychological testing and assessment: An introduction

to tests and measurement. Mountain View, CA: Mayfield Publishing.

Guerin, D. W., Griffin, J. R., Gottfried, A. W., & Christenson, G. N. (1993). Concurrent validity

and screening efficiency of the Dyslexia Screener. Psychological Assessment, 5, 369-373.

Learning Disabilities Association of Canada. (2001). Fact sheet: Statistics on learning

disabilities. Retrieved May 30, 2004 from http://www.ldac-taac.ca

Learning Opportunities Task Force. (2002a). Checklist for Identified Learning Disability

Students. Unpublished manuscript.

Learning Opportunities Task Force. (2002b). Demographic Questionnaire. Unpublished

manuscript.

Lerner, J. (1993). Learning disabilities: Theories, diagnosis, and teaching strategies. Boston,

MA: Houghton Mifflin Co.

Lokerson, J. (2000). Focus on adolescent services: Learning disabilities. Retrieved May 30,

2004 from http://www.focusas.com/LearningDisabilities.html

Nichols, E., Harrison, A., McCloskey, L., & Weintraub, L. (2002). Learning Opportunities Task

Force 1997 to 2002: Final report. Retrieved July 1, 2004, from

http://www.lotf.ca/english/about/reports/AppendixC.pdf

Nicolson, R. I., & Fawcett, A. J. (1997). Development of objective procedures for screening and

assessment of dyslexic students in higher education. Journal of Research in Reading, 20,

77-83.

Nicolson, R. I., & Fawcett, A. J. (1998). The Dyslexia Adult Screening Test. London, England:

Psychological Corporation.

Nicolson, R. I., Fawcett, A. J., & Dean, P. (2001). Developmental dyslexia: The cerebellar

deficit hypothesis. Trends in Neurosciences, 24, 508-511.

Payne, N. (1998). The rationale, components, and usefulness of informal assessment of adults

with learning disabilities. In S. A. Vogel, & S. Reder (Eds.), Learning disabilities, literacy,

and adult education (pp. 107-131). Baltimore: Paul H. Brookes Publishing.

Rack, J. (1997). Issues in the assessment of developmental dyslexia in adults: Theoretical and

applied perspectives. Journal of Research in Reading, 20, 66-76.

Ramus, F., Rosen, S., Dakin, S. C., Day, B. L., Castellote, J. M., White, S., & Frith, U. (2003).

Theories of developmental dyslexia: Insights from a multiple case study of dyslexic adults.

Brain, 126, 841-865.

Weisel, L. P. (1998). PowerPath to adult basic learning. In S. A. Vogel, & S. Reder (Eds.),

Learning disabilities, literacy, and adult education (pp. 133-154). Baltimore: Paul H.

Brookes Publishing.

Wolff, U. (2003). A technique for group screening of dyslexia among adults. Annals of Dyslexia,

53, 324-339.

Table 1

Validation of Performance on the 11 DAST Subtests in a Post-Secondary Student Sample

______________________________________________________________________________

SLD Students (n = 117) Comparison Students (n = 121)

________________________ ________________________

DAST Subtest Mean (SD) Incidence (%) Mean (SD) Incidence (%)

______________________________________________________________________________

Rapid Naming 0.79 (0.99) 47 0.31 (0.69) 21

One Minute Reading 1.81 (1.17) 79 0.44 (0.72) 32

Postural Stability 1.08 (0.93) 66 0.87 (0.90) 52

Phonemic Segmentation 2.24 (1.10) 85 0.87 (1.20) 40

Two Minute Spelling 1.78 (1.08) 85 0.43 (0.69) 31

Backwards Digit Span 1.33 (1.20) 64 0.50 (0.91) 29

Nonsense Passage Reading 2.25 (1.00) 91 0.52 (0.89) 31

Nonverbal Reasoning 1.22 (1.18) 60 1.06 (1.12) 56

One Minute Writing 1.62 (1.12) 77 0.60 (0.97) 33

Verbal Fluency 0.51 (0.93) 27 0.26 (0.71) 14

Semantic Fluency 0.24 (0.67) 15 0.12 (0.49) 7

Overall Mean 1.35 (0.58) 74 0.54 (0.45) 16

______________________________________________________________________________

Note. Mean ARQ scores are calculated according to the manualized procedure. A score of 1.0 or

more indicates ‘high risk’ performance on the subtest in question. Incidence is based on the

percentage of students showing an ARQ score of 1.0 or more on each subtest.

Table 2

Validation of Performance on Eight DAST Subtests in a Post-Secondary Student Sample

______________________________________________________________________________

SLD Students (n = 111) Comparison Students (n = 120)

________________________ ________________________

DAST Subtest Mean (SD) Incidence (%) Mean (SD) Incidence (%)

______________________________________________________________________________

Rapid Naming 0.80 (1.00) 48 0.31 (0.70) 22

One Minute Reading 1.85 (1.17) 79 0.43 (0.72) 32

Phonemic Segmentation 2.24 (1.11) 86 0.87 (1.21) 40

Two Minute Spelling 1.80 (1.09) 85 0.43 (0.69) 31

Backwards Digit Span 1.31 (1.19) 64 0.48 (0.88) 28

Nonsense Passage Reading 2.28 (0.99) 91 0.53 (0.89) 32

One Minute Writing 1.62 (1.11) 77 0.61 (0.97) 33

Verbal Fluency 0.50 (0.91) 27 0.26 (0.72) 14

Overall Mean 1.55 (0.70) 77 0.49 (0.52) 17

______________________________________________________________________________

Note. Mean ARQ scores are calculated by dividing the total risk score by eight. A score of 1.0 or

more indicates ‘high risk’ performance on the subtest in question. Incidence is based on the

percentage of students showing an ARQ score of 1.0 or more on each subtest.

Table 3

Self-reported Past or Current Academic Difficulties by Group

______________________________________________________________________________

Incidence Rate (%)

____________________ _________________________

Academic Problem Area SLD Students (n = 117) Comparison Students (n = 121)

______________________________________________________________________________

None 0.8 16.5

Reading 75.4 9.1

Writing 78.8 7.4

Spelling 83.9 18.2

Mathematics 59.3 44.6

Listening 34.7 8.3

Speaking 19.5 11.6

Memory 58.5 20.7

Analytical Skills 13.6 3.3

Reasoning 9.3 1.7

Organization Skills 35.6 11.6

Abstract Concepts 28.8 13.2

Study Skills 43.2 26.4

Other 4.2 3.3

Don’t Know 0.8 1.7

______________________________________________________________________________

Appendix

Diagnostic Criteria for Specific Learning Disabilities



All of the following criteria must be met for a diagnosis of a learning disability to be made.

A. A non-random, clinically significant discrepancy between abilities essential for thinking

and reasoning, and one or more of the specific psychological processes related to learning

(phonological processing; memory and attention; processing speed; language processing;

perceptual-motor processing; visual-spatial processing; executive functions).

B. Academic achievement that is unexpectedly low relative to the individual’s thinking and

reasoning abilities OR academic achievement that is within expected levels, but is

sustainable only by extremely high levels of effort and support.

C. Evidence that learning difficulties are logically related to observed deficits in specific

psychological processes.

D. While co-morbid disorders can exist, evidence that identified learning difficulties cannot

primarily be accounted for by:

1. Other conditions, such as global developmental delay, primary sensory deficits

(e.g., visual or hearing impairments), or other physical difficulties;

2. Environmental factors, such as deprivation, abuse, inadequate or inappropriate

instruction, socio-economic status, or lack of motivation;

3. Cultural or linguistic diversity;

4. Any other co-existing condition such as Developmental Coordination Disorder,

Attention Deficit Hyperactivity Disorder, primary sensory deficits (e.g., visual or hearing impairments), other physical difficulties, or anxiety.



Note. This document has been adapted, with approval, from the Learning Opportunities Task

Force: Recommended Practices for Assessment, Diagnosis, and Documentation of Learning

Disabilities prepared by the Learning Disabilities Association of Ontario (LDAO).
