
LSHSS

Tutorial

Using Language Sample Analysis to Assess Spoken Language Production in Adolescents

Jon F. Miller,a,b Karen Andriacchi,a,b and Ann Nockertsa,b

Purpose: This tutorial discusses the importance of language sample analysis and how Systematic Analysis of Language Transcripts (SALT) software can be used to simplify the process and effectively assess the spoken language production of adolescents.
Method: Over the past 30 years, thousands of language samples have been collected from typical speakers, aged 3–18 years, in conversational and narrative contexts. These samples have been formatted as reference databases included with SALT. Using the SALT software, individual samples are compared with age- and grade-matched samples selected from these databases.
Results: Two case studies illustrate that comparison with database samples of typical adolescents, matched by grade and elicitation context, highlights language measures that are higher or lower than the database mean values. Differences in values are measured in standard deviations.
Conclusion: Language sample analysis remains a powerful method of documenting language use in everyday speaking situations. A sample of talking reveals an individual’s ability to meet specific speaking demands. These demands vary across contexts, and speakers can have difficulty in any one or all of these communication tasks. Language use for spoken communication is a foundation for literacy attainment and contributes to success in navigating relationships for school, work, and community participation.

aUniversity of Wisconsin–Madison
bSALT Software LLC, Madison, WI

Correspondence to Jon F. Miller: jfmille2@wisc.edu
Editor: Marilyn Nippold
Received July 7, 2015
Revision received September 2, 2015
Accepted November 30, 2015
DOI: 10.1044/2015_LSHSS-15-0051

Disclosure: Jon F. Miller is CEO and co-owner of SALT Software, LLC, and has a financial interest. Karen Andriacchi is an employee of SALT Software, LLC. Ann Nockerts is COO and co-owner of SALT Software, LLC, and has a financial interest.

Language sample analysis (LSA), as a data-generating tool, is a powerful method of documenting language use in everyday speaking situations (Leadholm & Miller, 1992; Paul, 2012). A sample of talking reveals an individual’s ability to meet specific speaking demands. These demands vary across contexts (e.g., conversation, narration, exposition, and persuasion), and speakers may have difficulty in any one or all of these communication tasks. Language use for spoken communication is a foundation for literacy attainment (Miller et al., 2006; Roskos, Tabors, & Lenhart, 2009) and contributes to success in navigating relationships for school, work, and community integration. This tutorial discusses the importance of LSA in effectively assessing speakers’ spoken language. It describes how Systematic Analysis of Language Transcripts (SALT) software (Miller & Iglesias, 2012) can be used to simplify the process, overcoming the hurdles of transcription and analysis. The focus is on using LSA with adolescents and includes case studies to illustrate the effectiveness of using language samples with comparison data to assess spoken language. Some discussion is also included relating to LSA and written language.

Why LSA?

LSA has been considered the gold standard for assessing spoken language production by researchers and clinicians for more than 90 years (Bloom & Lahey, 1978; Brown, 1973; Donaldson, 1986; Hart & Risley, 1995; Miller, 1982; Miller, Andriacchi, & Nockerts, 2011; K. Nelson, 1973; Piaget, 1926; Slobin, 1985; Weir, 1962). It is a method for thoroughly describing language production and for monitoring change associated with linguistic development, variation in linguistic contexts, and/or change from intervention. It can be used daily, weekly, monthly, or yearly, providing access to a large range of word, morpheme, syntax, and discourse measures. It allows for documentation of specific problems in a variety of contexts, pinpointing strengths and weaknesses in spoken language. LSA evaluates natural, functional language use within real-life contexts. Language samples, elicited properly, often substantiate the reason for referral, and the variety of analyses helps to pinpoint which features of spoken language most and least impact effective communication. LSA outcomes can address the aims of assessment, answering questions such as “Does this

Language, Speech, and Hearing Services in Schools • Vol. 47 • 99–112 • April 2016 • Copyright © 2016 American Speech-Language-Hearing Association 99
student qualify for services?”; “Which language features most impact communication?”; and “How much progress has been achieved over the past weeks?” These are questions clinicians are frequently asked to address, yet many of the assessment tools available today are limited by age range, sensitivity, cultural bias, and/or frequency of delivery. In contrast, language samples may be elicited from speakers of any age, are sensitive to change, minimize cultural bias, and may be repeated frequently (Heilmann & Westerveld, 2013; Rojas & Iglesias, 2010).

The most common excuse cited for omitting language sampling from assessment is the lack of time available to complete the process in a clinic or school setting. The time constraint may have been accurate years ago, but the process is much more efficient today (Heilmann, 2010). Evaluators are no longer bound by tape recorders, paper and pencil transcription, or by hand-counting units for analysis. Computer programs have helped to standardize the LSA process and have added many analyses that advance and assist the interpretation process (Long, 2015; MacWhinney, 2000; Miller & Iglesias, 2012). There are many compelling reasons to use LSA, yet there are hurdles that must be overcome to comfortably implement this powerful assessment tool.

Transcription: Overcoming the Hurdles

Perhaps the biggest hurdle to overcome is transcribing the language sample. “Why can’t this be done automatically? If my phone or tablet computer can understand my speech, why can’t it create a text file from a student speaking?” It can, with some accuracy, for well-constructed and articulated sentences. But these speakers are typically not the students receiving services, and the resulting transcripts would not include many of the features clinicians are interested in documenting, such as faithful utterance boundaries, bound morphemes, pauses, repetitions, and revisions. There are a number of speech-to-text programs available. SALT staff tested many of them over the years and concluded that correcting the result takes as long, if not longer, than it takes an experienced transcriber to complete the sample. SALT transcription lab data, which are based on 550 audio samples elicited from a variety of speakers in conversation and narration, show it takes approximately 4–6 min per minute of audio to transcribe samples from typically developing speakers. In contrast, it takes approximately 7–8 min to transcribe each minute of an audio sample produced by speakers being evaluated for services or receiving services. This time difference is primarily the result of the intelligibility of the speakers and the amount of mazing, pausing, errors, and overlapping speech that occurs in the sample. There are many other factors that affect transcription time, including (a) familiarity with the transcription conventions, (b) playback equipment, (c) typing skills, (d) quality of the recording, (e) length of the sample, (f) context of the sample, and (g) the number of nonstandard features coded. The time spent on transcription pays off with a permanent record of language use in curriculum-relevant contexts (e.g., persuasion or exposition for adolescents). The transcript provides the data to compare with typical speakers and can be used to track progress associated with intervention.

Transcribing a language sample greatly benefits the examiner. Listening to the sample and typing it into a text file provides initial insights into language strengths and weaknesses. These observations can shape the approach to analysis, creating impressions of overall language production under specific speaking conditions. This is time well spent. Listening to a recorded language sample sometime after live elicitation offers the opportunity to attend to details possibly overlooked at the time of elicitation. Transcribing the language sample offers the experience of sharpening focus to features of the sample that will require further analysis.

SALT staff have worked with a number of school districts and clinics who have reduced the time of the LSA process for clinicians by setting up their own transcription service using speech-language pathology assistants or other transcription staff. Typically, clinicians send their deidentified audio or video samples, often via a secure website, to the transcription staff, who transcribe the sample and return the coded text to the clinicians for analysis. Similarly, external transcription services have been used to eliminate the need to train and support on-site transcribers. Whether this service is provided in-house or externally, having one or more dedicated transcribers improves efficiency. On the basis of data from SALT’s transcription services with eight school districts across the United States, clinicians significantly increased their use of LSA when transcription was taken off their hands. A dedicated transcriber can reduce costs and free up the clinicians’ time for analysis, interpretation, and therapy. For clinicians who have transcription support, the first step in analyzing the sample is to go back and listen to the recording while looking through the transcript. This step will often highlight aspects of the recorded sample that were not noticed during elicitation. It also provides opportunity to make changes to the transcript that reflect the clinician’s interpretation of the communication interaction versus the perception of the transcriber, who was not present or who has little or no familiarity with the target speaker(s). Transcribers strive for the most accurate representation of the communicative event. It is the responsibility of the clinician, however, to ensure that the transcript is as valid as possible.

Since the early 1980s, SALT developers have worked to implement consistent transcription formats (Miller, 1982). The benefits are enormous. Measurement of the same units, such as clear identification of what constitutes a word, rules for identifying bound morphemes, and rules for utterance segmentation, are prominent examples of implementing consistency that have resulted in improved, valid, and reliable evaluation outcomes (Heilmann et al., 2008; Miller, 1982; Miller et al., 2011). If samples are transcribed the same way, then analyses will be calculated on the same units: morphemes, words, and utterances. The consistent transcription format was essential for the development of databases of typical speakers for specific speaking contexts across ages and grades. Using these databases, measures from an

individual speaker could be compared with samples of other age- and grade-matched speakers to assess performance and to document intervention progress over time. Computers demand consistency, and as analysis programs were developed, consistent transcription formats became part of the process.

The transcript itself is a very effective clinical tool. Because it is readable, it provides a vehicle to highlight areas of concern with parents, teachers, and administrators. The clinician is able to point out examples of specific areas of difficulty. The parents understand: “Oh, I see what you are concerned about. He does that at home too.” A language sample captures language use within real-life contexts. The transcript format provides direct access to enhance communication about specific language issues.

Elicitation: The Clinician’s Role

When eliciting narrative samples, such as story retells, the clinician’s role is primarily that of an attentive listener. Narrative samples can be time saving, can highlight many academically based linguistic features, and are familiar for, and interesting to, the target speaker. The clinician must be an active participant, however, when eliciting conversational samples. This sample type requires the examiner to establish rapport, introduce topics, ask open-ended questions, and follow the other speaker’s lead. The clinician’s role in directing a conversation can certainly alter the results and may cause some clinicians to avoid the conversational context or to abandon LSA altogether. Although reticence is understood, if there are speakers being assessed with social language concerns, recording conversational samples can be critical to an effective assessment. For example, students diagnosed with autism spectrum disorder and the related diagnosis of social communication disorder, as they are defined in the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (American Psychiatric Association, 2013), have a basic problem with social reciprocity. These students tend to perform better with narrative tasks than in conversational contexts. Eliciting and analyzing both types of samples to document strengths and weaknesses can be essential to a thorough assessment of spoken language performance that compares performance with and without the demands of conversation. In this case, the clinician’s active role in sample elicitation is necessary to evaluate the student’s performance of the social demands in a communication setting.

Using Short Conversational Samples and Structured Narrative Samples

As previously mentioned, much has changed with the LSA process over recent decades. Of significant importance is the revelation that language samples do not have to contain at least 100 utterances or be approximately 15 min in length in order to accurately capture and assess natural language. Short conversational samples and structured narratives take less time to elicit and transcribe, and they produce consistent linguistic outcomes (Heilmann, Nockerts, & Miller, 2010; Thordardottir, 2015; Tilstra & McMaster, 2007). Heilmann et al. (2008) looked at 241 narrative retell samples produced by English language learners in grades K–2. English and Spanish samples were collected using the wordless picture book Frog, Where Are You? (Mayer, 1969). Heilmann et al. (2008) documented that these short samples (4 min on average) generated reliable estimates of spoken language skills.

The length of the sample should be linked to the measures one expects to use and the type of sample being elicited. Conversations have no particular beginning or end. Depending on the speakers, a 5-min conversational sample typically contains 40–50 utterances, which, in most cases, is sufficient to yield meaningful results (Heilmann, Nockerts, & Miller, 2010). Unlike conversations, narratives do have an inherent structure, and the entire sample should be transcribed and analyzed. Story retell narratives, for example, require a beginning, a middle, and an end, with specifics regarding characters, conflicts, and resolutions. With these narratives, analysis of the entire sample takes advantage of the speaker’s ability to organize language, to cohesively retell a complete story, and to produce as much relevant lexicon as possible. As students get older, they are able to retell more details of the stories (Heilmann et al., 2010; Nippold, 2014). Similarly, expository and persuasive samples have an inherent structure with required components (Heilmann & Malone, 2014).

Depending upon a speaker’s age/grade, a short narrative sample may indicate spoken language deficits. It is important to note, however, that whereas the length of a narrative language sample can be an important measure diagnostically, it is not always a predictor of a student’s ability to include necessary structural elements. Concise stories with the required structural features communicate elegantly, whereas longer stories emphasizing some features and omitting others may not hold listeners’ attention. The length of a narrative language sample can be an important measure, but interpretation of results requires clinical expertise to evaluate the narrative content. For any measure calculated automatically, caution needs to be exercised when interpreting the results. The results of analyses should “make sense” (i.e., conform to observational analyses and corroborate with a rereading of the transcript). A high score on number of errors, for example, could be misleading if the coder holds the speaker to a higher standard than is reasonable, such as coding a dialect speaker’s sample for Standard American English.

Analysis: Comparing With Databases of Typical Speakers

Regardless of the streamlining of the LSA process over the years, interpretation of the data still remains dependent on the clinician’s competence, skill, and expertise. Clinicians have to make sense of the data to recognize and resolve the specific language characteristics, behaviors, or communication interaction styles that emerge from the sampling

contexts. It is important to reiterate the many ways students express difficulty with language use in everyday speaking tasks. Clusters of variables from spoken language assessment that characterize patterns of disordered performance have been noted in past research (Leadholm & Miller, 1992; Miller, 1991). This work quantified what clinicians report: that students identified with language disorders are not alike. Miller’s research, summarized most recently in Miller et al. (2011), recognized the following profiles of disordered performance: general developmental delay (low mean length of utterance, number of different words, total words, and words per minute), word finding and utterance formulation problems (high number of repetitions and revisions at the word or phrase level), discourse deficits (trouble maintaining topic, failure to respond to examiner questions), slow speaking rate (low words per minute, high number of pauses), and fast speaking rate with low semantic content (high words per minute with circumlocution). These profiles appear to be distinct types of deficits but can be overlapping as well. Each is characterized by a pattern of deficits resulting from analysis of a series of more than 20 different measures, calculated automatically from a language sample. Each deficit category helps describe language production with the goals of pinpointing areas of concern and guiding the development of intervention targets. A detailed analysis of 260 conversational samples of students 3–13 years of age (Miller, 1991) found these profiles equally distributed across ages. We find it surprising that the proportion of delayed development profiles was virtually the same for students aged 3–5, 6–8, and 9–13 years.

With improved automation of LSA, the opportunity to analyze large sample sets without human error has produced a plethora of comparison data (MacWhinney, 2000; Miller & Iglesias, 2012). Examiners no longer have to rely on summary charts of language development (Miller, 1982), though they can still be useful. SALT includes databases of typical speakers in various speaking conditions: conversation, narration, exposition, and persuasion for speakers 3–18 years of age. These databases contain thousands of samples of English and Spanish monolingual and bilingual speakers. They provide norm-referenced data to highlight the strengths and weaknesses of individual speakers (Heilmann, Miller, & Nockerts, 2010). Analysis using SALT allows the user to access these databases to compare the performance of an individual student with age- or grade-matched peers under the same speaking conditions.

Table 1 lists the SALT reference databases available for English-fluent students in fourth through 12th grade. Ages for each data set provide a snapshot of grade and age relationship. Note that there are conversational samples for fifth and seventh graders, narrative samples where the student selects the story (Narrative SSS) for fifth and seventh graders, narrative story retell samples for fourth to sixth graders, expository samples (specifically how to play a game or sport) for fifth to 12th graders, and persuasive samples for ninth to 12th graders. These databases provide performance standards across advancing grades and language sample demands.

Table 1. Summary of Systematic Analysis of Language Transcripts (SALT) reference databases: Grades 4–12.

Grade in school   SALT database                     Age range (years;months)   No. of samples
4th               Narrative Story Retell            9;3–10;0                   46
5th               Conversation                      10;9–11;4                  27
                  Narrative Story Retell            10;7–11;9                  86
                  Narrative Student Selects Story   10;9–11;4                  27
                  Exposition                        10;7–11;9                  86
6th               Narrative Story Retell            11;5–12;8                  69
                  Exposition                        11;5–12;8                  69
7th               Conversation                      12;9–13;3                  27
                  Narrative Student Selects Story   12;9–13;3                  27
                  Exposition                        12;7–13;7                  36
9th               Exposition                        14;8–18;4                  76
                  Persuasion                        14;8–18;4                  35
10th              Exposition                        15;5–16;11                 24
                  Persuasion                        15;5–16;11                 24
11th              Exposition                        16;4–17;8                  24
                  Persuasion                        16;4–17;8                  24
12th              Exposition                        17;4–18;9                  29
                  Persuasion                        17;4–18;9                  30

Using LSA with reference databases can help document the outcomes of practice. The databases provide access to several powerful tools in the quest to understand an individual speaker’s language production. They provide norm-referenced data on multiple measures of language production with which to compare a student’s performance. They can substantiate the validity of the sampling protocol for revealing specific deficits and allow clinicians to describe and validate how specific linguistic contexts affect students’ spoken communication skills.

Case Studies

The following case studies illustrate how databases of typically developing speakers document individual performance. The first case study examines an expository sample from a ninth grader. The second case study examines a narrative sample from a sixth grader.

Case Study 1: Teresa

Consider the following scenario. Teresa is 15 years old and is in the ninth grade. As part of a language assessment, Teresa was asked to explain how to play her favorite game or sport, an elicitation task based on Nippold’s (2014) research. The specific protocol used, which includes a planning sheet listing eight expository components (object, preparations, start, course of play, rules, scoring, duration, and strategies), can be found on the SALT website (SALT Software, 2015a). Teresa chose soccer and was given a few minutes to fill out the planning sheet. She was directed to explain how to play soccer while having her completed planning sheet available for reference. The language sample was recorded and later transcribed following

the SALT transcription conventions (SALT Software, 2015b). Refer to Figure 1 for a copy of her transcript.

From listening to the sample, several areas of concern were identified by the clinician. The sample seemed very short. Teresa paused a lot, and her explanation of how to play soccer was not very detailed. Analysis in SALT allowed the clinician to select a group of grade-matched language samples from the SALT expository database (SALT Software, 2015a). Once the comparison set of samples was identified, SALT produced a variety of reports showing how Teresa’s language sample compares with her peers. The comparison values are presented as means, ranges, and standard deviation values for each score. Values of more than 1 SD above or below the database values are highlighted.

Figure 2 compares the length of Teresa’s sample with samples from 41 typically developing ninth graders selected from the SALT expository database. Results confirm that Teresa produced a shorter expository sample, using only 23 utterances and 197 words, compared with her peers, who used an average of 54 utterances and 709 words. This is approximately 2 SD below the database mean.

The next analysis (see Figure 3) looks at specific language measures on the basis of samples of the same length. Teresa’s sample contained a total of 179 words (number of total words [NTW]), excluding words in mazes and in unintelligible or incomplete utterances. To generate the report in Figure 3, only the first 179 NTW of the database samples are included for the comparison. Cutting the database samples to the same number of words as Teresa’s sample ensures valid comparison of measures, such as number of different words (NDW), number of pauses, and number of errors, which are directly affected by the length of the transcript. Notice that Teresa’s mean length of utterance (MLU) was 2 SD below the database mean. NDW, a measure of vocabulary diversity, was similar to that of her peers. The observation that her transcript contained a high number of pauses was confirmed, and, as a consequence, her speaking rate, measured in words per minute, was slow.

Given her low MLU value, it is suspected that Teresa may have a complex syntax deficit. To verify, her transcript was coded for the Subordination Index (SI; Loban, 1976; SALT Software, 2015c). The SI is an assessment applied to an existing transcript that produces a ratio of the total number of clauses to the total number of communication units. Using the software tools to expedite the analysis, each utterance was assigned a code indicating the number of clauses produced in the utterance. To illustrate, Figure 4 contains several utterances, coded for SI, taken from Teresa’s expository sample.

All of the SALT database samples have been coded for SI. Figure 5 compares Teresa’s SI scores with those of other ninth graders in the comparison set. Notice that none of Teresa’s utterances contained more than two clauses, resulting in a composite score that is 1.2 SD below that of her peers. Teresa’s scores confirm a suspicion regarding a deficit with complex syntax. This leaves her at a substantial disadvantage in explaining a complex game like soccer.

The Expository Scoring Scheme (ESS; Heilmann & Malone, 2014; SALT Software, 2015d) was also applied to the transcript to assess the content, organization, and structure of Teresa’s sample. The ESS provides an overall measure of a speaker’s skill in explaining how to play a favorite game or sport. The ESS is scored for the following 10 categories using a scale of 1 (minimal) to 5 (proficient): preparations, object of contest, start of play, course of play, scoring, rules, strategy, duration, terminology, and cohesion. The clinician or transcriber reads the expository sample and inserts the scores on a template in the transcript. All the expository samples in the SALT database have been scored for ESS. Figure 6 compares Teresa’s scores with those of her grade-matched peers. ESS analysis documents Teresa’s difficulties with expository organization and structure. Knowledge of complex syntax and the ability to explain how to play a game or sport could be related skills. Without the ability to produce complex syntactic structures, it is impossible to put together multiple propositions expressing complex relationships. Although low ESS scores sometimes reflect this type of deficit, further examination of the sample reveals that the ESS scores expose an underlying problem with the structure of explaining how to play a favorite game or sport.

The SALT expository database offers direct comparison data to document Teresa’s expository language, confirming initial concerns from eliciting and listening to the sample and from reading the transcript. Explaining how to do something is part of the Common Core State Standards (CCSS; CCSS Initiative, 2015) for spoken as well as written language. This example highlights Teresa’s performance on an exposition task. As a follow-up assessment, her written language skills could be examined to address her understanding of explanation as a genre and focus on the reciprocity between spoken and written language challenges posed by the curriculum.

Case Study 2: Patrick

This case study examines the story retelling skills of a sixth grader, providing an opportunity to explore the SALT narrative story retell database (SALT Software, 2015e) to document narrative skills.

Patrick is almost 12 years old and is in the sixth grade. As part of an assessment, the clinician recorded Patrick retelling the story Doctor De Soto (Steig, 1982). From listening to the language sample recording while reading the transcript, the examiner noticed that Patrick made several word errors and omissions not expected of a student his age. In addition, he frequently repeated and revised utterance segments. Figure 7 contains an excerpt from Patrick’s narrative.

Figure 8 compares the length of Patrick’s sample with samples from 69 typically developing sixth graders selected from the SALT narrative story retell database. The length of his sample is comparable with that of his peers.

The next analysis (see Figure 9) looks at specific language measures that are most valid when compared with

Figure 1. Case study 1–Expository transcript in Systematic Analysis of Language Transcripts (SALT) format.
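To make concrete the kind of counting the software automates on a transcript like the one in Figure 1, here is a minimal Python sketch that tallies utterances and total words for one speaker. The three-line transcript and the conventions it assumes (a one-letter speaker prefix, parentheses around mazes, a slash before bound-morpheme codes) are simplified illustrations, not the full SALT specification.

```python
import re

# Hypothetical mini-transcript in a simplified, assumed SALT-like format:
# one-letter speaker prefix, "( )" around mazes, "/" before bound morphemes.
transcript = [
    "E Tell me how you play soccer.",
    "C You (um you) kick/3s the ball.",
    "C And (then) you try to score a goal.",
]

def tally(lines, speaker="C"):
    """Count utterances and total words (mazes excluded) for one speaker."""
    n_utts = n_words = 0
    for line in lines:
        if not line.startswith(speaker + " "):
            continue
        n_utts += 1
        text = re.sub(r"\([^)]*\)", " ", line[2:])  # drop maze material
        text = re.sub(r"/\w+", "", text)            # drop bound-morpheme codes
        n_words += len(re.findall(r"[A-Za-z']+", text))
    return n_utts, n_words

print(tally(transcript))  # → (2, 11): two child utterances, 11 words
```

A real transcript would, of course, be run through SALT itself; the sketch only illustrates why consistent conventions make the counts mechanical.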

Figure 2. Case study 1–Taken from Systematic Analysis of Language Transcripts (SALT) Transcript Length and Intelligibility report.
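The report in Figure 2 flags values that fall more than 1 SD from the database mean. The underlying arithmetic is a plain z-score; the sketch below reproduces it using the utterance and word counts reported for Teresa. The database means come from the text, but the SD values are invented for illustration (chosen so the result lands near the reported "approximately 2 SD below").

```python
# z-score of a speaker's measure against a reference-database mean and SD.
# Means (54 utterances, 709 words) are from the article; the SDs are
# assumed values for illustration only.

def z_score(value, db_mean, db_sd):
    return (value - db_mean) / db_sd

measures = {
    "utterances": (23, 54, 16.0),      # (Teresa, database mean, assumed SD)
    "total words": (197, 709, 260.0),
}

for name, (value, mean, sd) in measures.items():
    z = z_score(value, mean, sd)
    flag = " *" if abs(z) > 1 else ""  # SALT-style highlight beyond 1 SD
    print(f"{name}: z = {z:+.2f}{flag}")
```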

Figure 3. Case study 1–Taken from Systematic Analysis of Language Transcripts (SALT) Standard Measures report.
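The Standard Measures report in Figure 3 is computed after the database samples are cut to the length of the target speaker's sample. A hedged sketch of that length-matching step for NDW, with a made-up word list standing in for a database sample:

```python
# Length-matched NDW: vocabulary diversity grows with sample length, so
# each database sample is truncated to the target speaker's NTW before
# counting different words. The word list below is invented for the demo.

def ndw(words, cutoff):
    """Number of different words within the first `cutoff` tokens."""
    return len(set(w.lower() for w in words[:cutoff]))

db_sample = "you kick the ball and you score and you win".split()
print(ndw(db_sample, 6))    # cut to a (toy) target NTW of 6 → 5
print(ndw(db_sample, 100))  # whole sample → 7
```

Teresa's comparison used a cutoff of 179 NTW; the tiny cutoff here just keeps the example readable.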

Figure 4. Case study 1–Utterances coded for Subordination Index.
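Given per-utterance clause codes like those shown in Figure 4, the SI composite is simply total clauses divided by total communication units. The sketch below uses a hypothetical list of clause counts (none above two, echoing Teresa's profile); it is not SALT's actual implementation.

```python
# SI composite = total clauses / total communication units (utterances).
# The per-utterance clause codes below are hypothetical.

def subordination_index(clause_counts):
    return sum(clause_counts) / len(clause_counts)

clauses_per_utterance = [1, 1, 2, 1, 2, 1]  # assumed codes, none above 2
print(round(subordination_index(clauses_per_utterance), 2))  # → 1.33
```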

samples of the same length. Patrick's sample contains 513 NTW. To generate this report, the database samples were cut at 513 NTW, creating a set of length-matched samples for comparison.

Figure 9 shows that Patrick's MLU was more than 1 SD below the database mean, though his NDW, a measure of vocabulary diversity, was higher than his peers'. His false starts, repetitions, and revisions (mazes) were very high, with more than 20% of his words in mazes. His omissions and errors were very frequent for his age. These analyses reveal a short average utterance length, perhaps due to a complex syntax deficit.

To explore Patrick's use of complex syntax, SI coding (SALT Software, 2015c) was applied to each utterance in his transcript. Figure 10 reveals how his scores compared with his grade-matched peers. His SI composite score was slightly lower than the database samples and may warrant further investigation of the sample to explore the possibility of a deficit in complex syntax.

To learn more about the content in the increased number of mazes, the Maze Summary report (see Figure 11) was run. This report revealed that Patrick's mazes are predominantly at the phrase level, indicating difficulties with utterance formulation. Phrase-level mazes can point to problems with attempting to incorporate more propositions into utterances without the necessary syntactic ability.

The Omissions and Error Codes report in Figure 12 lists four words and one bound morpheme that Patrick omitted as well as nine word-level errors he produced. The report also includes the utterances, allowing the examiner to review the context of each omission and coded error. It is interesting that he made each omission and error only once. Although one can gain some preliminary insight by reviewing the utterances in this report, it may be more productive to review the transcript again as a way of processing the context for the omissions and errors.

Patrick's language sample revealed a number of strengths as well as deficits. His vocabulary use was above that of his peers for this task, though his average utterance length was below. Follow-up analyses of complex syntax revealed an SI composite score at the low end of the typical group. His very frequent mazes indicate he may have been struggling with utterance formulation, perhaps related to facility with complex sentence forms.

Although this case study illustrates how Patrick performed on a text-based narrative task, it may be prudent to assess his language skills using the content of the classroom curriculum. The results can offer detailed information to share with classroom teachers and provide a platform from which to build an intervention plan.

Written Language

The correlation between spoken language and written language is well established (Gillam & Johnston, 1992; Scott & Windsor, 2000). Hall-Mills and Apel (2015) investigated written language development in elementary grades across narrative and expository genres. They argue that

Figure 5. Case study 1–Systematic Analysis of Language Transcripts (SALT) Subordination Index report.

Figure 6. Case study 1–Systematic Analysis of Language Transcripts (SALT) Expository Scoring Scheme report.

speech-language pathologists are well trained to work with teachers on language-based analyses, particularly for those students receiving speech and language services. Fundamental to their study is the need for more research on written language skill development. Written language analysis has used similar methods as spoken LSA without having to transcribe the acoustic signal into standard orthography. Both share similar issues for consistency: identifying words, morphemes, and utterance segmentation as well as developing rules for dealing with issues specific to written samples (e.g., misspellings, capitalization, and incorrect or missing punctuation). SALT has been used by written language researchers to facilitate analyses (Hall-Mills & Apel, 2015; N. W. Nelson, 2014a, 2014b). It can expedite written language analysis, but the interpretation of the results relative to typical students is left to curriculum standards.

Price and Jackson (2015) detail the analysis of written language using SALT as the methodology to work through

Figure 7. Case study 2–Excerpt of story retell transcript in Systematic Analysis of Language Transcripts (SALT) format.

Figure 8. Case study 2–Taken from Systematic Analysis of Language Transcripts (SALT) Transcript Length and Intelligibility report.

Figure 9. Case study 2–Taken from Systematic Analysis of Language Transcripts (SALT) Standard Measures report.

Figure 10. Case study 2–Systematic Analysis of Language Transcripts (SALT) Subordination Index report.

multiple levels of analysis. They provide an overview of analysis options and explain why the analysis of written language is important for developing a complete understanding of the challenges students face within the curriculum. The authors are not aware of any available written language databases but believe they would be very useful in documenting academic progress. These could be organized around the CCSS (CCSS Initiative, 2015) by grade and genre, with students being asked to produce written narratives and expository work across subjects (e.g., history, social studies, and the sciences). Written language databases would allow teachers and speech-language pathologists to match performance with expectations for students with typical literacy skills and those with language challenges.

Disciplinary Literacy

The ability to read, write, listen, speak, and think critically begins to develop early and becomes increasingly important as students pursue specialized fields of study in high school and beyond. The CCSS for literacy in science, social studies, history, and the technical subjects guide educators to help students meet the literacy challenges within each particular field of study. This national effort is referred to as disciplinary literacy (Wisconsin Department of Public Instruction, 2015). In a recent e-mail communication, M. Nippold suggested,

Once a language sample has been elicited, transcribed, and analyzed using SALT normative data, and the results indicate significant deficits in spoken language production, it is important to use this information to improve the student's language skills in [other] real world contexts, such as the classroom. For a middle school student, short samples of spoken language could be taken in the classroom to plan individualized intervention and to monitor the student's progress over time. Appropriate contexts might include recording the student while giving an oral report in

Figure 11. Case study 2–Systematic Analysis of Language Transcripts (SALT) Maze Summary report.
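A maze summary distinguishes, among other things, single-word mazes from phrase-level mazes. The following is a naive sketch of that tally, again assuming parenthesized maze material per SALT conventions; the sample utterances are invented:

```python
import re

def maze_summary(utterances):
    """Tally mazes by size: one-word mazes vs. phrase-level mazes
    (two or more words), with mazes marked in parentheses."""
    word_level = phrase_level = 0
    for utt in utterances:
        for maze in re.findall(r"\(([^)]*)\)", utt):
            size = len(maze.split())
            if size >= 2:
                phrase_level += 1
            elif size == 1:
                word_level += 1
    return {"word_level": word_level, "phrase_level": phrase_level}

sample = ["(He he wanted) the dentist wanted to help.",
          "(Um) the fox (the he) promised not to bite."]
print(maze_summary(sample))   # {'word_level': 1, 'phrase_level': 2}
```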

Figure 12. Case study 2–Systematic Analysis of Language Transcripts (SALT) Omissions and Error Codes report.
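In SALT transcripts, omitted words and bound morphemes are marked with an asterisk, and error codes are appended in square brackets (e.g., goed[EW:went]). A minimal sketch of tallying both follows; it is a simplification of the actual conventions, and the sample utterances are invented:

```python
import re

def omissions_and_errors(utterances):
    """Count omissions (tokens marked with a leading *) and
    bracketed error codes beginning with E (e.g., [EW:went])."""
    omissions = errors = 0
    for utt in utterances:
        omissions += len(re.findall(r"\*[\w/]+", utt))
        errors += len(re.findall(r"\[E[^\]]*\]", utt))
    return {"omissions": omissions, "errors": errors}

sample = ["He *is going home.", "The dentist goed[EW:went] away."]
print(omissions_and_errors(sample))   # {'omissions': 1, 'errors': 1}
```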

history class (expository discourse); retelling a folk tale in English class (narrative discourse); debating a classmate in civics class (persuasive discourse); or chatting with a peer during lunch break (conversational discourse). (personal communication as part of the review process, September 22, 2015)

Although these sampling contexts are beyond the scope of the current SALT databases, the software can be used to examine literacy across the curriculum. Samples of spoken or written language can be examined for vocabulary content by creating lists of words reflecting the curriculum content. SALT can count the frequency of use for each word and pull up the utterances containing them. These data provide access to the student's vocabulary in explaining the subject content. Advanced syntax and text-level analyses can be evaluated in a similar manner by coding features relevant to the genre. Although no database comparisons can be made, the results support evidence-based practice. Users can code any language feature of interest at the word, morpheme, utterance, or text level. Using SALT's advanced

search and reporting features, these codes can be summarized for interpretation. Utterances containing coded features can be examined, providing context for further analyses.

Current Research

SALT Software's newest project is the creation of a reference database of persuasive language samples. This database project targets high school students in ninth to 12th grades. The choice of a persuasive task emanated from the CCSS, in which students are expected to be competent in spoken and written argumentation across the curriculum. The persuasion protocol used to elicit the samples includes providing the student with a list of suggested topics, such as convincing school officials to start school later in the day, or persuading government officials to increase the minimum wage. The protocol also includes a planning sheet to provide the student with a scaffold for components that are expected in this task. Preliminary analyses of 113 students from Wisconsin, grades 9–12, compared their performance in exposition and persuasion. Students produced shorter samples with more complex syntax in persuasion than exposition (see Table 2). Additional samples are being collected in California and Australia, which will then be analyzed in detail to determine whether they can be combined with the Wisconsin samples.

Table 2. Comparison of sample length (NTW) and sentence complexity (SI) by grade: Expository and persuasive samples.

Grade in school   No. of samples   Protocol     Mean NTW   Mean SI
9th               35               Persuasion   457.03     2.04
                  35               Exposition   829.26     1.69
10th              24               Persuasion   584.58     2.02
                  24               Exposition   789.13     1.66
11th              23               Persuasion   534.09     2.19
                  23               Exposition   874.52     1.70
12th              29               Persuasion   535.66     2.15
                  29               Exposition   896.93     1.69

Summary

LSA is a versatile tool to assess language production in everyday speaking situations, providing evidence to validate clinical practice. A language sample is easily understood by parents, teachers, administrators, and other professionals, offering the opportunity to explain assessment outcomes and intervention decisions. Using databases of samples from typically developing students provides detailed maps of age- and grade-level performance expectations. These data confirm that typical students' language production is not perfect; it includes omissions, repetitions, and revisions as well as word choice and grammatical errors. Production errors of typical students, however, occur far less frequently, or in different combinations, than those of students requiring therapy. Samples of written language can be analyzed using similar methods. However, at this time SALT does not include reference databases for comparison.

Middle and high school students must meet challenges in the curriculum, some of which stem from the CCSS, which mentions the need for students to integrate skills in speaking, listening, reading, and writing in specific content areas. These curricular challenges require clinicians to find ways to elucidate students' specific difficulties. Spoken and written language samples can capture these expectations for student learning (Miller, Andriacchi, & Nockerts, 2013; SALT Software, 2015f).

The case studies demonstrate how databases of typically developing speakers can be used to confirm clinical impressions and direct follow-up with additional analyses to more precisely document individual challenges. The databases capture typical language use in the contexts that these students meet every day in the classroom. Similar databases could be constructed for spoken and written literacy contexts to document expectations for vocabulary, syntax, and text-level organizational skills across subjects and grades. Computer solutions provide the tools to explore language use in detail. The challenge is to find ways to use these powerful tools within the demands of clinical practice.

References

American Psychiatric Association. (2013). Diagnostic and statistical manual of mental disorders (5th ed.). Washington, DC: Author.
Bloom, L., & Lahey, M. (1978). Language development and language disorders. New York, NY: Wiley.
Brown, R. (1973). A first language: The early stages. Cambridge, MA: Harvard University Press.
Common Core State Standards (CCSS) Initiative. (2015). English language arts standards. Retrieved from http://www.corestandards.org/ELA-Literacy/
Donaldson, M. (1986). Children's explanations: A psycholinguistic study. Cambridge, United Kingdom: Cambridge University Press.
Gillam, R., & Johnston, J. (1992). Spoken and written language relationships in language/learning-impaired and normally achieving school-age children. Journal of Speech and Hearing Research, 35, 1303–1315.
Hall-Mills, S., & Apel, K. (2015). Linguistic feature development across grades and genre in elementary writing. Language, Speech, and Hearing Services in Schools, 46, 242–255.
Hart, B., & Risley, T. (1995). Meaningful differences in the everyday experiences of young American children. Baltimore, MD: Brookes.
Heilmann, J. (2010). Myths and realities of language sample analysis. Perspectives on Language Learning and Education, 17, 4–8.
Heilmann, J., & Malone, T. (2014). The rules of the game: Properties of a database of expository language samples. Language, Speech, and Hearing Services in Schools, 45, 277–290.
Heilmann, J., Miller, J., Iglesias, A., Fabiano-Smith, L., Nockerts, A., & Digney Andriacchi, K. (2008). Narrative transcription accuracy and reliability in two languages. Topics in Language Disorders, 28, 178–188.
Heilmann, J., Miller, J., & Nockerts, A. (2010). Using language sample databases. Language, Speech, and Hearing Services in Schools, 41, 84–95.
Heilmann, J., Nockerts, A., & Miller, J. (2010). Language sampling: Does the length of the transcript matter? Language, Speech, and Hearing Services in Schools, 41, 393–404.
Heilmann, J., & Westerveld, M. (2013). Bilingual language sample analysis: Considerations and technological advances. Journal of Clinical Practice in Speech-Language Pathology, 15, 87–93.
Leadholm, B., & Miller, J. (1992). Language sample analysis: The Wisconsin guide. Madison, WI: Wisconsin Department of Public Instruction.
Loban, W. (1976). Language development: Kindergarten through grade twelve. Urbana, IL: National Council of Teachers.
Long, S. (2015). Computerized Profiling [Computer software]. Milwaukee, WI: Marquette University.
MacWhinney, B. (2000). The CHILDES project: Tools for analyzing talk (3rd ed.). Mahwah, NJ: Erlbaum.
Mayer, M. (1969). Frog, where are you? New York, NY: Dial Press.
Miller, J. (1982). Assessing language production in children. Baltimore, MD: University Park Press.
Miller, J. (1991). Quantifying productive language disorders. In J. Miller (Ed.), Research on child language disorders: A decade of progress (pp. 211–231). Austin, TX: Pro-Ed.
Miller, J., Andriacchi, K., & Nockerts, A. (Eds.). (2011). Assessing language production using SALT software: A clinician's guide to language sample analysis. Middleton, WI: SALT Software LLC.
Miller, J., Andriacchi, K., & Nockerts, A. (2013, July 1). Analysis assistance: SLPs turn to language sample analysis software to document progress on the Common Core State Standards. Advance Healthcare Network for Speech and Hearing. Retrieved from http://speech-language-pathology-audiology.advanceweb.com/Features/Articles/Analysis-Assistance.aspx
Miller, J., Heilmann, J., Nockerts, A., Iglesias, A., Fabiano, L., & Francis, D. (2006). Oral language and reading in bilingual children. Learning Disabilities Research & Practice, 21, 30–43.
Miller, J., & Iglesias, A. (2012). Systematic Analysis of Language Transcripts (SALT) [Computer software]. Middleton, WI: SALT Software LLC.
Nelson, K. (1973). Structure and strategy in learning to talk. Monographs of the Society for Research in Child Development, 38, 1–135.
Nelson, N. W. (2014a). Classroom-based writing assessment. In A. Stone, E. Silliman, B. Ehren, & G. Wallach (Eds.), Handbook of language and literacy development and disorders (2nd ed., pp. 524–543). New York, NY: Guilford.
Nelson, N. W. (2014b). Integrating language assessment, instruction, and intervention in an inclusive writing lab approach. In B. Arfé, J. Dockrell, & V. Berninger (Eds.), Writing development in children with hearing loss, dyslexia, or oral language problems (pp. 273–300). New York, NY: Oxford University Press.
Nippold, M. (2014). Language sampling with adolescents: Implications for intervention (2nd ed.). San Diego, CA: Plural Publishing.
Paul, R. (2012). Language disorders from infancy through adolescence (4th ed.). St. Louis, MO: Elsevier Mosby.
Piaget, J. (1926). Language and thought of the child. The Hague: Mouton.
Price, J., & Jackson, S. (2015). Procedures for obtaining and analyzing writing samples of school-age children and adolescents. Language, Speech, and Hearing Services in Schools, 46, 277–293.
Rojas, R., & Iglesias, A. (2010). Using language sampling to measure language growth. Perspectives on Language Learning and Education, 17, 24–31.
Roskos, K., Tabors, P., & Lenhart, L. (2009). Oral language and early literacy in preschool: Talking, reading, and writing (2nd ed.). Newark, DE: International Reading Association.
SALT Software. (2015a). SALT expository database. Retrieved from http://saltsoftware.com/media/wysiwyg/reference_database/ExpoRDBDoc.pdf
SALT Software. (2015b). Summary of SALT transcription conventions. Retrieved from http://saltsoftware.com/media/wysiwyg/tranaids/TranConvSummary.pdf
SALT Software. (2015c). Subordination Index. Retrieved from http://saltsoftware.com/media/wysiwyg/codeaids/SI_Scoring_Guide.pdf
SALT Software. (2015d). Expository Scoring Scheme (ESS) guide. Retrieved from http://saltsoftware.com/media/wysiwyg/codeaids/ESS_Scoring_Guide.pdf
SALT Software. (2015e). SALT narrative story retell reference database. Retrieved from http://saltsoftware.com/media/wysiwyg/reference_database/NarStoryRetellRDBDoc.pdf
SALT Software. (2015f). Using SALT to access the Common Core State Standards for speaking and listening: Grades K–12. Retrieved from http://saltsoftware.com/media/wysiwyg/CCSS_grid.pdf
Scott, C., & Windsor, J. (2000). General language performance measures in spoken and written narrative and expository discourse of school-age children with language learning disabilities. Journal of Speech, Language, and Hearing Research, 43, 324–339.
Slobin, D. (Ed.). (1985). The cross-linguistic study of language acquisition. Hillsdale, NJ: Erlbaum.
Steig, W. (1982). Doctor De Soto. New York, NY: Farrar, Straus, & Giroux.
Thordardottir, E. (2015, June). French language samples: Does length matter? Poster session presented at the meeting of the Symposium on Research in Child Language Disorders, Madison, WI.
Tilstra, J., & McMaster, K. (2007). Productivity, fluency, and grammaticality measures from narratives: Potential indicators of language proficiency? Communication Disorders Quarterly, 29, 43–53.
Weir, R. (1962). Language in the crib. The Hague: Mouton.
Wisconsin Department of Public Instruction. (2015). CCSS for literacy in all subjects. Retrieved from http://dpi.wi.gov/sites/default/files/imce/standards/pdf/las-stds.pdf

