Tutorial
Purpose: This tutorial discusses the importance of language sample analysis and how Systematic Analysis of Language Transcripts (SALT) software can be used to simplify the process and effectively assess the spoken language production of adolescents.

Method: Over the past 30 years, thousands of language samples have been collected from typical speakers, aged 3–18 years, in conversational and narrative contexts. These samples have been formatted as reference databases included with SALT. Using the SALT software, individual samples are compared with age- and grade-matched samples selected from these databases.

Results: Two case studies illustrate that comparison with database samples of typical adolescents, matched by grade and elicitation context, highlights language measures that are higher or lower than the database mean values. Differences in values are measured in standard deviations.

Conclusion: Language sample analysis remains a powerful method of documenting language use in everyday speaking situations. A sample of talking reveals an individual's ability to meet specific speaking demands. These demands vary across contexts, and speakers can have difficulty in any one or all of these communication tasks. Language use for spoken communication is a foundation for literacy attainment and contributes to success in navigating relationships for school, work, and community participation.
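Throughout the tutorial, an individual speaker's measures are interpreted as distances from the database mean, expressed in standard deviation (SD) units. A minimal sketch of that comparison follows; the measure name and all numbers are invented for illustration, not SALT output.

```python
# Sketch of the database comparison described above: express a speaker's
# score as a distance from the reference-database mean in SD units.

def sd_distance(score: float, db_mean: float, db_sd: float) -> float:
    """Return how many SDs the speaker's score lies from the database mean."""
    return (score - db_mean) / db_sd

# Hypothetical example: mean length of utterance (MLU) in morphemes.
speaker_mlu = 7.2
database_mean, database_sd = 9.0, 1.5

z = sd_distance(speaker_mlu, database_mean, database_sd)
print(f"MLU is {z:+.1f} SD relative to the database mean")
# prints: MLU is -1.2 SD relative to the database mean
```

A negative value flags a measure below the age- or grade-matched peer group; the case studies later in the tutorial interpret values beyond 1 SD as warranting a closer look.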
Language sample analysis (LSA), as a data-generating tool, is a powerful method of documenting language use in everyday speaking situations (Leadholm & Miller, 1992; Paul, 2012). A sample of talking reveals an individual's ability to meet specific speaking demands. These demands vary across contexts (e.g., conversation, narration, exposition, and persuasion), and speakers may have difficulty in any one or all of these communication tasks. Language use for spoken communication is a foundation for literacy attainment (Miller et al., 2006; Roskos, Tabors, & Lenhart, 2009) and contributes to success in navigating relationships for school, work, and community integration. This tutorial discusses the importance of LSA in effectively assessing speakers' spoken language. It describes how Systematic Analysis of Language Transcripts (SALT) software (Miller & Iglesias, 2012) can be used to simplify the process, overcoming the hurdles of transcription and analysis. The focus is on using LSA with adolescents and includes case studies to illustrate the effectiveness of using language samples with comparison data to assess spoken language. Some discussion is also included relating to LSA and written language.

a University of Wisconsin–Madison
b SALT Software LLC, Madison, WI
Correspondence to Jon F. Miller: jfmille2@wisc.edu
Editor: Marilyn Nippold
Received July 7, 2015
Revision received September 2, 2015
Accepted November 30, 2015
DOI: 10.1044/2015_LSHSS-15-0051
Disclosure: Jon F. Miller is CEO and co-owner of SALT Software, LLC, and has a financial interest. Karen Andriacchi is an employee of SALT Software, LLC. Ann Nockerts is COO and co-owner of SALT Software, LLC, and has a financial interest.
Language, Speech, and Hearing Services in Schools • Vol. 47 • 99–112 • April 2016 • Copyright © 2016 American Speech-Language-Hearing Association

Why LSA?

LSA has been considered the gold standard for assessing spoken language production by researchers and clinicians for more than 90 years (Bloom & Lahey, 1978; Brown, 1973; Donaldson, 1986; Hart & Risley, 1995; Miller, 1982; Miller, Andriacchi, & Nockerts, 2011; K. Nelson, 1973; Piaget, 1926; Slobin, 1985; Weir, 1962). It is a method for thoroughly describing language production and for monitoring change associated with linguistic development, variation in linguistic contexts, and/or change from intervention. It can be used daily, weekly, monthly, or yearly, providing access to a large range of word, morpheme, syntax, and discourse measures. It allows for documentation of specific problems in a variety of contexts, pinpointing strengths and weaknesses in spoken language. LSA evaluates natural, functional language use within real-life contexts. Language samples, elicited properly, often substantiate the reason for a referral, and the variety of analyses helps to pinpoint which features of spoken language most and least impact effective communication. LSA outcomes can address the aims of assessment, answering questions such as "Does this
student qualify for services?"; "Which language features most impact communication?"; and "How much progress has been achieved over the past weeks?" These are questions clinicians are frequently asked to address, yet many of the assessment tools available today are limited by age range, sensitivity, cultural bias, and/or frequency of delivery. In contrast, language samples may be elicited from speakers of any age, are sensitive to change, minimize cultural bias, and may be repeated frequently (Heilmann & Westerveld, 2013; Rojas & Iglesias, 2010).

The most common excuse cited for omitting language sampling from assessment is the lack of time available to complete the process in a clinic or school setting. The time constraint may have been accurate years ago, but the process is much more efficient today (Heilmann, 2010). Evaluators are no longer bound by tape recorders, paper-and-pencil transcription, or hand-counting units for analysis. Computer programs have helped to standardize the LSA process and have added many analyses that advance and assist the interpretation process (Long, 2015; MacWhinney, 2000; Miller & Iglesias, 2012). There are many compelling reasons to use LSA, yet there are hurdles that must be overcome to comfortably implement this powerful assessment tool.

Transcription: Overcoming the Hurdles

Perhaps the biggest hurdle to overcome is transcribing the language sample. "Why can't this be done automatically? If my phone or tablet computer can understand my speech, why can't it create a text file from a student speaking?" It can, with some accuracy, for well-constructed and articulated sentences. But these speakers are typically not the students receiving services, and the resulting transcripts would not include many of the features clinicians are interested in documenting, such as faithful utterance boundaries, bound morphemes, pauses, repetitions, and revisions. There are a number of speech-to-text programs available. SALT staff tested many of them over the years and concluded that correcting the result takes as long, if not longer, than it takes an experienced transcriber to complete the sample. SALT transcription lab data, which are based on 550 audio samples elicited from a variety of speakers in conversation and narration, show it takes approximately 4–6 min per minute of audio to transcribe samples from typically developing speakers. In contrast, it takes approximately 7–8 min to transcribe each minute of an audio sample produced by speakers being evaluated for services or receiving services. This time difference is primarily the result of the intelligibility of the speakers and the amount of mazing, pausing, errors, and overlapping speech that occurs in the sample. There are many other factors that affect transcription time, including (a) familiarity with the transcription conventions, (b) playback equipment, (c) typing skills, (d) quality of the recording, (e) length of the sample, (f) context of the sample, and (g) the number of nonstandard features coded. The time spent on transcription pays off with a permanent record of language use in curriculum-relevant contexts (e.g., persuasion or exposition for adolescents). The transcript provides the data to compare with typical speakers and can be used to track progress associated with intervention.

Transcribing a language sample greatly benefits the examiner. Listening to the sample and typing it into a text file provides initial insights into language strengths and weaknesses. These observations can shape the approach to analysis, creating impressions of overall language production under specific speaking conditions. This is time well spent. Listening to a recorded language sample sometime after live elicitation offers the opportunity to attend to details possibly overlooked at the time of elicitation. Transcribing the language sample sharpens the examiner's focus on features of the sample that will require further analysis.

SALT staff have worked with a number of school districts and clinics that have reduced the time of the LSA process for clinicians by setting up their own transcription service using speech-language pathology assistants or other transcription staff. Typically, clinicians send their deidentified audio or video samples, often via a secure website, to the transcription staff, who transcribe the sample and return the coded text to the clinicians for analysis. Similarly, external transcription services have been used to eliminate the need to train and support on-site transcribers. Whether this service is provided in-house or externally, having one or more dedicated transcribers improves efficiency. On the basis of data from SALT's transcription services with eight school districts across the United States, clinicians significantly increased their use of LSA when transcription was taken off their hands. A dedicated transcriber can reduce costs and free up the clinicians' time for analysis, interpretation, and therapy. For clinicians who have transcription support, the first step in analyzing the sample is to go back and listen to the recording while looking through the transcript. This step will often highlight aspects of the recorded sample that were not noticed during elicitation. It also provides an opportunity to make changes to the transcript that reflect the clinician's interpretation of the communication interaction versus the perception of the transcriber, who was not present or who has little or no familiarity with the target speaker(s). Transcribers strive for the most accurate representation of the communicative event. It is the responsibility of the clinician, however, to ensure that the transcript is as valid as possible.

Since the early 1980s, SALT developers have worked to implement consistent transcription formats (Miller, 1982). The benefits are enormous. Measurement of the same units (clear identification of what constitutes a word, rules for identifying bound morphemes, and rules for utterance segmentation) is a prominent example of the consistency that has resulted in improved, valid, and reliable evaluation outcomes (Heilmann et al., 2008; Miller, 1982; Miller et al., 2011). If samples are transcribed the same way, then analyses will be calculated on the same units: morphemes, words, and utterances. The consistent transcription format was essential for the development of databases of typical speakers for specific speaking contexts across ages and grades. Using these databases, measures from an
Figure 3. Case study 1–Taken from Systematic Analysis of Language Transcripts (SALT) Standard Measures report.
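Reports like the SALT Standard Measures report summarize counts such as number of total words (NTW) and the percentage of words in mazes. As a rough sketch of how such counts fall out of a transcript, the following assumes SALT's convention of enclosing mazes (filled pauses, repetitions, revisions) in parentheses; the two utterances are invented, and this is a simplification, not the SALT program itself.

```python
import re

# Invented SALT-style utterances: speaker code first, mazes in parentheses.
utterances = [
    "C (um the the) the dog ran away.",
    "C he (he was) he was scared.",
]

def maze_stats(utts):
    """Count main-body words (a stand-in for NTW), maze words,
    and the percentage of all words that fall inside mazes."""
    maze_words = body_words = 0
    for utt in utts:
        text = utt.split(" ", 1)[1]            # drop the speaker code ("C")
        for maze in re.findall(r"\(([^)]*)\)", text):
            maze_words += len(maze.split())    # words inside parentheses
        body = re.sub(r"\([^)]*\)", " ", text) # strip mazes, keep the rest
        body_words += len(re.findall(r"[A-Za-z']+", body))
    pct = 100 * maze_words / (body_words + maze_words)
    return body_words, maze_words, pct

ntw, in_mazes, pct = maze_stats(utterances)
print(ntw, in_mazes, round(pct, 1))  # 8 5 38.5
```

In this toy sample, over a third of the words are in mazes; the case study below flags a speaker with more than 20% of words in mazes as notably high.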
samples of the same length. Patrick's sample contains 513 NTW. To generate this report, the database samples were cut at 513 NTW, creating a set of length-matched samples for comparison.

Figure 9 shows that Patrick's MLU was more than 1 SD below the database mean, though his NDW, a measure of vocabulary diversity, was higher than that of his peers. His false starts, repetitions, and revisions (mazes) were very high, with more than 20% of his words in mazes. His omissions and errors were very frequent for his age. These analyses reveal a short average utterance length, perhaps due to a complex syntax deficit.

To explore Patrick's use of complex syntax, SI coding (SALT Software, 2015c) was applied to each utterance in his transcript. Figure 10 reveals how his scores compared with his grade-matched peers. His SI composite score was slightly lower than that of the database samples and may warrant further investigation of the sample to explore the possibility of a deficit in complex syntax.

To learn more about the content in the increased number of mazes, the Maze Summary report (see Figure 11) was run. This report revealed that Patrick's mazes are predominantly at the phrase level, indicating difficulties with utterance formulation. Phrase-level mazes can point to problems with attempting to incorporate more propositions into utterances without the necessary syntactic ability.

The Omissions and Error Codes report in Figure 12 lists four words and one bound morpheme that Patrick omitted as well as nine word-level errors he produced. The report also includes the utterances, allowing the examiner to review the context of each omission and coded error. It is interesting that he made each omission and error only once. Although one can gain some preliminary insight by reviewing the utterances in this report, it may be more productive to review the transcript again as a way of processing the context for the omissions and errors.

Patrick's language sample revealed a number of strengths as well as deficits. His vocabulary use was above that of his peers for this task, though his average utterance length was below. Follow-up analyses of complex syntax revealed an SI composite score at the low end of the typical group. His very frequent mazes indicate he may have been struggling with utterance formulation, perhaps related to facility with complex sentence forms.

Although this case study illustrates how Patrick performed on a text-based narrative task, it may be prudent to assess his language skills using the content of the classroom curriculum. The results can offer detailed information to share with classroom teachers and provide a platform from which to build an intervention plan.

Figure 5. Case study 1–Systematic Analysis of Language Transcripts (SALT) Subordination Index report.

Written Language

The correlation between spoken language and written language is well established (Gillam & Johnston, 1992; Scott & Windsor, 2000). Hall-Mills and Apel (2015) investigated written language development in elementary grades across narrative and expository genres. They argue that
speech-language pathologists are well trained to work with teachers on language-based analyses, particularly for those students receiving speech and language services. Fundamental to their study is the need for more research on written language skill development. Written language analysis has used methods similar to those of spoken LSA, without having to transcribe the acoustic signal into standard orthography. Both share similar issues for consistency: identifying words, morphemes, and utterance segmentation, as well as developing rules for dealing with issues specific to written samples (e.g., misspellings, capitalization, and incorrect or missing punctuation). SALT has been used by written language researchers to facilitate analyses (Hall-Mills & Apel, 2015; N. W. Nelson, 2014a, 2014b). It can expedite written language analysis, but the interpretation of the results relative to typical students is left to curriculum standards.

Figure 7. Case study 2–Excerpt of story retell transcript in Systematic Analysis of Language Transcripts (SALT) format.
Figure 9. Case study 2–Taken from Systematic Analysis of Language Transcripts (SALT) Standard Measures report.

Price and Jackson (2015) detail the analysis of written language using SALT as the methodology to work through
multiple levels of analysis. They provide an overview of analysis options and explain why the analysis of written language is important for developing a complete understanding of the challenges students face within the curriculum. The authors are not aware of any available written language databases but believe they would be very useful in documenting academic progress. These could be organized around the CCSS (CCSS Initiative, 2015) by grade and genre, with students being asked to produce written narratives and expository work across subjects (e.g., history, social studies, and the sciences). Written language databases would allow teachers and speech-language pathologists to match performance with expectations for students with typical literacy skills and those with language challenges.

Disciplinary Literacy

The ability to read, write, listen, speak, and think critically begins to develop early and becomes increasingly important as students pursue specialized fields of study in high school and beyond. The CCSS for literacy in science, social studies, history, and the technical subjects guide educators to help students meet the literacy challenges within each particular field of study. This national effort is referred to as disciplinary literacy (Wisconsin Department of Public Instruction, 2015). In a recent e-mail communication, M. Nippold suggested,

Once a language sample has been elicited, transcribed, and analyzed using SALT normative data, and the results indicate significant deficits in spoken language production, it is important to use this information to improve the student's language skills in [other] real world contexts, such as the classroom. For a middle school student, short samples of spoken language could be taken in the classroom to plan individualized intervention and to monitor the student's progress over time. Appropriate contexts might include recording the student while giving an oral report in history class (expository discourse); retelling a folk tale in English class (narrative discourse); debating a classmate in civics class (persuasive discourse); or chatting with a peer during lunch break (conversational discourse). (personal communication as part of the review process, September 22, 2015)

Figure 11. Case study 2–Systematic Analysis of Language Transcripts (SALT) Maze Summary report.

Although these sampling contexts are beyond the scope of the current SALT databases, the software can be used to examine literacy across the curriculum. Samples of spoken or written language can be examined for vocabulary content by creating lists of words reflecting the curriculum content. SALT can count the frequency of use for each word and pull up the utterances containing them. These data provide access to the student's vocabulary in explaining the subject content. Advanced syntax and text-level analyses can be evaluated in a similar manner by coding features relevant to the genre. Although no database comparisons can be made, the results support evidence-based practice. Users can code any language feature of interest at the word, morpheme, utterance, or text level. Using SALT's advanced