Early Language Acquisition: Cracking the Speech Code


The acquisition of language and speech seems deceptively simple. Young children learn their mother tongue rapidly and effortlessly, from babbling at 6 months of age to full sentences by the age of 3 years, and follow the same developmental path regardless of culture. Linguists, psychologists and neuroscientists have struggled to explain how children do this, and why it is so regular, if the mechanism of acquisition depends on learning and environmental input. This puzzle, coupled with the failure of artificial intelligence approaches to build a computer that learns language, has led to the idea that speech is a deeply encrypted 'code'. Cracking the speech code is child's play for human infants but an unsolved problem for adult theorists and our machines. Why?

During the last decade there has been an explosion of information about how infants tackle this task. The new data help us to understand why computers have not cracked the human linguistic code, and shed light on a long-standing debate about the origins of language in the child. Infants' strategies are surprising, and are also unpredicted by the main historical theorists. Infants approach language with a set of initial perceptual abilities that are necessary for language acquisition, although not unique to humans. They then learn rapidly from exposure to language, in ways that are unique to humans, combining pattern detection and computational abilities (often called 'statistical learning') with special social skills. An absence of early exposure to the patterns that are inherent in natural language, whether spoken or signed, produces life-long changes in the ability to learn language.

Infants' perceptual and learning abilities are also highly constrained. Infants cannot perceive all physical differences in speech sounds, and are not computational slaves to learning all possible stochastic patterns in language input. Moreover, and of equal importance from a neurobiological perspective, social constraints limit the settings in which learning occurs. The fact that infants are 'primed' to learn the regularities of linguistic input when engaged in social exchanges puts language in a neurobiological framework that resembles communicative learning in other species, such as songbirds, and helps us to address why non-human animals do not advance further towards language. The constraints on infants' abilities to perceive and learn are as important to theory development as are their successes.

Recent neuropsychological and brain imaging work indicates that language acquisition involves neural commitment. Early in development, learners commit the brain's neural networks to patterns that reflect natural language input. This idea makes empirically testable predictions about how early learning supports and constrains future learning, and holds that the basic elements of language, learned initially, are pivotal. The concept of neural commitment is linked to the issue of a 'critical' or 'sensitive' period for language acquisition.
Patricia K. Kuhl
Abstract | Infants learn language with remarkable speed, but how they do it remains a mystery. New data show that infants use computational strategies to detect the statistical and prosodic patterns in language input, and that this leads to the discovery of phonemes and words. Social interaction with another human being affects speech learning in a way that resembles communicative learning in songbirds. The brain's commitment to the statistical and prosodic patterns that are experienced early in life might help to explain the long-standing puzzle of why infants are better language learners than adults. Successful learning by infants, as well as constraints on that learning, are changing theories of language acquisition.
Statistical learning: Acquisition of knowledge through the computation of information about the distributional frequency with which certain items occur in relation to others, or probabilistic information in sequences of stimuli, such as the odds (transitional probabilities) that one unit will follow another in a given language.

NATURE REVIEWS
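The transitional-probability computation defined above can be illustrated with a short sketch. The syllable stream and the three-syllable 'words' below are invented for illustration; the point is only that within-word transitions are far more predictable than transitions across word boundaries:

```python
import random
from collections import Counter

random.seed(0)

# Hypothetical three-syllable "words" (invented for illustration),
# concatenated into a continuous stream with no pauses between words.
words = (["bi", "da", "ku"], ["pa", "do", "ti"])
stream = [syl for _ in range(200) for syl in random.choice(words)]

# Count syllable bigrams and the syllables that begin them.
pairs = Counter(zip(stream, stream[1:]))
firsts = Counter(stream[:-1])

def transitional_probability(a, b):
    """P(b | a): the odds that syllable b follows syllable a in the stream."""
    return pairs[(a, b)] / firsts[a]

# Within a word the transition is obligatory; across a word boundary the
# next syllable depends on which word comes next (about 0.5 here).
print(transitional_probability("bi", "da"))  # within-word -> 1.0
print(transitional_probability("ku", "pa"))  # across boundary, roughly 0.5
```

An infant (or model) tracking these probabilities could posit word boundaries wherever the transitional probability dips.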
Institute for Learning and Brain Sciences and the Department of Speech and Hearing Sciences, University of Washington, Seattle, Washington 98195, USA. e-mail: pkkuhl@u.washington.edu
Neural commitment: Learning results in a commitment of the brain's neural networks to the patterns of variation that describe a particular language. This learning promotes further learning of patterns that conform to those initially learned, while interfering with the learning of patterns that do not conform to those initially learned.

Phonemes: Elements of a language that distinguish words by forming the contrasting element in pairs of words in a given language (for example, 'rake'-'lake'; 'far'-'fall').

Phonemic categories: Languages combine different phonetic units into phonemic categories; for example, Japanese combines the 'r' and 'l' units into one phonemic category.

Phonetic units: The set of specific articulatory gestures that constitute vowels and consonants in a particular language. Phonetic units are grouped into phonemic categories. For example, 'r' and 'l' are phonetic units that, in English, belong to separate phonemic categories.

Equivalence classification: In speech perception, the ability to group perceptually distinct sounds into the same category. Unlike computers, infants can classify as similar phonetic units spoken by different talkers, at different rates of speech and in different contexts.
The idea is that the initial coding of native-language patterns eventually interferes with the learning of new patterns (such as those of a foreign language), because they do not conform to the established 'mental filter'. So, early learning promotes future learning that conforms to and builds on the patterns already learned, but limits future learning of patterns that do not conform to those already learned.

The encryption problem

Sorting out the sounds.

The world's languages contain many basic elements: around 600 consonants and 200 vowels. However, each language uses a unique set of only about 40 distinct elements, called phonemes, which change the meaning of a word (for example, from 'bat' to 'pat'). These phonemes are actually groups of non-identical sounds, called phonetic units, that are functionally equivalent in the language. The infant's task is to make some progress in figuring out the composition of the 40 or so phonemic categories before trying to acquire the words that depend on these elementary units. Three early discoveries inform us about the nature of the innate skills that infants bring to the task of phonetic learning and about the timeline of early learning. The first, called categorical perception, focused on discrimination of the acoustic events that distinguish phonetic units (BOX 1). Eimas and colleagues showed that young infants are especially sensitive to acoustic changes at the phonetic boundaries between categories, including those of languages they have never heard. Infants can discriminate among virtually all the phonetic units used in languages, whereas adults cannot. The acoustic differences on which this depends are tiny. A change of 10 ms in the time domain changes /b/ to /p/, and equivalently small differences in the frequency domain change /p/ to /k/. Infants can discriminate these subtle differences from birth, and this ability is essential for the acquisition of language. However, categorical perception also shows that infant perception is constrained. Infants do not discriminate all physically equal acoustic differences; they show heightened sensitivity to those that are important for language.

Although categorical perception is a building block for language, it is not unique to humans. Non-human mammals, such as chinchillas and monkeys, also partition sounds where languages place phonetic boundaries. In humans, non-speech sounds that mimic the acoustic properties of speech are also partitioned in this way. I have previously argued that the match between basic auditory perception and the acoustic boundaries that separate phonetic categories in human languages is not fortuitous: general auditory perceptual abilities provided 'basic cuts' that influenced the choice of sounds for the phonetic repertoire of the world's languages. The development of these languages capitalized on natural auditory discontinuities. However, the basic cuts provided by audition are primitive, and only roughly partition sounds. The exact locations of phonetic boundaries differ across languages, and exposure to a specific language sharpens infants' perception of stimuli near phonetic boundaries in that language. According to this argument, auditory perception, a domain-general skill, initially constrained choices at the phonetic level of language during its evolution. This ensured that, at birth, infants are prepared to discern differences between phonetic contrasts in any natural language.

As well as discriminating the elementary sounds that are used in language, infants must learn to perceptually group different sounds that they clearly hear as distinct (BOX 2). This is the problem of equivalence classification. In a natural environment, infants hear sounds that vary on many dimensions (for example, talker, rate and phonetic context). At an early age, infants can categorize speech sounds despite such changes. By contrast, computers are, so far, unable to recognize phonetic similarity in this way. This is a necessary skill if infants are to imitate speech and learn their 'mother tongue'.

Infants' initial universal ability to distinguish between phonetic units must eventually give way to a language-specific pattern of listening. In Japanese, the phonetic units 'r' and 'l' are combined into a single phonemic category (Japanese 'r'), whereas in English the difference is preserved ('rake' and 'lake'); similarly, in English, two Spanish phonetic units (distinguishing 'bala' from 'pala') are united in a single phonemic category. Infants can initially distinguish these sounds, and Werker and colleagues investigated when the infant 'citizens of the world' become 'culture-bound' listeners. They showed that English-learning infants could easily discriminate Hindi and Salish sounds at 6 months of age, but that this discrimination declined substantially by 12 months of age. English-learning infants at 12 months have difficulty in distinguishing between sounds that are not used in English. Japanese infants find the English 'r'-'l' distinction more difficult, and American infants' discrimination declines for both a Spanish and a Mandarin distinction, neither of which is used in English. At the same time, the ability of infants to discriminate native-language phonetic units improves.

[Figure 1 timeline, perception milestones over months 0-12: infants discriminate the phonetic contrasts of all languages; statistical learning (distributional frequencies); language-specific perception for vowels; decline in foreign-language consonant perception and increase in native-language consonant perception; detection of typical stress patterns in words; statistical learning (transitional probabilities); recognition of language-specific sound combinations. Production milestones: infants produce non-speech sounds; vowel-like sounds; 'canonical babbling'; language-specific speech production; first words produced. Universal speech perception and production give way to language-specific perception and production across the first year.]

Figure 1 | The universal language timeline of speech-perception and speech-production development. This figure shows the changes that occur in speech perception and production in typically developing human infants during their first year of life.

Computational strategies.

What mechanism is responsible for the developmental change in phonetic perception between the ages of 6 and 12 months? One hypothesis is that infants analyse the statistical distributions of sounds that they hear in ambient language. Although adult listeners hear 'r' and 'l' as either distinct (English speakers) or identical (Japanese), speakers of both languages produce highly variable sounds. Japanese adults produce both English 'r'- and 'l'-like sounds, so Japanese infants are exposed to both. Similarly, Swedish has 16 vowels, whereas English uses 10 and Japanese uses only 5, but speakers of these languages produce a wide range of sounds. It is the distributional patterns of such sounds that differ across languages. When the acoustic features of speech are analysed, modal values occur where languages place phonemic categories, whereas distributional frequencies are low at the borders between categories. So, distributional patterns of sounds provide clues about the phonemic structure of a language. If infants are sensitive to the relative distributional frequencies of phonetic segments in the language that they hear, and respond to all instances near a modal value by grouping them, this would assist 'category learning'.

Experiments on 6-month-old infants indicate that this is the case. Kuhl and colleagues tested 6-month-old American and Swedish infants with prototype vowel sounds from both languages. Both the American-English prototype and the Swedish prototype were synthesized by computer and, by varying the critical acoustic components in small steps, 32 variants of each prototype were created. The infants listened to the prototype vowel (either English or Swedish) presented as the background stimulus, and were trained to respond with a head-turn when they heard the prototype vowel change to one of its variants. The hypothesis was that infants would show a 'perceptual magnet effect' for native-language sounds, because prototypical sounds function like magnets for surrounding sounds. The perceptual magnet effect is hypothesized to reflect prototype learning in cognitive psychology.
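The distributional-learning account (modal values at category centres, low token frequencies at the borders between categories) can be sketched with simulated data. The one-dimensional acoustic cue, its units and the category locations below are invented for illustration:

```python
import random

random.seed(1)

# Hypothetical 1-D acoustic cue (arbitrary units, invented for illustration).
# A language with two phonemic categories on this dimension yields a bimodal
# distribution of tokens; a language with one category yields a unimodal one.
def sample(modes, n=1000):
    return [random.gauss(random.choice(modes), 1.0) for _ in range(n)]

two_category = sample([-3.0, 3.0])  # e.g. two contrasting phonetic categories
one_category = sample([0.0])        # e.g. a single merged category

def count_near(data, center, width=1.0):
    """Number of tokens falling within `width` of `center` on the cue axis."""
    return sum(1 for x in data if abs(x - center) < width / 2)

# Distributional cue: token frequency is high at modal values (category
# centres) and low at the border between categories.
for data, label in [(two_category, "two-category"), (one_category, "one-category")]:
    print(label, count_near(data, -3.0), count_near(data, 0.0), count_near(data, 3.0))
```

A learner that groups tokens around each modal value would recover two categories from the first distribution and one from the second, without any labels.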
Box 1 | What is categorical perception?

Categorical perception is the tendency for adult listeners of a particular language to classify the sounds used in their languages as one phoneme or another, showing no sensitivity to intermediate sounds. Laboratory demonstrations of this phenomenon involve two tasks, identification and discrimination. Listeners are asked to identify each sound from a series generated by a computer. Sounds in the series contain acoustic cues that vary in small, physically equal steps from one phonetic unit to another, for example in 13 steps from /ra/ to /la/. In this example, both American and Japanese listeners are tested. Americans distinguish the two sounds and identify them as a sequence of /ra/ syllables that changes to a sequence of /la/ syllables. Even though the acoustic step size in the series is physically equal, American listeners do not hear a change until stimulus 7 on the continuum. When Japanese listeners are tested, they do not hear any change in the stimuli. All the sounds are identified as the same: the Japanese 'r'.

When pairs of stimuli from the series are presented to listeners, and they are asked to identify the sound pairs as 'same' or 'different', the results show that Americans are most sensitive to acoustic differences at the boundary between /r/ and /l/ (dashed line). Japanese adults' discrimination values hover near chance all along the continuum. Figure modified, with permission; © (1975) The Psychonomic Society.
[Box 1 figure: identification functions (per cent /ra/ responses) and discrimination (per cent correct) plotted against stimulus number and discriminated pair (1-4 through 10-13), for American and Japanese listeners.]
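The identification and discrimination pattern described in this box can be sketched numerically. The 13-step continuum, the boundary at stimulus 7 and the three-step pairs come from the box; the logistic identification curve, its slope and the label-based prediction of discrimination are simplifying assumptions:

```python
import math

# Identification: probability of labelling each of 13 continuum steps /ra/.
# American listeners show a sharp category boundary at stimulus 7, modelled
# here as a steep logistic (the slope value is an arbitrary assumption).
def p_ra_american(stimulus, boundary=7.0, slope=0.5):
    return 1.0 / (1.0 + math.exp((stimulus - boundary) / slope))

american = [p_ra_american(s) for s in range(1, 14)]
japanese = [1.0] * 13  # all steps heard as the single Japanese 'r'

# Simplified label-based prediction of 'same'/'different' performance:
# chance (0.5) plus half the difference in identification probabilities.
def predicted_discrimination(p1, p2):
    return 0.5 + abs(p1 - p2) / 2.0

# Pairs three steps apart, as in the box: 1-4, 2-5, ..., 10-13.
for i in range(10):
    print(f"{i + 1}-{i + 4}",
          round(predicted_discrimination(american[i], american[i + 3]), 2),
          round(predicted_discrimination(japanese[i], japanese[i + 3]), 2))
```

The predicted American curve peaks for the pairs that straddle the boundary (5-8 and 6-9) and sits near chance at the ends of the continuum, while the Japanese prediction stays at chance throughout, matching the qualitative pattern in the box.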
