

Second Language Research 29(2) 165–183 © The Author(s) 2013. Reprints and permissions: sagepub.co.uk/journalsPermissions.nav. DOI: 10.1177/0267658313479360. slr.sagepub.com

Extracting words from the speech stream at first exposure
Ellenor Shoemaker, Université Sorbonne Nouvelle, Paris, France

Rebekah Rast, The American University of Paris, France

The earliest stages of adult language acquisition have received increased attention in recent years (cf. Carroll, introduction to this issue). The study reported here aims to contribute to this discussion by investigating the role of several variables in the development of word recognition strategies during the very first hours of exposure to a novel target language (TL). Eighteen native speakers of French with no previous exposure to Polish were tested at intervals throughout a 6.5-hour intensive Polish course on their ability to extract target words from Polish sentences. Following Rast and Dommergues' (2003) first exposure study, stimuli were designed to investigate the effect of three factors: (1) the lexical transparency of the target word with respect to the native language (L1); (2) the frequency of the target word in the input; and (3) the target word's position in the sentence.

Results suggest that utterance position plays an essential role in learners' ability to recognize words in the signal at first exposure, indicating acute sensitivity to the edges of prosodic domains. In addition, transparent words (e.g. profesor 'professor') were recognized significantly better than non-transparent words (e.g. lekarz 'doctor'), suggesting that first exposure learners are highly dependent on L1 phonological forms. Furthermore, the frequency of a target word in the input did not affect performance, suggesting that at the very beginning stages of learning, the amount of exposure to a lexical item alone does not play a significant role in recognition.

Keywords: acquisition of second language phonology, first exposure, Polish language, spoken word recognition
Corresponding author: Ellenor Shoemaker, Département Monde Anglophone, Université Sorbonne Nouvelle – Paris 3, 13 rue de Santeuil, 75005 Paris, France. Email: ellenor.shoemaker@univ-paris3.fr

Downloaded from slr.sagepub.com at BEIJING FOREIGN STUDIES UNIV on July 9, 2013

I Introduction

The notion that running speech is comprised of individual lexical units is one based in a psychological reality rather than an acoustic one. The sounds of a language not only attach to one another without pause in a continuous acoustic signal, but phonemes are also subject to myriad phonological processes that can modify their acoustic realization, thus rendering the mapping of lexical forms to acoustic input problematical. The variable and continuous nature of speech requires listeners to apply language-specific processing strategies in order to comprehend the speech stream. As noted by Kuhl (2000: 11,852), 'no speaker of any language perceives acoustic reality; in each case, perception is altered in the service of language.' It is a listener's linguistic experience rather than the inherent acoustic properties of the signal that allows him or her to identify and extract discrete lexical items from running speech.

Despite this variability, however, spoken word recognition in one's native language is not only efficient, but effortless. According to Cutler (1996), models of spoken word recognition fall roughly into two categories: models that propose that recognition is the by-product of lexical competition, and models that propose that recognition is aided by explicit acoustic and/or phonological cues to where word boundaries lie. Competition-based recognition centers on the notion that the segmentation of the speech signal emerges as a result of competition between candidates in the mental lexicon as they are activated by the acoustic input. This view hinges on the fact that listeners possess a well-stocked mental lexicon and consequently recognize the beginning of a word by identifying where the preceding word ends. Models of lexical competition such as TRACE (McClelland and Elman, 1986) and Shortlist (Norris, 1994) do not therefore posit specialized mechanisms for the identification of word boundaries.1 The speech stream is segmented into non-overlapping words when the lexical competition process results in an optimal parse of the signal.

A second theory of segmentation and word recognition is based on the exploitation of phonetic and phonological detail in the identification of word and syllable boundaries. An ever-growing body of work has established that listeners exploit the presence of phonetic variation that occurs at the edges of prosodic domains in processing the speech signal. Nakatani and Dukes (1977) were among the first to show that (native) listeners can use the presence of aspiration in word-initial voiceless stops in English to distinguish between such potentially ambiguous pairs as loose pills and Lou spills, where an aspirated /p/ in the former signals a preceding word boundary. Further research in this domain has established that native speakers make use of myriad acoustic and phonological cues to locate the edges of words, including variation in segmental duration (Shoemaker, in press), the presence of full, unreduced vowels (Cutler and Butterfield, 1992), phonotactic constraints (McQueen, 1998), and changes in fundamental frequency (Welby, 2007), among others.

A substantial body of research also suggests that listeners make use of the rhythmic characteristics of language to identify word boundaries in the speech stream. The Metrical Segmentation Strategy (Cutler and Norris, 1988) proposes that prosody-based segmentation is a language-universal processing procedure, but that the rhythmic cues used in segmentation are particular to each language (or family of languages). Numerous studies have supported this hypothesis. Segmentation in English and Dutch has been shown to be stress based (Cutler and Norris, 1988; van Zon and de Gelder, 1993) in that listeners exploit the fact that most content words in these languages begin with a strong (stressed) syllable and subsequently assume that a word boundary will directly precede such a strong syllable. Conversely, syllable-based segmentation routines have been demonstrated in the Romance languages (Mehler et al., 1981; Sebastián-Gallés et al., 1992; Tabossi et al., 2000), where syllables are more or less equally weighted and listeners assume that syllable boundaries coincide with word boundaries.

More recent work has explored the dynamic nature of speech perception by examining the simultaneous exploitation of multiple cues at different processing levels in the segmentation of connected speech. Mattys (2003), for example, showed differential sensitivity to stress and phonotactic cues in English-speaking listeners when these cues were presented in clear speech as opposed to noise. When the two cues were pitted against one another in clear speech, participants showed more sensitivity to phonotactic constraints, but more sensitivity to stress when stimuli were presented in noise. A further study investigating the simultaneous exploitation of stress and coarticulation as boundary cues showed that coarticulation outweighed stress in clear speech, while stress outweighed coarticulation in a degraded signal (Mattys, 2004), thereby offering a hierarchy of cues based upon the saliency of each individual cue in different speech processing environments.

The exploitation of language-specific processing strategies such as those mentioned above renders (native) spoken word recognition effortless, to the extent that it is impossible not to comprehend spoken input in the native language (L1). The ease with which the L1 is processed, however, is contrasted with the effort that can be required in the processing of a second language (L2). Word recognition and segmentation strategies that are employed efficiently in the L1 may not be applicable to the L2. For example, a stress-based segmentation routine that aids in the segmentation of English would be ineffective in the segmentation of French, which lacks lexical stress. Conversely, a syllable-based processing routine employed in French would be rendered inefficient in English given the ambisyllabic status of word-medial intervocalic consonants (Kahn, 1980). Research on processing by late learners has suggested that the processing of an L2 can in fact be constrained by the application of L1 segmentation routines (for a review, see Cutler, 2001).

However, while a great deal of research has been undertaken concerning the limitations of L2 speech processing at intermediate and advanced levels of proficiency, very little research to date has specifically dealt with how learners initially break into the sound stream of a novel foreign language, what aids them in this process at first exposure, and how they manage to improve perceptual strategies over time. Extant models of natural L2 acquisition do not explicitly take into account the developmental aspects of word recognition at the very beginning stages of learning due to what is largely perceived as the insurmountable obstacle of controlling and measuring natural language input. One exception is the study of statistical learning using artificial language learning paradigms (e.g. Saffran et al., 1996). Some recent studies have managed to find ways to surmount this obstacle in a non-instructional language acquisition setting.
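Statistical learning paradigms of this kind typically model segmentation in terms of transitional probabilities between adjacent syllables, with a word boundary posited wherever the probability between two syllables dips. The sketch below is a minimal illustration of that idea; the syllable stream and the three 'words' are invented for illustration and are not the stimuli of any study cited here.

```python
def transitional_probabilities(syllables):
    """Estimate P(next syllable | current syllable) from a stream."""
    pair_counts, first_counts = {}, {}
    for a, b in zip(syllables, syllables[1:]):
        pair_counts[(a, b)] = pair_counts.get((a, b), 0) + 1
        first_counts[a] = first_counts.get(a, 0) + 1
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

def segment(syllables, tps, threshold=0.9):
    """Posit a word boundary wherever the transitional probability dips."""
    words, current = [], [syllables[0]]
    for a, b in zip(syllables, syllables[1:]):
        if tps[(a, b)] < threshold:
            words.append("".join(current))
            current = []
        current.append(b)
    words.append("".join(current))
    return words

# Toy 'language' of three trisyllabic words heard in continuous succession:
stream = ("bi da ku pa do ti bi da ku go la bu pa do ti "
          "go la bu bi da ku pa do ti go la bu bi da ku").split()
print(segment(stream, transitional_probabilities(stream)))
```

Within-word transitional probabilities in this toy stream are 1.0 while probabilities across word boundaries are at most 2/3, so the dips recover the three 'words' exactly. In the natural classroom Polish used in the study reported here, such statistics are of course far noisier.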

Gullberg et al. (2010), for instance, found that participants were capable of extracting possible Mandarin word forms and phonotactic constraints after only 7–14 minutes of exposure to Mandarin Chinese by means of audio-visual material, and that with gestural support they were even able to extract sound–referent pairings. Other studies have focused on instructed L2 acquisition at first exposure (e.g. Rast, 2008), which allows for a full control of the linguistic input and input treatments.

One methodology that has been used in L2 acquisition studies to capture learners' perceptual ability, regardless of proficiency level, is the sentence repetition task. Sentence repetition tasks have traditionally been used to determine how a learner perceives and memorizes target language (TL) utterances in the short term. Klein's (1986) study on sentence repetitions, for example, revealed a privileged role for the processing of information in initial and final positions. Rast and Dommergues (2003) used this paradigm to investigate what elements of Polish could be perceived and repeated by first exposure participants and learners (L1 French) after 4 hours and again after 8 hours of Polish instruction. Participants listened to sentences in Polish recorded by a native speaker and were asked to repeat the sentences as best they could. The data were analysed in terms of correct repetitions of individual words relative to the following factors: hours of instruction, word length (measured in syllables), word stress, transparency (based on a French–Polish lexical comparison), phonemic distance (based on a French–Polish phonemic comparison), the position of the word in the sentence, and the frequency of the word in the Polish input. Results showed a significant effect of word stress, transparency, phonemic distance, and sentence position on the ability of both first exposure participants and learners to repeat Polish words at all time intervals. No effect of word length was found, and a frequency effect appeared only after 8 hours of exposure.

The current study expands on Rast and Dommergues' work by testing fewer variables but with more control over the type of language activity by removing the production task. A purely perceptual word recognition task was designed to focus more specifically on sentence segmentation and word recognition by alleviating the need for learners to (re)produce orally. This line of inquiry is based on several assumptions. First, we assume that first exposure learners do not have complete access to lexically-based segmentation strategies in that the TL lexicon has not yet been acquired, and therefore competition among lexical candidates in the TL cannot occur. Furthermore, we assume that learners at first exposure do not yet have knowledge of explicit phonetic and phonological cues to word boundaries in the TL that could aid in the localization of word boundaries. Conversely, we assume that first exposure learners do have access to the implicit knowledge that (1) syllables and words make up the acoustic signal, and that (2) perceptual strategies can be employed to extract these items efficiently. Therefore, removing the production portion of the task may allow us to more precisely home in on comprehension strategies to which learners have access before aspects of the TL have been acquired.

Following the results of Rast and Dommergues (2003), we examine the contribution of three factors to a learner's ability to extract words from the signal: the frequency of the item in the input, transparency of the item with respect to the L1, and the position of the item in an utterance.

First, the issue of relevant task must be addressed. Carroll (2012) suggests that position effects in particular may be task dependent. Barcroft and VanPatten (1997) tested a position effect as well and found that beginning English learners of Spanish attended more to utterance-initial items than to items in medial or final positions. First exposure participants and learners in Rast and Dommergues' (2003) study also relied on the position of a word in the sentence, correctly repeating more words in initial and final positions than in medial position, thereby providing support for Klein's (1986) claim that learners tend to process items in initial and final positions before those in medial position. These results are not, however, in line with Barcroft and VanPatten's (1997) findings that utterance-initial items were more acoustically salient than those in medial or final positions. The effect of position is not yet thoroughly understood, and studies show contradictory results. To what degree will utterance-initial and utterance-final positions be favored if the task requires the learner to recognize a lexical item in the acoustic stream, but not to reproduce it?

With regard to lexical transparency, first exposure participants and learners were better able to repeat Polish words that shared formal (and possibly semantic) features with French words than those that did not. Results also showed that increased exposure had a stronger influence on opaque words than transparent ones. As mentioned above, evidence of cross-linguistic influence in the results of the sentence repetition task in Rast and Dommergues (2003) adds yet more support to the well-established claim that learners use prior linguistic knowledge in L2 acquisition (Odlin, 1989). A word recognition task will allow us to observe the effect of transparency on the ability of our learners to recognize lexical items without having to reproduce them. Can we then assume that transparent words are not 'learned' in the same way as opaque words because transparent words are mapped onto existing mental representations whereas opaque ones are not?

Furthermore, the finding by Rast and Dommergues (2003) that frequency (measured as > 20 tokens in the input) only became a significant factor in participants' ability to repeat words after 8 hours of exposure to Polish merits further investigation. Slobin (1985) suggests that learners must take note of 'sameness' in order for frequency to come into play. In other words, they need to recognize that they have previously seen or heard a given item or structure in the input and make note of it. This implies that some sort of recognition or extraction must take place repeatedly. One need also note that generally frequent items will become even more frequent over time; an item that reached frequency at 4 hours will likely be even more frequent at 8 hours, suggesting that frequency does not act alone, but rather is correlated with overall exposure. Once again, will frequency effects be stronger for opaque words than for transparent ones in a purely perceptual task?

Polish was chosen as the target language of the current study in order to compare results with those of Rast and Dommergues (2003), and because its phonological and morpho-syntactic systems differ significantly from those of the L1 of the study's participants (French), allowing for observation of the role of the L1 in the acquisition process. In this section, we briefly outline some surface phonological similarities and differences between French and Polish at both the segmental and suprasegmental levels. For comprehensive phonological accounts of French and Polish, the reader is directed to Tranel (1987) and Gussman (2007), respectively.

Table 1. Phonemic inventories of French and Polish.

                French                                                    Polish
Oral vowels     /a, ɑ,1 e, ɛ, ə, i, o, ɔ, ø, œ, u, y/                     /a, ɛ, i, ɨ, ɔ, u/
Nasal vowels    /ɑ̃, ɛ̃, ɔ̃, œ̃/                                             /ɛ̃, ɔ̃/2
Glides          /j, ɥ, w/                                                 /j, w/
Consonants      /p, b, t, d, k, ɡ, f, v, s, z, ʃ, ʒ, ʧ,3 ʤ,3 m, n, ɲ, ʁ, l/   /p, b, t, d, k, ɡ, f, v, s, z, ʃ, ʒ, ɕ, ʑ, x, ʦ, ʣ, ʧ, ʤ, ʨ, ʥ, m, n, ɲ, ŋ, r,4 l/

Notes. 1 Not all French speakers make a distinction between /a/ and /ɑ/, most opting for the more centralized /a/. 2 It should be noted that not all phonologists agree that these two vowels are nasal, maintaining instead that they are realizations of /ɛ/ and /ɔ/ followed by a nasalized glide (for discussion see Gussman, 2007). 3 The affricates /ʧ/ and /ʤ/ are attested in French, but only in loan words. 4 Alveolar trill.
Sources. French data adapted from Tranel, 1987; Polish data adapted from Gussman, 2007.

Concerning segmental inventories (see Table 1), French has a relatively complex vocalic system that includes 12 oral vowels and four nasal vowels. Polish, on the other hand, comprises six oral vowels and two nasal vowels. As noted by Gussman (2007), however, what Polish may lack in its limited vocalic inventory, it more than makes up for in its consonantal system. The consonantal inventories of the two languages differ largely as well. French comprises 17 consonantal segments and three glides, while Polish boasts 27 consonantal segments and two glides, including an extremely rich inventory of fricatives and affricates. In addition, Polish consonants are systematically palatalized in certain environments; some phonologists argue that palatalized consonants count as separate phonemes, while others argue that palatalization is merely allophonic variation.

At the suprasegmental level, French and Polish share the prosodic characteristic of fixed stress. While both languages share fixed stress, however, they differ in where stress falls. Stress accent in French (mainly signaled by duration) consistently falls on the last syllable of a word in isolation or on the last syllable of an utterance, while Polish stress (mainly signaled by F0) falls on the penultimate syllable of a word. Polish exhibits some leniency in the displacement of stress as evidenced by loan words from Greek and Latin that carry stress on the antepenultimate syllable, e.g. 'fizyka 'physics' or re'publika 'republic.' Words that are borrowed into French from other languages, on the other hand, are without exception regularized to conform to the French pattern.

Regarding the rhythmic classification of the two languages, French is considered to be a classic example of a syllable-timed language (Mehler et al., 1981), in that its rhythm is based on a regular distribution of roughly equally weighted syllables and a lack of vowel reduction. Research on the classification of Polish rhythm, however, is mixed as to whether its rhythm is syllable-timed or stress-timed. Like French, Polish lacks vowel reduction. Syllable-timed languages tend to have relatively simple syllable structures and full, unreduced vowels, while stress-timed languages allow for more complex syllabic structure and vowel reduction in weak syllables, resulting in greater overall durational variability between consonantal and vocalic segments in stress-timed languages than in syllable-timed languages. Exploiting this fact, Ramus et al. (1999) measured durational ratios of segments in eight languages from naturally produced corpora.
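The durational measures behind this line of work (Ramus et al., 1999) are simple to compute from a labeled recording: %V, the proportion of utterance duration that is vocalic, and ΔC, the standard deviation of consonantal interval durations. The sketch below uses invented interval durations purely for illustration.

```python
from statistics import pstdev

def rhythm_metrics(intervals):
    """intervals: list of (type, duration_s) pairs, type 'C' or 'V'.
    Returns %V (proportion of vocalic duration) and deltaC (standard
    deviation of consonantal interval durations), the measures used
    by Ramus et al. (1999) to compare rhythm across languages."""
    v = [d for t, d in intervals if t == "V"]
    c = [d for t, d in intervals if t == "C"]
    percent_v = 100 * sum(v) / (sum(v) + sum(c))
    delta_c = pstdev(c)
    return percent_v, delta_c

# Invented C/V intervals for a short utterance (durations in seconds):
utterance = [("C", 0.09), ("V", 0.11), ("C", 0.07), ("V", 0.12),
             ("C", 0.15), ("V", 0.05), ("C", 0.06), ("V", 0.10)]
pv, dc = rhythm_metrics(utterance)
print(f"%V = {pv:.1f}, deltaC = {dc:.3f}")
```

On Ramus et al.'s data, syllable-timed languages cluster at high %V and low ΔC, stress-timed languages at the opposite corner; Polish's unusual combination of values is what resists the traditional two-way classification.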

They found that, as predicted, languages in different rhythmic classes were easily discriminated from one another (e.g. English and Spanish), while languages traditionally considered to be stress-timed patterned together (e.g. English, Dutch) and syllable-timed languages also patterned together (e.g. French, Spanish, Italian). Polish patterned differently from both groups. The authors take this to indicate that Polish cannot be classified along the traditional distinction between syllable-based and stress-based rhythm. Ramus et al. (2003) explored whether naive listeners can differentiate languages based solely on the durational ratios of consonants and vowels in synthesized speech. Perceptual data confirmed this hypothesis. Participants easily discriminated Polish from Spanish, which researchers take to indicate that Polish is not a syllable-timed language. However, it was also easily discriminated from English (although somewhat less easily than from Spanish), leading the authors to conclude that Polish is neither stress-timed nor syllable-timed.

II Method

An intensive five-day Polish course, taught by a native speaker of Polish, was conducted in Paris, France. The learning environment represented an authentic instructed language-learning situation using a communication-based method that excluded all use of metalanguage as well as explicit explanations of grammar and pronunciation. In order to control the input received by learners and the frequency of lexical items, a teaching script was strictly followed by the instructor. The course was recorded, filmed and subsequently transcribed in its entirety in CHAT format of the CHILDES programs (MacWhinney, 2000). All learners received a total of 6.5 hours of instruction in Polish. In addition, learners were asked not to consult Polish dictionaries, grammar books, or any outside input (spoken or written) for the duration of the data collection period.

1 Participants

Eighteen native speakers of French were selected by means of a questionnaire and an interview with respect to specific criteria. The average age of participants was 21.2 years (range 19–27). None had any previous knowledge of Polish or other Slavic languages.2 All participants reported English as their L2 and a Romance language as their third language (L3). Polish, the target language of the study, was the learners' fourth language (L4). All were remunerated for their participation.

2 Materials

A list of 16 words in Polish (see Appendix 1) was compiled according to two criteria: transparency with respect to the L1 (French) and the word's frequency in the classroom input. Transparency was measured independently, based on the judgments of 13 native speakers of French who did not participate in the Polish course and who had no previous knowledge of Slavic languages.3 Participants in the transparency test heard a list of 71 words presented aurally and were asked to give a translation in French to the best of their ability.

Following Rast and Dommergues (2003), words with 0 correct translations across participants were classified as low transparency (LT) and words with more than 50% correct translations were classified as high transparency (HT).4 Each test word was further classified as low frequency (LF) if the word was completely absent from the classroom input (0 tokens) and high frequency (HF) if the word appeared more than 20 times in the classroom input. Test words were counterbalanced across the frequency and transparency categories; four words appeared in each combination of categories (HT/HF, HT/LF, LT/HF and LT/LF). All test words were of two or three syllables and carried stress on the penultimate syllable.5

48 test sentences were created ranging from 20–25 syllables, which included each of the 16 test words in three different positions: initial (IP), medial (MP), and final (FP). Care was taken to avoid subordination or other syntactic structures that may introduce a pause before or after test words in the sentences. Thirty-three distracter sentences of 20–25 syllables were also created. All sentences and words were recorded by a female native speaker of Polish in a sound-treated booth.

3 Procedures

Participants were tested at two time intervals throughout the course: pre-exposure (T1, 0 hours of instruction) and after 6.5 hours of exposure (T2). The experimental procedure was loosely based on Carroll (2006). The experimental protocol was created using E-Prime experimental software (Schneider et al., 2002) and presented on either laptop or desktop computers. Stimuli were presented binaurally through headphones. In each experimental trial, participants heard a sentence in Polish followed immediately by the word 'OK'.6 They then heard a Polish word in isolation. Their task was to report whether the word was present or not in the sentence they had heard by pressing on the computer keyboard either (1) or (2). Stimuli were presented in randomized order, along with 11 additional distracter words (presented three times each) that were not present in the test sentences. No response limit was set; participants were instructed to respond quickly, though not so quickly as to sacrifice accuracy.7 Participants completed a training portion (10 trials) before beginning the experimental portion (81 trials) in order to familiarize them with the procedure. Items included in the training portion were not included in the experimental portion. Each testing session lasted approximately 15 minutes.

III Results

Mean accuracy scores at T1 (0 hours of exposure) were examined using factorial analyses of variance (ANOVA) according to Transparency (LT and HT) and utterance Position (IP, MP, and FP). Testing at T1 revealed a significant effect of Transparency: F(1, 34) = 59.30, p < .0001. HT words were recognized better than LT words (88.4% versus 63.7%, respectively). A test word's Position in the utterance also had a significant effect on recognition accuracy at T1: F(2, 51) = 56.37, p < .0001. Words were recognized in IP 75.3% of the time, in MP 56.7% of the time, and in FP 96.4% of the time. Post-hoc analyses (Scheffé) demonstrated that the difference was significant among all three positions.
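The frequency classification described above (LF = absent from the input, HF = more than 20 tokens) is straightforward to derive from a transcribed course such as the CHAT transcript mentioned in the Method section. The sketch below uses an invented toy transcript; the function and word list are illustrative, not the study's materials.

```python
from collections import Counter

def classify_frequency(transcript_tokens, test_words, hf_cutoff=20):
    """Classify test words as HF (> hf_cutoff tokens in the input) or
    LF (absent from the input), mirroring the study's design."""
    counts = Counter(w.lower() for w in transcript_tokens)
    labels = {}
    for w in test_words:
        n = counts[w]  # Counter returns 0 for unseen words
        if n == 0:
            labels[w] = "LF"
        elif n > hf_cutoff:
            labels[w] = "HF"
        else:
            labels[w] = "neither"  # would not qualify for the test list
    return labels

# Toy transcript: 'profesor' appears 25 times, 'lekarz' never.
transcript = ["profesor"] * 25 + ["jest", "to"] * 40
print(classify_frequency(transcript, ["profesor", "lekarz"]))
# → {'profesor': 'HF', 'lekarz': 'LF'}
```

Words falling between the two cutoffs are excluded by design, which is what allows frequency to be treated as a clean two-level factor in the analyses.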

There was also a significant interaction between Transparency and Position at T1: F(2, 34) = 31.91, p < .0001, indicating that the effect of Transparency was not equivalent in all utterance positions. Post-hoc analyses showed that the effect of transparency was stronger in IP and in MP than in FP (possibly due to a ceiling effect in that FP words were initially recognized with 96.4% accuracy). No further significant interactions were observed.

Mean accuracy scores were then analysed for T2 (6.5 hours of exposure). Testing at T2 again revealed a significant effect of Transparency: F(1, 34) = 31.25, p < .0001. HT words were again recognized better than LT words (95.9% versus 82.2%, respectively). A test word's position in the utterance also had a significant effect on recognition: F(2, 34) = 18.97, p < .0001. At T2, words were recognized in IP 88.8% of the time, in MP 75.2% of the time, and in FP 100% of the time. Further post-hoc analyses also showed that the difference was again significant between all positions. There was additionally a significant interaction between Transparency and Position at T2: F(2, 34) = 7.38, p = .0026, again suggesting that the effect of Transparency was not equivalent in all utterance positions at T2. Further post-hoc analyses showed again that the effect of transparency was stronger in IP and in MP than in FP. In addition, LF words and HF words were recognized equally well (87.9% and 88.0%, respectively) after 6.5 hours of exposure. No further significant interactions were revealed.

Word recognition performance at the two test Sessions (T1 and T2) was subsequently compared using a repeated-measures ANOVA, treating Transparency, Frequency, Position, and Session as within-participant variables. Overall mean accuracy scores at T1 (76.0%) and T2 (87.9%) revealed a significant effect of Session: F(1, 17) = 43.34, p < .0001. Participants significantly improved in the recognition of test items after 6.5 hours of exposure. No effect of a test word's frequency in the input was observed, F(1, 17) = .97, n.s. A significant interaction was further observed between Session and Transparency, F(1, 17) = 21.59, p = .0002, indicating that the effect of transparency was not equivalent at the two test sessions. Post-hoc analyses showed that sensitivity to LT words increased significantly from T1 to T2, while sensitivity to HT words did not. Additionally, a significant interaction was observed between Session and Position, F(2, 102) = 8.34, p = .0004, indicating that the effect of a word's position in the utterance was also not equivalent at the two test sessions. Post-hoc analyses revealed that words in IP and MP were recognized significantly better at T2 than at T1, while words in FP showed no significant improvement. Finally, there was a significant interaction between Session, Transparency and Position, F(2, 102) = 14.17, p < .0001, no doubt due to the fact that significant improvement was observed in LT words in IP and MP, but not in FP. This interaction suggests that the effect of Transparency was not equivalent in all utterance positions from T1 to T2. No further significant interactions were observed. Mean accuracy rates at T1 and T2 are presented in Table 2.

IV Discussion

The current experiments explore what information is available in the acoustic signal that will aid the adult learner in extracting lexical forms from running speech at first exposure, i.e. before the TL lexicon and phonological system have been acquired. At T1, before any exposure to Polish, participants performed well above chance in the recognition of Polish words (76% mean accuracy), demonstrating that learners come to the table with efficient perceptual tools already in place.8
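The Session comparison reported above can be illustrated with a minimal repeated-measures computation. With only two within-subject levels (T1 vs. T2), a one-way repeated-measures ANOVA reduces to a paired t-test (F = t²). The per-participant accuracy scores below are invented for illustration and are not the study's data.

```python
def rm_anova_two_conditions(t1, t2):
    """One-way repeated-measures ANOVA with two within-subject levels.
    Equivalent to a paired t-test on the per-subject differences,
    reported as F = t**2 with df = (1, n - 1)."""
    assert len(t1) == len(t2)
    n = len(t1)
    diffs = [b - a for a, b in zip(t1, t2)]
    mean_d = sum(diffs) / n
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)  # sample variance
    t = mean_d / (var_d / n) ** 0.5
    return t * t, 1, n - 1  # F, df1, df2

# Invented per-participant proportion correct at T1 and T2:
t1 = [0.70, 0.78, 0.74, 0.81, 0.69, 0.77]
t2 = [0.84, 0.88, 0.85, 0.93, 0.80, 0.90]
f, df1, df2 = rm_anova_two_conditions(t1, t2)
print(f"F({df1}, {df2}) = {f:.2f}")
```

A large F here simply reflects that every invented participant improved by a similar amount; the study's actual Session statistics were computed over 18 participants with all four factors in the model.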

Table 2. Mean accuracy (percentage) in word recognition task at T1 and T2 according to Frequency, Transparency, and utterance Position.

                              T1      T2
Frequency       High          76.1    88.0
                Low           75.9    87.9
Transparency    High          88.4    95.9
                Low           63.7    82.2
Position        Initial       75.3    88.8
                Medial        56.7    75.2
                Final         96.4    100.0
Global accuracy               76.0    87.9

We discuss below the exploitation of each of the three factors examined here.

1 Transparency

HT words (e.g. profesor 'professeur' / 'professor') were extracted more easily from the speech stream than LT words (e.g. lekarz 'médecin' / 'doctor') at both test times. Before any exposure to the Polish language, learners were able to use the transparency of items to recognize lexical forms in running speech, suggesting that learners may be highly dependent on phonetic and lexical forms already established in the L1. In other words, the phonetic forms of transparent words in Polish appear to be sufficient to activate L1 forms in the mental lexicon from the very first exposure. At T2 (6.5 hours of exposure), the effect of Transparency was again seen. What is particularly striking is the fact that significant improvement from T1 to T2 was observed in the recognition of LT words, but not in the recognition of HT words. This discrepancy could be attributed to a possible ceiling effect in that HT words were recognized extremely effectively at both test sessions (88.4% and 95.9%, respectively) and therefore did not allow as much room for improvement as LT words. A further possibility is that the discrepancy is due to increased sensitivity on the part of participants to the phonological system of Polish. The fact that recognition of LT words (i.e. including those that were absent from the input) increased significantly, while recognition of HT words did not, would lead us to conclude that participants are acquiring sensitivity to general phonological forms and/or prosodic patterns of Polish rather than to specific lexical items that are acquired through repeated exposure. This hypothesis is discussed in further detail below.

2 Frequency

The frequency of a word in the input did not play a significant role in the recognition of individual lexical items. Specifically, words that were frequent in the input were not recognized significantly better than those heard only during the administration of the test.

Given the large quantity of research that provides evidence for a strong role for frequency, this finding may seem surprising. Frequency effects have been studied extensively in psycholinguistics and second language acquisition (for a review, see Ellis, 2002). Slobin (1985) proposes that frequency involves taking note of ‘sameness’, or rather ‘familiarity and unfamiliarity’. He further notes that ‘the organism keeps track of frequency of patterns in experience, with some regularity, with automatic capacities to strengthen the traces of repeated experience and to more readily retrieve frequent and recent information’ (Slobin, 1985: 1165–66). Gass and Mackey (2002: 257) address the complexity of frequency effects and highlight the importance of several central issues: How do frequency effects interact with other aspects of the second language acquisition process, and when and under what conditions do they come into play? Likewise, when and under what conditions do they not play a role?

Our findings suggest that frequency effects – when measured in terms of repetitions of lexical items during intensive language instruction – did not play a role in the overall ability of participants to recognize words in the speech stream. The results presented here suggest that frequent exposure to test words at this early stage was not sufficient for participants to build lasting mental representations of these items. This finding further refines the results of Rast and Dommergues (2003), where a frequency effect was found after 8 hours of exposure but not after 4 hours. Our results, taken together with those of Rast and Dommergues (2003), suggest that 6.5 hours of intensive language instruction (1.5 hours/day) or 8 hours of extensive instruction (1.5 hours/week) is sufficient exposure for the learner to begin to extract lexical items from running speech, but that recognition accuracy is not specifically based in repeated exposure. This finding is further in line with research on artificial language learning by Endress and Bonatti (2007), who showed that word forms can be segmented after only two minutes of exposure, but that more exposure is required to create representations of words. These authors suggest that there may in fact be two separate learning mechanisms: one that rapidly extracts structural information (such as boundary cues) from the speech signal, and another that operates more slowly and that computes the distributional properties within the structure.

3 Utterance position

Recognition results with respect to a word's position in an utterance clearly point to a learner's reliance on the edges of prosodic domains in the recognition of TL lexical items, providing evidence that prosodic boundaries are highly salient for first exposure learners. The effect of Position was consistent at T1 and T2: words in both initial and final position were recognized more readily than words in medial position. Several observations can be made. Recognition of MP words, regardless of their transparency or frequency, increased more than recognition of both IP and FP words from T1 to T2, an effect which we again interpret to indicate increased perceptual sensitivity to the Polish phonological system, which allowed participants to better break into the signal. As mentioned above, words in final position were recognized better than words in both initial and medial position.

The finding that words in initial and final position were better recognized than those in medial position is in line with Rast and Dommergues' (2003) sentence repetition results. However, unlike the current study, Rast and Dommergues found no significant difference between the repetition of words in initial and those in final position, which may be evidence of differing processing strategies in production as opposed to perception. It should also be noted that the current results concerning a word's position in an utterance are contra those of Barcroft and VanPatten (1997), who found differential preference for the left edge of an utterance over the right edge of an utterance.

While both initial and final utterance position can facilitate segmentation in that either the right or left edge of an item is necessarily marked by silence, we see two further potential reasons why words may be recognized better in final position. First is an effect of acoustic and/or working memory in that participants could more easily keep an acoustic trace of a word in final position in memory than a word in medial or initial position. The test sentences employed in the current study contained 20–25 syllables, with an average duration of 4.38 sec, a relatively long sentence length in naturally produced speech. If participants are relying heavily on acoustic memory in the recognition of words, it is reasonable to assume that words in final position hold a privileged position, both because the time needed to retain the form before responding is much shorter and because working memory is not further encumbered by incoming material. Second is a possible effect of phrase-final lengthening, which could render words in utterance-final position easier to recognize. Post-hoc analyses confirmed that words produced in final position (mean 692 msec) were significantly longer than the same word in initial (mean 551 msec) and medial (mean 510 msec) positions: F(2, 45) = 19.32, p < .0001. Analyses showed no significant durational differences between initial and medial words. This would in effect give listeners a double cue in that not only is the right edge of the word marked by silence, but the word itself is longer and therefore more acoustically salient than the same word in other positions. Either of these two factors, or a combination of the two, could render words in final position more easily recognizable than words in both initial and medial position.
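The phrase-final lengthening pattern can be illustrated with a short sketch. The token durations below are invented to mirror the reported position means (roughly 551, 510 and 692 msec); this is not the study's measurement script.

```python
# Illustrative sketch (invented token durations in msec, not the study's data):
# compare word durations by utterance position to check for phrase-final
# lengthening.
from statistics import mean

durations = {
    "initial": [540, 560, 555, 549],
    "medial":  [505, 515, 508, 512],
    "final":   [690, 700, 685, 695],
}

means = {pos: mean(vals) for pos, vals in durations.items()}
# Ratio of final-position duration to the non-final average: the lengthening cue.
lengthening = means["final"] / mean(durations["initial"] + durations["medial"])
print(means)                  # per-position mean durations
print(round(lengthening, 2))  # final words roughly 1.3x longer in this toy data
```

In a real analysis, the per-position means would of course be compared inferentially (as in the F test reported above) rather than by a raw ratio.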
V General discussion

What elements can adult learners use to break into a novel acoustic signal, transforming it from a stream of incomprehensible noise into a sequence of neatly segmented, recognizable sound forms? The current study has demonstrated a clear role of both utterance position and lexical transparency in a learner's ability to recognize words at first exposure. The results demonstrate rapid improvement in the learners' ability to break into running speech in the TL after limited exposure.

One possibility as to why the learners of this study may have developed sensitivity to the Polish phonetic system so rapidly and effectively is based in general language-learning strategies. Given that all of the learners in the current study had previous experience with processing a novel acoustic stream, they no doubt had certain strategies already in place. As noted above, all of the participants reported English as their L2 and a Romance language as their L3, and therefore they were all experienced language learners. A growing body of research has found this experience to be beneficial to general language learning (see, for example, De Angelis, 2007).

One further possibility is that participants gained sensitivity to the Polish phonological system, which allowed them to better segment the signal. If improvement in recognition from T1 to T2 were based on the acquisition of individual lexical entries through exposure to input, we would expect HF words to be recognized better than LF words at T2, which was not the case. Given that input frequency did not have an effect on recognition, we posit that participants did not build lasting mental representations of individual lexical entries that they were exposed to, but rather that they acquired prosodic or segmental information specific to Polish. While improved recognition of words due to increased sensitivity to the Polish phonological system is plausible, the current results do not allow us to pinpoint whether phonological knowledge was acquired at the segmental level, at the suprasegmental level, or both.

Specifically regarding the improved recognition of LT words, one possibility is that participants gained sensitivity to the Polish phonemic inventory. The LT words contain segments (e.g. /ɨ, ɕ, ʐ/ and palatalized /p, f, l/) as well as consonant clusters (e.g. /ɕpʲ, ɕfʲ/) that are not attested in French. Therefore, if participants gained sensitivity to these individual segments or clusters, LT words could feasibly be easier to extract from the sentences. This hypothesis is further supported by the fact that the recognition of LT words improved regardless of their frequency in the input.

At the suprasegmental level, increased sensitivity to the distribution of stress in Polish could also have played a role in word recognition. If participants gained sensitivity to the overall rhythm of Polish, it would follow that they became sensitive to the fact that stress in Polish words falls almost exclusively on the penultimate syllable of a word, which would in effect signal to the listener the right edge of the word (the syllable following the stressed syllable). Given the regular distribution of stress, it is reasonable to assume that a segmentation strategy that exploits stress placement in the localization of word boundaries would be efficient. This information could help learners extract the test words used in the current study in that all 16 test words follow this stress pattern. Additionally, all of the test words consisted of two or three syllables, and there was no possibility of secondary stress placement within words. Thus, not only does penultimate stress help learners locate the right edge of the word, but the length of the word aids learners in finding the left edge of the word in that it either immediately precedes the stressed syllable (in two-syllable words) or is located one syllable to the left of the stressed syllable (in three-syllable words). The recognition of stress would therefore constitute an efficient strategy for the localization of word boundaries and the extraction of word forms in the current stimuli.

Segmentation strategies based on the regular distribution of stress have been demonstrated in languages such as English (Cutler and Butterfield, 1992), and could in fact be an efficient strategy in Polish in that stress is fixed and therefore even more regular than in English. We are not aware of any research that specifically addresses rhythmically-based speech segmentation strategies in Polish (by either native or non-native speakers), however, and the mixed nature of Polish rhythmic structure (Polish shows characteristics of both syllable-timed languages and stress-timed languages) means that any supposition should be approached with caution.
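The boundary-finding strategy just described can be made concrete with a small sketch (our own illustrative code, not a model tested in the study): assuming fixed penultimate stress, detecting a stressed syllable immediately locates the right edge of the word one syllable later.

```python
# Hypothetical sketch: exploiting fixed penultimate stress to place word
# boundaries. Input: syllables as (text, is_stressed) pairs; output: indices
# after which a right-edge word boundary is predicted. Assumes every word has
# penultimate stress, as in the study's two- and three-syllable test items.

def predict_boundaries(syllables):
    """Predict right-edge word boundaries from stress placement.

    A stressed syllable is penultimate, so the word ends after the
    following syllable: a boundary is placed after position i + 1
    for every stressed syllable at position i.
    """
    boundaries = set()
    for i, (_, stressed) in enumerate(syllables):
        if stressed and i + 1 < len(syllables):
            boundaries.add(i + 1)  # boundary after the post-stress syllable
    return sorted(boundaries)

# 'profesor pracuje' -> pro-FE-sor pra-CU-je (penultimate stress on each word)
sylls = [("pro", False), ("fe", True), ("sor", False),
         ("pra", False), ("cu", True), ("je", False)]
print(predict_boundaries(sylls))  # -> [2, 5]: after 'sor' and after 'je'
```

Left edges then follow from word length: for the two- and three-syllable items used here, the left edge lies one or two syllables before the stressed syllable.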
It could be argued, however, that the particular language pairing employed in the current study, L1 French and TL Polish, is problematic for a theory in which first exposure participants are using stress placement in the segmentation of running speech. Previous research on the perception of stress has suggested that speakers of French exhibit a certain ‘deafness’ to stress stemming from the lack of lexical stress in French. It has been proposed that (monolingual) French speakers have never acquired this parameter in their native phonology and therefore have particular difficulty perceiving it in other languages (Dupoux et al., 1997). One could argue that if French speakers are transferring their L1 pattern of syllable-based segmentation (and lack of lexical stress), they may not be able to efficiently parse the signal based on stress placement in Polish. In response to this, we would argue that, given the prosodic structure of Polish, the distribution of stress may be more accessible to speakers of French in that, as explained above, Polish, like French, does not have vowel reduction, therefore rendering the length and weight of syllables more regular than in a stress-timed language such as English. This fact may render the edges of syllables (and words) more available to speakers of French, and may render the perception of stress more accessible.

Our conclusions concerning the acquisition of Polish prosodic structure by participants in this study are purely hypothetical; however, data from infant phonological acquisition would lead us to believe that this is plausible. Speech development in infants progresses in a similar fashion. A large body of work has focused on the notion of ‘prosodic bootstrapping’, which holds that infants first acquire the prosodic structure of a language and then in turn use this knowledge to identify discrete prosodic units, and finally the words and segments that make up these units. Crucially, infants learn to identify cues to prosodic words before they acquire the segmental content of words, the implication being that infants can identify word boundaries before they acquire the words delineated by these boundaries. Work by Jusczyk and Aslin (1995) has shown evidence of speech segmentation in infants at about 6–7 months, before the acquisition of individual lexical items is in place. This conclusion is also based on evidence that infants can discriminate rhythmic classes before they can discriminate lexical items. For example, infants can discriminate between a stress-based language such as English and a mora-based language such as Japanese (Nazzi et al., 1998). However, infants cannot discriminate between languages that exhibit similar metrical structures, for example French and Spanish (Mehler et al., 1988) or English and Dutch (Nazzi et al., 1998), further suggesting that they are attending to the rhythm of the languages and not to individual segments or words. All in all, it seems that rhythmic information may be more useful to participants than segmental information in the signal.

We emphasize that the current study was not specifically designed to test the acquisition of prosody in Polish, and thus our conclusions must remain conjectural at this point. Further research would be required to test whether participants at first exposure are specifically attending to general prosodic characteristics as they break into the TL signal. In other words, while the current data show a rapid increase in the learner's ability to match and extract phonetic forms from the acoustic stream, they do not speak to the learner's capacity to assign meaning to these forms.

As Carroll notes, ‘hearing words is merely a first step in a series of processes which take the speech signal as their input and culminate in an interpretation’ (2004: 228). Therefore, one further question that emerges from the current results is whether participants are not only recognizing phonetic forms, but also attaching meaning to these forms. In other words, are recognition and association of meaning two completely separate processes? Can there be segmentation without recognition, in which unknown words remain arbitrary (yet recognizable) strings of phonemes? Given that the meaning of HT words is (generally) equivalent in both the L1 and the TL, we assume that learners are able to map HT words onto existing lexical entries that include meaning associations; we cannot make the same assumption for the LT words, however. Further research testing different source and target language combinations – as well as research specifically targeted at interactions between the acquisition of phonological systems and form–meaning associations – would be required to address these and other unanswered questions.

What is clear from the current results is that participants significantly improved in their ability to match Polish phonetic forms after just 6.5 hours of exposure. One further avenue for research could include introspective data collection in which participants are asked to reflect upon the particular strategies that they employed in the recognition of words in continuous speech. This type of meta-analysis could contribute to our ability to pinpoint what strategies language learners may be employing during word recognition in the initial exposure to the target language.

Acknowledgements
We wish to express our thanks to those who made this study possible, in particular Marzena Watorek, Maya Hickmann, Ewa Lenart, Paulina Kurzepa, and Sophie Wauquier (Université Paris 8, St-Denis, France). We also thank our anonymous reviewers for their insightful comments.

Funding
This research project was supported by a grant from the Programme d'Aide à la Recherche Innovante (2011–12).

Notes
1. We distinguish ‘transparency’ from ‘cognate’ in order to emphasize our focus on the learner. We are not concerned here with etymology but rather with the ‘psychotypology’ of the learner, i.e. what the learner perceives as the same or different in the L1 and TL (see Kellerman, 1980).
2. Our objective was to create a homogenous group; therefore, candidates whose language background differed significantly from the group profile were not retained for the study. Information about the learners' languages was collected by means of a language questionnaire in which learners rated their own proficiency level in each of their languages. Space limitations prevent us from entering into discussion about the details of learners' background languages and thus from contributing more fully to research on multilingual speech processing.
3. Later versions of Shortlist do incorporate metrically-based segmentation.
4. Frequency measures were calculated based solely on the Polish professor's oral input; all forms of a word were counted regardless of case and gender inflection (e.g. studentką was included in frequency counts for the target item studentem). Participants were also exposed to limited written input in the form of presentation slides and some further aural input in the form of recorded dialogues used in listening comprehension exercises; however, these tokens were not included in the frequency count.

5. The word ‘OK’ was included between the sentence and the test word in order to prevent participants from relying solely on echoic (i.e. acoustic) memory.
6. It should be noted that the input contained varied syntactic structures (SVO, OVS and others), and therefore target items were well distributed across sentence positions in the classroom input.
7. Although frequency counts for each item differed, they were comparable with respect to the two categories of transparency (high and low). With regard to the distribution of frequency over time, all frequent words appeared in the input during at least three of the five class sessions. We also point out that, since participants were tested twice using the same test materials, there was very limited exposure to the eight LF words in that participants were exposed to these items three times in each of the two test sessions (once in each utterance position). Frequency measures were not analysed at T1 given that participants had not yet been exposed to the HF test words.
8. As pointed out by a reviewer, the fact that the learners in this study performed well above chance in the word recognition task before any exposure to the target language could be due to the fact that the participants were all experienced language learners: Polish represented an L4 for all of the participants and therefore these learners already possessed successful language learning strategies. For this reason, generalizations to less experienced learners should be made with caution.

References
Altenberg E (2005) The perception of word boundaries in a second language. Second Language Research 21: 325–58.
Barcroft J and VanPatten B (1997) Acoustic salience of grammatical forms: The effect of location, stress, and boundedness on Spanish L2 input processing. In: Glass WR and Pérez-Leroux AT (eds) Contemporary perspectives on the acquisition of Spanish: Volume 2. Somerville, MA: Cascadilla Press, 109–21.
Carroll S (2004) Segmentation: Learning how to ‘hear’ words in the L2 speech stream. Transactions of the Philological Society 102: 227–54.
Carroll S (2006) The ‘micro-structure’ of a learning problem: Prosodic prominence, attention, segmentation and word learning in a second language. Unpublished paper presented at the Annual Meeting of the Canadian Linguistic Association, York University, Toronto, Ontario, Canada.
Carroll S (2012) First exposure learners make use of top-down lexical knowledge when learning. In: Braunmüller K and Gabriel C (eds) Multilingual individuals and multilingual societies. Amsterdam: John Benjamins, 23–46.
Cutler A (1996) Prosody and the word boundary problem. In: Morgan J and Demuth K (eds) Signal to syntax: Bootstrapping from speech to grammar in early acquisition. Mahwah, NJ: Erlbaum, 87–99.
Cutler A (2001) Listening to a second language through the ears of a first. Interpreting 5: 1–23.
Cutler A and Butterfield S (1992) Rhythmic cues to speech segmentation: Evidence from juncture misperception. Journal of Memory and Language 31: 218–36.
Cutler A and Norris D (1988) The role of strong syllables in segmentation for lexical access. Journal of Experimental Psychology: Human Perception and Performance 14: 113–21.
Cutler A, Mehler J, Norris D and Segui J (1989) Limits of bilingualism. Nature 340: 229–30.
De Angelis G (2007) Third or additional language acquisition. Clevedon: Multilingual Matters.

Dupoux E, Pallier C, Sebastián-Gallés N and Mehler J (1997) A destressing ‘deafness’ in French? Journal of Memory and Language 36: 406–21.
Ellis N (2002) Frequency effects in language processing: A review with implications for theories of implicit and explicit language acquisition. Studies in Second Language Acquisition 24: 143–88.
Endress A and Bonatti LL (2007) Rapid learning of syllable classes from a perceptually continuous speech stream. Cognition 105: 247–99.
Gass S and Mackey A (2002) Frequency effects and second language acquisition: A complex picture? Studies in Second Language Acquisition 24: 249–60.
Gullberg M, Roberts L, Dimroth C, Veroude K and Indefrey P (2010) Adult language learning after minimal exposure to an unknown natural language. Language Learning 60: 5–24.
Gussman E (2007) The phonology of Polish. Oxford: Oxford University Press.
Jusczyk P and Aslin R (1995) Infants' detection of the sound patterns of words in fluent speech. Cognitive Psychology 29: 1–23.
Kahn D (1980) Syllable-based generalizations in English phonology. New York: Garland.
Kellerman E (1980) Œil pour œil. Encrages, Special issue: Acquisition d'une langue étrangère [Foreign language acquisition]: 54–63.
Klein W (1986) Second language acquisition. Cambridge: Cambridge University Press.
Kuhl P (2000) A new view of language acquisition. Proceedings of the National Academy of Science 97: 11,850–57.
MacWhinney B (2000) The CHILDES Project: Tools for analysing talk. Mahwah, NJ: Lawrence Erlbaum Associates.
Mattys S (2003) Stress-based speech segmentation revisited. In: Proceedings of Eurospeech: The 8th annual Conference on Speech Communication and Technology, Geneva, 121–24.
Mattys S (2004) Stress versus coarticulation: Towards an integrated approach to explicit speech segmentation. Journal of Experimental Psychology: Human Perception and Performance 30: 397–408.
McClelland J and Elman J (1986) The TRACE model of speech perception. Cognitive Psychology 18: 1–86.
McQueen J (1998) Segmentation of continuous speech using phonotactics. Journal of Memory and Language 39: 21–46.
Mehler J, Dommergues J-Y, Frauenfelder U and Segui J (1981) The syllable's role in speech segmentation. Journal of Verbal Learning and Verbal Behavior 20: 298–305.
Mehler J, Jusczyk P, Lambertz G, Halsted N, Bertoncini J and Amiel-Tison C (1988) A precursor of language acquisition in young infants. Cognition 29: 143–78.
Nakatani L and Dukes K (1977) Locus of segmental cues to word juncture. Journal of the Acoustical Society of America 62: 714–19.
Nazzi T, Bertoncini J and Mehler J (1998) Language discrimination in newborns: Toward an understanding of the role of rhythm. Journal of Experimental Psychology: Human Perception and Performance 24: 756–77.
Norris D (1994) Shortlist: A connectionist model of continuous speech recognition. Cognition 52: 189–234.
Odlin T (1989) Language transfer: Cross-linguistic influence in language learning. Cambridge: Cambridge University Press.
Ramus F, Nespor M and Mehler J (1999) Correlates of linguistic rhythm in the speech signal. Cognition 73: 265–92.
Ramus F, Dupoux E and Mehler J (2003) The psychological reality of rhythm classes: Perceptual studies. In: 15th International Congress of Phonetic Sciences, Barcelona, 337–42.
Rast R (2008) Foreign language input: Initial processing. Clevedon: Multilingual Matters.

Rast R and Dommergues J-Y (2003) Towards a characterisation of saliency on first exposure to a second language. EUROSLA Yearbook 3: 131–56.
Saffran JR, Aslin RN and Newport EL (1996) Statistical learning by 8-month-old infants. Science 274: 1926–28.
Schneider W, Eschman A and Zuccolotto A (2002) E-Prime user's guide. Pittsburgh, PA: Psychology Software Tools.
Sebastián-Gallés N, Dupoux E, Segui J and Mehler J (1992) Contrasting syllabic effects in Catalan and Spanish: The role of stress. Journal of Memory and Language 31: 18–32.
Shoemaker E (in press) Durational cues to word recognition in spoken French. Applied Psycholinguistics.
Slobin D (1985) Crosslinguistic evidence for the language-making capacity. In: Slobin D (ed.) The crosslinguistic study of language acquisition: Volume II. Hillsdale, NJ: Lawrence Erlbaum, 1157–1256.
Tabossi P, Collina S, Mazzetti M and Zoppello M (2000) Syllables in the processing of spoken Italian. Journal of Experimental Psychology: Human Perception and Performance 26: 758–75.
Tranel B (1987) The sounds of French: An introduction. Cambridge: Cambridge University Press.
Van Zon M and de Gelder B (1993) Perception of word boundaries by Dutch listeners. In: Proceedings of the 3rd European Conference on Speech Communication and Technology, Berlin, 689–92.
Welby P (2007) The role of early fundamental frequency rises and elbows in French word segmentation. Speech Communication 49: 28–48.

Appendix 1. Polish test words (with French and English translations).

High Transparency (HT)
High frequency (HF):
francuski /fran'tsuski/ ‘français’ (French)
profesor /prɔ'fɛsɔr/ ‘professeur’ (professor)
studentem /stu'dɛntɛm/ ‘étudiant’ (student)
fotograf /fɔ'tɔgraf/ ‘photographe’ (photographer)
Low frequency (LF):
dokument /dɔ'kumɛnt/ ‘document’ (document)
lampa /'lampa/ ‘lampe’ (lamp)
plastyk /'plastɨk/ ‘plastique’ (plastic)
ananas /a'nanas/ ‘ananas’ (pineapple)

Low Transparency (LT)
High frequency (HF):
lekarz /'lɛkaʐ/ ‘médecin’ (doctor)
język /'jɛ̃zɨk/ ‘langue’ (language)
pracuje /pra'tsujɛ/ ‘travaille’ (works, 3rd person singular)
Niemcem /'ɲɛmtsɛm/ ‘allemand’ (German)
Low frequency (LF):
śpiewak /'ɕpʲɛvak/ ‘chanteur’ (singer)
świetnie /'ɕfʲɛtɲɛ/ ‘bien’ (well, adverb)
Litwinem /lʲit'finɛm/ ‘lituanien’ (Lithuanian)
lodówka /lɔ'dufka/ ‘frigo’ (refrigerator)