Bilingual processing of ASL–English code-blends: The consequences of accessing two lexical representations simultaneously
Highlights
► We examined the ability of bimodal bilinguals to simultaneously process signs and words. ► No processing cost for production suggests lexical access is non-competitive. ► Comprehension facilitation indicates cross-linguistic/cross-modal lexical integration.
Introduction
Bimodal bilinguals who are fluent in American Sign Language (ASL) and English rarely switch languages, but frequently code-blend, producing ASL signs and English words at the same time (Baker and van den Bogaerde, 2008, Bishop, 2006, Emmorey et al., 2005). For the vast majority of code-blends (>80%), the ASL sign and the English word are translation equivalents; for example, a bimodal bilingual may manually produce the sign CAT1 while simultaneously saying “cat” (Emmorey et al., 2008, Petitto et al., 2001). In contrast, articulatory constraints force unimodal bilinguals to either speak just one language or switch between their languages because it is physically impossible to say two words at the same time (e.g., simultaneously saying mesa in Spanish and table in English). The ability to produce both languages simultaneously introduces a unique opportunity to investigate the mechanisms of language production and comprehension, and raises questions about the possible costs or benefits associated with accessing two lexical representations at the same time.
Bimodal bilinguals’ strong preference for code-blending over code-switching provides some insight into the relative processing costs of lexical inhibition versus lexical selection, implying that inhibition is more effortful than selection. For a code-blend, two lexical representations must be selected (an English word and an ASL sign), whereas for a code-switch, only one lexical item is selected, and production of the translation equivalent must be suppressed (Green, 1998, Meuter and Allport, 1999). If lexical selection were more costly than inhibition, then one would expect bimodal bilinguals to prefer to code-switch. Instead, bimodal bilinguals prefer to code-blend, which suggests that dual lexical selection is less difficult than single lexical selection plus inhibition.
No prior studies have investigated the psycholinguistic mechanisms underlying bimodal bilinguals’ ability to simultaneously produce and comprehend two lexical items, and this unique capacity can provide a new lens into mechanisms of lexical access. For unimodal bilinguals, many studies have documented processing costs associated with switching between languages for both production (e.g., Costa and Santesteban, 2004, Meuter and Allport, 1999) and comprehension (e.g., Grainger and Beauvillain, 1987, Thomas and Allport, 2000). For bimodal bilinguals, however, the potential costs associated with code-blending must involve controlling simultaneous, rather than serial, production or comprehension of lexical representations in two languages. By establishing the costs or advantages of code-blending, we can begin to characterize how bilinguals control two languages that are instantiated within two distinct sensory-motor systems.
To examine the potential processing costs/benefits of code-blending for both language production and comprehension, we conducted two experiments. For production (Experiment 1), we used a picture-naming task and compared naming latencies for ASL signs and English words produced in a code-blend with naming latencies for signs or words produced in isolation. Naming latencies for ASL were measured with a manual key-release and for English by a voice-key response. For language comprehension (Experiment 2), we used a semantic categorization task (is the item edible?) and compared semantic decision latencies for ASL–English code-blends with those for ASL signs and audiovisual English words presented alone.
Code-blend production might incur a processing cost because the retrieval of two lexical representations may take longer than retrieval of a single representation – particularly if dual lexical retrieval cannot occur completely in parallel (i.e., is at least partially serial). On the other hand, production of an English word or an ASL sign could facilitate retrieval of its translation equivalent, thus speeding picture-naming latencies or reducing error rates for signs or words in a code-blend. Translation priming effects have been reported for unimodal bilinguals in picture naming tasks (e.g., Costa & Caramazza, 1999).
For comprehension, a processing cost for code-blending is less likely because the perception of visual and auditory translation equivalents converges on a single semantic concept, which could facilitate rapid comprehension. However, there is some evidence that simultaneous perception of translation equivalents across modalities can interfere with processing (e.g., Duran, 2005, Mitterer and McQueen, 2009). For example, Duran (2005) assessed the simultaneous comprehension of auditory and visual words presented in Spanish and English. Proficient Spanish–English bilinguals heard and read two words simultaneously (e.g., hearing “apple” while seeing the written translation manzana), and then were cued unpredictably to report either what they had heard or what they had read. Bilinguals were slower to respond and made more errors in the simultaneous condition than when processing one language alone. However, such cross-language interference effects may only occur for unimodal bilinguals because written words in one language provide misleading phonological information about the words being spoken in the other language. Thus, while code-blend production might incur a processing cost related to dual lexical retrieval, code-blend comprehension might facilitate lexical access because the phonological representations of signed and spoken languages do not compete. Further, unlike bimodal bilinguals, unimodal bilinguals do not habitually process both languages simultaneously, and thus results from cross-modal comprehension experiments cannot speak to the cognitive mechanisms that might develop to support simultaneous dual language comprehension when it occurs regularly and spontaneously.
An important phenotypic characteristic of natural code-blend production is that the articulation of ASL signs and English words appears to be highly synchronized. For the majority (89%) of code-blends in a sentence context, the onset of an ASL sign was articulated simultaneously with the onset of the associated English word (Emmorey et al., 2008). When naming pictures in an experimental task, participants may also synchronize the production of ASL signs and English words. If so, then bilinguals must wait for the onset of the ASL sign before producing the English word – even if both lexical items are retrieved at the same time. This is because the hand is a larger and slower articulator than the vocal cords, lips, and tongue (the speech articulators), and it takes longer for the hand to reach the target sign onset than for the speech articulators to reach the target word onset. For example, it will take longer for the hand to move from a rest position (e.g., on the response box) to a location on the face than for the tongue to move from a resting position (e.g., the floor of the mouth) to the alveolar ridge. To produce a synchronized code-blend, the onset of speech must be delayed while the hand moves to the location of the sign. Delayed speech during picture-naming would indicate a type of “language coordination cost” for English that reflects the articulatory dependency between the vocal and manual elements of a code-blend.
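The coordination argument above can be sketched as a toy timing model. The millisecond values below are hypothetical, chosen only for illustration (they are not measurements from these experiments): under onset synchronization, the speech onset is buffered until the slower manual articulator reaches the sign's target location.

```python
# Toy model of the "language coordination cost" described above.
# All timing values are hypothetical, for illustration only.

def onset_times(word_ready, sign_ready, hand_travel, mouth_travel):
    """Return (speech_onset, sign_onset) in ms under onset synchronization.

    word_ready / sign_ready: when each lexical item is retrieved.
    hand_travel / mouth_travel: time for the articulator to reach
    the target onset position (hand_travel > mouth_travel, since the
    hand is a larger, slower articulator).
    """
    unsynchronized_speech = word_ready + mouth_travel
    sign_onset = sign_ready + hand_travel
    # Synchronization: speech is held (buffered) until the hand arrives,
    # so both onsets coincide at the later of the two times.
    speech_onset = max(unsynchronized_speech, sign_onset)
    return speech_onset, sign_onset

# Both items retrieved at 600 ms: speech alone could start at 700 ms,
# but in a code-blend it waits for the hand.
print(onset_times(600, 600, 250, 100))  # (850, 850): speech delayed by the hand
```

The key point the sketch makes concrete: even with fully parallel retrieval, synchronized English onsets are delayed relative to English-alone naming, so only ASL latencies cleanly index retrieval cost.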
If lexical onsets are synchronized, then only ASL responses can provide insight into whether dual lexical retrieval incurs a processing cost and, more specifically, whether lexical retrieval is serial or simultaneous during code-blend production. For example, if lexical access is at least partially serial and the English word is retrieved before accessing the ASL sign, then RTs should be slower for ASL signs within a code-blend than for signs produced alone. The English word is expected to be retrieved more quickly than the ASL sign because English is the dominant language for hearing ASL–English bilinguals (see Emmorey et al., 2008).
The possible patterns of results for Experiment 1 (code-blend production) and their corresponding implications are as follows:
1. Longer RTs and/or increased error rates for ASL signs produced in a code-blend than for signs produced alone would indicate a processing cost for code-blending. Longer RTs would also imply that it is not possible to retrieve and select two lexical representations fully in parallel.
2. Equal RTs and equal error rates for ASL alone and in a code-blend would indicate that dual lexical retrieval can occur in parallel during code-blend production with no associated processing cost.
3. Longer RTs for English in a code-blend would signal a dual lexical retrieval cost but may also reflect a language coordination cost, i.e., speech is delayed – perhaps held in a buffer – in order to coordinate lexical onsets within a code-blend.
4. Higher error rates for English words produced in a code-blend than for English words produced alone would indicate a cost for dual lexical retrieval.
Finally, we included a frequency manipulation within the stimuli, which allows us to consider the possible locus of any observed code-blending effects. Specifically, if these effects are modulated by lexical frequency, it would imply that they have a lexical locus (Almeida, Knobel, Finkbeiner, & Caramazza, 2007).
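The locus logic can be illustrated with a small simulation. All effect sizes below are invented for the sketch (they are not the paper's data): a lexically located code-blend cost should be larger for low-frequency items (a condition × frequency interaction), whereas a post-lexical cost should be additive across frequency bins.

```python
# Illustrative simulation of the frequency-locus logic.
# All parameter values (base RT, effect sizes, noise) are hypothetical.
import random
import statistics

random.seed(1)

def simulate_rt(condition, frequency, lexical_locus, n=2000):
    """Draw n naming latencies (ms) under a toy additive model."""
    base = 700
    freq_effect = 60 if frequency == "low" else 0
    blend_cost = 40 if condition == "blend" else 0
    # A lexical locus inflates the blend cost for hard-to-retrieve
    # (low-frequency) items; a post-lexical locus does not.
    interaction = 30 if (lexical_locus and condition == "blend"
                         and frequency == "low") else 0
    return [random.gauss(base + freq_effect + blend_cost + interaction, 50)
            for _ in range(n)]

def interaction_size(lexical_locus):
    """Blend cost for low-frequency minus blend cost for high-frequency items."""
    cost = {}
    for freq in ("low", "high"):
        blend = statistics.mean(simulate_rt("blend", freq, lexical_locus))
        alone = statistics.mean(simulate_rt("alone", freq, lexical_locus))
        cost[freq] = blend - alone
    return cost["low"] - cost["high"]

# Expected pattern: clearly positive under a lexical locus,
# near zero under a post-lexical locus.
print(round(interaction_size(lexical_locus=True)))
print(round(interaction_size(lexical_locus=False)))
```

This is only a sketch of the inference pattern, under the stated toy assumptions; the diagnostic is whether the blend effect grows for low-frequency items, not the particular numbers.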
Section snippets
Participants
Forty ASL–English bilinguals (27 female) participated in Experiment 1. We included both early ASL–English bilinguals, often referred to as Codas (children of deaf adults), and late ASL–English bilinguals who learned ASL through instruction and immersion in the Deaf community. Two participants (one early and one late bilingual) were eliminated from the analyses because of a high rate of sign omissions (>20%).
Table 1 provides participant characteristics obtained from a language history and
Experiment 2: Code-blend perception
To assess code-blend comprehension, we chose a semantic categorization task in which participants determined whether or not a sign, word, or code-blend referred to an item that was edible. We selected the edible/non-edible semantic category because it is a natural and early acquired category that requires lexical access and semantic processing.
General discussion
Together Experiments 1 and 2 represent the first experimental investigation of a behavior that is ubiquitous in communication between speaking–signing bilinguals. In prior work (Emmorey et al., 2008), we suggested that when mixing languages, bimodal bilinguals prefer to code-blend because doing so is easier than code-switching, which involves suppressing the production of one language. The results of our code-blend production study (Experiment 1) support this hypothesis. Although code-blending
Acknowledgments
This research was supported by NIH Grant HD047736 awarded to Karen Emmorey and San Diego State University and NIH Grant HD050287 awarded to Tamar Gollan and the University of California San Diego. The authors thank Lucinda Batch, Helsa Borinstein, Shannon Casey, Rachael Colvin, Ashley Engle, Mary Kane, Franco Korpics, Heather Larrabee, Danielle Lucien, Lindsay Nemeth, Erica Parker, Danielle Pearson, Dustin Pelloni, and Jennie Pyers for assistance with stimuli development, data coding,
References (48)
- et al. Lexical access in Catalan Signed Language (LSC) production. Cognition (2008)
- et al. Lexical access in bilingual speech production: Evidence from language switching in highly proficient bilinguals and L2 learners. Journal of Memory and Language (2004)
- et al. More use almost always means a smaller frequency effect: Aging, bilingualism, and the weaker links hypothesis. Journal of Memory and Language (2008)
- et al. Category interference in translation and picture naming: Evidence for asymmetric connections between bilingual memory representations. Journal of Memory and Language (1994)
- et al. Bilingual language switching in naming: Asymmetrical costs of language selection. Journal of Memory and Language (1999)
- Divided attention: Evidence for coactivation with redundant signals. Cognitive Psychology (1982)
- et al. Simple reaction time and statistical facilitation: A parallel grains model. Cognitive Psychology (2003)
- et al. Bimodal bilinguals reveal the source of tip-of-the-tongue states. Cognition (2009)
- et al. Gesture and the process of speech production: We think, therefore we gesture. Language and Cognitive Processes (2000)
- et al. The locus of the frequency effect in picture naming: When recognizing is not enough. Psychonomic Bulletin & Review (2007)
- Codemixing in signs and words in the input to and output from children
- Timed picture naming in seven languages. Psychonomic Bulletin & Review
- A lexicon of multiple origins: Native and foreign vocabulary in American Sign Language
- PsyScope: A new graphic interactive environment for designing psychology experiments. Behavior Research Methods, Instruments, & Computers
- Is lexical selection in bilingual speech production language-specific? Further evidence from Spanish–English and English–Spanish bilinguals. Bilingualism: Language and Cognition
- Bilingual visual word recognition and lexical access
- Interlingual homograph recognition: Effects of task demands and language intermixing. Bilingualism: Language and Cognition
- Translation priming between the native language and a second language: New evidence from Dutch–French bilinguals. Experimental Psychology
- Bimodal bilingualism: Code-blending between spoken English and American Sign Language
- Bimodal bilingualism. Bilingualism: Language and Cognition
- Lexical recognition in sign language: Effects of phonetic structure and morphology. Perceptual and Motor Skills
- The effects of restricting hand gesture production on lexical retrieval and free recall. The American Journal of Psychology