Language Learning in Deaf Children: Integrating Research on Speech, Gesture, and Sign

Sunday, February 21, 2010: 3:30 PM-5:00 PM
Room 2 (San Diego Convention Center)
How do infants who cannot hear learn language? The two dominant approaches to this problem are typically considered in isolation. On the one hand, infants exposed to signed languages such as American Sign Language readily learn a native sign language via the visual modality. On the other hand, infants exposed to spoken languages such as English are often provided with devices such as cochlear implants, which facilitate the acquisition of a native spoken language. However, there has been remarkably little cross-talk between investigators focused on these two modes of language learning. This symposium considers the nature of the language input to deaf infants and young children from both the visual and auditory perspectives. Drawing on studies of deaf infants with cochlear implants, it examines how these infants learn new spoken words and recognize familiar ones, given the degraded acoustic signal that cochlear implant devices provide. The types of visual input provided to deaf infants in the form of gesture across cultures will also be considered, along with the implications of early spoken and signed language input for later language acquisition. By bringing together new research findings on the nature of the input to language learning in deaf infants and young children, this symposium will suggest important dimensions of experience that lead to successful language outcomes.
Organizer:
Jenny Saffran, University of Wisconsin
Moderator:
Jenny Saffran, University of Wisconsin
Discussant:
Rachel Mayberry, University of California
Speakers:
Derek Houston, Indiana University School of Medicine
Word Learning in Deaf Children with Cochlear Implants
Tina Grieco-Calub, Northern Illinois University
Processing of Spoken Words by 2-Year-Old Children Who Use Cochlear Implants