Friday, February 15, 2013
Room 304 (Hynes Convention Center)
Signed languages provide a powerful tool for investigating the neurobiology and cognitive architecture of human language. Signed languages differ dramatically from spoken languages with respect to the linguistic articulators (the hands vs. the vocal tract) and the primary perceptual system required for comprehension (vision vs. audition). Despite these biological differences, research over the past 50 years has identified striking parallels between sign and speech, including a level of form-based (phonological) structure, the same developmental milestones (including manual babbling), and left-hemisphere control of both signing and speaking. These similarities provide a strong basis for comparison and serve to highlight universal properties of human language. In addition, differences between sign and speech can be exploited to discover how the input-output systems of language shape its psycholinguistic and neurocognitive underpinnings. We show that signed and spoken languages differ with respect to how language output is monitored (visual vs. auditory feedback) and how the brain processes spatial language (e.g., descriptions of locations or movements through space). In general, we find that modality-specific differences are observed at the interface between sensory systems and language.