5910 Capturing the Structure of American Sign Language

Sunday, February 19, 2012: 8:30 AM
Room 110 (VCC West Building)
Martha Tyrone, Long Island University, Brooklyn, NY
American Sign Language (ASL) is a natural, signed language used by Deaf people in the United States and Canada. (The term ‘Deaf’ refers to the community of ASL users rather than to clinical hearing loss.) Unlike spoken languages, signed languages use the hands and arms as primary articulators, and signs are perceived visually rather than auditorily. While researchers have been studying the linguistic structure of ASL for several decades, investigation of the physical/articulatory structure of the language has been extremely limited. This study examines ASL within the theoretical framework of articulatory phonology, which proposes that the basic units of speech are articulatory gestures; on this theory, the articulatory and linguistic structures of spoken language are interrelated. We hypothesize that articulatory gestures are also the structural primitives of signed language, and we are investigating what those gestures are and how they are timed. For this study, sign production data were collected with an optical motion capture system that tracked the positions of the arms, head, and body over time as Deaf signers produced ASL phrases. The signers were asked to produce specific target signs in various phrase positions. The target signs included movements either toward or away from the body, allowing us to compare superficially similar but linguistically distinct movement phases: the arm moves toward a location on the body, remains at that location for some time, and then moves away from the body. Our findings suggest that signs, like spoken words, are lengthened at phrase boundaries in a manner consistent with the predictions of a task-dynamic model of prosodically induced slowing. In the long run, these findings could assist with the automatic parsing of American Sign Language.
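The boundary-lengthening claim rests on task dynamics, in which each articulatory gesture is commonly formalized as a critically damped point attractor driving an articulator toward a target, and prosodic boundaries locally slow the gestural clock. The Python sketch below illustrates that idea only; it is not the study's model, and all function names, parameter values, and the 60% clock rate are illustrative assumptions.

```python
import numpy as np

def gesture_trajectory(x0, target, stiffness, dt=0.001, t_max=1.0, clock=None):
    """Simulate one gesture as a critically damped point attractor:
    x'' = -k (x - target) - 2*sqrt(k) * x'.
    `clock` optionally maps time t to a local rate in (0, 1]; values
    below 1 slow the gesture, mimicking boundary-adjacent lengthening."""
    x, v = float(x0), 0.0
    traj = []
    for i in range(int(t_max / dt)):
        rate = clock(i * dt) if clock else 1.0
        k = stiffness * rate ** 2        # slowing the clock rescales stiffness
        b = 2.0 * np.sqrt(k)             # critical damping: approach, no overshoot
        a = -k * (x - target) - b * v    # task-dynamic equation of motion
        v += a * dt
        x += v * dt
        traj.append(x)
    return np.array(traj)

def settle_time(traj, x0, target, dt=0.001, tol=0.10):
    """Time for the trajectory to close 90% of the distance to the target."""
    remaining = np.abs(traj - target) / abs(target - x0)
    return np.argmax(remaining < tol) * dt

# Phrase-medial gesture: full clock rate throughout.
medial = gesture_trajectory(x0=0.0, target=1.0, stiffness=400.0)
# Phrase-final gesture: clock slowed to 60% near a hypothetical boundary.
final = gesture_trajectory(x0=0.0, target=1.0, stiffness=400.0,
                           clock=lambda t: 0.6 if t > 0.15 else 1.0)

print(f"medial: {settle_time(medial, 0.0, 1.0):.3f} s")
print(f"final:  {settle_time(final, 0.0, 1.0):.3f} s  (lengthened at boundary)")
```

Under this kind of model, the same toward-the-body or away-from-the-body movement takes measurably longer when its clock is slowed near a phrase boundary, which is the pattern the motion-capture comparison of phrase-medial and phrase-final target signs is designed to test.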