Date of Award
Winter 11-9-2011
Degree Type
Dissertation
Degree Name
Doctor of Philosophy (PhD)
School
School of Computing
First Advisor
Rosalee Wolfe
Second Advisor
John McDonald
Third Advisor
Glenn Lancaster
Abstract
Computer-generated three-dimensional animation holds great promise for synthesizing utterances in American Sign Language (ASL) that are not only grammatical but also believable to members of the Deaf community. Animation poses several challenges stemming from the massive amounts of data necessary to specify the movement of three-dimensional geometry, and no current system facilitates the synthesis of nonmanual signals. However, the linguistics of ASL can aid in surmounting these challenges by providing structure and rules for organizing the data.
This work presents a first method for representing ASL linguistic and extralinguistic processes that involve the face. Any such representation must be capable of expressing the subtle nuances of ASL. Further, it must be able to represent co-occurrences, because many ASL signs require that two or more nonmanual signals be used simultaneously. In fact, simultaneity of multiple nonmanual signals can occur on the same facial feature. Additionally, such a system should allow both binary and incremental nonmanual signals to display the full range of adjectival and adverbial modifiers.
Validating such a representation requires both affirming that nonmanual signals are indeed necessary in the animation of ASL and evaluating the effectiveness of the new representation in synthesizing nonmanual signals. In this study, members of the Deaf community viewed animations created with the new representation and answered questions concerning the influence of selected nonmanual signals on the perceived meaning of the synthesized utterances.
Results reveal not only that the representation is capable of effectively portraying nonmanual signals, but also that it can be used to combine various nonmanual signals in the synthesis of complete ASL sentences. In a study with Deaf users, participants viewing synthesized animations consistently identified the intended nonmanual signals correctly.
Recommended Citation
Schnepp, Jerry C., "A Representation of Selected Nonmanual Signals in American Sign Language" (2011). College of Computing and Digital Media Dissertations. 4.
https://via.library.depaul.edu/cdm_etd/4
Included in
Graphics and Human Computer Interfaces Commons, Other Social and Behavioral Sciences Commons