| Description |
| --- |
| American Sign Language (ASL) is the predominant form of communication used by deaf communities throughout the United States. Despite the prevalence of its use, accessing means of learning ASL can be difficult, inconvenient, and costly. People who wish to communicate via ASL, such as those with deaf family members, coworkers, or friends, must often hire a professional interpreter or enroll in formal lessons to gain even a basic understanding of the language. The web application Signable addresses this problem by providing intuitive online lessons designed to teach users the fundamentals of ASL, using real-time feedback from computer vision (CV) models trained to detect gestures from webcam input. Several CV models designed to detect static hand gestures already exist and are readily available; however, ASL often requires complex hand motions to convey meaning properly. This thesis explores methods for training more robust CV models capable of classifying complex, movement-based hand gestures with low enough inference latency to provide real-time user feedback. |
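
For illustration only, the sketch below shows one common shape such a pipeline can take: per-frame hand landmarks are extracted from the webcam feed, buffered into a sliding window, and classified by a small temporal model. The use of MediaPipe Hands, the LSTM architecture, the 30-frame window, and the gesture labels are all assumptions made for this example; they are not the thesis's actual implementation.

```python
"""Illustrative sketch (not the thesis's method): real-time classification of
movement-based hand gestures from webcam landmark sequences."""
from collections import deque

import cv2
import mediapipe as mp
import torch
import torch.nn as nn

WINDOW = 30          # assumed: ~1 second of frames at 30 fps
N_LANDMARKS = 21     # MediaPipe Hands returns 21 landmarks per hand
LABELS = ["hello", "thank_you", "please"]  # hypothetical gesture classes


class GestureLSTM(nn.Module):
    """Tiny temporal classifier over landmark sequences (untrained placeholder)."""

    def __init__(self, n_features=N_LANDMARKS * 3, hidden=64, n_classes=len(LABELS)):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):             # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])  # classify from the final time step


model = GestureLSTM().eval()          # in practice, load trained weights here
buffer = deque(maxlen=WINDOW)         # sliding window of landmark frames
hands = mp.solutions.hands.Hands(max_num_hands=1, min_detection_confidence=0.5)

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        lm = results.multi_hand_landmarks[0]
        # Flatten the 21 (x, y, z) landmarks into a 63-dim feature vector.
        buffer.append([c for p in lm.landmark for c in (p.x, p.y, p.z)])
    if len(buffer) == WINDOW:
        with torch.no_grad():
            logits = model(torch.tensor([list(buffer)], dtype=torch.float32))
        print("predicted gesture:", LABELS[logits.argmax().item()])
    cv2.imshow("webcam", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```

One reason pipelines like this can meet a real-time budget is that they operate on compact landmark coordinates rather than raw pixels, so the temporal model only has to process a 63-dimensional vector per frame.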