There are two primary approaches to sign recognition: glove-based and vision-based. Glove-based systems require special, often expensive, sensor-equipped gloves and are typically disfavored by the Deaf community due to their bulk and inconvenience. Vision-based approaches rely on cameras and computer vision techniques to track and recognize the user's signs. These approaches often rely on standard video, since it is the most commonly available source of signed content, but accurate, real-time tracking that generalizes across different users remains an open challenge. Our approach takes advantage of innovations in commercially available depth cameras and unobtrusive wristwatch sensors to greatly reduce the difficulty of tracking the user's hands and body, making real-time operation feasible. We apply hand-tracking techniques being developed for AR and VR applications and tailor them specifically to the set of meaningful handshapes used in ASL. The cameras we use also provide a field of view wide enough to capture the facial expressions and body postures necessary for full communication in ASL, unlike approaches that focus narrowly on the hands.
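To illustrate the general idea of tailoring tracked hand data to a fixed set of ASL handshapes, the following is a minimal, hypothetical sketch: it classifies an observed hand pose (summarized here as per-finger flexion angles, as a depth-camera hand tracker might report) by nearest-neighbor matching against a few handshape templates. The template angles and the feature representation are illustrative assumptions, not measurements from our system.

```python
import math

# Hypothetical handshape templates: label -> flexion angle (degrees) for
# (thumb, index, middle, ring, pinky); 0 = fully extended, 90 = fully curled.
# These values are illustrative placeholders, not calibrated data.
TEMPLATES = {
    "B": (10, 0, 0, 0, 0),      # flat hand, fingers extended together
    "A": (20, 90, 90, 90, 90),  # fist with thumb alongside
    "5": (0, 0, 0, 0, 0),       # open hand, fingers spread
}

def classify_handshape(angles):
    """Return the template label closest (Euclidean distance) to the
    observed per-finger flexion angles."""
    def dist(template):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(angles, template)))
    return min(TEMPLATES, key=lambda label: dist(TEMPLATES[label]))

# A mostly curled hand should match the fist template "A".
print(classify_handshape((25, 85, 88, 90, 80)))  # -> A
```

In practice a tracker yields full 3D joint positions rather than summary angles, and a learned classifier would replace the nearest-neighbor lookup, but restricting the label set to linguistically meaningful ASL handshapes is what makes the problem tractable in real time.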
The technology is designed to assist Deaf individuals (and other users of American Sign Language)