For years, engineers have tried to find ways to translate sign language into spoken words, and vice versa. Now, a new project may have cracked the code with MyVoice, a prototype device that reads sign language and translates its motions into audible words.
There are more than 1 million people in the U.S. who are functionally deaf, and another 10 million who are hard of hearing, according to a 2005 study. With such an enormous population comes a need for better methods of translating sign language to spoken language.
The MyVoice prototype is built around a handheld device with a built-in microphone, speaker, soundboard, video camera, and screen. Placed on a hard surface, the device watches a user's sign language movements, processes the motions with recognition algorithms, and translates them into an electronic voice. It can also monitor a person's voice and translate the words into sign language, which is shown on its LCD screen.
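The developers have not published MyVoice's algorithms, but the sign-to-speech direction described above can be sketched in broad strokes: extract features from camera frames, match them against stored samples for each sign, and hand the matched word to a speech synthesizer. The database contents, feature vectors, and function names below are illustrative assumptions, not MyVoice's actual code.

```python
# Hypothetical sketch of a sign-to-speech matching step.
# Each sign is represented by feature vectors extracted from training images;
# an incoming gesture is matched to the nearest stored sample, and the matched
# word would then be passed to a text-to-speech engine.
import math

# Toy database: sign word -> list of feature vectors.
# (A real system would hold hundreds of samples per sign.)
SIGN_DATABASE = {
    "hello": [(0.9, 0.1, 0.2), (0.85, 0.15, 0.25)],
    "thanks": [(0.1, 0.9, 0.3), (0.15, 0.95, 0.35)],
}

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def recognize_sign(features):
    """Return the database word whose stored samples best match the input."""
    best_word, best_dist = None, float("inf")
    for word, samples in SIGN_DATABASE.items():
        for sample in samples:
            d = distance(features, sample)
            if d < best_dist:
                best_word, best_dist = word, d
    return best_word

# Features extracted from a frame that resembles the "hello" samples:
print(recognize_sign((0.88, 0.12, 0.22)))  # prints "hello"
```

This nearest-neighbor matching is only one of many possible approaches; the point is that recognition reduces to comparing live camera features against a pre-built library of samples per sign.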
According to the developers, the most difficult part of the project was building a database of sign language images, which required about 200 to 300 images per sign. The team hopes to continue the project and is looking for partners to turn the prototype into a fully functioning product. The prototype earned first place among student projects at the American Society for Engineering Education annual conference.
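To see why collecting 200 to 300 images per sign is the hard part, it helps to think of the database as needing a quota check before any sign is usable for recognition. The snippet below is a hypothetical bookkeeping helper, not part of the MyVoice project; the thresholds come from the figures the team cites.

```python
# Hypothetical helper for auditing a sign-image database: each sign needs
# roughly 200-300 sample images (per the developers) before it is usable.
MIN_SAMPLES, MAX_SAMPLES = 200, 300

def database_report(database):
    """Flag signs whose sample counts fall outside the target range."""
    report = {}
    for sign, images in database.items():
        n = len(images)
        if n < MIN_SAMPLES:
            report[sign] = f"needs {MIN_SAMPLES - n} more samples"
        elif n > MAX_SAMPLES:
            report[sign] = f"{n - MAX_SAMPLES} over target"
        else:
            report[sign] = "ok"
    return report

db = {"hello": ["img"] * 250, "thanks": ["img"] * 120}
print(database_report(db))  # {'hello': 'ok', 'thanks': 'needs 80 more samples'}
```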
Other projects have tried to translate sign language. Some simply convert speech to text for hearing-impaired students in a lab or classroom. Other apps use video calls to bring in a third-party interpreter who mediates between the deaf caller and a hearing callee.
One project from a spinoff company at the University of Aberdeen uses a smartphone camera or a laptop's front-facing camera to translate signs into text. The company expects to have an app available by the end of 2013, and it says the technology could serve other purposes, such as letting a person with limited mobility control home appliances, for example opening curtains or changing the television channel.
Instead of creating a separate device, it might be easier to harness the processing power and constantly improving cameras built into smartphones to improve deaf communication. The tools to move from signs to text to an electronic voice are closer at hand in phones than in other types of technology, but translating the opposite way, from voice to signs, may be harder on a smartphone than on a dedicated system like MyVoice. In another translation project, Microsoft is working on a universal translator that turns spoken words into other languages. Yes, Star Trek keeps coming true.