Friday, November 01, 2013

Microsoft Research uses Kinect to translate between spoken and sign languages in real time




As most of you know, Kinect is a motion-sensing device developed by Microsoft for the Xbox 360. Microsoft Research is now using this piece of technology to bridge the gap between people who don’t speak the same language, whether they can hear or not.

As you can see in the video below, the Kinect Sign Language Translator is a research prototype that can translate sign language into spoken language and vice versa, and it does it all in real time.



In brief, Kinect captures the gestures, while machine learning and pattern recognition software interprets their meaning. The system can capture a conversation from both sides: a deaf person who is signing and a hearing person who is speaking. Visual signs are converted into written and spoken translations rendered in real time, while spoken words are turned into accurate visual signs.
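The write-up doesn't go into implementation detail, but the core idea of matching a captured gesture against a vocabulary of sign templates can be sketched in a few lines. The Python below is a hypothetical illustration, not the team's actual system: the sign names, joint count, and random "recordings" are made-up assumptions, and a simple dynamic-time-warping nearest-template matcher stands in for the project's machine-learning models.

```python
# Hypothetical sketch: recognizing a sign from skeleton data.
# Each gesture is a sequence of per-frame joint coordinates; an unknown
# gesture is compared to stored templates with dynamic time warping (DTW)
# and labeled with the closest match. Not Microsoft's implementation.
import numpy as np

def dtw_distance(a, b):
    """DTW cost between two gesture sequences of shape (frames, features)."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])   # frame-to-frame distance
            cost[i, j] = d + min(cost[i - 1, j],      # skip a frame in `a`
                                 cost[i, j - 1],      # skip a frame in `b`
                                 cost[i - 1, j - 1])  # match the two frames
    return cost[n, m]

def classify(gesture, templates):
    """Return the sign label whose stored template is closest under DTW."""
    return min(templates, key=lambda label: dtw_distance(gesture, templates[label]))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy gallery: one template per sign, each a sequence of frames with
    # 20 skeleton joints x 3 coordinates = 60 features per frame (assumed).
    templates = {
        "hello":     rng.normal(size=(30, 60)),
        "thank_you": rng.normal(size=(25, 60)),
    }
    # A new performance of "hello": the template plus a little sensor noise.
    observed = templates["hello"] + rng.normal(scale=0.05, size=(30, 60))
    print(classify(observed, templates))  # expected: hello
```

In the real project this matching step would be replaced by trained recognition models over a 4,000-word vocabulary, but the pipeline shape is the same: capture joint trajectories, compare them to known signs, and emit the best-matching word for translation.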

Clearly this is a big achievement, but a huge amount of work and development still lies ahead. Right now, only 300 Chinese sign language words have been added to the database out of a total of 4,000.

Guobin Wu, the program manager of the Kinect Sign Language Translator project, explains: "There are more than 20 million people in China who are hard of hearing, and an estimated 360 million such people around the world, so this project has immense potential to generate positive social impact worldwide." He adds that recognition is by far the most challenging part of the project. After trying data gloves and webcams, the team picked Kinect as the clear winner.

If it ever makes it out of the lab, this project could be a huge boon for millions around the globe.

