Kinect Sign Language Translator expands communication and eyes-free Yoga gadgets

There's a Microsoft Kinect device that translates sign language into spoken English and spoken English back into sign language, so a hearing-impaired person can communicate quickly with people who can hear but don't know sign language. The prototype translates sign language into spoken language, and spoken language into sign language, in real time. Such a device can be programmed for almost any language, not only English. For example, it also can bridge American Sign Language and Chinese Sign Language, or potentially any sign language and any other natural language, according to the October 30, 2013 article, "Kinect Sign Language Translator expands communication possibilities."

An incorrect Warrior II yoga pose is outlined showing angles and measurements. Using geometry, the Kinect reads the angles and responds with a verbal command to raise the arms to the proper height.
Photo credit: Kyle Rector, University of Washington. Yoga accessible for the blind with new Microsoft Kinect-based program.

Dedicated researchers in China have created the Kinect Sign Language Translator, a prototype system that understands the gestures of sign language and converts them to spoken and written language—and vice versa. The translator came about as a result of collaboration, facilitated by Microsoft Research Connections, between the Chinese Academy of Sciences, Beijing Union University, and Microsoft Research Asia, each of which made crucial contributions.

Sign language is about gesture

Machine learning and pattern recognition technology can now be used to translate gestures into spoken words and speech into gestures. But will a translated phrase carry the same meaning or intent for a lifelong sign language user as it does for a lifelong speaker who relies on hearing? In addition to Microsoft's 2013 prototype, Google also has an app that translates sign language gestures into words the hearing community can listen to, according to the June 21, 2014 ScienceDaily article, "Google App Gesture Translates Sign Language." The app, called Gesture, was announced the week of June 21, 2014, and translates sign language into speech in real time.

Behind Microsoft's Kinect sign language translator, which lets a hearing person communicate without learning any sign language, is a concept that came from the world of gaming. Kinect for Xbox held the key to the translation problem. Originally developed for gaming, the Kinect's sensors read a user's body position and movements and, with the help of a computer, translate them into commands.

That is the key concept: reading a user's body movements or position so that gestures can be translated into commands. The machine has the potential to understand the complex gestures that make up sign language and to translate the signs into spoken or written words and sentences. Could it also serve travelers who are deaf or speech impaired and need to communicate with someone in another country who speaks a different language or uses a different sign language? Thank gaming for this invention.

An avatar on the screen represents the non-signer and makes the appropriate sign language gestures. Remember when avatars first came on the Internet scene in the 1990s? Read the complete case study (PDF file).

Is there an eyes-free yoga class near you yet?

In a typical yoga class, students watch an instructor to learn how to properly hold a position. But for people who are blind or can't see well, participating in these types of exercises can be frustrating. Now, a team of University of Washington computer scientists has created a software program that watches a user's movements and gives spoken feedback on what to change to accurately complete a yoga pose. In a demonstration, an incorrect Warrior II pose is outlined with angles and measurements; using geometry, the Kinect reads those angles and responds with a verbal command to raise the arms to the proper height. For people who are visually impaired, the feedback tells them, in order of priority, which parts of their alignment to correct.

"My hope for this technology is for people who are blind or low-vision to be able to try it out, and help give a basic understanding of yoga in a more comfortable setting," said project lead Kyle Rector, according to an October 17, 2013 news release, "Yoga accessible for the blind with new Microsoft Kinect-based program." Rector is a University of Washington (UW) doctoral student in computer science and engineering. You also can check out the video demonstration on yoga accessible for the blind with new Microsoft Kinect-based program.

The program, called Eyes-Free Yoga, uses Microsoft Kinect software to track body movements and offer auditory feedback in real time for six yoga poses, including Warrior I and II, Tree and Chair poses

Rector and her collaborators published their methodology in the conference proceedings of the Association for Computing Machinery's SIGACCESS International Conference on Computers and Accessibility, held in Bellevue, Washington, Oct. 21-23, 2013. Rector wrote programming code that instructs the Kinect to read a user's body angles, then gives verbal feedback on how to adjust his or her arms, legs, neck or back to complete the pose. For example, the program might say: "Rotate your shoulders left," or "Lean sideways toward your left."
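To picture how a measured body angle might become one of those spoken cues, here is a minimal Python sketch. It is only an illustration under assumed names and thresholds (the shoulder-rotation convention and the 10-degree tolerance are placeholders), not the project's actual code.

```python
def shoulder_command(rotation_deg, tolerance=10):
    """Turn a measured shoulder rotation (degrees; positive means turned
    right, a placeholder convention) into a verbal cue like those above."""
    if rotation_deg > tolerance:
        return "Rotate your shoulders left"
    if rotation_deg < -tolerance:
        return "Rotate your shoulders right"
    return None  # within tolerance: no correction needed for this rule

print(shoulder_command(18))  # prints: Rotate your shoulders left
```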

The result is an accessible yoga "exergame" – a video game used for exercise – that allows people without sight to interact verbally with a simulated yoga instructor. Rector and collaborators Julie Kientz, a UW assistant professor in Computer Science & Engineering and in Human Centered Design & Engineering, and Cynthia Bennett, a research assistant in computer science and engineering, believe this can transform a typically visual activity into something that blind people can also enjoy.

"I see this as a good way of helping people who may not know much about yoga to try something on their own and feel comfortable and confident doing it," Kientz said, according to the news release. "We hope this acts as a gateway to encouraging people with visual impairments to try exercise on a broader scale."

Each of the six poses has about 30 different commands for improvement based on a dozen rules deemed essential for each yoga position

Rector worked with a number of yoga instructors to put together the criteria for reaching the correct alignment in each pose. The Kinect first checks a person's core and suggests alignment changes, then moves to the head and neck area, and finally the arms and legs. It also gives positive feedback when a person is holding a pose correctly.
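A rule-checking loop along those lines might look roughly like the sketch below. The region names, thresholds, and rule functions are hypothetical; the sketch only illustrates the core-first, head-and-neck-second, limbs-last ordering and the positive feedback described above.

```python
# Hypothetical pose rules, in the priority order described above:
# core first, then head and neck, then arms and legs.
WARRIOR_II_RULES = [
    ("core",          lambda a: abs(a["torso_lean"]) <= 5,
     "Lean sideways toward your left."),
    ("head and neck", lambda a: abs(a["neck_turn"]) <= 10,
     "Rotate your shoulders left."),
    ("arms and legs", lambda a: a["arm_spread"] >= 160,
     "Raise your arms to the proper height."),
]

def next_instruction(angles):
    """Return the highest-priority correction, or praise if all rules pass.
    `angles` is a dict of measured body angles (placeholder names)."""
    for region, rule_ok, command in WARRIOR_II_RULES:
        if not rule_ok(angles):
            return command  # speak the first failing region's correction
    return "Great job, hold the pose."  # positive feedback when aligned

print(next_instruction({"torso_lean": 2, "neck_turn": 4, "arm_spread": 150}))
```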

Rector practiced a lot of yoga as she developed this technology. She tested and tweaked each aspect by deliberately making mistakes while performing the exercises. The result is a program that she believes is robust and useful for people who are blind. "I tested it all on myself so I felt comfortable having someone else try it," she said, according to the news release.

Rector worked with 16 blind and low-vision people around Washington to test the program and get feedback

Several of the participants had never done yoga before, while others had tried it a few times or took yoga classes regularly. Thirteen of the 16 people said they would recommend the program and nearly everyone would use it again.

The technology uses simple geometry and the law of cosines to calculate angles created during yoga. For example, in some poses a bent leg must be at a 90-degree angle, while the arm spread must form a 160-degree angle. The Kinect reads the angle of the pose using cameras and skeletal-tracking technology, then tells the user how to move to reach the desired angle.
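That angle calculation can be sketched in a few lines of Python. This is a minimal illustration, not the project's code; the joint coordinates and the 90-degree target are placeholder values standing in for what the Kinect's skeletal tracking would report.

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b, in degrees, given three 3-D joint positions a, b, c
    as (x, y, z) tuples, using the law of cosines:
    cos(B) = (ab^2 + bc^2 - ac^2) / (2 * ab * bc)."""
    ab, bc, ac = math.dist(a, b), math.dist(b, c), math.dist(a, c)
    cos_b = (ab**2 + bc**2 - ac**2) / (2 * ab * bc)
    cos_b = max(-1.0, min(1.0, cos_b))  # guard against rounding error
    return math.degrees(math.acos(cos_b))

# Placeholder skeleton coordinates for a bent front leg: hip, knee, ankle.
hip, knee, ankle = (0.0, 1.0, 0.0), (0.5, 1.0, 0.0), (0.5, 0.0, 0.0)
angle = joint_angle(hip, knee, ankle)
target = 90.0  # a bent front leg should be near 90 degrees
if angle > target + 10:
    print("Bend your front knee more.")            # illustrative cue
elif angle < target - 10:
    print("Straighten your front knee a little.")  # illustrative cue
else:
    print(f"Knee angle is about {angle:.0f} degrees. Hold it there.")
```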

Rector opted to use Kinect software because it's open source and easily accessible on the market, but she said it does have some limitations in the level of detail with which it tracks movement. Rector and collaborators plan to make this technology available online so users could download the program, plug in their Kinect and start doing yoga. The team also is pursuing other projects that help with fitness.

Funders of the research are the National Science Foundation, a Kynamatrix Innovation through Collaboration grant, and the Achievement Rewards for College Scientists Foundation. There also are various news reports of other Kinect devices, such as "Kinect teleport for remote medicine" and "Kinect-based virtual reality training promotes brain reorganization after stroke." Also noteworthy is the article, "Computer Helps Deaf Children To Learn Sign Language." The computer is making itself more useful to those who need various devices to communicate among different communities.