CMU (Carnegie Mellon University) computer science graduate student Stan Jou, 34, of Shadyside, stood before the audience yesterday morning with 11 tiny electrodes affixed to the muscles of his cheeks, neck and throat.
The Taiwan native then mouthed -- without speaking aloud -- the following phrase in Mandarin Chinese: "Let me introduce our new prototype."
The sensors captured electrical signals from Jou's facial muscles when they moved to form the silent Chinese words. In a matter of seconds, this information traveled to a computer that recognized the words and translated them into English and Spanish. The phrase was then displayed on a screen and spoken by the computer in both languages.
This is InterAct, a translation system right out of The Hitchhiker's Guide to the Galaxy, which, in a real-world application, could display translations on the inside of a set of "translation goggles" that, according to Engadget, would:
lipread other languages and subtitle your field of vision with translated text, or focused-sound translation “beams” that can make a room of internationals like a wireless, computerized session of the UN.
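For the technically curious, the demo breaks into four stages: windowing the raw electrode (EMG) signal into features, recognizing the silently mouthed words, translating them, and then displaying or speaking the result. Below is a minimal Python sketch of that flow. Every function name and the toy lookup tables are my own illustrative assumptions, not InterAct's actual code; the real recognizer and translator are far more sophisticated.

from typing import List

def extract_features(emg_samples: List[float], window: int = 32) -> List[List[float]]:
    """Chop the raw electrode signal into fixed-size windows -- a toy stand-in
    for the spectral/temporal features a real EMG recognizer would compute."""
    return [emg_samples[i:i + window]
            for i in range(0, len(emg_samples) - window + 1, window)]

def recognize_silent_speech(features: List[List[float]]) -> str:
    """Placeholder recognizer: a real system maps muscle-activity features to
    phonemes and words; here we just return the demo phrase in Mandarin."""
    return "让我介绍我们的新原型"  # "Let me introduce our new prototype"

def translate(text: str, target_lang: str) -> str:
    """Toy lookup standing in for a real machine translation model."""
    table = {
        ("让我介绍我们的新原型", "en"): "Let me introduce our new prototype.",
        ("让我介绍我们的新原型", "es"): "Permítanme presentar nuestro nuevo prototipo.",
    }
    return table.get((text, target_lang), "<unknown>")

def speak(text: str, lang: str) -> None:
    """Stand-in for the display/text-to-speech stage: just print the result."""
    print(f"[{lang}] {text}")

if __name__ == "__main__":
    emg = [0.0] * 256                          # raw signal from the 11 electrodes
    feats = extract_features(emg)              # windowed features
    mandarin = recognize_silent_speech(feats)  # silent-speech recognition
    for lang in ("en", "es"):
        speak(translate(mandarin, lang), lang) # translate, then display/speak

The hard engineering, of course, is hidden in the first two stages: turning eleven channels of facial-muscle activity into reliable word hypotheses.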
Do they have a male <---> female algorithm in the works? The world could really use one of those...
Posted by: Scott P | October 31, 2005 at 02:59 PM
It's neat that it can understand tonal languages. Wow.
Posted by: SeanH | October 31, 2005 at 04:54 PM