Project Date: May 2014

Millions of individuals with language and speech challenges require additional support for understanding and learning language. Lip-reading, for example, allows deaf and hard-of-hearing individuals to perceive and understand oral language, and even to speak. However, lip-reading alone does not convey all of the spoken input, and other techniques are needed to provide richer input.

The proposed activity will develop a real-time system that automatically detects robust characteristics of auditory speech and transforms these continuous acoustic features into continuous supplementary visible features. These visual cues, combined with watching the speaker's face, provide enough information for a person with limited hearing to perceive and understand what is being said.
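To make the pipeline concrete, the sketch below illustrates the general idea under stated assumptions; it is not the project's actual implementation. It extracts two illustrative acoustic features per audio frame, RMS energy and zero-crossing rate (the project's "robust characteristics" are not specified here), and maps them to continuous parameters of a hypothetical visual indicator. The function names and the feature-to-cue mapping are inventions for illustration only.

```python
import numpy as np

SAMPLE_RATE = 16000    # assumed microphone sample rate (Hz)
FRAME_MS = 20          # analysis window length (ms)
FRAME_LEN = SAMPLE_RATE * FRAME_MS // 1000


def acoustic_features(frame: np.ndarray) -> tuple[float, float]:
    """Two illustrative acoustic features of one audio frame:
    RMS energy (loudness) and zero-crossing rate (a rough
    voiced/unvoiced cue)."""
    rms = float(np.sqrt(np.mean(frame ** 2)))
    zcr = float(np.mean(np.abs(np.diff(np.sign(frame)))) / 2)
    return rms, zcr


def to_visual_cue(rms: float, zcr: float) -> dict:
    """Map continuous acoustic features to continuous visual
    parameters for a hypothetical eyeglass display: indicator
    brightness tracks loudness, vertical position tracks the
    voiced/unvoiced balance."""
    brightness = min(1.0, rms * 10.0)   # crude normalization, assumed scale
    height = zcr                        # near 0 = voiced-like, near 1 = fricative-like
    return {"brightness": brightness, "height": height}


if __name__ == "__main__":
    # Simulated microphone stream: a 440 Hz tone followed by noise.
    t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
    audio = np.concatenate([0.3 * np.sin(2 * np.pi * 440 * t),
                            0.1 * np.random.randn(SAMPLE_RATE)])
    for start in range(0, len(audio) - FRAME_LEN, FRAME_LEN):
        frame = audio[start:start + FRAME_LEN]
        cue = to_visual_cue(*acoustic_features(frame))
        # In a real system, this cue would drive the wearable display.
        print(f"{start / SAMPLE_RATE:5.2f}s  {cue}")
```

Running the sketch prints a continuous stream of visual-cue parameters, with the tonal segment producing bright, low-positioned cues and the noise segment producing dimmer, high-positioned ones.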

This technology will enable the design of a wearable computing device that transforms these continuous acoustic features into supplementary visible features and displays them on a pair of eyeglasses.
