I am helping a team develop a new Augmented Reality visual aid to assist with lip reading.
Please could this community help me out with a few questions I have. Anyone can answer; any input would be greatly appreciated.
1.) When communicating with someone, how much information do you get from just the mouth area? Do you focus on other areas as well? Which areas would you say contribute most to understanding speech?
2.) When talking to someone, what would you say is the biggest obstacle to understanding their speech? For example, the person mumbling?
3.) Do deaf people use different techniques to read lips?
4.) Would a device that reads lips for you and converts the information into text sound appealing or useful? The device would be built into sunglasses or contact lenses.
5.) If you could imagine a technology like this, what would your one suggestion be as a deaf user?