Hey everyone! I want to work closely with you on a project.


New Member
Some time ago I stumbled across a video showing a project in which an avatar on your smartphone translates spoken words into sign language.


I thought this was amazing and decided to work on something similar. After some time, I had a prototype that converts mouth movements into words, which can then be played back through a speaker. I was totally happy and excited at the time because I thought I was inventing something meaningful that could help lots of people.

However, one day I read this article: https://www.theatlantic.com/technol...language-gloves-dont-help-deaf-people/545441/
It's about another technology with a similar approach to mine, the sign language glove, and how it's actually an insult to the very group it claims to help. I was shocked and sad because my project didn't seem so cool anymore.

I stopped working on the prototype and decided it would be best to get in touch with the Deaf community before continuing. So yeah, that's why I'm here.


New Member
I developed an app to translate just fingerspelling using the RealSense RGBD camera. It worked reasonably well for a C++ project I threw together in 45 days, but it was not as accurate as would be needed for satisfactory fingerspelling translation in a real-world situation. That's typically what I've found with most of those glove translators I've seen.
The key word in the article that turns Deaf people off is "Help". It's a paternalistic view, seeing the Deaf as a group needing hearing people to take care of them. Hearing people may not think of that, but it's frustrating for us to be treated as helpless.
My goal is to first create accurate fingerspelling recognition using an app and a single camera. It's for ASL instruction - intended to serve as an aid to ASL teachers.
After that, I would love to use something like Wrnch.ai or OpenPose to recognize isolated vocabulary to be sure that ASL students are forming the words right. ASL teachers can't check every student on every word covered in a unit. We would die from exhaustion.
Your experience was not wasted. First, you learned something about Deaf Culture. Second, you learned something about the limitations of technology for translating from an auditory-verbal language to the visual-spatial language of ASL.
KUDOS for pausing and then taking steps to meet the Deaf Community.
What programming languages do you know?


New Member
Hey whitneyscottasl :). Sounds like you have an amazing project in the making. Do you use machine learning techniques in your C++ project? You would need a lot of video footage of different people signing words, and from different angles as well, but in the end it would help you enormously.
I have used OpenPose in the past, and I've also done some projects involving action recognition in videos. You can always message me for machine learning code. Since you asked, the languages I use the most are Python, C/C++, C#, and PHP.
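To give you an idea of what I mean, here is a minimal sketch (my own toy example, not code from the OpenPose project) of how pose-keypoint output could feed a classifier. OpenPose's hand detector emits 2D keypoints per frame; the sketch below normalizes a flattened (x, y) keypoint list for position and hand size, then matches it against labeled templates with nearest-neighbor distance. The keypoint values and letter labels here are made up for illustration; real templates would come from recorded signing footage.

```python
import math

def normalize(points):
    """Translate so the first keypoint (e.g. the wrist) sits at the
    origin and scale by the farthest keypoint distance, so hand
    position and size don't affect matching."""
    xs, ys = points[0::2], points[1::2]
    ox, oy = xs[0], ys[0]
    shifted = [(x - ox, y - oy) for x, y in zip(xs, ys)]
    scale = max(math.hypot(x, y) for x, y in shifted) or 1.0
    return [c / scale for p in shifted for c in p]

def classify(sample, templates):
    """Return the label of the template nearest to the sample
    (squared Euclidean distance over normalized keypoints)."""
    s = normalize(sample)

    def dist(label):
        t = normalize(templates[label])
        return sum((a - b) ** 2 for a, b in zip(s, t))

    return min(templates, key=dist)

# Toy "letters" with only 3 keypoints each; a real hand model
# (e.g. OpenPose's) would give 21 keypoints, i.e. 42 numbers.
templates = {
    "A": [0.0, 0.0, 1.0, 0.0, 1.0, 1.0],
    "B": [0.0, 0.0, 0.0, 1.0, -1.0, 1.0],
}

# A scaled-up version of "A" still matches "A" after normalization.
print(classify([0.0, 0.0, 2.0, 0.0, 2.0, 2.0], templates))  # → A
```

A nearest-neighbor matcher like this is obviously far too crude for real fingerspelling, but it shows the pipeline shape: detector keypoints in, normalization, then any classifier you like on top.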

The additional problem with my prototype that I haven't even mentioned yet is that it requires sensors attached to your cheeks. The person would have to constantly wear some sort of headset in order to speak, and I can imagine most people wouldn't want to use it then.

Are you an ASL teacher? I live in the middle of nowhere and the nearest sign language class is just too far away, so it would be great to have a contact person for technology-related questions.

Old Analog

Active Member
You want to do something helpful? When I was young, cartoons that had a song would scroll the words and say "follow the bouncing ball." Now in church they put the words up on the wall without even breaking the lines into singable phrases. I talked to the operator; that's just how the program works :mad: Give us a program that, if it doesn't scroll, at least has a bouncing ball to follow. You may have to talk to some old-timers to understand; it may be an American thing :hmm: