Translating ASL gestures to text

Waitbird · Member · Joined Nov 9, 2017 · Messages: 41 · Reaction score: 22
I'm in awe of how complex and unique a language ASL is, primarily because it is a signed, not spoken, language. So, let's say I see someone sign a particular sign and want to know what it means. Uhhh...big problem! Unlike with written languages, I can't simply "type" the gesture into a translator. Even if I "describe" the gesture, I'm still apt to get it wrong. Without a signer to interpret for me, I'm pretty much out of luck. But wait! A tech company is developing an ASL gesture-recognition translator. From the article:

KinTrans’ tech relies on a 3D camera which tracks the movement of a signer’s hands and body when they sign out words. When requested, it can then translate the signed words into written English (or, currently, Arabic, although additional languages will follow in the future). Alternatively, voice can be translated into signed words communicated by an animated avatar on the screen. According to its creators, the system is already able to recognize thousands of signed words with an accuracy of around 98 percent.

What do you think of this technology? Would you use it for communication? Or is it likely to be riddled with errors? They've also developed an ASL "glove" that supposedly can translate the gestures of the person wearing it. Pretty cool, if it works!
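For anyone curious what "a 3D camera tracks the hands and translates the signs" might mean in practice: here's a toy sketch of that kind of pipeline. To be clear, KinTrans hasn't published its actual method, so everything below is invented for illustration; it just matches a sequence of 3D hand positions against stored sign templates by nearest distance.

```python
# Toy sign recognizer: normalize a track of 3D hand positions and match it
# against stored templates. Purely illustrative -- the "signs" and the
# distance metric here are made up, not how any real product works.

def normalize(track):
    """Translate a sequence of (x, y, z) points so it starts at the origin."""
    x0, y0, z0 = track[0]
    return [(x - x0, y - y0, z - z0) for x, y, z in track]

def distance(a, b):
    """Mean Euclidean distance between two equal-length tracks."""
    return sum(
        ((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2) ** 0.5
        for (ax, ay, az), (bx, by, bz) in zip(a, b)
    ) / len(a)

def recognize(track, templates):
    """Return the label of the stored template nearest to the observed track."""
    track = normalize(track)
    return min(templates, key=lambda label: distance(track, templates[label]))

# Two made-up "signs": a horizontal sweep and a vertical lift.
TEMPLATES = {
    "HELLO": normalize([(0, 0, 0), (1, 0, 0), (2, 0, 0)]),
    "UP":    normalize([(0, 0, 0), (0, 1, 0), (0, 2, 0)]),
}

print(recognize([(5, 5, 5), (6, 5, 5), (7, 5, 5)], TEMPLATES))  # → HELLO
```

A real system would obviously need to handle facial expression, body posture, timing, and thousands of vocabulary items, which is where the skepticism in the replies below comes in.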
 
Not all ASL is the same. For example, I first learned ASL in Texas. After moving from Texas to Florida to Colorado to Washington state, I noticed some of the signs were different in each state, and I had to learn ASL again. There is also something called "home signs" that many deaf/HoH people incorporate into their signed conversations. I do hope that company is taking this into consideration.
 
I think a literal, direct translation of ASL into English would never work. As an example, think of the number of classifiers that fluent ASL signers use. How is a computer going to turn the very visual and conceptual nature of ASL storytelling into a translation? It would never work. The computer might be able to capture the hand movements, orientation, and motion, but it's never going to be able to use context and reference to accurately convey the idea.

I could see this technology being usable to get an ASL gloss of something, but I think we're far from ever having technology that can interpret intent.
 
Agreed. Written English is a series of words that doesn't fully convey emotion. ASL is more an expression of ideas, or storytelling, and conveys emotion in the form of body language.
 
Sometimes I use this site:

https://www.handspeak.com/word/asl-eng/


It's not perfect and sometimes you get no results, lol! But there was a sign I'd seen so much and wondered about, and by playing around with this reverse dictionary I found out it meant "discuss".
 
There is another company, started by three NTID grads, that created a similar app. Their app is being tested at the Rochester airport.

http://www.motionsavvy.com/

They originally planned to market it to the public, but the cost of development was too high for consumers to purchase it. They have since changed their business model to market to companies that want to communicate with customers. The videos look promising, but every signer is unique and it may not be completely accurate.

As far as understanding someone in person, are you not able to ask the signer what the sign meant? If not, or if the signer isn't patient, try to use the context of the signs you did understand to infer the meaning.
 