labyrinthian
New Member
- Joined
- Jan 29, 2016
- Messages
- 3
- Reaction score
- 1
Sorry if I'm wrong, but it seems to me that a lot of people in the Deaf community have negative or (perhaps realistically) cynical views on new assistive technologies like sign-interpreting gloves or video remote interpreting.
I do understand the reasons. As many people have already pointed out, just using "gloves" doesn't make sense because ASL involves so much more than hands. Unless they develop ways to capture facial expressions and body movements, it won't work. (An alternative is to invent a new "dialect" of ASL that "translates" everything into hand movements, but I don't think most Deaf people would be willing to do that, so that's out of the question, too.)
As for VRI, I can think of at least three problems. First, it will be quite labor-intensive and probably expensive as a result (I mean, every time you use it, you're hiring someone--not just anyone, but a trained interpreter). Second, the lack of privacy will be a problem--the interpreter will be there, listening to the whole thing. And third, I imagine there will be misunderstandings caused by the impersonal setting, since you'll probably get a different interpreter each time.
(By the way, I got a lot of these ideas by reading comments in this forum.)
My question is: do you think a more developed technology will work?
For example, if someone somehow develops technology that "reads" the entire ASL "speech" and mechanically translates it with an acceptable level of accuracy (losing nuances but keeping basic meanings), while keeping the whole process encrypted (thereby preserving a certain level of privacy), will it work?
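To make the idea concrete, here's a toy sketch of the pipeline I'm imagining. Everything here is hypothetical--the class and function names are invented for illustration, no real sign-recognition library is assumed, and the "recognizer" and "translator" are stand-ins for models that don't exist yet. The point is just that each captured frame would need hands *plus* non-manual channels, since grammar (like question marking) lives in the face and body:

```python
from dataclasses import dataclass

@dataclass
class SignFrame:
    """One captured moment of signing -- all three channels, not just hands."""
    hand_shape: str          # e.g. a recognized handshape/gloss label
    facial_expression: str   # non-manual marker (raised brows, etc.)
    body_posture: str        # shifts, tilts, role-shift cues

def recognize(frame: SignFrame) -> str:
    """Toy recognizer: maps a full frame to an English-ish gloss.

    A glove-only system would see only hand_shape, losing the grammar
    carried by the other two channels. (Oversimplified: in real ASL a
    question marker spans a whole clause, not a single sign.)
    """
    gloss = frame.hand_shape
    if frame.facial_expression == "raised-brows":
        gloss += "?"  # yes/no questions in ASL are marked non-manually
    return gloss

def translate(glosses: list[str]) -> str:
    """Placeholder for machine translation: just joins glosses."""
    return " ".join(glosses)

# Example: "YOU GO?" signed with raised brows (a yes/no question).
frames = [
    SignFrame("YOU", "raised-brows", "forward-tilt"),
    SignFrame("GO", "raised-brows", "forward-tilt"),
]
message = translate([recognize(f) for f in frames])
print(message)
```

Encryption of the translated text in transit is left out here; that part is ordinary secure messaging and isn't the hard problem--the capture and translation steps are.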
Of course, the whole thing may be unnecessary, since deaf and HH people can just write or type.
P.S. I am neither deaf/hard-of-hearing nor a technology expert--just inexplicably interested in the topic. Correct me if I'm too clueless or disrespectful.