New MobileASL

Phillips

MobileASL is a video compression project at the University of Washington with the goal of making wireless cell phone communication through sign language a reality in the U.S. They have a working prototype, and are probably a few months away from a real field study. They still need to increase the frame rate and make the software more robust.





MobileASL | DeafNation
 
Look at the video on deafnation.com; go to 1:02 in that video and it says "Actors"... that makes me :rofl:
 
Awesome! I recognize that phone; it's the HTC TyTN II. It looks like my AT&T Tilt, but the Tilt doesn't have a front cam.
 
Wow! That's really neat! I have the same phone as the one in that video... I mean, the AT&T Tilt. Lol. It has a cam, but it's on the rear. I am looking forward to seeing this come on the market! :D
 
Someone in Australia has his own Nokia N95 with video calls. Maybe he can show us an image and/or vlog of it. He said the video calling charges were too costly.
 
Hey, that's cool! Wish my AT&T Tilt had the camera on the front instead of the rear... I wonder how much the service fee would be?
 
Yeah, that is what I have been talking about for some time: two-way video calling. It is cool to see deaf-on-mobile to deaf-on-mobile. I hope that program runs through the internet, because if it is a voice/video feature then it would be 25 min for $4.99 or 60 min for $9.99 (one-way video calling). I prefer a program running through the internet, like JiveTalk, because it would cost nothing extra beyond the data plan we already have on our mobiles.

AT&T hasn't released a two-way video calling feature yet. I have a feeling it will release late this summer or so. AT&T Mobility has provided one-way video calling for a year.
 


A rumor is going around that AT&T will release two-way video calling in Q4 of 2008. I hope it is true.

Too bad I hear nothing about it inside my company. :Oops:
 
Some cellphones do have 3G, which allows for live two-way video conferencing. But it's slow... you know... I saw that in action in Norway.

Yeah, the Tilt is nice...
 
I am fascinated by this technology. Unbelievable features. That is what I am looking for. I am riding out the technology fads until a videophone feature is built into a cell phone.

The day it comes out is the day I will get it. No doubt about it. That is, if only I knew when it is coming out to the public.
 
It's old news; I have been aware of it since 2002 and have researched their development. It is very interesting how the device's grid (chart) is able to focus on which parts of the moving image are important and which are not. One drawback: they have struggled to make compression as good as Sorenson's software. I wouldn't be surprised if they are using Sorenson's software for video compression.

Again, they did a good job on their theory, but my question is what the machine needs in order to read our hands. They test motion from the fingernails up to the shoulders. It is interesting: for example, a green box marks where motion is being read, while a red box marks areas without much motion. That is what they are researching as they write the compression program. Anything impossible becomes possible.
 
'Can You See Me Now?' Sign Language Over Cell Phones Comes To United States


A group at the University of Washington has developed software that for the first time enables deaf and hard-of-hearing Americans to use sign language over a mobile phone. UW engineers got the phones working together this spring, and recently received a National Science Foundation grant for a 20-person field project that will begin next year in Seattle.

This is the first time two-way real-time video communication has been demonstrated over cell phones in the United States. Since posting a video of the working prototype on YouTube, deaf people around the country have been writing on a daily basis.

"A lot of people are excited about this," said principal investigator Eve Riskin, a UW professor of electrical engineering.

For mobile communication, deaf people now communicate by cell phone using text messages. "But the point is you want to be able to communicate in your native language," Riskin said. "For deaf people that's American Sign Language."

Video is much better than text-messaging because it's faster and it's better at conveying emotion, said Jessica DeWitt, a UW undergraduate in psychology who is deaf and is a collaborator on the MobileASL project. She says a large part of her communication is with facial expressions, which are transmitted over the video phones.

Low data transmission rates on U.S. cellular networks, combined with limited processing power on mobile devices, have so far prevented real-time video transmission with enough frames per second that it could be used to transmit sign language. Communication rates on United States cellular networks allow about one tenth of the data rates common in places such as Europe and Asia (sign language over cell phones is already possible in Sweden and Japan).
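As a back-of-envelope illustration of why channel rate caps frame rate, here is a small calculation. All numbers are assumptions invented for this sketch, not figures from the MobileASL project:

```python
# Illustrative arithmetic (assumed numbers, not project data): how the
# channel's data rate limits the frame rate of compressed video.

def max_frame_rate(channel_kbps, kbits_per_frame):
    """Frames per second the channel can sustain if each compressed
    frame costs roughly `kbits_per_frame` kilobits on average."""
    return channel_kbps / kbits_per_frame

# Suppose a compressed low-resolution frame costs ~6 kbits (assumption).
kbits_per_frame = 6.0

us_slow_kbps = 30.0    # assumed rate on a slower U.S. network
abroad_kbps = 300.0    # assumed ~10x faster network abroad

print(max_frame_rate(us_slow_kbps, kbits_per_frame))   # ~5 fps
print(max_frame_rate(abroad_kbps, kbits_per_frame))    # ~50 fps
```

Under these assumed numbers, the tenfold rate gap is exactly the difference between choppy video and a frame rate smooth enough to follow signing.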

Even as faster networks are becoming more common in the United States, there is still a need for phones that would operate on the slower systems.

"The faster networks are not available everywhere," said doctoral student Anna Cavender. "They also cost more. We don't think it's fair for someone who's deaf to have to pay more for his or her cell phone than someone who's hearing."

The team tried different ways to get comprehensible sign language on low-resolution video. They discovered that the most important part of the image to transmit in high resolution is around the face. This is not surprising, since eye-tracking studies have already shown that people spend the most time looking at a person's face while they are signing.

The current version of MobileASL uses a standard video compression tool to stay within the data transmission limit. Future versions will incorporate custom tools to get better quality. The team developed a scheme to transmit the person's face and hands in high resolution, and the background in lower resolution. Now they are working on another feature that identifies when people are moving their hands, to reduce battery consumption and processing power when the person is not signing.
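One simple way such an activity detector could work is frame differencing: compare successive frames and treat the signer as idle when little changes. This is a hypothetical sketch, not the project's algorithm; the threshold and frame data are invented:

```python
# Sketch of signing-activity detection by frame differencing (assumed
# approach): when the mean absolute difference between consecutive
# frames is small, the signer is likely idle and the encoder can drop
# to a lower frame rate to save battery and processing power.

def is_signing(prev_frame, curr_frame, threshold=8.0):
    """Return True when frames differ enough on average. Frames are
    equal-length sequences of 0-255 luma samples."""
    diff = sum(abs(a - b) for a, b in zip(prev_frame, curr_frame))
    return diff / len(curr_frame) > threshold

still = [100] * 64
moved = [100] * 32 + [160] * 32   # half the samples changed a lot

print(is_signing(still, still))   # False: idle, reduce frame rate
print(is_signing(still, moved))   # True: hands moving, keep full rate
```

A production detector would work on camera frames and smooth the decision over several frames to avoid flickering between rates; the principle is the same.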

The team is currently using phones imported from Europe, which are the only ones they could find that would be compatible with the software and have a camera and video screen located on the same side of the phone so that people can film themselves while watching the screen.

Mobile video sign language won't be widely available until the service is provided through a commercial cell-phone manufacturer, Riskin said. The team has already been in discussion with a major cellular network provider that has expressed interest in the project.

The MobileASL team includes Richard Ladner, a UW professor of computer science and engineering; Sheila Hemami, a professor of electrical engineering at Cornell University; Jacob Wobbrock, an assistant professor in the UW's Information School; UW graduate students Neva Cherniavsky, Jaehong Chon and Rahul Vanam; and Cornell graduate student Frank Ciaramello.

More details on the MobileASL project are at MobileASL University of Washington. A video demonstration is posted at http://youtube.com/watch?v=FaE1PvJwI8E.
 
Mod's Note

Thread's been merged and moved to its proper location.
 