Making Webcasts accessible to the deaf

Miss-Delectable

http://www.thejournalnews.com/apps/...20051017/BUSINESS01/510170302/1066/BUSINESS01

Sending a friend a link to a humorous or quirky Web site is one of the Internet's social pleasures.

So when Dimitri Kanevsky learned that his colleagues on the Human Language Technologies team at IBM Research in Yorktown Heights were circulating a humorous Honda ad, he was eager to watch it, particularly since he had helped invent an interactive speech recognition system for the carmaker.

But Kanevsky said he didn't enjoy the ad as much as he hoped he would. Because he is deaf, he couldn't hear the sound.

If he had been watching television, Kanevsky could have simply turned on closed captioning to read the words on the screen. But captioning on the Internet is lagging behind a boom in audiovisual content that's expanding the Web beyond its roots as a text-based medium.

Today's Web is much more visually stimulating than it was a decade ago. It's also a lot noisier, thanks to everything from advertising jingles to music videos to Webcasts of news events, college lectures and corporate meetings.

That's terrific, unless you're deaf or hard of hearing.

That group includes 28 million Americans, said Bill Stark, project director of the National Association of the Deaf's Captioned Media Program, who would like the hearing to try watching a Webcast or TV show with the sound off. "You don't get a lot out of it. That's what the deaf get," Stark said. "We're talking about people being deprived of equal opportunity to learn, enjoy and appreciate."

Because so little audible Web content is accessible to the deaf, Kanevsky rarely seeks it out. "I don't even know what I'm losing," he said.

Unlike most of his deaf peers, Kanevsky has the skills and opportunity to do something about the problem. A Russian immigrant who became deaf at age 1, Kanevsky has spent much of his career as a mathematician inventing speech-related technologies.

Kanevsky, who holds 78 patents and has the title of master inventor at IBM, today is part of a team researching ways to make captioning easier, cheaper and faster by automating the process using voice recognition software.

That's critical for the Web because online audiences generally aren't large enough to justify the high labor costs associated with TV captioning, which is performed by specially trained stenographers who can transcribe spoken language at a rate of 225 words a minute.

It's expensive to transcribe every Webcast, especially when there is no guarantee that a deaf person will ever want to view it, said Sara Basson, IBM's program manager for accessibility services.

"Imagine an agency has 500 hours of Webcasts. Sending that off for stenographic transcription and getting it realigned with the video and so on is something that will cost between, depending on where you go, $500 to $1,000 per finished hour. That is daunting for an agency and even a corporation," she said.

The IBM research team started a project called CaptionMeNow to create a tool that would caption a Webcast only when a deaf person asks for it. "When someone who is deaf or hard of hearing comes across that Webcast, and wants it captioned, they click a CaptionMeNow button," she said.

The video is processed through IBM's speech recognition system and automatically captioned. Because speech recognition software still isn't perfect, the transcript could then be routed to a human editor for quick fixes before posting online.
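The on-demand workflow Basson describes, a caption request followed by automatic recognition, a quick human correction pass and publication, might look roughly like the sketch below. All names here are hypothetical; the article does not describe CaptionMeNow's actual interfaces.

from dataclasses import dataclass, field

@dataclass
class CaptionJob:
    webcast_url: str                     # location of the webcast to caption
    draft_transcript: str = ""           # raw output of speech recognition
    reviewed: bool = False               # True once a human editor has corrected it
    captions: list = field(default_factory=list)  # (start_sec, end_sec, text) tuples

def recognize_speech(webcast_url):
    # Stand-in for an automatic speech recognition pass over the audio track.
    return "draft transcript produced by the recognizer"

def human_review(draft):
    # Stand-in for the quick human edit that catches homophones and misrecognitions.
    return draft

def caption_on_request(webcast_url):
    # Triggered when a viewer clicks the caption button for this webcast.
    job = CaptionJob(webcast_url)
    job.draft_transcript = recognize_speech(webcast_url)
    corrected = human_review(job.draft_transcript)
    job.reviewed = True
    # Align the corrected text with the video timeline; one segment here for brevity.
    job.captions = [(0.0, 30.0, corrected)]
    return job

job = caption_on_request("http://example.com/webcast.mp4")
print(job.captions)

The point of this design, as the article explains it, is that the expensive human step happens only after a viewer actually asks for captions, rather than for every hour of archived video.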

Alexander Faisman, a software engineer who works closely with Kanevsky, said a human translator needs to be in the loop in part because the English language has so many words that sound exactly the same.

"We know that the technology is not perfect. There are many situations when you need a very accurate transcript, or not that accurate, and it can all be translated into cost and time of the editing," he said.

The CaptionMeNow project builds on IBM's earlier work to help a Canadian university caption lectures only when deaf and hard-of-hearing students are present in the classroom.

IBM is using the technology today to caption Web lectures for company employees — and it's popular not only among the deaf.

"It turns out hearing employees are also using the technology," Kanevsky said. "We found that even hearing people sometimes prefer to read a lecture, because they can read very quickly."

Technology created for the disabled has migrated into the mainstream in the past.

Everyone calls their bank today using an automated-voice system that grew out of voice synthesizers developed for the blind, Basson said.

Closed captioning is often turned on for the benefit of the hearing as well as the deaf in noisy environments such as airports.

IBM is even morphing the CaptionMeNow technology to aid the hearing in a project called TransformMeNow that translates captions into different languages.

The text of captioned Webcasts can be searched for a particular topic, say all government addresses that talk about global warming, Basson said.

"Once you have text, you can have all these sophisticated search mechanisms," she said.

As audiovisual content blooms on the Web, there's a sense of déjà vu in the deaf community.

When "talkies" replaced silent films, deaf and hard-of-hearing audiences no longer could rely on the dialog that appeared on the screen.

Similarly, the early Web was easily accessible to people with an array of disabilities. The blind could use text readers to listen to Web pages spoken aloud. The deaf could communicate silently through text.

But as more graphics, photos and videos began to populate the Web, the disabled were edged out.

That's why the World Wide Web Consortium started an initiative to make sure software developers could easily build sites that are accessible to everyone.

For the blind, that means making sure there are alternative text versions of charts and tables as well as descriptions of photographs. Aiding the deaf means captioning audio content.

Judy Brewer, director of the Web Accessibility Initiative at the World Wide Web Consortium, said more tools are needed to make captioning cost-efficient.

"Streamlining the captioning process would be an important step forward because being able to create a draft of caption text would reduce the amount a human captioner would have to type. Then it becomes more of an issue of correcting a transcript," she said. "You want to make it easy for someone who is developing a Webcast to incorporate captions."

Jack Gates, president and chief executive of the National Captioning Institute, said his organization has seen demand for Web captions rise in the past three years as wide adoption of high-speed Internet connections makes it possible to view large video files enjoyably at home.

Even so, much Web content that's captioned was transcribed first for broadcast on TV, Gates said.

"The issue with Webcasting, as with most of the Internet, is the dollars behind it," Gates said. "Who is going to pay for this? Does it expand the viewership by having the captions?"

The government is actually ahead of the private sector on Web captioning because the 1998 Workforce Investment Act requires all federal agencies to make their technology accessible to people with disabilities.

There's still debate over whether the Americans with Disabilities Act of 1990 applies to the Web. TV captioning is mandated by law and regulated by the Federal Communications Commission, which will require 95 percent of new TV programming to be captioned by Jan. 1, 2006.

Stark, of the Captioned Media Program, thinks the Web will one day be as friendly to the deaf and hard of hearing as TV is now.

"One day, TV and the Internet will all be one machine," he said.
 