Deaf to sign via video handsets
Deaf people could soon be using video mobiles to chat with their friends using sign language.
Video compression tools developed by US researchers make it possible to send live pictures of people signing across low-bandwidth mobile networks.
The system cuts the bandwidth needed by sending data only about the parts of each frame that have changed.
The researchers are talking to mobile firms about how to get the technology into the hands of deaf people.
Many American deaf people preferred to communicate via sign language, but this was impossible over current mobile networks, said University of Washington computer scientist Richard Ladner, one of the principal investigators on the project.
Chatting via signing across mobile networks was impossible, Prof Ladner explained, because the bandwidth available meant video was of too low a quality to accurately depict the arm, finger and face movements of sign language.
While video compression techniques could ease this problem there were other barriers too, said Prof Ladner.
"To do all this calculation and video compression runs down your battery pretty fast," he added.
The team is working on ways to get the software onto handsets
Prof Ladner and his co-researchers, Professor Eve Riskin and Professor Sheila Hemami, have overcome these problems by creating compression software that looks for the parts of each video frame important to signers.
To cut down on the amount of data that has to be sent, video compression systems typically only send information about what elements of a scene change from frame to frame.
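As a rough illustration of that idea, the sketch below, written in Python, transmits only the blocks of a frame that differ noticeably from the previous one; the 16-pixel block size and the change threshold are illustrative assumptions rather than details of the researchers' encoder.

```python
import numpy as np

BLOCK = 16        # block size in pixels (illustrative assumption)
THRESHOLD = 8.0   # mean absolute difference that counts as "changed"

def changed_blocks(prev_frame, frame):
    """Yield (row, col, pixels) for each block that differs from the
    previous frame. Only these blocks would be compressed and sent;
    the receiver reuses its copy of the last frame for the rest."""
    h, w = frame.shape
    for r in range(0, h - BLOCK + 1, BLOCK):
        for c in range(0, w - BLOCK + 1, BLOCK):
            old = prev_frame[r:r + BLOCK, c:c + BLOCK].astype(np.float32)
            new = frame[r:r + BLOCK, c:c + BLOCK].astype(np.float32)
            if np.abs(new - old).mean() > THRESHOLD:
                yield r, c, frame[r:r + BLOCK, c:c + BLOCK]
```

A static background therefore costs almost nothing to send, while a moving hand produces a burst of block updates.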
The system developed by Prof Ladner and his co-workers goes further: it concentrates on hand, arm and face movements. In addition, it ensures that the face of a signer, where movements during signing are quite subtle, is shown in greater detail.
"The large, slower movements of hands and arms can be picked up at low fidelity," said Prof Ladner. "The face needs higher fidelity because the movements are much smaller."
This approach also made sense, he said, because people interpreting sign language looked at the face of the signer 95% of the time.
This lets peripheral vision pick up the gross movements of the arms and hands while the fovea, the part of the retina capable of picking out fine detail, concentrates on the smaller facial movements.
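A minimal sketch of that region-of-interest idea, assuming a face detector supplies a bounding box: blocks overlapping the face get a fine quantiser and everything else a coarse one, so the bits go where viewers are actually looking. The face_box input and the quantiser steps are hypothetical, not the team's published parameters.

```python
def quality_map(frame_shape, face_box, block=16):
    """Assign a quantiser step to each block: small (fine detail) on
    the face, large (coarse) for hands, arms and background.
    face_box is (top, left, bottom, right) from any face detector --
    a hypothetical input, not part of the published system."""
    FINE, COARSE = 4, 16   # illustrative quantiser steps
    h, w = frame_shape
    top, left, bottom, right = face_box
    qmap = {}
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            overlaps_face = (r < bottom and r + block > top
                             and c < right and c + block > left)
            qmap[(r, c)] = FINE if overlaps_face else COARSE
    return qmap
```

Coarse quantisation discards fine detail but costs far fewer bits, which suits the large, slow arm movements Prof Ladner describes.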
The system developed by the team can work across networks with only 10-20 kilobits per second (kbps) of bandwidth available, said Prof Ladner. In the UK, most people are on mobile networks that offer about 40kbps download speed but much less than that for uploads.
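A back-of-the-envelope calculation shows how tight that budget is; the 10 frames-per-second rate here is an assumption for illustration, not a project figure.

```python
# Rough per-frame budget at the bitrates quoted above.
bitrate_bps = 15_000        # mid-point of the 10-20kbps range
frames_per_second = 10      # assumed frame rate, for illustration only
bits_per_frame = bitrate_bps / frames_per_second
print(bits_per_frame / 8)   # roughly 190 bytes per compressed frame
```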
The research has gone so well that the team is in talks with handset makers and operators to put it on phones.
"We realised that the technology is close enough that we can deploy it," he said.