Researchers at the University of Washington and Cornell University are working on the MobileASL project, which is developing a mobile phone that would enable deaf users to hold real-time conversations in American Sign Language (ASL). A major hurdle the project faced was the low bandwidth of wireless networks, which forced the researchers to strike a balance between transmission speed and video quality.
Most compression algorithms do not prioritize the aspects of video that make ASL easy to understand. Cornell professor Sheila Hemami, who studies how the human visual system interprets video, has been working on integrating an intelligibility metric into MobileASL's video-compression software so the phones can maximize compression while keeping the signing understandable. The intelligibility metric identifies which areas of an image need to be in high resolution, such as the signer's hands, and which can be left in low resolution. The researchers also had to conserve battery power despite the heavy power demands of video compression and decompression; they solved this with a variable frame-rate system that switches between high and low frame rates depending on whether the user is signing or watching the other person sign. After four years of work, the researchers are close to a functional prototype.
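A rough sense of how region-of-interest compression of this kind can work is sketched below. This is not MobileASL's code: the macroblock size, the quantization values, and the rule of promoting any block that touches the region-of-interest mask are all illustrative assumptions.

```python
import numpy as np

# Illustrative values only; the article does not give MobileASL's parameters.
BLOCK = 16          # macroblock size in pixels
QP_ROI = 10         # fine quantizer for high-detail regions (lower = better quality)
QP_BACKGROUND = 40  # coarse quantizer for everything else

def per_block_qp(roi_mask: np.ndarray) -> np.ndarray:
    """Map a binary region-of-interest mask (1 = areas like the signer's
    hands) to a grid of per-macroblock quantization parameters: fine
    quantization where detail matters for intelligibility, coarse elsewhere."""
    h, w = roi_mask.shape
    rows, cols = h // BLOCK, w // BLOCK
    qp = np.full((rows, cols), QP_BACKGROUND, dtype=np.int32)
    for r in range(rows):
        for c in range(cols):
            block = roi_mask[r * BLOCK:(r + 1) * BLOCK,
                             c * BLOCK:(c + 1) * BLOCK]
            if block.any():  # any ROI pixel promotes the whole macroblock
                qp[r, c] = QP_ROI
    return qp
```

In a real encoder these values would feed the per-macroblock quantization step; here they simply make the resolution trade-off concrete.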
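The variable frame-rate idea can be sketched in the same spirit. The frame rates, the threshold, and the frame-differencing activity detector below are assumptions for illustration; the article does not say how MobileASL detects whether the user is signing.

```python
import numpy as np

HIGH_FPS = 10           # assumed rate while the user is actively signing
LOW_FPS = 1             # assumed idle rate while the user is only watching
MOTION_THRESHOLD = 6.0  # assumed mean absolute pixel difference that counts as signing

def is_signing(prev: np.ndarray, curr: np.ndarray) -> bool:
    """Crude activity detector: treat a large frame-to-frame change
    in the camera image as evidence of active signing."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return float(diff.mean()) > MOTION_THRESHOLD

def next_frame_rate(prev: np.ndarray, curr: np.ndarray) -> int:
    """Spend encoding power (and battery) only when it buys intelligibility."""
    return HIGH_FPS if is_signing(prev, curr) else LOW_FPS
```

The point of the design is that encoding work, and therefore battery drain, tracks the conversation: full effort while signing, minimal effort while watching.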
From IEEE Spectrum
Hello everyone,
My name is Xavier Bonilla and I am deaf. I heard about the new ASL cell phone, and I think it is cool and want to buy one. Can you let me know when sign language by cell phone will be available?