
Communications of the ACM

ACM Careers

Researchers Develop Magnifying Smartphone Screen Application For Visually Impaired



A user of the head-motion application sees a magnified smartphone screen (top right) in Google Glass.

Credit: Massachusetts Eye and Ear

Researchers from the Schepens Eye Research Institute of Massachusetts Eye and Ear/Harvard Medical School have developed a smartphone application that projects a magnified image of the smartphone screen onto Google Glass; users move their heads to pan across the magnified image and view the corresponding portion of the screen. They have shown that the technology can potentially benefit low-vision users, many of whom find the smartphone's built-in zoom feature difficult to use because magnification removes the surrounding context. They describe their results in "Magnifying Smartphone Screen using Google Glass for Low-Vision Users," published in the journal IEEE Transactions on Neural Systems and Rehabilitation Engineering.

"When people with low visual acuity zoom in on their smartphones, they see only a small portion of the screen, and it's difficult for them to navigate around — they don't know whether the current position is in the center of the screen or in the corner of the screen," says senior author Gang Luo, associate scientist at Schepens Eye Research Institute of Mass. Eye and Ear and an associate professor of ophthalmology at Harvard Medical School. "This application transfers the image of smartphone screens to Google Glass and allows users to control the portion of the screen they see by moving their heads to scan, which gives them a very good sense of orientation."
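The core idea described above, mapping head movement to a viewport over a magnified screen image, can be sketched in a few lines. The following is a hypothetical illustration, not the authors' implementation: the function name, angle ranges, and pixel dimensions are all assumptions chosen for the example.

```python
def viewport_origin(yaw_deg, pitch_deg,
                    screen_w, screen_h,   # magnified screen image size (px)
                    view_w, view_h,       # glasses display size (px)
                    yaw_range=30.0, pitch_range=20.0):
    """Map head angles (degrees from a neutral, centered pose) to the
    top-left corner of the viewport within the magnified screen image.
    The +/- yaw_range and +/- pitch_range limits are assumed values."""
    # Normalize each angle to [0, 1], clamping so the viewport
    # never pans past the edge of the magnified image.
    nx = min(max((yaw_deg + yaw_range) / (2 * yaw_range), 0.0), 1.0)
    ny = min(max((pitch_deg + pitch_range) / (2 * pitch_range), 0.0), 1.0)
    # Scale into pixel coordinates of the magnified image.
    x = round(nx * (screen_w - view_w))
    y = round(ny * (screen_h - view_h))
    return x, y

# A centered head pose shows the center of the magnified screen,
# which is what gives users the sense of orientation described above.
print(viewport_origin(0.0, 0.0, 2160, 3840, 640, 360))
```

Cropping the magnified image at the returned origin and streaming that region to the glasses' display would complete the loop; in practice the head angles would come from the device's orientation sensors.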

An estimated 1.5 million Americans over the age of 45 suffer from low vision — severe visual impairment caused by a variety of conditions. People with low vision often have great difficulty reading and discerning fine details. Magnification is considered the most effective method of compensating for visual loss. The researchers developed the head-motion application to address the limitations of conventional smartphone screen zooming, which does not provide sufficient context and can be painstaking to navigate.

In an evaluation of their new technology, the researchers observed two groups of research subjects (one using the head-motion Google Glass application and the other using the smartphone's built-in zoom feature) and measured the time it took them to complete certain tasks. The head-based navigation method reduced the average trial time by about 28 percent compared to conventional manual scrolling.

As next steps for the project, the researchers would like to support more Google Glass gestures for interacting with smartphones. They would also like to compare the effectiveness of head-motion-based navigation with other commonly used smartphone accessibility features, such as voice-based navigation.

"Given the current heightened interest in smart glasses, such as Microsoft's HoloLens and Epson's Moverio, it is conceivable to think of a smart glass working independently without requiring a paired mobile device in the near future," says first author Shrinivas Pundlik. "The concept of head-controlled screen navigation can be useful in such glasses even for people who are not visually impaired."

Authors on the IEEE Transactions on Neural Systems and Rehabilitation Engineering paper include last author Gang Luo, first author Shrinivas Pundlik, Huaqi Yi, Rui Liu, M.D., and Eli Peli, O.D., of Schepens Eye Research Institute of Massachusetts Eye and Ear and Harvard Medical School.

This research study was supported by an unrestricted gift from Google Inc., and the Eleanor and Miles Shore Fellowship Award of Harvard Medical School.


 
