
Vision-Correcting Displays


[Image: The MIT-developed vision-correcting display technology incurs only a modest loss in resolution. Credit: Christine Daniloff / MIT]

Researchers at the MIT Media Laboratory and the University of California at Berkeley have developed a new display technology that automatically corrects for vision defects — no glasses (or contact lenses) required.

The technique could lead to dashboard-mounted GPS displays that farsighted drivers can consult without putting their glasses on, or electronic readers that eliminate the need for reading glasses, among other applications.

"The first spectacles were invented in the 13th century," says Gordon Wetzstein, a research scientist at the Media Lab and one of the display's co-creators. "Today, of course, we have contact lenses and surgery, but it's all invasive in the sense that you either have to put something in your eye, wear something on your head, or undergo surgery. We have a different solution that basically puts the glasses on the display, rather than on your head. It will not be able to help you see the rest of the world more sharply, but today, we spend a huge portion of our time interacting with the digital world."

Wetzstein and his colleagues describe their display in "Eyeglasses-free Display: Towards Correcting Visual Aberrations with Computational Light Field Displays," a paper they're presenting in August at ACM SIGGRAPH 2014, the premier graphics conference. Joining him on the paper are Ramesh Raskar, the NEC Career Development Professor of Media Arts and Sciences and director of the Media Lab's Camera Culture group, and Berkeley's Fu-Chung Huang and Brian Barsky.

Knowing the Angles

The display is a variation on a glasses-free 3-D technology also developed by the Camera Culture group. But where the 3-D display projects slightly different images to the viewer's left and right eyes, the vision-correcting display projects slightly different images to different parts of the viewer's pupil.

A vision defect is a mismatch between the eye's focal distance — the range at which it can actually bring objects into focus — and the distance of the object it's trying to focus on. Essentially, the new display simulates an image at the correct focal distance — somewhere between the display and the viewer's eye.

The difficulty with this approach is that simulating a single pixel in the virtual image requires multiple pixels of the physical display. The angle at which light should seem to arrive from the simulated image is sharper than the angle at which light would arrive from the same image displayed on the screen. So the physical pixels projecting light to the right side of the pupil have to be offset to the left, and the pixels projecting light to the left side of the pupil have to be offset to the right.
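To make that geometry concrete, here is a minimal Python sketch, not drawn from the researchers' implementation: the function name screen_position, the distances, and the 4 mm pupil are illustrative assumptions. It traces the ray from a virtual pixel through a point on the pupil and asks where it must originate on the physical screen.

    # Minimal geometric sketch (not the researchers' code): find where on the
    # physical screen a ray must originate so that, after passing through a given
    # point on the pupil, it appears to come from a virtual pixel in front of the eye.
    # Distances are measured from the pupil plane, in millimeters; values are made up.

    def screen_position(x_virtual, pupil_offset, d_virtual, d_screen):
        """Intersect the ray (virtual pixel -> pupil sample) with the screen plane."""
        scale = d_screen / d_virtual      # > 1, since the screen lies beyond the virtual plane
        return pupil_offset + (x_virtual - pupil_offset) * scale

    # A virtual pixel on the optical axis, simulated 250 mm from the eye,
    # shown on a screen 350 mm away, viewed through a 4 mm pupil:
    for pupil_offset in (-2.0, 0.0, +2.0):   # left edge, center, right edge of the pupil
        x = screen_position(0.0, pupil_offset, 250.0, 350.0)
        print(f"pupil offset {pupil_offset:+.1f} mm -> screen pixel at x = {x:+.2f} mm")

    # Rays entering the right side of the pupil (+2 mm) must originate from a screen
    # position offset to the left (negative x), and vice versa.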

The use of multiple on-screen pixels to simulate a single virtual pixel would drastically reduce the image resolution. But this problem turns out to be very similar to a problem that Wetzstein, Raskar, and colleagues solved in their 3-D displays, which also had to project different images at different angles.

The researchers discovered that there is, in fact, a great deal of redundancy between the images required to simulate different viewing angles. The algorithm that computes the image to be displayed onscreen can exploit that redundancy, allowing individual screen pixels to participate simultaneously in the projection of different viewing angles. The MIT and Berkeley researchers were able to adapt that algorithm to the problem of vision correction, so the new display incurs only a modest loss in resolution.
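One rough way to picture that shared-pixel trick, as a toy Python sketch rather than the published algorithm: treat the sharp image that should land on the retina as a least-squares target and let each screen pixel contribute to several retinal samples at once. The projection matrix A, the image sizes, and the random target b below are all made-up stand-ins.

    # Toy illustration of the shared-pixel idea, not the published algorithm:
    # each retinal sample is a weighted sum of a few screen pixels, so the screen
    # image x is chosen by solving the least-squares system A x ~ b, where b is
    # the sharp image as it should land on the retina and A (hypothetical here)
    # records which screen pixels reach which retinal samples through the pupil.
    import numpy as np

    rng = np.random.default_rng(0)
    n_screen, n_retina = 64, 192          # more retinal samples (viewing angles) than screen pixels

    A = np.zeros((n_retina, n_screen))
    for i in range(n_retina):
        j = int(i * n_screen / n_retina)
        A[i, max(j - 1, 0):j + 2] = 1.0   # each retinal sample sees a small neighborhood of pixels
    A /= A.sum(axis=1, keepdims=True)

    b = rng.random(n_retina)              # stand-in for the desired retinal image

    # Plain least squares, clipped to valid pixel values; a practical solver would
    # enforce the physical [0, 1] range inside the optimization instead.
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    x = np.clip(x, 0.0, 1.0)
    print("residual:", np.linalg.norm(A @ x - b))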

In the researchers' prototype, however, display pixels do have to be masked from the parts of the pupil for which they're not intended. That requires that a transparency patterned with an array of pinholes be laid over the screen, blocking more than half the light it emits.

Facing the Problem

But early versions of the 3-D display faced the same problem, and the MIT researchers solved it by instead using two liquid-crystal displays (LCDs) in parallel. Carefully tailoring the images displayed on the LCDs to each other allows the system to mask perspectives while letting much more light pass through. Wetzstein envisions that commercial versions of a vision-correcting screen would use the same technique.
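One way to see why two stacked panels recover brightness, sketched here with a generic nonnegative factorization rather than the group's actual solver: light traveling along any ray is attenuated by the product of the transmittances it crosses on the two panels, so a slice of the target light field can be approximated by a product of two nonnegative patterns instead of being mostly blocked by a pinhole mask. The toy target L, the panel sizes, and the update loop below are illustrative assumptions only.

    # Rough two-layer sketch (a generic nonnegative factorization, not the group's
    # solver): along any ray, the light reaching the eye is the product of the
    # transmittances it crosses on the two panels, so a 2-D slice L of the target
    # light field is approximated by the outer product front @ back.
    import numpy as np

    rng = np.random.default_rng(1)
    # Hypothetical target: a smooth light-field slice indexed by (screen position, pupil position).
    L = np.outer(np.linspace(0.2, 1.0, 32), np.linspace(1.0, 0.2, 32)) + 0.05 * rng.random((32, 32))

    front = rng.random((32, 1)) + 0.1     # pattern on one panel (column factor)
    back = rng.random((1, 32)) + 0.1      # pattern on the other panel (row factor)

    # Alternating multiplicative updates keep both patterns nonnegative,
    # as physical transmittances must be.
    for _ in range(200):
        front *= (L @ back.T) / (front @ back @ back.T + 1e-9)
        back *= (front.T @ L) / (front.T @ front @ back + 1e-9)

    print("relative error:", np.linalg.norm(front @ back - L) / np.linalg.norm(L))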

Indeed, he says, the same screens could both display 3-D content and correct for vision defects, all glasses-free. They could also reproduce another Camera Culture project, which diagnoses vision defects. So the same device could, in effect, determine the user's prescription and automatically correct for it.

"Most people in mainstream optics would have said, 'Oh, this is impossible,'" says Chris Dainty, a professor at the University College London Institute of Ophthalmology and Moorfields Eye Hospital. "But Ramesh's group has the art of making the apparently impossible possible."

"The key thing is they seem to have cracked the contrast problem," Dainty adds. "In image-processing schemes with incoherent light — normal light that we have around us, nonlaser light — you're always dealing with intensities. And intensity is always positive (or zero). Because of that, you're always adding positive things, so the background just gets bigger and bigger and bigger. And the signal-to-background, which is contrast, therefore gets smaller as you do more processing. It's a fundamental problem."

Dainty believes that the most intriguing application of the technology is in dashboard displays. "Most people over 50, 55, quite probably see in the distance fine, but can't read a book," Dainty says. "In the car, you can wear varifocals, but varifocals distort the geometry of the outside world, so if you don't wear them all the time, you have a bit of a problem. There, [the MIT and Berkeley researchers] have a great solution."


 
