Progress

This page provides a sampling of recent results, illustrating the use of photometrically correct modeling to evaluate visual accessibility and perception experiments on low vision hazard detection and spatial updating.

Using photometrically correct modeling to evaluate visual accessibility

Preliminary trip hazard study

A computer graphics model of the laboratory space used for this project at the University of Minnesota was created. The model included accurate geometry and accurate material properties and was rendered using Radiance. It was used to study how the arrangement of simulated downlights affects the visibility of a 1" trip hazard on the ramp/platform apparatus used for the perception experiments. Five simulated 50 W PAR30 NFL downlights were located on 5' centers at ceiling height and moved, as a group, first to maximize and then to minimize the visibility of the potential trip hazard.
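
To give a concrete sense of how such a layout can be parameterized, the sketch below (a hypothetical helper, not the project's actual scripts) generates downlight positions on 5' centers, shifts them as a group, and writes them out as simple Radiance light/ring sources. The radiance value, aperture size, ceiling height, and file names are illustrative assumptions.

# Sketch only: parameterizes a row of simulated downlights for a Radiance scene.
# All numeric values below are placeholders, not values from the study.

FT_TO_M = 0.3048           # scene assumed to be modeled in meters
SPACING_FT = 5.0           # downlights on 5 ft centers (from the description above)
CEILING_HEIGHT_FT = 9.0    # assumed ceiling height
APERTURE_RADIUS_M = 0.048  # approximate PAR30 aperture radius (assumption)
SOURCE_RADIANCE = 2.5e4    # placeholder source radiance, not a measured value


def downlight_grid(n_x, n_y, offset_ft=(0.0, 0.0)):
    """Return (x, y, z) fixture centers in meters for an n_x-by-n_y grid,
    shifted as a group by offset_ft (e.g. (2.0, 0.0) for a 2 ft move)."""
    positions = []
    for i in range(n_x):
        for j in range(n_y):
            x_ft = i * SPACING_FT + offset_ft[0]
            y_ft = j * SPACING_FT + offset_ft[1]
            positions.append((x_ft * FT_TO_M, y_ft * FT_TO_M,
                              CEILING_HEIGHT_FT * FT_TO_M))
    return positions


def write_radiance_sources(path, positions):
    """Write a minimal .rad description: one 'light' material plus one
    downward-facing ring source per fixture position."""
    with open(path, "w") as f:
        f.write("void light downlight_mat\n0\n0\n"
                f"3 {SOURCE_RADIANCE} {SOURCE_RADIANCE} {SOURCE_RADIANCE}\n\n")
        for k, (x, y, z) in enumerate(positions):
            f.write(f"downlight_mat ring fixture_{k}\n0\n0\n"
                    f"8 {x:.3f} {y:.3f} {z:.3f}  0 0 -1  0 {APERTURE_RADIUS_M}\n\n")


if __name__ == "__main__":
    # Baseline layout vs. the same layout shifted 2 ft, for side-by-side renders.
    write_radiance_sources("downlights_baseline.rad", downlight_grid(5, 1))
    write_radiance_sources("downlights_shifted.rad",
                           downlight_grid(5, 1, offset_ft=(2.0, 0.0)))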


Simulation of step hazard viewed with well-located spot lighting

In the picture above, a 1" step is visible towards the back of the ramp on the right side of the room.

Simulation of step hazard viewed with poorly located spot lighting

When the downlights are moved 2' in the simulation, the step becomes much less visible. This illustrates the importance of using photometrically accurate simulations to predict potential impediments to visual accessibility, rather than relying solely on general design guidelines.
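
One way to quantify the difference between the two renderings is to compare the luminance contrast of the hazard edge under each layout. The sketch below is a minimal illustration, assuming luminance values (cd/m2) have already been sampled from the rendered images; the numbers are placeholders, not measurements from this study.

# Sketch: compare hazard-edge contrast across two rendered lighting layouts.
# Luminance samples below are placeholders, not measurements from the study.

def weber_contrast(target_lum, background_lum):
    """Weber contrast of the step riser against the adjacent ramp surface."""
    return (target_lum - background_lum) / background_lum

layouts = {
    "baseline":    {"riser": 18.0, "ramp": 30.0},
    "shifted_2ft": {"riser": 27.5, "ramp": 30.0},
}

for name, lum in layouts.items():
    c = weber_contrast(lum["riser"], lum["ramp"])
    print(f"{name}: Weber contrast = {c:+.2f}")

Under these made-up values, shifting the fixtures collapses the edge contrast toward zero, which is the kind of quantitative comparison a photometrically accurate rendering makes possible.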

Empirical testing of low vision hazard detection and spatial updating

Detection and recognition of environmental objects

An experimental facility has been constructed that allows perceptual experiments on the detection and identification of objects and mobility hazards under controlled lighting conditions. It includes an indoor sidewalk, built from wooden staging material, interrupted midway by a step up or down or a ramp up or down. (The images above are from a computer graphics simulation of this walkway.)

In a recent study, participants wearing goggles that simulated low vision were asked to detect and recognize deviations from a flat sidewalk: a step up or down, or a ramp up or down. Lighting (diffuse overhead vs. side lighting from windows) and viewing distance were varied, as was the background for the sidewalk (adjacent areas matched in gray or contrasting black). We have identified three key stimulus cues for recognizing steps and ramps: luminance contrast at the transition point, a geometrical shape cue (an L-junction) on the bounding contour of a down step, and height in the image plane associated with upward or downward transitions. The visibility of these cues interacts with lighting and depends on the subject's viewing distance and the extent of artificial blur.
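
As a rough illustration of how blur and viewing distance can erode the luminance-contrast cue, the toy model below blurs a one-dimensional edge profile and reports the residual local contrast. The blur scaling, window size, and contrast values are arbitrary assumptions for illustration, not calibrated to the goggles or stimuli used in the experiments.

# Toy model: blur (standing in for reduced acuity) attenuates the luminance
# edge at a step transition, more so at longer viewing distances.
import numpy as np
from scipy.ndimage import gaussian_filter1d

def edge_profile(step_contrast=0.4, n=512):
    """Luminance profile across the transition: uniform ramp, then a darker riser."""
    profile = np.ones(n)
    profile[n // 2:] *= (1.0 - step_contrast)
    return profile

def max_local_contrast(profile, window=9):
    """Largest Michelson contrast found in any short window across the profile."""
    best = 0.0
    for i in range(len(profile) - window):
        seg = profile[i:i + window]
        best = max(best, (seg.max() - seg.min()) / (seg.max() + seg.min()))
    return best

for viewing_distance_m in (1.0, 2.5, 5.0):
    # Farther viewing compresses the edge into fewer retinal samples, so a
    # fixed angular blur removes more of the local edge contrast.
    sigma = 8.0 * viewing_distance_m      # arbitrary blur scaling (assumption)
    blurred = gaussian_filter1d(edge_profile(), sigma)
    print(f"{viewing_distance_m} m: residual local contrast "
          f"{max_local_contrast(blurred):.2f}")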

Distance judgments under simulated low vision

Those with normal sight can accurately judge distances to environmental locations tens of meters away, as evidenced by their ability to perform distance-dependent actions to such locations without visual feedback beyond an initial view. Anecdotal evidence suggests that distance judgments are much harder for those with most forms of low vision, though few, if any, controlled studies have been done.

In one of the first studies to quantitatively evaluate distance judgments under simulated low vision, participants wore goggles fitted with theatrical lighting diffusers, resulting in tested acuities between 20/381 and 20/1261 and contrast sensitivities between 0.0 and 0.75. In a second block of trials, participants wore goggles with clear, flat lenses. Group-averaged data showed no difference in walked distance between the two conditions, although variability was greater with simulated low vision. This is a surprising result, given that under the simulated low vision condition the targets were barely visible blobs. Work is underway to understand the mechanisms that allow reasonably accurate distance judgments, at least as reflected in mobility-based actions, even with severely degraded visual input.
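
For context, the sketch below converts the reported Snellen acuities to logMAR and shows the kind of summary comparison used for blind-walking data. The walked-distance values are invented placeholders chosen only to illustrate "similar means, greater variability"; they are not data from the study.

# Sketch: summarizing blind-walking responses under the two goggle conditions.
# Walked distances are made-up placeholders; the Snellen-to-logMAR conversion
# is the standard logMAR = log10(denominator / 20) for a 20/X acuity.
import math
import statistics

def snellen_to_logmar(denominator, numerator=20):
    """Convert a Snellen fraction (numerator/denominator) to logMAR."""
    return math.log10(denominator / numerator)

# Range of simulated acuities reported above.
for d in (381, 1261):
    print(f"20/{d} -> logMAR {snellen_to_logmar(d):.2f}")

# Hypothetical walked distances (m) to a target, one value per participant.
walked = {
    "clear_lenses":         [5.8, 6.1, 5.9, 6.3, 5.7],
    "simulated_low_vision": [5.6, 6.4, 5.5, 6.6, 5.9],
}

for condition, dists in walked.items():
    print(f"{condition}: mean {statistics.mean(dists):.2f} m, "
          f"SD {statistics.pstdev(dists):.2f} m")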


This is a multi-disciplinary project involving personnel from the University of Minnesota, the University of Utah, and Indiana University, and is supported by National Eye Institute (National Institutes of Health) grant 2 R01 EY017835-06A1.