Background and Approach

This page provides background information on low vision for this project, along with an overview of the approach being taken to develop design tools that can facilitate the creation of visually accessible spaces.


Visual accessibility refers to the use of vision to travel efficiently and safely through an environment, to perceive the spatial layout of key features in the environment, and to keep track of one's location in the environment.  There are several million people in the United States with visual impairments serious enough to restrict their reading and mobility.  Our aim is to explore mechanisms for increasing visual accessibility for such low vision individuals.

Low vision comes in many forms, involving combinations of loss of acuity, contrast sensitivity, and visual field.  To construct a public space that facilitates visual accessibility, it is necessary to predict how well individuals with low vision can perform critical actions within that space.

bench that is difficult to see in diffuse lighting

One of the main problems in designing for visual accessibility arises from the difficulty of predicting the photometric appearance of real spaces.  Shown to the left is a photograph of a bench that was until recently outside the Psychology building at the University of Minnesota.  Under direct sunlight, the bench was visible even to those with limited acuity or contrast sensitivity.  Under diffuse lighting, such as on a cloudy day, however, the bench is visually indistinct from the walkway and thus represents a serious hazard for pedestrians with low vision.

University of Minnesota Gateway Center

While our focus is on visual accessibility, we are more broadly interested in universal design: maximizing the utility of spaces for all people, regardless of age or disability. Universal design is a broad, integrated solution that helps everyone, rather than a set of separate solutions for people with disabilities. It involves the exploitation of key features important to function, safety, and mobility. In the case of visual accessibility, this requires an understanding of the complex interactions among the geometry, lighting, and surface properties of objects as they relate to visual performance. The image at the upper right shows the interior of the Gateway Center at the University of Minnesota, a space in which the conflicts between aesthetics and the requirements of universal design are quite stark.


The project has four closely integrated parts:

project goals, also explained in text below

  • Engineering: Develop methods for predicting the physical levels of light reaching the eye for a given viewpoint, geometric configuration, mix of surface materials, and lighting situation. While calibrated, high dynamic range (HDR) photometric measurements can be made of existing spaces, spaces under design will require state-of-the art computer graphics tools that can reliably predict light levels for detailed models of physical spaces.
  • Empirical: Develop methods for investigating perceptual capabilities critical to visually-based mobility, including detection and classification distances for functionally significant targets in real environments for a range of lighting conditions and restricted viewing conditions, and the ability to use these targets for successful spatial orientation. Utilize these methods to acquire performance data for normally sighted subjects with visual restrictions and people with low vision in controlled spaces that are nevertheless representative of the visual accessibility problems likely to be encountered in real spaces.
  • Computational: Develop models that can predict perceptual competence on tasks critical to visually-based mobility, given photometrically accurate information about a particular visual environment. Extend these models to account for characteristic forms of low vision, including severe peripheral field loss, central loss, and depressed contrast sensitivity. Development of these models will be informed by the results of empirical testing and will be validated by testing predicted performance in real-world environments.
  • Application: Demonstrate a proof-of-concept tool that operates on design models from one or more existing architectural design systems and is able to highlight potential obstacles to visual accessibility by predicting the quantitative photometry that would result from the physical instantiation of the design model and using this as input to the perceptual model.
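
As a toy illustration of how the Computational and Application parts might connect, the sketch below compares a target's Michelson contrast against a viewer's contrast threshold. The luminance values and the threshold are hypothetical, and a real perceptual model would be far richer; this is a minimal sketch, not the project's actual model.

```python
# Toy sketch (not the project's actual model): given calibrated luminances
# for a target and its background, decide whether the target exceeds a
# viewer's contrast threshold. All numeric values here are hypothetical.

def michelson_contrast(l_target, l_background):
    """Michelson contrast from two luminances (cd/m^2)."""
    return abs(l_target - l_background) / (l_target + l_background)

def is_detectable(l_target, l_background, threshold):
    """True if the contrast exceeds the viewer's contrast threshold."""
    return michelson_contrast(l_target, l_background) > threshold

# Hypothetical numbers: a concrete bench against a concrete walkway.
sunlit = is_detectable(l_target=120.0, l_background=40.0, threshold=0.3)
diffuse = is_detectable(l_target=45.0, l_background=40.0, threshold=0.3)
print(sunlit, diffuse)  # → True False
```

A design tool built on this idea would evaluate such comparisons over many viewpoints and lighting conditions rather than a single pair of luminances.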

Photometrically correct imaging

Predicting the visual accessibility of a space requires knowing the distribution of light seen from any vantage point of interest.  In computer graphics and photography, relative distributions of light matter much more than actual metric values.  For this project, calibrated values are required in order to correctly determine contrast and to account for scotopic (low light) effects.  To capture the variability of lighting that can occur in real spaces, high dynamic range (HDR) representations are required.
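
For illustration, one simple way such calibration can work, assuming a single scale factor suffices, is to anchor the HDR image's relative values to a luminance-meter reading of one known patch. The function and numbers below are hypothetical:

```python
import numpy as np

# Sketch of absolute calibration, assuming a single scale factor suffices:
# a luminance-meter reading of one known patch fixes the mapping from the
# HDR image's relative values to cd/m^2. All numbers are hypothetical.

def calibrate(hdr_relative, patch_mask, measured_cd_m2):
    """Scale relative HDR luminance so a measured patch matches the meter."""
    patch_mean = hdr_relative[patch_mask].mean()
    return hdr_relative * (measured_cd_m2 / patch_mean)

hdr = np.array([[2.0, 4.0], [8.0, 16.0]])         # relative luminances
mask = np.array([[True, False], [False, False]])  # patch measured by the meter
absolute = calibrate(hdr, mask, measured_cd_m2=50.0)
print(absolute[0, 0])  # → 50.0
```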

photometrically correct computer graphics rendering of a real space

For existing spaces, multiple exposures can be combined to produce an HDR image using a variety of tools, including Photoshop, pfstools, and several others. For most purposes, calibration is required to correctly scale values. This involves specialized light-measuring devices and at least some amount of tedious hand processing. Our main emphasis is on identifying potential hazards to visual accessibility in the design process, before new architectural spaces are constructed. This requires that light intensities be accurately predicted based on design models. The image above was generated by the Radiance rendering system and accurately reflects both the geometry and photometry of one of our laboratory spaces.
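
A minimal sketch of how multiple exposures can be merged into a relative radiance map, assuming a linear camera response (real tools such as pfstools also recover the camera's nonlinear response curve):

```python
import numpy as np

# Minimal HDR-merge sketch under a linear-response assumption. Each pixel's
# radiance is estimated as value/exposure_time, averaged across exposures
# with a "hat" weight that downweights under- and over-exposed values.

def merge_exposures(images, exposure_times):
    """images: list of float arrays in [0, 1]; returns relative radiance."""
    num = np.zeros_like(images[0])
    den = np.zeros_like(images[0])
    for img, t in zip(images, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)  # peak weight at mid-gray
        num += w * img / t
        den += w
    return num / np.maximum(den, 1e-8)

short = np.array([[0.1, 0.5]])   # 1/100 s exposure
long_ = np.array([[0.5, 1.0]])   # 1/20 s exposure (bright pixel clips)
hdr = merge_exposures([short, long_], [0.01, 0.05])
```

Note how the clipped pixel in the long exposure receives zero weight, so its radiance is recovered entirely from the short exposure.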

Empirical Testing

Empirical testing will be done to explore obstacle avoidance and mobility in low vision over a range of environmental spaces, lighting conditions, and visual deficits.  The intent is to balance the need for experimental controls with the need to evaluate conditions that are meaningful to the design of real spaces.

For obstacle detection and classification, the first step is to construct prototypical obstacles in a room in which we can completely control the lighting.  Four classes of obstacles will be considered:  curb-like steps up or down, objects extending up from the floor, holes in the floor, and objects extending down from the ceiling (see figure below).  Lighting will simulate the diffuse lighting typical of a standard office environment, spot lighting as is often found in public spaces, and daylight from a side window.  Subjects will include both people with low vision and normally sighted individuals wearing devices intended to simulate low vision.  One open question we hope to address is whether testing normally sighted subjects wearing visual restrictors provides useful insights into actual low vision performance.

four classes of obstacles to be used in perception experiments
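
A software analogue of such restrictor devices can be sketched as follows: a small blur kernel stands in for acuity loss, and compression of pixel values toward mid-gray stands in for reduced contrast sensitivity. Parameter values are illustrative only.

```python
import numpy as np

# Illustrative software simulation of low vision (an analogue of the
# physical restrictor devices worn by normally sighted subjects): blur
# for acuity loss, contrast compression for reduced contrast sensitivity.

def reduce_contrast(image, factor):
    """Compress pixel values toward mid-gray by the given factor (0-1)."""
    return 0.5 + factor * (image - 0.5)

def blur_rows(image, kernel):
    """Blur along rows with edge padding; kernel must sum to 1."""
    pad = len(kernel) // 2
    padded = np.pad(image, ((0, 0), (pad, pad)), mode="edge")
    return np.stack([np.convolve(row, kernel, mode="valid") for row in padded])

scene = np.array([[0.0, 0.0, 1.0, 1.0]])  # a hard edge (e.g. a curb)
kernel = np.array([0.25, 0.5, 0.25])      # crude stand-in for optical blur
degraded = reduce_contrast(blur_rows(scene, kernel), factor=0.4)
```

After degradation, the sharp curb edge becomes a shallow luminance ramp, which is exactly the kind of stimulus whose detectability the experiments will measure.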

Perceptual modeling

possible curb hazards

Although we are still far from a complete model of human object recognition, there is growing consensus regarding its overall computational architecture. Evidence from computational, behavioral, and neural studies suggests the following picture. Visual recognition begins with a fast feedforward process that extracts features. These features serve to rapidly index or propose candidate object or scene categories, such as “post”, “curb”, “car”, or “sign”.  Then, depending on the confidence level required for specific task goals, the decision can be refined by verification through feedback. For example, there may be sufficient information in the initial feedforward pass to hypothesize the existence of an object at an approximate location in the image, but more iterations and/or more fixations may be required to increase the confidence of the classification or to accurately localize the object and determine its precise shape and extent.

Our initial approach to perceptual modeling will focus on feedforward, image-based theories for several reasons:

  • The first feedforward pass is likely to carry the most critical information for obstacle avoidance.
  • Image-based methods are computationally more tractable with natural image input.
  • There are intriguing correspondences between image-based approaches and biological vision.
  • It will be straightforward to measure performance as a function of key input variables characterizing human low vision (e.g., loss of spatial resolution, contrast, and visual field).
  • The methods we propose are extensible to adaptive perceptual learning of the features important for a specific task.
  • This approach extends naturally to using information in a motion sequence of images.
  • Objects of a particular class can be localized in the image, providing information for obstacle search.
  • Of particular relevance to the current project is a fragment-based scheme that selects fragments corresponding to important areas in the image.  After learning, the selected fragments can be used to predict which features are important for detection.  This knowledge could be useful to designers, who can then focus on improving the detectability of those features.
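
In the spirit of the fragment-based scheme mentioned above (though not the project's actual model), a toy fragment detector can be sketched as sliding a fragment over an image and recording locations with high normalized correlation:

```python
import numpy as np

# Toy fragment-based detection sketch: slide a "fragment" (here a hand-made
# vertical-edge patch, standing in for a learned fragment) over an image and
# report offsets where the normalized correlation score exceeds a threshold.

def match_fragment(image, fragment, threshold):
    """Return (row, col) offsets where the fragment correlates strongly."""
    fh, fw = fragment.shape
    f = fragment - fragment.mean()
    hits = []
    for r in range(image.shape[0] - fh + 1):
        for c in range(image.shape[1] - fw + 1):
            patch = image[r:r + fh, c:c + fw]
            p = patch - patch.mean()
            denom = np.linalg.norm(p) * np.linalg.norm(f)
            score = (p * f).sum() / denom if denom > 0 else 0.0
            if score > threshold:
                hits.append((r, c))
    return hits

image = np.array([[0.0, 0.0, 1.0, 1.0],
                  [0.0, 0.0, 1.0, 1.0]])
fragment = np.array([[0.0, 1.0],
                     [0.0, 1.0]])  # vertical-edge fragment
hits = match_fragment(image, fragment, threshold=0.9)  # → [(0, 1)]
```

The locations where a given fragment fires are exactly the "important areas" such a scheme exposes; dropping a fragment and observing the loss in detection performance indicates how much that feature matters.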

This is a multi-disciplinary project involving personnel from the University of Minnesota, the University of Utah, and Indiana University, and supported by grant 2 R01 EY017835-06A1 from the National Eye Institute of the National Institutes of Health.