Designing Visually Accessible Spaces

(Images: a wooden block with surfaces of 80% and 20% albedo, and a bench that is difficult to see in diffuse lighting.)

The long-term goal of this project is to provide tools for designing environments that are safe for the mobility of low-vision individuals, and that enhance safety for others, including the elderly, who may need to operate under low luminance or other visually challenging conditions. One of the main problems in designing for visual accessibility is the difficulty of predicting the photometric appearance of real spaces. The photograph of the child's block shows how lighting and geometry, rather than reflectance alone, can generate both high- and low-contrast visible edges. The bench is easily visible to many people with low vision under direct sunlight, but becomes a serious hazard under diffuse lighting, such as on a cloudy day. This is a joint effort with the University of Minnesota and Indiana University. For more information ...
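The way lighting and geometry can dominate reflectance in producing edge contrast can be illustrated with a simple Lambertian sketch (an idealized model; the function names and the illuminance value are illustrative, not the project's actual photometric tools):

```python
import math

def luminance_directional(albedo, angle_deg, illuminance=100.0):
    # Lambertian luminance under a directional source: flux on the surface
    # scales with the cosine of the angle between the normal and the light.
    return illuminance * albedo * max(0.0, math.cos(math.radians(angle_deg))) / math.pi

def luminance_diffuse(albedo, illuminance=100.0):
    # Under fully diffuse (overcast) illumination, surface orientation drops out.
    return illuminance * albedo / math.pi

def michelson(l1, l2):
    # Michelson contrast of an edge between two luminances.
    return abs(l1 - l2) / (l1 + l2)

# Geometric edge: two faces of the same 80%-albedo block, tilted 0 and 60
# degrees to the sun, produce strong contrast under direct light...
geometric_edge = michelson(luminance_directional(0.8, 0),
                           luminance_directional(0.8, 60))
# ...but the same edge vanishes entirely on a cloudy day:
geometric_edge_overcast = michelson(luminance_diffuse(0.8),
                                    luminance_diffuse(0.8))
# A reflectance edge between the 80% and 20% albedo surfaces, by contrast,
# remains visible under either kind of lighting:
albedo_edge = michelson(luminance_diffuse(0.8), luminance_diffuse(0.2))
```

Under this model the geometric edge has Michelson contrast 1/3 in direct sunlight but exactly zero under diffuse light, which is the sense in which lighting and geometry, not reflectance, determine whether a hazard like the bench is visible.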

Perceptual Issues in Computer Graphics

(Images: an affordance judgment involving passability between two poles, and a similar judgment with the poles at a different distance.)

The ultimate purpose of computer graphics is to produce images for viewing by people. The success of a computer graphics system therefore depends on how well it conveys relevant information to a human observer. In applications such as scientific visualization, simulation and training, education, rehabilitation, and visual analytics, it is often important that the perception of computer graphics be veridical: users should correctly “see” the model being rendered. The goal of this project is to develop a method for quantifying perceptual fidelity using the concept of perceived affordances, defined as the perception of one's own action capabilities in a particular situation. Affordance judgments can be used to probe how accurately viewers perceive action-relevant spatial information under a variety of rendering and display conditions. (The images shown are used to probe judgments of passability through apertures at different distances from the viewer.)
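A passability judgment of this kind, and the use of such judgments as a fidelity measure, can be sketched as follows (hypothetical helper functions; the critical ratio and the agreement measure are illustrative assumptions, not the project's actual protocol):

```python
def is_judged_passable(aperture_m, shoulder_m, critical_ratio=1.3):
    # An aperture affords passage when it exceeds the observer's shoulder
    # width by some critical ratio. The 1.3 default is illustrative; in
    # practice the ratio is measured empirically per observer and condition.
    return aperture_m > critical_ratio * shoulder_m

def affordance_agreement(real_judgments, rendered_judgments):
    # One possible fidelity score: the fraction of trials on which the
    # judgment under the rendered display matches the judgment made in
    # the corresponding real environment.
    matches = sum(r == v for r, v in zip(real_judgments, rendered_judgments))
    return matches / len(real_judgments)

# A hypothetical observer with 0.45 m shoulders judging two apertures:
narrow = is_judged_passable(0.55, 0.45)   # below the critical width
wide = is_judged_passable(0.65, 0.45)     # above it
```

The point of the agreement measure is that rendering is perceptually faithful to the extent that action judgments made in the display match those made in the real scene, regardless of whether either judgment is itself accurate.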

Space Perception in Virtual Environments

Virtual Environment (VE) systems are computer interfaces that provide users with the sensory experience of being in a simulated space. These systems frequently consist of a head-mounted display (HMD) that allows users to view and move within a computer-generated environment. A consistent finding across several laboratories is that absolute egocentric distance judgments within action space (2-30 meters) are underestimated in HMD-VEs but accurate in real environments. Our work examines the potential causes of this underestimation and aims to improve users' spatial performance in several ways. One is to examine how calibration of HMDs affects the distortion of visual cues presented to the observer. Another is to provide feedback to observers within the HMD to calibrate their responses and change their subsequent behavior. A third examines how experience viewing a realistic human avatar affects spatial judgments.
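One common way to summarize such underestimation is a single compression gain relating judged to actual distance (a minimal sketch; the data below are invented for illustration, not results from our experiments):

```python
def compression_gain(judged, actual):
    # Least-squares slope through the origin of judged vs. actual distance;
    # a gain below 1 indicates systematic underestimation.
    return sum(j * a for j, a in zip(judged, actual)) / sum(a * a for a in actual)

# Hypothetical HMD responses compressed to 80% of the actual distances,
# spanning the 2-30 m action-space range:
actual = [2.0, 5.0, 10.0, 20.0, 30.0]
judged = [1.6, 4.0, 8.0, 16.0, 24.0]
gain = compression_gain(judged, actual)
```

A gain of 1.0 would indicate accurate judgments, as typically found in real environments; feedback-based calibration can then be evaluated by whether it moves the gain toward 1.0 on subsequent trials.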

Perceptual Coupling Between Perceived Self-Motion and Locomotion

Humans calibrate their visually directed actions to changing circumstances in their environment. Both head-mounted displays (HMDs) and well-designed treadmill-based virtual environments (treadmill-VEs) can evoke a similar effect, allowing the investigation of open questions in perception-action coupling that would be difficult or impossible to study with real-world experimental apparatus. We have shown a visual influence on both gait and “natural” walking speed, and have probed questions such as whether the dominant visual cue is based solely on 2-D optic flow or whether a 3-D estimate of the speed of self-motion is also involved. More recently, we have explored how different categories of feedback affect subsequent actions and more cognitive responses, both in the virtual world and in the real world.

Embodied Perception

(Image: avatar arms tracked to match the user's arms.)

The notion that perception is body-based motivates several lines of research. We are interested in whether a person’s body serves as a metric with which to scale the environment in spaces beyond the immediate body (distances, heights, sizes, and affordances). To conduct this research, we gather data outdoors in natural settings, indoors, and in virtual environments, manipulating body representations with external objects, mirrors, and virtual avatars. We are also pursuing the question of how non-optical variables such as internal states contribute to space perception. Other related research includes the role of motor representations and object concepts in tool use.

Spatial Cognition

Humans have the remarkable ability to represent spatial locations and keep track of them through their own movement. Our spatial cognition projects examine the processes of spatial updating during real and imagined movement. We use both behavioral and functional neuroimaging methods to examine spatial and motor imagery, including self and object transformations, simple and more complex path integration tasks, and imagined locomotion.
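A simple path integration task can be modeled as dead reckoning over a sequence of turns and translations (a sketch under assumed conventions; the turn-and-distance encoding and the triangle-completion example are illustrative):

```python
import math

def path_integrate(segments):
    # Dead-reckon a position from (turn_deg, distance) segments, where each
    # turn is relative to the current heading, then return the homing
    # vector: the distance and world-frame bearing back to the start.
    x = y = 0.0
    heading = 0.0  # radians; 0 = the initial facing direction
    for turn_deg, dist in segments:
        heading += math.radians(turn_deg)
        x += dist * math.cos(heading)
        y += dist * math.sin(heading)
    home_dist = math.hypot(x, y)
    home_bearing = math.degrees(math.atan2(-y, -x))
    return home_dist, home_bearing

# A triangle-completion path: walk 3 m, turn 90 degrees, walk 4 m.
# The correct homing response is a 5 m return leg.
home_dist, home_bearing = path_integrate([(0, 3.0), (90, 4.0)])
```

Spatial updating experiments compare a participant's actual homing response against this geometrically correct vector, for both physically executed and merely imagined movements.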

Distance Perception

A fundamental problem in space perception is to determine what a person actually “sees” in their environment. Our distance perception projects examine what cues may be used for absolute egocentric distance perception in different circumstances, how different response measures are used to indicate perception, and how these measures of distance perception may influence perceptual representations. These questions are approached using both real and virtual environments.

The work of the Visual Perception and Spatial Cognition research group is made possible by the generous support of the National Science Foundation, the National Institutes of Health, and the University of Utah.