APE: Active Prototyping Environment
The APE project is part of both the Geometric Design and Computation Group and
the Prototyping Group. Perhaps the highest-level goal of this project
is to provide a sense of realism within a virtual environment by
producing the sense of contact with virtual models. The main thrust,
therefore, is efficient computation that sustains the maximum
update rates of both the haptic and visual displays.
- Evaluation: In such a high-speed, real-time system, geometric
evaluation can quickly become a bottleneck. Methods are required to
evaluate models, producing the evaluation point, surface
tangents, surface normal, and second-derivative information.
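One evaluation cycle might look like the following sketch. APE works on trimmed NURBS; a bicubic Bezier patch (a NURBS special case) keeps the illustration short, and all function names here are ours, not APE's.

```python
# Per-cycle surface evaluation: point, tangents, unit normal, and second
# derivatives of a bicubic Bezier patch (a simplified stand-in for NURBS).
import numpy as np
from math import comb

def bern(n, i, t):
    """Bernstein basis B_{i,n}(t); zero outside 0 <= i <= n."""
    if i < 0 or i > n:
        return 0.0
    return comb(n, i) * t**i * (1.0 - t)**(n - i)

def bases(t):
    """Degree-3 basis values with first and second derivatives."""
    b   = np.array([bern(3, i, t) for i in range(4)])
    db  = np.array([3.0 * (bern(2, i - 1, t) - bern(2, i, t)) for i in range(4)])
    ddb = np.array([6.0 * (bern(1, i - 2, t) - 2.0 * bern(1, i - 1, t)
                           + bern(1, i, t)) for i in range(4)])
    return b, db, ddb

def eval_patch(P, u, v):
    """P: 4x4x3 control net.  Returns S, Su, Sv, unit normal, Suu, Suv, Svv."""
    Bu, dBu, ddBu = bases(u)
    Bv, dBv, ddBv = bases(v)
    S   = np.einsum('i,ijk,j->k', Bu,   P, Bv)
    Su  = np.einsum('i,ijk,j->k', dBu,  P, Bv)   # tangent in u
    Sv  = np.einsum('i,ijk,j->k', Bu,   P, dBv)  # tangent in v
    Suu = np.einsum('i,ijk,j->k', ddBu, P, Bv)
    Suv = np.einsum('i,ijk,j->k', dBu,  P, dBv)
    Svv = np.einsum('i,ijk,j->k', Bu,   P, ddBv)
    n = np.cross(Su, Sv)
    return S, Su, Sv, n / np.linalg.norm(n), Suu, Suv, Svv
```

The second derivatives are what a Newton-style closest-point solver consumes, which is why the evaluator returns them alongside the point and normal.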
- Representation: During tracing, the contact point slides along the
surface boundary of the model. The model is made up of multiple
surfaces connected by trimming data structures. Efficient
computation requires special data structures that can store the
model compactly, but in a form easily queried.
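A minimal sketch of such a structure (not APE's actual data structures): each surface stores its trim loops in parameter space, and each trim curve records its neighboring surface, so a contact point sliding off one surface finds its continuation with a constant-time lookup rather than a global search.

```python
# Hypothetical trimmed-model representation: compact per-surface storage
# with O(1) adjacency queries across trim curves.
from dataclasses import dataclass, field

@dataclass
class TrimCurve:
    uv_polyline: list        # trim curve sampled in this surface's (u,v) domain
    neighbor: int            # index of the adjacent surface (-1 at model boundary)
    neighbor_curve: int      # index of the matching trim curve on the neighbor

@dataclass
class Surface:
    control_net: list                          # control points for this patch
    trims: list = field(default_factory=list)  # trim loops bounding the patch

@dataclass
class Model:
    surfaces: list = field(default_factory=list)

    def cross_edge(self, surf, trim):
        """When tracing leaves `surf` through trim curve `trim`, return the
        (surface, curve) pair to continue on -- a constant-time lookup."""
        t = self.surfaces[surf].trims[trim]
        return t.neighbor, t.neighbor_curve
```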
- Global Computation: Some information required is global in
scope. For instance, the future point of contact is always the
global closest point. There are currently no real-time methods for
finding this point so hybrid real-time approaches are investigated.
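One ingredient of such a hybrid is a cheap, budgeted coarse pass that seeds a local solver. The sketch below (uniform parameter-space sampling; APE's actual hybrid method is more sophisticated) returns the best sample as a starting guess:

```python
# Coarse seeding for a hybrid global closest-point scheme: sample the
# parameter domain on a fixed budget and return the nearest sample.
import numpy as np

def coarse_seed(surface, p, n=11):
    """surface(u, v) -> 3-D point.  Scan an n x n grid over [0,1]^2 and
    return the (u, v) of the sample nearest to p, plus that distance."""
    p = np.asarray(p, dtype=float)
    best_uv, best_d = (0.0, 0.0), float('inf')
    for u in np.linspace(0.0, 1.0, n):
        for v in np.linspace(0.0, 1.0, n):
            d = np.linalg.norm(np.asarray(surface(u, v)) - p)
            if d < best_d:
                best_uv, best_d = (u, v), d
    return best_uv, best_d
```

The seed is only approximate; handing it to the local closest-point iteration described below it in the list recovers full accuracy while keeping the per-cycle cost bounded.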
- Local Computation: During tracing, local closest points are required
that shadow the user's movement along the model's surface
boundary. Further, these local closest points also supplement the
global closest-point search.
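Local closest-point tracking fits a Newton iteration naturally: the contact parameters from the previous haptic cycle seed the next solve, so a step or two suffices to shadow the user's motion. A sketch on an analytic paraboloid (in practice the derivatives would come from the surface evaluator; the function names are ours):

```python
# Newton iteration for the local closest point on a parametric surface:
# solve (S - p).Su = 0 and (S - p).Sv = 0 using second-derivative data.
import numpy as np

def paraboloid(u, v):
    """S(u, v) = (u, v, u^2 + v^2) with analytic first/second derivatives."""
    S   = np.array([u, v, u * u + v * v])
    Su  = np.array([1.0, 0.0, 2.0 * u])
    Sv  = np.array([0.0, 1.0, 2.0 * v])
    Suu = np.array([0.0, 0.0, 2.0])
    Suv = np.zeros(3)
    Svv = np.array([0.0, 0.0, 2.0])
    return S, Su, Sv, Suu, Suv, Svv

def local_closest(p, u, v, iters=20, tol=1e-10):
    """Refine (u, v) so S(u, v) is locally closest to p."""
    p = np.asarray(p, dtype=float)
    for _ in range(iters):
        S, Su, Sv, Suu, Suv, Svv = paraboloid(u, v)
        r = S - p
        g = np.array([r @ Su, r @ Sv])          # gradient of squared distance / 2
        if np.linalg.norm(g) < tol:
            break
        J = np.array([[Su @ Su + r @ Suu, Su @ Sv + r @ Suv],
                      [Su @ Sv + r @ Suv, Sv @ Sv + r @ Svv]])
        du, dv = np.linalg.solve(J, g)
        u, v = u - du, v - dv
    return u, v
```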
- Speed: The display of the environment needs to be kept at an
interactive rate. We use optimized OpenGL but are looking into other
display methods as well; there is a lot of work on point clouds, for
example, which can be somewhat faster than rendering polygons in some cases.
- Stereo: We also need the proper display characteristics to
produce head-tracked stereoscopic displays. This greatly aids the
sense of immersion within the environment.
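Head-tracked stereo requires an off-axis (asymmetric) frustum per eye: the frustum apex sits at the tracked eye position, and the near-plane extents come from projecting the physical screen rectangle toward it. A minimal sketch, assuming a screen in the z = 0 plane viewed from z > 0 (parameter names are ours):

```python
# Off-axis frustum computation for head-tracked stereo rendering.
def off_axis_frustum(eye, screen_lo, screen_hi, near):
    """eye: (x, y, z) of one eye; screen_lo/hi: (x, y) corners of the display
    rectangle in the z = 0 plane.  Returns (left, right, bottom, top) suitable
    for a glFrustum-style projection at distance `near`."""
    ex, ey, ez = eye
    scale = near / ez                  # similar triangles: screen -> near plane
    return ((screen_lo[0] - ex) * scale, (screen_hi[0] - ex) * scale,
            (screen_lo[1] - ey) * scale, (screen_hi[1] - ey) * scale)

def stereo_frusta(head, iod, screen_lo, screen_hi, near):
    """Each eye is offset half the interocular distance `iod` from the head."""
    hx, hy, hz = head
    left_eye  = (hx - iod / 2.0, hy, hz)
    right_eye = (hx + iod / 2.0, hy, hz)
    return (off_axis_frustum(left_eye,  screen_lo, screen_hi, near),
            off_axis_frustum(right_eye, screen_lo, screen_hi, near))
```

As the tracked head moves, the frusta skew to keep the virtual scene registered with the physical screen, which is what sells the immersion.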
- Multi-machine: The viewer can be run on one machine while each
individual device controller can be run on its own machine.
- Multi-process: In the case of the Phantom controller there are two
distinct processes: one that talks to the APE viewer, and one that
talks to the Phantom. Shared memory is used to allow the two to
communicate with each other.
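The idea can be sketched with Python's modern shared-memory API (APE itself predates this API; the pose layout here is a hypothetical example): one process creates and writes a segment, the other attaches to it by name and reads.

```python
# Two attachments to one shared-memory segment standing in for the two
# Phantom-controller processes exchanging a contact point.
import struct
from multiprocessing import shared_memory

POSE = struct.Struct('3d')                    # x, y, z as doubles

# "Device" side: create the segment and publish the latest contact point.
shm = shared_memory.SharedMemory(create=True, size=POSE.size)
POSE.pack_into(shm.buf, 0, 1.0, 2.0, 3.0)

# "Viewer" side: attach by name and read the most recent pose.
view = shared_memory.SharedMemory(name=shm.name)
pose = POSE.unpack_from(view.buf, 0)

view.close()
shm.close()
shm.unlink()
```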
- Multi-thread: Everything is multi-threaded. The viewer has a thread
for the graphics and one for each of the devices it communicates
with. Each device has a thread for its communication with APE as
well as with the device itself. There are also various other threads for
assorted global tasks, like tracking a globally closest point.
- Networking: In order to talk across multiple machines, as well as
multiple architectures, a machine-independent network protocol is used.
- Shared memory: Provides fast communication between processes on the same machine.
- Zero-wait buffering: Data needs to be both written and read as fast
as possible. Avoiding any locking is a huge win. We use an approach
that allows zero wait on both read and write while also providing
the most recent data to readers.
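The standard wait-free way to get this behavior is a triple buffer: the writer always has a free back slot, the reader always has a stable front slot, and publication is a single index exchange. A sketch of the slot logic (in a real implementation the swap is one atomic exchange or compare-and-swap; Python's tuple assignment stands in for it here, and this may differ from APE's exact scheme):

```python
# Triple buffering: writer and reader never block, and the reader always
# sees the most recently completed write.
class TripleBuffer:
    def __init__(self):
        self.buf = [None, None, None]
        self.back, self.ready, self.front = 0, 1, 2
        self.fresh = False                     # set when `ready` holds new data

    def write(self, value):
        self.buf[self.back] = value            # fill the private back slot
        # publish: swap back/ready (atomic exchange in a real implementation)
        self.back, self.ready, self.fresh = self.ready, self.back, True

    def read(self):
        if self.fresh:                         # new data published since last read
            self.front, self.ready, self.fresh = self.ready, self.front, False
        return self.buf[self.front]            # front slot is private to the reader
```

Because the writer overwrites the unread middle slot rather than queuing, a slow reader simply skips stale updates instead of stalling the 1 kHz haptic loop.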
- Thompson II, T.V., and Cohen, E.,
"Direct Haptic Rendering of Complex Trimmed NURBS Models,"
Proc. 8th Annual Symp. on Haptic Interfaces for Virtual
Environment and Teleoperator Systems,
(Nashville, TN), ASME, November 14-16, 1999.
- Johnson, D.E., Thompson II, T.V., Kaplan, M., Nelson, D., and Cohen, E.,
"Painting Textures with a Haptic Interface,"
in Proc. Virtual Reality '99,
(Houston, TX), pp. 282-285, IEEE, March 13-17, 1999.
- Johnson, D.E., and Cohen, E.,
"An Improved Method for Haptic Tracing of Sculptured Surfaces,"
Symp. Haptic Interfaces,
Proc. ASME Dynamic Systems and Control Division, DSC-Vol. 64,
(Anaheim, CA), pp. 243-248, Nov. 15-20, 1998.
- Thompson II, T.V., Nelson, D.D., Cohen, E., and Hollerbach, J.M.,
"Maneuverable Models Within A Haptic Virtual Environment,"
in Proc. 6th Annual Symp. on Haptic Interfaces for Virtual
Environment and Teleoperator Systems,
(Dallas, TX), pp. 37-44, ASME, Nov. 15-21, 1997.
- Thompson II, T.V., Johnson, D.E., and Cohen, E.,
"Direct Haptic Rendering Of Sculptured Models,"
in Proc. Symposium on Interactive 3D Graphics,
(Providence, RI), pp. 167-176, ACM, April, 1997.
Support for this research was provided by NSF Grant
MIP-9420352, by DARPA grant F33615-96-C-5621, and by the NSF and DARPA
Science and Technology Center for Computer Graphics and Scientific
Visualization.
Last update: September 21, 2000