With the increasing realism of interactive applications, there is a growing need to harness additional sensory modalities such as hearing. While the synthesis and propagation of sounds in virtual environments have been explored, little work addresses sound localization and its integration into behaviors for autonomous virtual agents. This paper develops a framework that enables autonomous virtual agents to localize sounds in dynamic virtual environments, subject to distortion effects due to attenuation, reflection, and diffraction from obstacles, as well as interference between multiple audio signals. We additionally integrate hearing into standard predictive collision avoidance techniques and couple it with vision, allowing agents to react to what they see and hear while navigating virtual environments.
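As a rough illustration of the attenuation effect mentioned above (not the paper's actual propagation model), the sketch below applies an inverse-distance amplitude falloff and additively mixes several point sources at a listener position; all function and parameter names are hypothetical.

```python
import math

def attenuate(amplitude, distance, ref_distance=1.0):
    """Inverse-distance amplitude falloff (a common free-field approximation).

    Distances inside ref_distance are clamped so amplitude never exceeds
    the emitted amplitude.
    """
    return amplitude * ref_distance / max(distance, ref_distance)

def received_signal(sources, listener):
    """Sum attenuated amplitudes from several point sources at a listener.

    sources: list of ((x, y), amplitude) tuples.
    This treats interference only as additive mixing of amplitudes;
    reflection and diffraction effects are not modeled here.
    """
    total = 0.0
    for (sx, sy), amp in sources:
        d = math.hypot(sx - listener[0], sy - listener[1])
        total += attenuate(amp, d)
    return total

# A unit-amplitude source at distance 2 is heard at half strength.
print(received_signal([((2.0, 0.0), 1.0)], (0.0, 0.0)))  # → 0.5
```

An agent could threshold this received intensity to decide whether a sound event is salient enough to trigger a behavioral response.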
Yu Wang, Mubbasir Kapadia, Pengfei Huang, Ladislav Kavan, Norman Badler. Sound Localization and Multi-Modal Steering for Autonomous Virtual Agents. Symposium on Interactive 3D Graphics and Games, 2014.
We thank Alexander Shoulson for the ADAPT system, and Brian Gygi and the Hollywood Edge company for providing the environmental sound data. This research was sponsored by the Army Research Laboratory and was accomplished under Cooperative Agreement #W911NF-10-2-0016. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Laboratory or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein. The first author thanks Tsinghua University, where he was an undergraduate, for supporting his preliminary visit to the University of Pennsylvania, where this work was initiated.