Refreshments 3:20 p.m.
Abstract
Recent developments in sensor technology have made it feasible to equip mobile robots with high-fidelity sensors and deploy
them in real-world applications. However, these robots still lack the ability to accurately sense and interact with dynamic environments.
Widespread deployment of mobile robots requires the ability to autonomously learn environmental models based on sensory inputs, detect environmental changes and revise the learned models in response to these changes. In addition, the mobile robot needs the ability to autonomously tailor sensing and information processing to the task at hand. This talk will present examples of such learning and planning on mobile robots, based on visual cues.
First, I shall describe a probabilistic approach that enables a mobile robot to utilize the structure inherent in its environment to extract
information from different sensory inputs (e.g., vision and range finders). As a result, the robot is able to autonomously model and track
the desired objects in dynamic environments. Next, I shall describe a probabilistic hierarchical decision-making approach whose layers match the cognitive requirements of visual planning. This approach enables a mobile robot to jointly decide where to look, what to look for, and how to process the acquired information, based on the task at hand.
All algorithms are implemented and evaluated on humanoid and wheeled robot platforms. The talk will include several videos of experimental trials in the robot soccer framework and other indoor/outdoor environments.
BIO
Mohan Sridharan is an Assistant Professor of Computer Science at Texas Tech University. Prior to his current
appointment, he was a Research Fellow in the School of Computer Science at University of Birmingham (UK). He
received his Ph.D. in Electrical and Computer Engineering from The University of Texas at Austin.
His research interests include robotics, machine vision, cognitive science, multiagent systems and stochastic
machine learning.