My research focuses on autonomous learning and perception for robot manipulation. I am particularly interested in enabling robots to autonomously discover and manipulate objects of which they have no prior knowledge or experience. For the most up-to-date information on my research, see the Utah LL4MA Lab website.
In order for robots to leave sterile factory floors and precisely calibrated laboratories, they must be endowed with sophisticated manipulation capabilities that are robust to the levels of uncertainty present in real-world settings. These robots will be deployed as assistants in homes and offices, as coworkers alongside humans on crowded factory floors, and as surrogates for humans in dangerous environments. Newly available reliable tactile sensors and affordable 3D cameras must be leveraged to accomplish this goal. My research centers on developing the techniques and algorithms required to make use of these sensors to improve robot manipulation in the real world. My work to date has shown how robots can autonomously discover objects, improve manipulation skills, and generalize these skills to previously unseen objects — crucial abilities missing in currently deployed robots. A coordinated interplay of perception and manipulation fundamentally enables these methods' success. Perception not only guides the robot during manipulation, but also provides a learning signal by letting the robot analyze the results of its actions. Manipulation, in turn, provides the means to improve perception: a robot can change its environment to reduce uncertainty and remove ambiguity in the scene. By jointly considering perception and manipulation, I am able to design straightforward, efficient algorithms that produce robust and reliable results.
My research on unifying perception and manipulation focuses on answering the following questions:
If you are a student interested in working with me, please see this page.