Robots are used extensively in structured industrial environments for various pre-programmed tasks. Unfortunately, programming these robots for every new task is time-consuming and expensive. We want robots to operate in unstructured environments and learn many tasks on the fly, either by themselves or with the assistance of a human teacher.
This project explores an approach to teaching robots manipulation tasks via human demonstrations. A human demonstrates the desired task (say, carrying a cup of water without spilling) by physically moving the robot. Given many such kinesthetic demonstrations, the robot applies a learning algorithm to build a model of the underlying task. In a new scene, the robot uses this task model to plan a path that satisfies the task's requirements. We present results on two 7-degree-of-freedom robots performing tabletop manipulation tasks.
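As a rough illustration of the graph-based inverse optimal control idea underlying this approach, the sketch below learns a linear cost function from demonstrations via maximum-entropy feature matching on a toy grid graph. The grid, the two features, and the demonstrations are all hypothetical stand-ins, not the actual robot setup: demonstrations avoid a "risky" center cell, so learning should drive up that cell's cost weight.

```python
import math

# Toy maximum-entropy inverse optimal control on a small graph.
# Hypothetical setup: states are cells of a 3x3 grid; the cost of
# stepping into cell s is w . f(s). Feature 0 is a constant (path
# length); feature 1 flags the center cell, which the demonstrated
# paths avoid, so its learned weight should grow.

START, GOAL, N = (0, 0), (2, 2), 3
STATES = [(r, c) for r in range(N) for c in range(N)]

def neighbors(s):
    r, c = s
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= r + dr < N and 0 <= c + dc < N:
            yield (r + dr, c + dc)

def feat(s):
    return [1.0, 1.0 if s == (1, 1) else 0.0]

def cost(w, s):
    return sum(wi * fi for wi, fi in zip(w, feat(s)))

def soft_values(w, sweeps=60):
    # Soft (log-sum-exp) value iteration toward the goal.
    V = {s: 0.0 if s == GOAL else 1e6 for s in STATES}
    for _ in range(sweeps):
        for s in STATES:
            if s == GOAL:
                continue
            vals = [-(cost(w, n) + V[n]) for n in neighbors(s)]
            m = max(vals)
            V[s] = -(m + math.log(sum(math.exp(v - m) for v in vals)))
    return V

def expected_features(w, V, horizon=40):
    # Forward pass: propagate probability mass from START under the
    # stochastic policy p(n|s) = exp(V[s] - cost(n) - V[n]), and
    # accumulate expected feature counts along the way.
    D, E = {START: 1.0}, [0.0, 0.0]
    for _ in range(horizon):
        nD = {GOAL: D.get(GOAL, 0.0)}  # goal is absorbing
        for s, mass in D.items():
            if s == GOAL:
                continue
            for n in neighbors(s):
                p = math.exp(V[s] - cost(w, n) - V[n])
                for i, fi in enumerate(feat(n)):
                    E[i] += mass * p * fi
                nD[n] = nD.get(n, 0.0) + mass * p
        D = nD
    return E

# Demonstrations: 4-step paths that skirt the center cell, so their
# average feature counts are [4.0, 0.0].
demo_F = [4.0, 0.0]

w = [1.0, 0.0]
for _ in range(100):
    V = soft_values(w)
    E = expected_features(w, V)
    # MaxEnt gradient step on demo likelihood: match feature counts.
    w = [wi + 0.1 * (ei - di) for wi, ei, di in zip(w, E, demo_F)]

print(w)  # the weight on the center-cell feature becomes positive
```

After training, the model penalizes the center cell enough that its planned paths, like the demonstrations, route around it; in a new scene the same learned cost can be applied to a different graph to plan task-satisfying paths.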
Arunkumar Byravan, Mathew Monfort, Brian Ziebart, Byron Boots, and Dieter Fox. Graph-Based Inverse Optimal Control for Robot Manipulation. IJCAI 2015.
This material is based upon work supported by the National Science Foundation under Grant Number 1227234.
Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.