Learning Human Utility from Video Demonstrations for Deductive Planning in Robotics

Published in Conference on Robot Learning (CoRL), 2017

Recommended citation: Shukla, N., He, Y., Chen, F., & Zhu, S. (2017). Learning Human Utility from Video Demonstrations for Deductive Planning in Robotics. Proceedings of the 1st Annual Conference on Robot Learning, PMLR 78:448-457. http://proceedings.mlr.press/v78/shukla17a.html

We uncouple three components of autonomous behavior (utilitarian value, causal reasoning, and fine motion control) to design an interpretable model of tasks from video demonstrations. Utilitarian value is learned by aggregating human preferences to capture the implicit goal of a task, explaining why an action sequence was performed. Causal reasoning is seeded from observations and grows through robot experience, explaining how to deductively accomplish sub-goals. Finally, fine motion control specifies which actuators to move. In our experiments, a robot learns how to fold t-shirts from visual demonstrations and proposes a plan (by answering why, how, and what) when folding never-before-seen articles of clothing.
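As a rough illustration of the preference-aggregation idea (not the paper's exact formulation), a utility function over task states can be fit from pairwise human preferences with a Bradley-Terry style ranking loss. The sketch below assumes states are already encoded as fixed-length feature vectors; the linear utility, feature dimension, and training loop are hypothetical stand-ins.

```python
# Minimal sketch: learning a scalar utility from pairwise human preferences
# (Bradley-Terry style ranking). Dimensions and data here are illustrative.
import numpy as np

rng = np.random.default_rng(0)

STATE_DIM = 16           # hypothetical feature dimension for a task state
w = np.zeros(STATE_DIM)  # linear utility: u(s) = w . s

def utility(s, w):
    """Scalar utility of a state feature vector."""
    return s @ w

def train_step(preferred, rejected, w, lr=0.1):
    """One gradient step on the pairwise logistic loss:
    -log sigmoid(u(preferred) - u(rejected))."""
    margin = utility(preferred, w) - utility(rejected, w)
    p = 1.0 / (1.0 + np.exp(-margin))     # P(preferred ranked above rejected)
    grad = (p - 1.0) * (preferred - rejected)
    return w - lr * grad

# Toy data: preferences are simulated from a hidden "true" utility direction,
# standing in for human rankings of states in demonstration videos.
true_w = rng.normal(size=STATE_DIM)
for _ in range(2000):
    a, b = rng.normal(size=(2, STATE_DIM))
    hi, lo = (a, b) if a @ true_w > b @ true_w else (b, a)
    w = train_step(hi, lo, w)

# The learned utility should rank held-out states like the hidden one does.
test = rng.normal(size=(100, STATE_DIM))
agree = np.mean((test @ w > 0) == (test @ true_w > 0))
print(f"sign agreement with hidden utility: {agree:.2f}")
```

In the paper the preferences come from humans ranking states observed across demonstration videos; the linear model above is only a stand-in for whatever utility representation is used over those states.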

Download paper here

