

The target application domain for my dissertation research was low-level motion control on a mobile robot. Within the LfD literature, approaches that provide policy corrections typically have the teacher indicate the correct policy prediction from a discrete set of actions, each with significant time duration. It is more challenging, however, to provide corrections within continuous spaces sampled at a rapid rate: both characteristics of low-level motion control.

My dissertation research [1] developed techniques to address both of these challenges. Advice-operators were introduced as a corrective feedback form suitable for providing continuous-valued corrections, and Focused Feedback for Mobile Robot Policies (F3MRP) as a framework suitable for providing feedback on policies sampled at a high frequency [3]. Concretely, an advice-operator is a mathematical computation performed on an observation input or action output. Operators are applied over a learner execution segment, indicated through the F3MRP interface; pairing a modified observation (or action) with the executed action (or observation) yields a corrected mapping. Teacher selection of a single advice-operator and execution segment thus translates into multiple continuous-valued corrections, making the approach suitable for modifying low-level motion control policies sampled at high frequency.
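The mechanism above can be sketched in a few lines of Python. This is an illustrative sketch only, not the F3MRP implementation: the operator name (`scale_speed`), the 1-D speed observation/action pairs, and the helper `apply_operator` are all assumptions made for the example. The key point it shows is how one teacher selection (an operator plus a segment) expands into many continuous-valued training corrections.

```python
# Hypothetical advice-operator sketch: one operator applied over an
# execution segment produces a corrected (observation, action) mapping
# for every sampled point in that segment.

def scale_speed(observation, action, factor=0.8):
    """Advice-operator (illustrative): scale the executed speed command."""
    return observation, action * factor

def apply_operator(operator, segment):
    """Apply a single advice-operator over a learner execution segment,
    given as a list of (observation, executed_action) pairs."""
    return [operator(obs, act) for obs, act in segment]

# A short executed segment sampled at high frequency:
segment = [(0.5, 1.0), (0.6, 1.2), (0.7, 1.4)]

# One teacher selection -> multiple continuous-valued corrections.
corrected = apply_operator(scale_speed, segment)
```

Each corrected pair can then be added to the training set exactly as a demonstrated datapoint would be, which is what lets a single piece of teacher feedback modify a rapidly sampled policy.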

Corrective feedback provided through advice-operators and the F3MRP framework has been used in multiple capacities. Initial work used corrections to refine policies learned from demonstration, with empirical validation on a Segway RMP robot performing a spatial positioning task [9]. Corrective feedback was then used to scaffold simpler policies learned from demonstration into a policy able to execute a novel, undemonstrated task within a simulated racetrack driving domain [4,6]. An algorithm was also developed that learns a weighting to reflect the respective performance abilities of different data sources, such as demonstrations and feedback-modified student executions [7].
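The data-source weighting idea admits a minimal sketch. The update rule below is an illustrative stand-in, not the algorithm of [7]: the source names, the reward-proportional target, and the learning rate `lr` are all assumptions. It shows only the general principle of shifting weight toward sources whose contributed data yields better execution performance.

```python
# Hypothetical performance-based weighting of training-data sources
# (e.g. teacher demonstrations vs. feedback-modified student executions).

def update_weights(weights, rewards, lr=0.1):
    """Shift each source's weight toward its share of observed reward,
    then renormalize so the weights sum to one."""
    total_reward = sum(rewards.values())
    updated = {}
    for source, w in weights.items():
        # Target weight: this source's fraction of total reward earned.
        target = rewards[source] / total_reward if total_reward > 0 else w
        updated[source] = w + lr * (target - w)
    norm = sum(updated.values())
    return {source: w / norm for source, w in updated.items()}

weights = {"demonstration": 0.5, "feedback": 0.5}
# Suppose feedback-derived data earned higher reward during execution:
weights = update_weights(weights, {"demonstration": 2.0, "feedback": 6.0})
```

Under this toy rule, the feedback source's weight rises above the demonstration source's after a single update, reflecting its better observed performance.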
Ph.D. Dissertation

[1]   B. D. Argall. Learning Mobile Robot Motion Control from Demonstration and Corrective Feedback. Ph.D. Thesis, Robotics Institute, Carnegie Mellon University, March 2009. Technical report CMU-RI-TR-09-13.   [pdf]

Book Chapters

[2]   B. D. Argall, B. Browning and M. M. Veloso. Mobile Robot Motion Control from Demonstration and Corrective Feedback.  In From Motor to Interaction Learning in Robots. J. Peters and O. Sigaud, editors. Springer, New York, NY, 2009.   [pdf]

Journal Publications

[3]   B. D. Argall, M. Veloso, and B. Browning. Feedback for the Refinement of Learned Motion Control on a Mobile Robot. International Journal of Social Robotics, 4(4), 383-395, 2012.   [pdf]

[4]   B. D. Argall, M. Veloso, and B. Browning. Teacher Feedback to Scaffold and Refine Demonstrated Motion Primitives on a Mobile Robot. Robotics and Autonomous Systems, 59(3-4), 243-255, 2011.

[5]   B. D. Argall, S. Chernova, M. Veloso, and B. Browning. A Survey of Robot Learning from Demonstration. Robotics and Autonomous Systems, 57(5), 469-483, 2009.   [pdf]

Refereed Conference Publications

[6]   B. Argall, B. Browning, and M. Veloso. Learning Mobile Robot Motion Control from Demonstrated Primitives and Human Feedback. In Proceedings of the 14th International Symposium on Robotics Research (ISRR '09), Luzern, Switzerland, August 2009.   [pdf]

[7]   B. Argall, B. Browning, and M. Veloso. Automatic Weight Learning for Multiple Data Sources when Learning from Demonstration. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '09), Kobe, Japan, May 2009.   [pdf]

[8]   B. Argall, B. Browning, and M. Veloso. Learning Robot Motion Control from Demonstration and Human Advice. In Proceedings of the AAAI Spring Symposium on Agents that Learn from Human Teachers, Stanford, California, March 2009.   [pdf]

[9]   B. Argall, B. Browning, and M. Veloso. Learning Robot Motion Control with Demonstration and Advice-Operators. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '08), Nice, France, September 2008.   [pdf]

[10]   B. Argall, B. Browning, and M. Veloso. Learning by Demonstration with Critique from a Human Teacher. In Proceedings of the Second Annual Conference on Human-Robot Interaction (HRI '07), Washington D.C., March 2007.   [pdf]