Robot Perception enables Complex Navigation Behavior via Self-Supervised Learning

Marvin Chancán    Michael Milford
Queensland University of Technology

Robotics: Science and Systems XVI Workshop on Self-Supervised Robot Learning



Abstract

Learning visuomotor control policies in robotic systems is a fundamental problem when aiming for long-term behavioral autonomy. Recent supervised-learning-based vision and motion perception systems, however, are often built separately with limited capabilities, restricted to a few behavioral skills such as passive visual odometry (VO) or mobile robot visual localization. Here we propose an approach that unifies these successful robot perception systems for active target-driven navigation tasks via reinforcement learning (RL). Our method temporally incorporates compact motion and visual perception data - directly obtained using self-supervision from a single image sequence - to enable complex goal-oriented navigation skills. We demonstrate our approach on two real-world driving datasets, KITTI and Oxford RobotCar, using the new interactive CityLearn framework. The results show that our method can accurately generalize to extreme environmental changes, such as day-to-night cycles, with up to an 80% success rate, compared with 30% for a vision-only navigation system.
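To make the abstract's description concrete, below is a minimal PyTorch-style sketch of the kind of architecture it outlines: compact visual and motion embeddings (obtained via self-supervision from an image sequence) are concatenated with a goal representation, integrated over time by a recurrent core, and mapped to navigation actions through actor-critic heads suitable for RL training. All names, dimensions, and design details here (GoalNavPolicy, visual_dim, the LSTM core, discrete action heads) are illustrative assumptions, not the authors' released implementation.

import torch
import torch.nn as nn

class GoalNavPolicy(nn.Module):
	"""Hypothetical sketch: fuse compact visual and motion embeddings
	with a recurrent core to produce goal-driven navigation actions."""

	def __init__(self, visual_dim=512, motion_dim=6, goal_dim=512,
	             hidden_dim=256, num_actions=4):
		super().__init__()
		# Project the concatenated perception features (visual + motion + goal)
		# into the recurrent core's input space.
		self.encoder = nn.Linear(visual_dim + motion_dim + goal_dim, hidden_dim)
		# The LSTM temporally integrates per-frame perception data.
		self.core = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)
		# Actor-critic heads, as commonly used with on-policy RL (e.g. A2C/PPO).
		self.policy_head = nn.Linear(hidden_dim, num_actions)
		self.value_head = nn.Linear(hidden_dim, 1)

	def forward(self, visual_feat, motion_feat, goal_feat, state=None):
		x = torch.cat([visual_feat, motion_feat, goal_feat], dim=-1)
		x = torch.relu(self.encoder(x))
		x, state = self.core(x, state)
		return self.policy_head(x), self.value_head(x), state

# Usage: one rollout step (batch size 1, sequence length 1).
policy = GoalNavPolicy()
v = torch.randn(1, 1, 512)   # compact visual embedding (e.g. from a self-supervised net)
m = torch.randn(1, 1, 6)     # ego-motion estimate (e.g. a 6-DoF pose delta from VO)
g = torch.randn(1, 1, 512)   # target/goal embedding
logits, value, state = policy(v, m, g)
action = torch.distributions.Categorical(logits=logits.squeeze(1)).sample()

The recurrent core is the key piece relative to a feedforward, vision-only policy: it lets the agent accumulate motion evidence across frames, which is what the abstract credits for robustness under extreme appearance change.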

Preprint: [PDF]       arXiv: [ABS]       CityLearn environment: [GitHub]

Video



Bibtex

@article{chancan2020rss20ssrl,
	author = {M. {Chanc\'an} and M. {Milford}},
	title = {Robot Perception enables Complex Navigation Behavior via Self-Supervised Learning},
	journal = {arXiv preprint arXiv:2006.08967},
	year = {2020}
}

Copyright 2022 © Marvin Chancán