An Educational Software to Develop Robot Mapping and Localization Practices Using Visual Information
Advances in Control Education, Volume 10, Part 1
Paya, Luis; Amoros, Francisco; Fernandez, Lorenzo; Reinoso, Oscar
Teaching aids for control engineering; Virtual and remote labs
In this work, we present a software tool we have developed for a computer vision and mobile robotics subject whose main objective is to design algorithms to control an autonomous robot. In applications that require the robot to move through an unknown environment, it is very important to build a model or map of this environment and to estimate the position of the robot within this map with sufficient accuracy. Map building and localization are two topics in constant innovation, as new methods are continuously appearing, and some of these methods may be mathematically complex. Taking this fact into account, we have designed a platform that provides students with all the tools necessary to understand the algorithms, and that allows them to configure these algorithms to optimize the mapping and localization processes. We have added several databases, composed of sets of indoor images captured in real environments under realistic lighting conditions, so students will face the problems that would arise in a real application. In this paper, we present some implementation details of the platform and describe how students can use it.
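The appearance-based mapping and localization described above can be illustrated with a minimal sketch: a global descriptor is computed for each image captured at a known pose (the map), and a new view is localized by finding the most similar stored descriptor. The descriptor below (a simple intensity histogram) and all function names are hypothetical simplifications for illustration, not the descriptors or API used in the actual platform.

```python
import numpy as np

def image_descriptor(image, bins=16):
    """Hypothetical global appearance descriptor: a normalized
    grayscale intensity histogram of the whole image."""
    hist, _ = np.histogram(image, bins=bins, range=(0, 256), density=True)
    return hist

def build_map(images, poses):
    """Build the visual map as a list of (descriptor, pose) pairs
    from images captured at known poses."""
    return [(image_descriptor(img), pose) for img, pose in zip(images, poses)]

def localize(image, visual_map):
    """Estimate the robot pose as the pose of the map entry whose
    descriptor is closest (Euclidean distance) to the query image's."""
    d = image_descriptor(image)
    dists = [np.linalg.norm(d - map_desc) for map_desc, _ in visual_map]
    return visual_map[int(np.argmin(dists))][1]

# Usage sketch with synthetic "dark" and "bright" scenes at two poses.
rng = np.random.default_rng(0)
dark = rng.integers(0, 80, (32, 32))
bright = rng.integers(170, 255, (32, 32))
visual_map = build_map([dark, bright], [(0.0, 0.0), (5.0, 0.0)])
query = rng.integers(170, 255, (32, 32))  # a new bright view
estimated_pose = localize(query, visual_map)
```

In a real system, the histogram would be replaced by a richer global descriptor (such as those built from omnidirectional images), but the map-then-match structure is the same.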