Reinforcement learning for vision-based mobile robots using the Hough Transform
Saved in:
Main Authors:
Format: CONF
Subjects:
Online Access: http://hdl.handle.net/20.500.12110/paper_NIS12835_v216_n_p161_Pedrc
Contributed by:
Summary: Vision-based perception gives autonomous robots the ability to perform a varied set of tasks, due to the great amount and quality of information it procures. Although Reinforcement Learning (RL) is a learning model that has made a great impact in the autonomous robots field, its application to vision-based perception has been limited. One of the main reasons for this is the size of the state space: raw images are usually simply too big to be used as states for the direct application of RL techniques. In this work, we present a method that uses the linear Hough Transform to detect straight lines in captured images. Using a state representation based on a small number of straight lines inferred from the images, we can reduce the size of the state space, making it possible to use standard RL algorithms, such as Q-Learning. As part of the method, we also present a model-free exploration technique based on the ε-greedy action selection strategy. We carry out a series of experiments to verify the method on the task of navigating through a corridor with a vision-based mobile robot, both on a robot simulator and on a real vision-based minirobot called FenBot.