Dynamic update of the reinforcement function during learning

Bibliographic Details
Main author: Santos, Juan Miguel
Published: 1999
Online access: https://bibliotecadigital.exactas.uba.ar/collection/paper/document/paper_09540091_v11_n3-4_p267_Santos
http://hdl.handle.net/20.500.12110/paper_09540091_v11_n3-4_p267_Santos
Description
Summary: During the last decade, numerous contributions have been made to the use of reinforcement learning in the field of robot learning. They have focused mainly on generalization, memorization and exploration, issues that are mandatory for dealing with real robots. However, in our opinion the most difficult task today is the definition of the reinforcement function (RF). A first attempt in this direction introduced a method, the update parameters algorithm (UPA), for tuning a RF so that it is optimal during the exploration phase; the only requirement is that the RF conform to a particular expression. In this article, we propose Dynamic-UPA, an algorithm able to tune the RF parameters during the whole learning phase (exploration and exploitation). It addresses the so-called exploration-versus-exploitation dilemma by computing the RF parameter values so as to control the ratio between positive and negative reinforcement during learning. Experiments with the mobile robot Khepera on the synthesis of obstacle-avoidance and wall-following behaviors validate our proposals.
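
The summary only sketches the mechanism, but its central idea, keeping the ratio of positive to negative reinforcements near a chosen value by adjusting an RF parameter online, can be illustrated in a few lines. The sketch below is an assumption-based illustration: the class name, the single threshold parameter, the fixed-step update and the target ratio are hypothetical and are not the paper's actual Dynamic-UPA formulation.

    import random

    class DynamicRFTuner:
        """Illustrative sketch (hypothetical, not the paper's Dynamic-UPA):
        adjust a reward-function threshold so that the observed ratio of
        positive to negative reinforcements tracks a target value."""

        def __init__(self, target_ratio=0.5, threshold=0.5, step=0.01):
            self.target_ratio = target_ratio   # desired share of positive rewards
            self.threshold = threshold         # RF parameter being tuned online
            self.step = step                   # fixed adjustment step (assumption)
            self.pos = 0
            self.neg = 0

        def reward(self, progress_measure):
            """Binary reinforcement: +1 if the measured progress exceeds the
            current threshold, -1 otherwise; then retune the threshold."""
            r = 1.0 if progress_measure > self.threshold else -1.0
            if r > 0:
                self.pos += 1
            else:
                self.neg += 1
            self._update_threshold()
            return r

        def _update_threshold(self):
            total = self.pos + self.neg
            if total == 0:
                return
            ratio = self.pos / total
            # Too many positive rewards -> make the criterion stricter;
            # too many negative rewards -> relax it.
            if ratio > self.target_ratio:
                self.threshold += self.step
            else:
                self.threshold -= self.step

    # Toy usage: feed random "progress" measurements (standing in for a robot's
    # sensor-derived progress signal) and watch the threshold drift toward the
    # level that yields roughly the target fraction of positive rewards.
    tuner = DynamicRFTuner(target_ratio=0.4)
    for _ in range(1000):
        tuner.reward(random.random())
    print(f"threshold={tuner.threshold:.2f}, "
          f"positive fraction={tuner.pos / (tuner.pos + tuner.neg):.2f}")

In this toy setup the same feedback loop runs during both exploration and exploitation, which mirrors the stated goal of tuning the RF over the whole learning phase rather than only at the start; the actual parameterization and update rule in the paper may differ.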