Multi-modal Feedback for Affordance-driven Interactive Reinforcement Learning

Interactive reinforcement learning (IRL) extends traditional reinforcement learning (RL) by allowing an agent to interact with parent-like trainers during a task. In this paper, we present an IRL approach that uses dynamic audio-visual input, in the form of vocal commands and hand gestures, as feedback. Our architecture integrates multi-modal information to provide robust commands from multiple sensory cues, along with a confidence value indicating the trustworthiness of the feedback. The integration process also handles the case in which the two modalities convey incongruent information. Additionally, we modulate the influence of sensory-driven feedback in the IRL task using goal-oriented knowledge in terms of contextual affordances. We implement a neural network architecture to predict the effect of actions performed with different objects, in order to avoid failed states, i.e., states from which it is not possible to accomplish the task. In our experimental setup, we explore the interplay of multi-modal feedback and task-specific affordances in a robot cleaning scenario. We compare the learning performance of the agent under four conditions: traditional RL, multi-modal IRL, and each of these two setups combined with contextual affordances. Our experiments show that the best performance is obtained with audio-visual feedback and affordance-modulated IRL. These results demonstrate the importance of multi-modal sensory processing integrated with goal-oriented knowledge in IRL tasks.
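
A minimal Python sketch of the core idea described in the abstract, under stated assumptions (the state names, action set, parameters, and the helpers affordance_ok and trainer_advice below are illustrative stand-ins, not the authors' implementation): interactive feedback arrives with a confidence value that gates how often the agent follows it, and a contextual-affordance check prunes actions predicted to lead to failed states.

```python
import random

# Illustrative sketch of affordance-modulated interactive Q-learning.
# All states, actions, and parameters are hypothetical assumptions.
ACTIONS = ["grasp", "move", "wipe", "drop"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
Q = {}  # (state, action) -> estimated return

def affordance_ok(state, action):
    """Stand-in for the network predicting action effects: returns False
    when the action would lead to a failed state."""
    return not (state == "holding_sponge" and action == "drop")

def trainer_advice(state):
    """Stand-in for integrated audio-visual feedback: an advised action
    plus a confidence value, or None when no (congruent) feedback is given."""
    if state == "at_table" and random.random() < 0.3:
        return "wipe", 0.8
    return None

def select_action(state):
    advice = trainer_advice(state)
    if advice is not None:
        action, confidence = advice
        # Follow advice with probability equal to its confidence,
        # but only if the affordance model permits the action.
        if affordance_ok(state, action) and random.random() < confidence:
            return action
    # Otherwise act epsilon-greedily over affordance-permitted actions.
    allowed = [a for a in ACTIONS if affordance_ok(state, a)]
    if random.random() < EPSILON:
        return random.choice(allowed)
    return max(allowed, key=lambda a: Q.get((state, a), 0.0))

def update(state, action, reward, next_state):
    """Standard Q-learning update; in this sketch, feedback and
    affordances only shape action selection, not the update rule."""
    best_next = max(Q.get((next_state, a), 0.0) for a in ACTIONS)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)
```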

Bibliographic Details
Main Authors: Cruz, Francisco; Parisi, Germán; Wermter, Stefan
Format: Conference object (Abstract)
Language: English
Published: 2018
Subjects: Ciencias Informáticas; interactive reinforcement learning; affordances; audio-visual feedback; parent-like trainer
Online Access: http://sedici.unlp.edu.ar/handle/10915/70693
http://47jaiio.sadio.org.ar/sites/default/files/ASAI-06.pdf
Contributed by: SEDICI (UNLP), Universidad Nacional de La Plata