A Hierarchical Two-tier Approach to Hyper-parameter Optimization in Reinforcement Learning

Optimization of hyper-parameters in reinforcement learning (RL) algorithms is a key task, because they determine how the agent will learn its policy by interacting with its environment, and thus what data is gathered. In this work, an approach that uses Bayesian optimization to perform a two-step optimization is proposed: first, categorical RL structure hyper-parameters are taken as binary variables and optimized with an acquisition function tailored for such variables. Then, at a lower level of abstraction, solution-level hyper-parameters are optimized by resorting to the expected improvement acquisition function, while using the best categorical hyper-parameters found in the optimization at the upper level of abstraction. This two-tier approach is validated in a simulated control task. Results obtained are promising and open the way for more user-independent applications of reinforcement learning.

Bibliographic Details
Main Authors: Barsce, Juan Cruz; Palombarini, Jorge; Martínez, Ernesto
Format: Conference object
Language: English
Published: 2019
Subjects: Ciencias Informáticas; Reinforcement learning; Hyper-parameter optimization; Bayesian optimization; Bayesian optimization of combinatorial structures (BOCS)
Online Access: http://sedici.unlp.edu.ar/handle/10915/87851
Contributed by: SEDICI (UNLP), Universidad Nacional de La Plata
Record ID: I19-R120-10915-87851
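
The abstract above outlines a two-tier optimization scheme: an upper tier that treats categorical RL structure hyper-parameters as binary variables and optimizes them with an acquisition function tailored to such variables (BOCS), and a lower tier that tunes continuous, solution-level hyper-parameters via the expected improvement acquisition function. The Python sketch below only illustrates that idea under simplifying assumptions and is not the authors' implementation: run_rl_agent is a hypothetical stand-in for training an agent, the upper tier is reduced to enumerating two binary choices instead of using a BOCS-style acquisition, and the lower tier uses a scikit-learn Gaussian process surrogate with expected improvement.

```python
# Minimal sketch of the two-tier idea; all names are illustrative, not the paper's code.
import itertools

import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor


def run_rl_agent(structure, continuous):
    """Hypothetical stand-in for training an RL agent with the given
    hyper-parameters and returning its average return."""
    eligibility, double_learning = structure   # binary structure choices
    alpha, gamma = continuous                  # learning rate, discount factor
    # Toy objective so the sketch runs end to end; replace with real training.
    return (-(alpha - 0.3) ** 2 - (gamma - 0.95) ** 2
            + 0.1 * eligibility + 0.05 * double_learning)


def expected_improvement(gp, candidates, best_y):
    """Standard expected-improvement acquisition for maximization."""
    mu, sigma = gp.predict(candidates, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    z = (mu - best_y) / sigma
    return (mu - best_y) * norm.cdf(z) + sigma * norm.pdf(z)


def lower_tier(structure, n_init=5, n_iter=15, seed=0):
    """Lower tier: tune continuous hyper-parameters with a GP surrogate and
    expected improvement, for a fixed structural configuration."""
    rng = np.random.default_rng(seed)
    low, high = np.array([0.0, 0.80]), np.array([1.0, 1.0])  # bounds for (alpha, gamma)
    X = rng.uniform(low, high, size=(n_init, 2))
    y = np.array([run_rl_agent(structure, x) for x in X])
    for _ in range(n_iter):
        gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
        cand = rng.uniform(low, high, size=(256, 2))
        x_next = cand[np.argmax(expected_improvement(gp, cand, y.max()))]
        X = np.vstack([X, x_next])
        y = np.append(y, run_rl_agent(structure, x_next))
    best = int(np.argmax(y))
    return X[best], y[best]


# Upper tier: the paper uses an acquisition function tailored to binary
# variables (BOCS); with only two binary choices in this toy example,
# enumerating the four structural configurations stands in for that step.
results = {}
for structure in itertools.product([0, 1], repeat=2):
    results[structure] = lower_tier(structure)

best_structure = max(results, key=lambda s: results[s][1])
print("best structure:", best_structure,
      "best continuous hyper-parameters:", results[best_structure][0])
```

In the paper's setting the upper tier would itself be driven by the binary-variable acquisition function rather than exhaustive enumeration, which is only feasible here because the toy structure space has four points.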