Dynamic programming for variable discounted Markov decision problems
Saved in:
| Main authors: | , , |
|---|---|
| Format: | Conference paper |
| Language: | English |
| Published: | 2014 |
| Subjects: | |
| Online access: | http://sedici.unlp.edu.ar/handle/10915/41704 http://43jaiio.sadio.org.ar/proceedings/SIO/17.pdf |
| Contributed by: | |
| Summary: | We study the existence of optimal strategies and the value function of non-stationary Markov decision processes under variable discounted criteria, when the state space is assumed to be Borel and the action space to be compact. With this new way of defining the value of a policy, we show the existence of Markov deterministic optimal policies in the finite-horizon case, together with a recursive method to obtain them. For the infinite-horizon problem we characterize the value function and show the existence of optimal stationary deterministic policies. The approach is based on the use of suitable dynamic programming operators. |
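
To make the summary's "dynamic programming operators" concrete, here is a minimal LaTeX sketch of a variable-discount operator together with the finite- and infinite-horizon recursions it drives. The symbols $X$, $A(x)$, $r_n$, $Q_n$ and the state-action-dependent discount $\alpha_n(x,a)$ are assumptions of this sketch (a standard variable-discount formulation), not definitions taken from the paper's full text, which is only linked above.

```latex
% Sketch of a variable-discount dynamic programming operator (assumed form).
% X: Borel state space, A(x): compact action sets, r_n: stage rewards,
% Q_n: transition kernels, alpha_n(x,a) in (0,1): variable discount factor.
%
% Stage-n operator on bounded measurable functions v : X -> R:
\[
  (T_n v)(x) \;=\; \sup_{a \in A(x)}
    \Bigl\{ r_n(x,a) + \alpha_n(x,a) \int_X v(y)\, Q_n(dy \mid x,a) \Bigr\}.
\]
% Finite horizon: starting from a terminal value v_N, backward induction
\[
  v_n = T_n\, v_{n+1}, \qquad n = N-1, \dots, 0,
\]
% and selecting a measurable maximizer at each stage yields a Markov
% deterministic policy, as in the summary's finite-horizon claim.
%
% Infinite horizon (stationary data r, Q, alpha): the value function is
% characterized as the fixed point of the corresponding operator T,
\[
  v^* = T\, v^*, \qquad
  (T v)(x) = \sup_{a \in A(x)}
    \Bigl\{ r(x,a) + \alpha(x,a) \int_X v(y)\, Q(dy \mid x,a) \Bigr\},
\]
% and a measurable maximizer in T v^* defines a stationary deterministic policy.
```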