Keywords:
discrete-time Markov decision processes; average reward criterion; optimal stationary policy; Lyapunov-type condition; unbounded reward/cost function
Summary:
In this paper we give a new set of verifiable conditions for the existence of average optimal stationary policies in discrete-time Markov decision processes with Borel spaces and unbounded reward/cost functions. More precisely, we provide another set of conditions, consisting only of a Lyapunov-type condition and the common continuity-compactness conditions. These conditions are imposed on the primitive data of the Markov decision process model and are therefore easy to verify. We also give two examples in which all of our conditions are satisfied, while some of the conditions in the related literature fail to hold.
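For orientation, the two objects named above can be written in their standard forms (cf. Hernández-Lerma and Lasserre [7, 8] and Gordienko and Hernández-Lerma [4]); the weight function $w$, the constants $\rho \in (0,1)$, $b \ge 0$, $M \ge 0$, and the set $C$ below sketch a typical Lyapunov-type (drift) condition and are not necessarily the exact hypotheses of this paper. The average reward of a policy $\pi$ from initial state $x \in S$ is
$$ J(x,\pi) \;:=\; \liminf_{n\to\infty} \frac{1}{n}\, \mathbb{E}_x^{\pi}\!\left[\sum_{t=0}^{n-1} r(x_t,a_t)\right], $$
and a Lyapunov-type condition on the transition kernel $Q$ typically requires a measurable function $w \ge 1$ with
$$ \int_S w(y)\, Q(dy \mid x,a) \;\le\; \rho\, w(x) + b\,\mathbf{1}_C(x) \quad \text{for all admissible pairs } (x,a), $$
together with a growth bound $|r(x,a)| \le M\, w(x)$ that controls the unbounded reward/cost function.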
References:
[1] Arapostathis, A., et al.: Discrete time controlled Markov processes with average cost criterion: a survey. SIAM J. Control Optim. 31 (1993), 282-344. DOI 10.1137/0331018 | MR 1205981
[2] Casella, G., Berger, R. L.: Statistical Inference. Second edition. Duxbury Thomson Learning 2002.
[3] Dynkin, E. B., Yushkevich, A. A.: Controlled Markov Processes. Springer, New York 1979. MR 0554083
[4] Gordienko, E., Hernández-Lerma, O.: Average cost Markov control processes with weighted norms: existence of canonical policies. Appl. Math. (Warsaw) 23 (1995), 2, 199-218. MR 1341223 | Zbl 0829.93067
[5] Guo, X. P., Shi, P.: Limiting average criteria for nonstationary Markov decision processes. SIAM J. Optim. 11 (2001), 4, 1037-1053. DOI 10.1137/s1052623499355235 | MR 1855220 | Zbl 1010.90092
[6] Guo, X. P., Zhu, Q. X.: Average optimality for Markov decision processes in Borel spaces: A new condition and approach. J. Appl. Probab. 43 (2006), 318-334. DOI 10.1239/jap/1152413725 | MR 2248567 | Zbl 1121.90122
[7] Hernández-Lerma, O., Lasserre, J. B.: Discrete-Time Markov Control Processes. Springer, New York 1996. DOI 10.1007/978-1-4612-0729-0 | MR 1363487 | Zbl 0928.93002
[8] Hernández-Lerma, O., Lasserre, J. B.: Further Topics on Discrete-Time Markov Control Processes. Springer, New York 1999. DOI 10.1007/978-1-4612-0561-6 | MR 1697198 | Zbl 0928.93002
[9] Kakumanu, M.: Nondiscounted continuous time Markov decision process with countable state space. SIAM J. Control Optim. 10 (1972), 1, 210-220. DOI 10.1137/0310016 | MR 0307785
[10] Lund, R. B., Tweedie, R. L.: Geometric convergence rates for stochastically ordered Markov chains. Math. Oper. Res. 21 (1996), 1, 182-194. DOI 10.1287/moor.21.1.182 | MR 1385873 | Zbl 0847.60053
[11] Meyn, S. P., Tweedie, R. L.: Markov Chains and Stochastic Stability. Cambridge Univ. Press, New York 2009. DOI 10.1017/cbo9780511626630 | MR 2509253 | Zbl 1165.60001
[12] Puterman, M. L.: Markov Decision Processes: Discrete Stochastic Dynamic Programming. John Wiley, New York 1994. DOI 10.1002/9780470316887 | MR 1270015 | Zbl 1184.90170
[13] Sennott, L. I.: Average reward optimization theory for denumerable state spaces. In: Handbook of Markov Decision Processes (Int. Ser. Oper. Res. Manag. Sci. 40) (E. A. Feinberg and A. Shwartz, eds.), Kluwer, Boston 2002, pp. 153-172. DOI 10.1007/978-1-4615-0805-2_5 | MR 1887202 | Zbl 1008.90068
[14] Sennott, L. I.: Stochastic Dynamic Programming and the Control of Queueing Systems. Wiley, New York 1999. DOI 10.1002/9780470317037 | MR 1645435 | Zbl 0997.93503
[15] Zhu, Q. X.: Average optimality for continuous-time jump Markov decision processes with a policy iteration approach. J. Math. Anal. Appl. 339 (2008), 1, 691-704. DOI 10.1016/j.jmaa.2007.06.071 | MR 2370686