
Article

Keywords:
finite state Markov decision processes; discounted and average costs; elimination of suboptimal policies
Summary:
In this note we focus attention on identifying optimal policies and on eliminating suboptimal policies for the minimization of optimality criteria in discrete-time Markov decision processes with finite state space and compact action set. We present a unified approach to value iteration algorithms that makes it possible to generate lower and upper bounds both on the optimal values and on the values of the current policy. Using these modified value iterations one can eliminate suboptimal actions and identify an optimal policy, or nearly optimal policies, in a finite number of steps without knowing the precise values of the performance function.
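Example:
To illustrate the kind of procedure the note studies, the following is a minimal sketch, not the authors' algorithm, of the classical MacQueen-type bounds and elimination test for the discounted cost criterion (cf. [5], [7], [8]): the extremal differences of successive value iterates yield lower and upper bounds on the optimal value vector, and an action whose lower Q-bound already exceeds the upper bound on the optimal value at its state can never be optimal and may be discarded. The NumPy formulation, the function name and all parameter names are illustrative assumptions.

    import numpy as np

    def discounted_vi_with_elimination(P, c, beta, eps=1e-8, max_iter=10_000):
        # Illustrative sketch: value iteration with MacQueen-type bounds and
        # permanent elimination of suboptimal actions in a discounted-cost MDP.
        #   P    -- array (A, S, S): P[a, s, t] = prob. of moving s -> t under a
        #   c    -- array (S, A):    one-stage cost of action a in state s
        #   beta -- discount factor, 0 < beta < 1
        A, S, _ = P.shape
        v = np.zeros(S)
        active = np.ones((S, A), dtype=bool)      # actions not yet eliminated
        coef = beta / (1.0 - beta)
        for _ in range(max_iter):
            # one-stage backup: Q(s,a) = c(s,a) + beta * sum_t P(t|s,a) v(t)
            q = c + beta * np.einsum('ast,t->sa', P, v)
            v_new = np.where(active, q, np.inf).min(axis=1)
            d = v_new - v
            d_min, d_max = d.min(), d.max()
            # extrapolated bounds:  v_new + coef*d_min <= v* <= v_new + coef*d_max
            lower = v_new + coef * d_min
            upper = v_new + coef * d_max
            # test: action a is suboptimal in state s if the lower bound on its
            # Q-value exceeds the upper bound on v*(s); drop such actions for good
            active &= ~(q + coef * d_min > v_new[:, None] + coef * d_max)
            v = v_new
            if coef * (d_max - d_min) < eps:      # bounds have (nearly) met
                break
        policy = np.where(active, q, np.inf).argmin(axis=1)
        return lower, upper, policy, active

Because the test compares valid bounds, an eliminated action can never re-enter an optimal policy, so the elimination is permanent and each subsequent minimization runs over a smaller action set.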
References:
[1] Cruz-Suárez, D., Montes-de-Oca, R.: Uniform convergence of the value iteration policies for discounted Markov decision processes. Bol. de la Soc. Mat. Mexicana 12 (2006), 133–148. MR 2301750
[2] Cruz-Suárez, D., Montes-de-Oca, R., Salem-Silva, F.: Uniform approximations of discounted Markov decision processes to optimal policies. Proceedings of Prague Stochastics 2006 (M. Hušková and M. Janžura, eds.), Matfyzpress, Prague 2006, pp. 278–287.
[3] Grinold, R. C.: Elimination of suboptimal actions in Markov decision problems. Oper. Res. 21 (1973), 848–851. DOI 10.1287/opre.21.3.848 | MR 0359828 | Zbl 0259.90053
[4] Hastings, N. A. J.: Bounds on the gain of a Markov decision process. Oper. Res. 19 (1971), 240–243. DOI 10.1287/opre.19.1.240
[5] Hastings, N. A. J., Mello, J.: Tests for suboptimal actions in discounted Markov programming. Manag. Sci. 19 (1973), 1019–1022. DOI 10.1287/mnsc.19.9.1019 | MR 0334959 | Zbl 0304.90116
[6] Hastings, N. A. J., Mello, J.: Tests for suboptimal actions in undiscounted Markov decision chains. Manag. Sci. 23 (1976), 87–91. DOI 10.1287/mnsc.23.1.87 | MR 0439034
[7] MacQueen, J.: A modified dynamic programming method for Markov decision problems. J. Math. Anal. Appl. 14 (1966), 38–43. DOI 10.1016/0022-247X(66)90060-6 | MR 0191707
[8] MacQueen, J.: A test of suboptimal actions in Markovian decision problems. Oper. Res. 15 (1967), 559–561. DOI 10.1287/opre.15.3.559
[9] Odoni, A. R.: On finding the maximal gain for Markov decision processes. Oper. Res. 17 (1969), 857–860. DOI 10.1287/opre.17.5.857 | MR 0284192 | Zbl 0184.23202
[10] Puterman, M. L., Shin, M. C.: Modified policy iteration algorithms for discounted Markov decision problems. Manag. Sci. 24 (1978), 1127–1137. DOI 10.1287/mnsc.24.11.1127 | MR 0521502 | Zbl 0391.90093
[11] Puterman, M. L., Shin, M. C.: Action elimination procedures for modified policy iteration algorithm. Oper. Res. 30 (1982), 301–318. DOI 10.1287/opre.30.2.301 | MR 0653253
[12] Puterman, M. L.: Markov Decision Processes – Discrete Stochastic Dynamic Programming. Wiley, New York 1994. MR 1270015 | Zbl 1184.90170
[13] Sladký, K.: O metodě postupných aproximací pro nalezení optimálního řízení markovského řetězce (On the method of successive approximations for finding the optimal control of a Markov chain). Kybernetika 5 (1969), 2, 167–176.
[14] White, D. J.: Dynamic programming, Markov chains and the method of successive approximation. J. Math. Anal. Appl. 6 (1963), 296–306. DOI 10.1016/0022-247X(63)90017-9 | MR 0148480