
Article

Title: Handling a Kullback-Leibler divergence random walk for scheduling effective patrol strategies in Stackelberg security games (English)
Author: Solis, César U. S.
Author: Clempner, Julio B.
Author: Poznyak, Alexander S.
Language: English
Journal: Kybernetika
ISSN: 0023-5954 (print)
ISSN: 1805-949X (online)
Volume: 55
Issue: 4
Year: 2019
Pages: 618-640
Summary lang: English
.
Category: math
.
Summary: This paper presents a new model for computing optimal randomized security policies in non-cooperative Stackelberg security games (SSGs) with multiple players. Our framework rests upon the extraproximal method and its extension to Markov chains, within which we explicitly compute the unique Stackelberg/Nash equilibrium of the game by employing the Lagrange method and introducing Tikhonov regularization. We also consider a game-theoretic realization of the problem in which defenders and attackers perform a discrete-time random walk over a finite state space. Once the players' actions are fixed, the next-state distribution is computed following the Kullback-Leibler divergence; the goal of each player at every time step is to specify the probability distribution for the next state. We present an explicit construction of a computationally efficient strategy under mild conditions on the defenders and attackers, and demonstrate the performance of the proposed method on a simulated target-tracking problem. (English)
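The summary notes that, once the players' actions are fixed, the next-state distribution of the random walk is chosen following the Kullback-Leibler divergence. As a rough illustration only (not the article's algorithm or notation), the minimal Python sketch below shows one standard closed form for such a choice: the distribution maximizing expected one-step reward minus a KL penalty toward a reference transition row is an exponentially tilted copy of that reference. The function name, the reward vector and the temperature parameter beta are illustrative assumptions.

import numpy as np

def next_state_distribution(reference, rewards, beta=1.0):
    # Maximize  sum_j p[j]*rewards[j] - beta * KL(p || reference)  over the
    # probability simplex; the maximizer is the exponentially tilted reference
    # distribution  p[j] proportional to reference[j] * exp(rewards[j] / beta).
    logits = np.log(reference) + rewards / beta
    logits -= logits.max()                # shift for numerical stability
    p = np.exp(logits)
    return p / p.sum()

# Toy usage: four targets, a uniform reference walk, one high-value target.
reference = np.full(4, 0.25)
rewards = np.array([0.1, 0.1, 0.8, 0.1])
print(next_state_distribution(reference, rewards, beta=0.5))

A small beta keeps the resulting distribution close to the greedy (reward-maximizing) choice, while a large beta keeps it close to the reference row, which is the trade-off the KL-divergence term encodes.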
Keyword: Stackelberg games
Keyword: security
Keyword: patrolling
Keyword: Markov chains
MSC: 91A10
MSC: 91A35
MSC: 91A80
MSC: 91B06
MSC: 91B70
MSC: 91B74
idZBL: Zbl 07177907
idMR: MR4043539
DOI: 10.14736/kyb-2019-4-0618
.
Date available: 2020-01-10T14:21:17Z
Last updated: 2020-04-02
Stable URL: http://hdl.handle.net/10338.dmlcz/147960
.
Reference: [1] Agmon, N., Kaminka, G. A., Kraus, S.: Multi-robot adversarial patrolling: facing a full-knowledge opponent..J. Artif. Intell. Res. 42 (2011), 1, 887-916. MR 2874818
Reference: [2] Albarran, S., Clempner, J. B.: A Stackelberg security Markov game based on partial information for strategic decision making against unexpected attacks..Engrg. Appl. Artif. Intell. 81 (2019), 408-419. 10.1016/j.engappai.2019.03.010
Reference: [3] Antipin, A. S.: An extraproximal method for solving equilibrium programming problems and games..Comput. Math. Math. Phys. 45 (2005), 11, 1893-1914. MR 2203222
Reference: [4] Blum, A., Haghtalab, N., Procaccia, A. D.: Lazy defenders are almost optimal against diligent attackers..In: Proc. 28th AAAI Conference on Artificial Intelligence, Québec 2014, pp. 573-579.
Reference: [5] Clempner, J. B.: A continuous-time Markov Stackelberg security game approach for reasoning about real patrol strategies..Int. J. Control 91 (2018), 2494-2510. MR 3866732, 10.1080/00207179.2017.1371853
Reference: [6] Clempner, J. B., Poznyak, A. S.: Simple computing of the customer lifetime value: A fixed local-optimal policy approach..J. Systems Sci. Systems Engrg. 23 (2014), 4, 439-459. 10.1007/s11518-014-5260-y
Reference: [7] Clempner, J. B., Poznyak, A. S.: Stackelberg security games: Computing the shortest-path equilibrium..Expert Syst. Appl. 42 (2015), 8, 3967-3979. 10.1016/j.eswa.2014.12.034
Reference: [8] Clempner, J. B., Poznyak, A. S.: Analyzing an optimistic attitude for the leader firm in duopoly models: A strong Stackelberg equilibrium based on a Lyapunov game theory approach..Econ. Comput. Econ. Cybern. Stud. Res. 4 (2016), 50, 41-60.
Reference: [9] Clempner, J. B., Poznyak, A. S.: Conforming coalitions in Stackelberg security games: Setting max cooperative defenders vs. non-cooperative attackers..Appl. Soft Comput. 47 (2016), 1-11. 10.1016/j.asoc.2016.05.037
Reference: [11] Clempner, J. B., Poznyak, A. S.: Convergence analysis for pure and stationary strategies in repeated potential games: Nash, Lyapunov and correlated equilibria..Expert Systems Appl. 46 (2016), 474-484. 10.1016/j.eswa.2015.11.006
Reference: [12] Clempner, J. B., Poznyak, A. S.: Using the extraproximal method for computing the shortest-path mixed Lyapunov equilibrium in Stackelberg security games..Math. Comput. Simul. 138 (2017), 14-30. MR 3638268, 10.1016/j.matcom.2016.12.010
Reference: [13] Clempner, J. B., Poznyak, A. S.: A Tikhonov regularization parameter approach for solving Lagrange constrained optimization problems..Engrg. Optim. 50 (2018), 11, 1996-2012. MR 3851177, 10.1080/0305215x.2017.1418866
Reference: [14] Clempner, J. B., Poznyak, A. S.: A Tikhonov regularized penalty function approach for solving polylinear programming problems..J. Comput. Appl. Math. 328 (2018), 267-286. MR 3697102, 10.1016/j.cam.2017.07.032
Reference: [15] Conitzer, V., Sandholm, T.: Computing the optimal strategy to commit to..In: Seventh ACM Conference on Electronic Commerce, Ann Arbor 2006, pp. 82-90.
Reference: [16] Guerrero, D., Carsteanu, A. A., Huerta, R., Clempner, J. B.: An iterative method for solving Stackelberg security games: A Markov games approach..In: 14th International Conference on Electrical Engineering, Computing Science and Automatic Control, Mexico City 2017, pp. 1-6. 10.1109/iceee.2017.8108857
Reference: [17] Guerrero, D., Carsteanu, A. A., Huerta, R., Clempner, J. B.: Solving Stackelberg security Markov games employing the bargaining Nash approach: Convergence analysis..Computers Security 74 (2018), 240-257. 10.1016/j.cose.2018.01.005
Reference: [18] Jain, M., Kardes, E., Kiekintveld, C., Ordoñez, F., Tambe, M.: Security games with arbitrary schedules: A branch and price approach..In: Proc. National Conference on Artificial Intelligence (AAAI), Atlanta 2010.
Reference: [19] Kiekintveld, C., Jain, M., Tsai, J., Pita, J., Ordóñez, F., Tambe, M.: Computing optimal randomized resource allocations for massive security games..In: Proc. Eighth International Conference on Autonomous Agents and Multiagent Systems, volume 1, Budapest 2009, pp. 689-696. 10.1017/cbo9780511973031.008
Reference: [20] Korzhyk, D., Yin, Z., Kiekintveld, C., Conitzer, V., Tambe, M.: Stackelberg vs. Nash in security games: An extended investigation of interchangeability, equivalence, and uniqueness..J. Artif. Intell. Res. 41 (2011), 297-327. MR 2863313, 10.1613/jair.3269
Reference: [21] Letchford, J., MacDermed, L., Conitzer, V., Parr, R., Isbell, C. L.: Computing optimal strategies to commit to in stochastic games..In: Proc. Twenty-Sixth AAAI Conference on Artificial Intelligence (AAAI), Toronto 2012, pp. 1380-1386. 10.1145/2509002.2509011
Reference: [22] Letchford, J., Vorobeychik, Y.: Optimal interdiction of attack plans..In: Proc. Twelfth International Conference on Autonomous Agents and Multi-agent Systems (AAMAS)
Reference: [23] Paruchuri, P., Pearce, J. P., Marecki, J., Tambe, M., Ordoñez, F., Kraus, S.: Playing games with security: An efficient exact algorithm for Bayesian Stackelberg games..In: Proc. Seventh International Conference on Autonomous Agents and Multiagent Systems, Estoril 2008, pp. 895-902.
Reference: [24] Poznyak, A. S.: Advanced Mathematical Tools for Automatic Control Engineers. Vol. 2 Deterministic Techniques..Elsevier, Amsterdam 2008. MR 2374025, 10.1016/b978-008044674-5.50015-8
Reference: [25] Poznyak, A. S., Najim, K., Gomez-Ramirez, E.: Self-learning Control of Finite Markov Chains..Marcel Dekker, New York 2000. MR 1760540
Reference: [26] Salgado, M., Clempner, J. B.: Measuring the emotional distance using game theory via reinforcement learning: A Kullback-Leibler divergence approach..Expert Systems Appl. 97 (2018), 266-275. 10.1016/j.eswa.2017.12.036
Reference: [27] Shieh, E., An, B., Yang, R., Tambe, M., Baldwin, C., DiRenzo, J., Maule, B., Meyer, G.: PROTECT: A deployed game theoretic system to protect the ports of the United States..In: Proc. 11th International Conference on Autonomous Agents and Multiagent Systems, 2012. 10.1609/aimag.v33i4.2401
Reference: [28] Skerker, M.: Binary Bullets: The Ethics of Cyberwarfare, chapter Moral Concerns with Cyberespionage: Automated Keyword Searches and Data Mining, pp. 251-276..Oxford University Press, NY 2016.
Reference: [29] Solis, C., Clempner, J. B., Poznyak, A. S.: Modeling multi-leader-follower non-cooperative Stackelberg games..Cybernetics Systems 47 (2016), 8, 650-673. 10.1080/01969722.2016.1232121
Reference: [30] Trejo, K. K., Clempner, J. B., Poznyak, A. S.: Computing the Stackelberg/Nash equilibria using the extraproximal method: Convergence analysis and implementation details for Markov chains games..Int. J. Appl. Math. Computer Sci. 25 (2015), 2, 337-351. MR 3363520, 10.1515/amcs-2015-0026
Reference: [31] Trejo, K. K., Clempner, J. B., Poznyak, A. S.: A Stackelberg security game with random strategies based on the extraproximal theoretic approach..Engrg. Appl. Artif. Intell. 37 (2015), 145-153. 10.1016/j.engappai.2014.09.002
Reference: [32] Trejo, K. K., Clempner, J. B., Poznyak, A. S.: Adapting strategies to dynamic environments in controllable Stackelberg security games..In: IEEE 55th Conference on Decision and Control (CDC), Las Vegas 2016, pp. 5484-5489. 10.1109/cdc.2016.7799111
Reference: [33] Trejo, K. K., Clempner, J. B., Poznyak, A. S.: An optimal strong equilibrium solution for cooperative multi-leader-follower Stackelberg Markov chains games..Kybernetika 52 (2016), 2, 258-279. MR 3501161, 10.14736/kyb-2016-2-0258
Reference: [34] Trejo, K. K., Clempner, J. B., Poznyak, A. S.: Computing the Lp-strong Nash equilibrium for Markov chains games..Appl. Math. Modell. 41 (2017), 399-418. MR 3580575, 10.1016/j.apm.2016.09.001
Reference: [35] Trejo, K. K., Clempner, J. B., Poznyak, A. S.: Adapting attackers and defenders preferred strategies: A reinforcement learning approach in Stackelberg security games..J. Comput. System Sci. 95 (2018), 35-54. MR 3787575, 10.1016/j.jcss.2017.12.004
Reference: [36] Yang, R., Kiekintveld, C., Ordonez, F., Tambe, M., John, R.: Improving resource allocation strategy against human adversaries in security games..In: Proc. International Joint Conference on Artificial Intelligence (IJCAI), Barcelona 2011, pp. 458-464. MR 3024211
Reference: [37] Yin, Z., Jain, M., Tambe, M., Ordonez, F.: Risk-averse strategies for security games with execution and observational uncertainty..In: Proc. AAAI Conference on Artificial Intelligence (AAAI), San Francisco 2011, pp. 758-763.
Reference: [38] Yin, Z., Tambe, M.: A unified method for handling discrete and continuous uncertainty in Bayesian Stackelberg games..In: Proc. Eleventh International Conference on Autonomous Agents and Multiagent Systems (AAMAS), Valencia 2012, pp. 234-242.
.

Files

Kybernetika_55-2019-4_2.pdf (694.0 kB, application/pdf)