Article

Title: ET-DMGing: Event-triggered distributed momentum-gradient tracking optimization algorithm for multi-agent systems (English)
Author: Wang, Aijuan
Author: Tan, Xingmeng
Author: Nan, Hai
Language: English
Journal: Kybernetika
ISSN: 0023-5954 (print)
ISSN: 1805-949X (online)
Volume: 61
Issue: 6
Year: 2025
Pages: 762-788
Summary lang: English
Category: math
Summary: This paper proposes an event-triggered distributed momentum-gradient tracking optimization algorithm (ET-DMGing) for the collaborative optimization problem of minimizing the sum of all agents' local objective functions in multi-agent systems. First, gradient tracking is employed to track the average momentum gradient used to update the agent states, which effectively reduces the time the agents dwell in flat and oscillatory regions. By leveraging momentum accumulation, the proposed ET-DMGing exhibits enhanced directional consistency and dynamic stability during optimization and achieves a linear convergence rate. Second, a new event-triggered condition is proposed that considers the dual metrics of state error and momentum-gradient error. This allows a more comprehensive assessment of each agent's triggering needs, avoids the instability caused by single-dimensional triggering, improves the triggering threshold, and reduces the communication frequency among agents. Third, we rigorously prove, by means of the small-gain theorem, that the proposed ET-DMGing converges to the global optimum at a linear rate. Furthermore, explicit convergence conditions are derived for parameter selection, including the step size and the event-triggered weighting coefficients. Finally, numerical simulations verify the effectiveness and accuracy of the theoretical results. (English)
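The summary describes the algorithm class only in words. As a rough illustration of that class (gradient tracking combined with a momentum estimate and an event-triggered broadcast rule that weighs state error against momentum-gradient error), a minimal Python sketch follows. The parameter names, quadratic local objectives, geometric trigger threshold, and update order are illustrative assumptions, not the paper's ET-DMGing recursion or its exact triggering condition.

    # Illustrative sketch only: a generic event-triggered momentum gradient
    # tracking loop for quadratic local objectives f_i(x) = 0.5*(x - b_i)^2.
    # All parameters and the trigger rule are assumptions for illustration;
    # the paper's exact recursion and conditions are given in the article.
    import numpy as np

    n, T = 4, 200                       # number of agents, iterations
    alpha, beta = 0.1, 0.8              # step size, momentum coefficient (assumed)
    theta, rho = 0.5, 0.96              # trigger weight and threshold decay (assumed)
    W = np.full((n, n), 1.0 / n)        # doubly stochastic mixing matrix (complete graph)
    b = np.array([1.0, 2.0, 3.0, 4.0])  # local minimizers; global optimum is b.mean()

    def grad(x):                        # local gradients of f_i, stacked over agents
        return x - b

    x = np.zeros(n)                     # agent states
    m = grad(x)                         # momentum estimates of the local gradients
    y = m.copy()                        # trackers of the average momentum gradient
    x_hat, y_hat = x.copy(), y.copy()   # last broadcast values
    broadcasts = 0

    for k in range(T):
        # event-triggered condition: broadcast only when a weighted sum of the
        # state error and the momentum-gradient error exceeds a decaying threshold
        err = theta * np.abs(x - x_hat) + (1 - theta) * np.abs(y - y_hat)
        fire = err > rho ** k
        x_hat[fire], y_hat[fire] = x[fire], y[fire]
        broadcasts += int(fire.sum())

        x_new = W @ x_hat - alpha * y                # consensus step on broadcast states
        m_new = beta * m + (1 - beta) * grad(x_new)  # heavy-ball style momentum update
        y = W @ y_hat + m_new - m                    # gradient tracking correction
        x, m = x_new, m_new

    print("states:", x, "optimum:", b.mean(), "broadcasts:", broadcasts)

Under these assumed parameters the agent states settle near the global minimizer b.mean() while broadcasting in only a fraction of the T rounds; the article itself derives the precise step-size and weighting-coefficient conditions under which linear convergence is guaranteed.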
Keyword: gradient tracking
Keyword: event-triggered mechanism
Keyword: multi-agent systems
Keyword: distributed optimization
MSC: 68W15
MSC: 93D05
MSC: 93D21
DOI: 10.14736/kyb-2025-6-0762
Date available: 2026-01-07T12:56:01Z
Last updated: 2026-01-07
Stable URL: http://hdl.handle.net/10338.dmlcz/153264
Reference: [1] Carnevale, G., Farina, F., Notarnicola, I., Notarstefano, G.: GTAdam: Gradient tracking with adaptive momentum for distributed online optimization. IEEE Trans. Control Network Systems 10 (2022), 3, 1436-1448.
Reference: [2] Chen, W., Ren, W.: Event-triggered zero-gradient-sum distributed consensus optimization over directed networks. Automatica 65 (2016), 90-97. Zbl 1328.93167, MR 3447697.
Reference: [3] Chen, C., Shen, L., Liu, W., Luo, Z.-Q.: Efficient-Adam: Communication-efficient distributed Adam. IEEE Trans. Signal Process. (2023).
Reference: [4] Defazio, A., Bach, F., Lacoste-Julien, S.: SAGA: A fast incremental gradient method with support for non-strongly convex composite objectives. Adv. Neural Inform. Process. Systems 27 (2014).
Reference: [5] Gao, L., Deng, S., Li, H., Li, Ch.: An event-triggered approach for gradient tracking in consensus-based distributed optimization. IEEE Trans. Network Sci. Engrg. 9 (2021), 2, 510-523.
Reference: [6] Huang, H.-Ch., Lee, J.: A new variable step-size NLMS algorithm and its performance analysis. IEEE Trans. Signal Process. 60 (2011), 4, 2055-2060.
Reference: [7] Huang, K., Pu, S., Nedić, A.: An accelerated distributed stochastic gradient method with momentum. arXiv preprint arXiv:2402.09714 (2024).
Reference: [8] Jiang, X., Zeng, X., Sun, J., Chen, J.: Distributed stochastic gradient tracking algorithm with variance reduction for non-convex optimization. IEEE Trans. Neural Networks Learn. Systems 34 (2022), 9, 5310-5321.
Reference: [9] Lee, H., Lee, S. H., Quek, T. Q. S.: Deep learning for distributed optimization: Applications to wireless resource management. IEEE J. Select. Areas Commun. 37 (2019), 10, 2251-2266.
Reference: [10] Lee, H.-S., Kim, S.-E., Lee, J.-W., Song, W.-J.: A variable step-size diffusion LMS algorithm for distributed estimation. IEEE Trans. Signal Process. 63 (2015), 7, 1808-1820.
Reference: [11] Lederer, A., Yang, Z., Jiao, J., Hirche, S.: Cooperative control of uncertain multiagent systems via distributed Gaussian processes. IEEE Trans. Automat. Control 68 (2022), 5, 3091-3098.
Reference: [12] Li, Q., Liao, Y., Wu, K., Zhang, L., Lin, J., Chen, M., Guerrero, J. M., Abbott, D.: Parallel and distributed optimization method with constraint decomposition for energy management of microgrids. IEEE Trans. Smart Grid 12 (2021), 6, 4627-4640.
Reference: [13] Li, H., Liao, X., Chen, G., Hill, D. J., Dong, Z., Huang, T.: Event-triggered asynchronous intermittent communication strategy for synchronization in complex dynamical networks. Neural Networks 66 (2015), 1-10.
Reference: [14] Li, H., Liu, S., Soh, Y. Ch., Xie, L., Xia, D.: Achieving linear convergence for distributed optimization with Zeno-like-free event-triggered communication scheme. In: Proc. 29th Chinese Control and Decision Conference 2017, pp. 6224-6229.
Reference: [15] Li, H., Zheng, L., Wang, Z., Yan, Y., Feng, L., Guo, J.: S-DIGing: A stochastic gradient tracking algorithm for distributed optimization. IEEE Trans. Emerging Topics Comput. Intell. 6 (2020), 1, 53-65.
Reference: [16] Li, J., Su, H.: Gradient tracking: A unified approach to smooth distributed optimization. arXiv preprint arXiv:2202.09804 (2022).
Reference: [17] Liu, X., Miao, Ch., Fiumara, G., De Meo, P.: Information propagation prediction based on spatial-temporal attention and heterogeneous graph convolutional networks. IEEE Trans. Comput. Social Systems 11 (2024), 1, 945-958.
Reference: [18] Liu, Ch., Dou, X., Fan, Y., Cheng, S.: A penalty ADMM with quantized communication for distributed optimization over multi-agent systems. Kybernetika 59 (2023), 3, 392-417.
Reference: [19] Liu, S., Xie, L., Quevedo, D. E.: Event-triggered quantized communication-based distributed convex optimization. IEEE Trans. Control Network Systems 5 (2016), 1, 167-178.
Reference: [20] Lu, K., Zhu, Q.: Distributed algorithms involving fixed step size for mixed equilibrium problems with multiple set constraints. IEEE Trans. Neural Networks Learn. Systems 32 (2020), 11, 5254-5260.
Reference: [21] Morral, G., Bianchi, P., Fort, G.: Success and failure of adaptation-diffusion algorithms with decaying step size in multiagent networks. IEEE Trans. Signal Process. 65 (2017), 11, 2798-2813.
Reference: [22] Nedić, A., Olshevsky, A., Shi, W.: Achieving geometric convergence for distributed optimization over time-varying graphs. SIAM J. Optim. 27 (2017), 4, 2597-2633.
Reference: [23] Qian, N.: On the momentum term in gradient descent learning algorithms. Neural Networks 12 (1999), 1, 145-151.
Reference: [24] Qu, G., Li, N.: Harnessing smoothness to accelerate distributed optimization. IEEE Trans. Control Network Systems 5 (2017), 3, 1245-1260.
Reference: [25] Rabbat, M., Nowak, R.: Distributed optimization in sensor networks. In: Proc. 3rd International Symposium on Information Processing in Sensor Networks 2004, pp. 20-27.
Reference: [26] Shen, Z., Yin, H.: A distributed routing-aware deployment algorithm for underwater sensor networks. IEEE Sensors J. (2024).
Reference: [27] Shi, W., Ling, Q., Wu, G., Yin, W.: EXTRA: An exact first-order algorithm for decentralized consensus optimization. SIAM J. Optim. 25 (2015), 2, 944-966.
Reference: [28] Tychogiorgos, G., Gkelias, A., Leung, K. K.: A non-convex distributed optimization framework and its application to wireless ad-hoc networks. IEEE Trans. Wireless Commun. 12 (2013), 9, 4286-4296.
Reference: [29] Tron, R., Thomas, J., Loianno, G., Daniilidis, K., Kumar, V.: A distributed optimization framework for localization and formation control: Applications to vision-based measurements. IEEE Control Systems Magazine 36 (2016), 4, 22-44.
Reference: [30] Tu, Z., Liang, S.: Distributed dual averaging algorithm for multi-agent optimization with coupled constraints. Kybernetika 60 (2024), 4, 427-445.
Reference: [31] Yang, T., Yi, X., Wu, J., Yuan, Y., Wu, D., Meng, Z., Hong, Y., Wang, H., Lin, Z., Johansson, K. H.: A survey of distributed optimization. Ann. Rev. Control 47 (2019), 278-305.
Reference: [32] Yang, Q., Chen, W.-N., Gu, T., Zhang, H., Yuan, H., Kwong, S., Zhang, J.: A distributed swarm optimizer with adaptive communication for large-scale optimization. IEEE Trans. Cybernetics 50 (2019), 7, 3393-3408.
Reference: [33] Yuan, Y., He, W., Du, W., Tian, Y.-Ch., Han, Q.-L., Qian, F.: Distributed gradient tracking for differentially private multi-agent optimization with a dynamic event-triggered mechanism. IEEE Trans. Systems Man Cybernet.: Systems (2024).
Reference: [34] Wang, Y., Cheng, S.: A stochastic mirror-descent algorithm for solving $AXB=C$ over a multi-agent system. Kybernetika 57 (2021), 2, 256-271.

Files

Kybernetika_61-2025-6_3.pdf (895.9 kB, application/pdf)