Keywords:
unconstrained optimization problem; self-scaling memoryless BFGS; conjugate gradient; measure function
Summary:
We introduce a new scaling parameter for the Dai-Kou family of conjugate gradient algorithms (2013), one of the most numerically efficient classes of methods for unconstrained optimization. The suggested parameter is based on an eigenvalue analysis of the search direction matrix and on minimizing the measure function defined by Dennis and Wolkowicz (1993). The corresponding conjugate gradient search direction satisfies the sufficient descent property and the extended conjugacy condition. Global convergence of the proposed algorithm is established for both uniformly convex and general nonlinear objective functions. Numerical experiments on a set of test functions from the CUTEr collection and on the practical problem of robot manipulator motion control show that the proposed method is effective.
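
As a rough orientation only, the following Python sketch shows the general shape of a conjugate gradient iteration from the Dai-Kou family with a self-scaling memoryless BFGS parameter and a standard Wolfe line search. It uses the classical spectral choice tau_k = s_k^T y_k / ||s_k||^2 as a placeholder; the article's new scaling parameter, obtained from eigenvalue analysis and the Dennis-Wolkowicz measure function, and the authors' safeguards are not reproduced here. The function names and tolerances are illustrative assumptions, not the authors' implementation.

# Minimal sketch of a Dai-Kou family CG iteration (not the article's exact method).
# The scaling parameter tau below is the classical spectral choice; the paper's
# eigenvalue/measure-function-based choice is not reproduced.
import numpy as np
from scipy.optimize import line_search

def dai_kou_cg(f, grad, x0, tol=1e-6, max_iter=1000):
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        # Wolfe line search along d (conditions of Wolfe (1969, 1971)).
        alpha, *_ = line_search(f, grad, x, d, gfk=g)
        if alpha is None:              # line search failed: restart with steepest descent
            d = -g
            alpha, *_ = line_search(f, grad, x, d, gfk=g)
            if alpha is None:
                break
        s = alpha * d                  # s_k = x_{k+1} - x_k
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g                  # y_k = g_{k+1} - g_k
        sy = s @ y
        if sy <= 1e-12:                # curvature safeguard: restart
            d = -g_new
        else:
            tau = sy / (s @ s)         # placeholder scaling parameter (assumption)
            beta = (g_new @ y) / (d @ y) \
                   - (tau + (y @ y) / sy - sy / (s @ s)) * (g_new @ s) / (d @ y)
            d = -g_new + beta * d      # Dai-Kou family search direction
        x, g = x_new, g_new
    return x

# Usage example: minimize the Rosenbrock function.
if __name__ == "__main__":
    from scipy.optimize import rosen, rosen_der
    print(dai_kou_cg(rosen, rosen_der, np.zeros(5)))

Different admissible choices of tau give different members of the family; the article's contribution is a specific tau derived from eigenvalue analysis of the search direction matrix and from minimizing the Dennis-Wolkowicz measure function.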
References:
[1] Abubakar, A. B., Kumam, P., Malik, M., Ibrahim, A. H.: A hybrid conjugate gradient based approach for solving unconstrained optimization and motion control problems. Math. Comput. Simul. 201 (2022), 640-657. DOI 10.1016/j.matcom.2021.05.038 | MR 4439403 | Zbl 1540.90255
[2] Aminifard, Z., Babaie-Kafaki, S., Ghafoori, S.: An augmented memoryless BFGS method based on a modified secant equation with application to compressed sensing. Appl. Numer. Math. 167 (2021), 187-201. DOI 10.1016/j.apnum.2021.05.002 | MR 4258711 | Zbl 1467.65060
[3] Andrei, N.: Eigenvalues versus singular values study in conjugate gradient algorithms for large-scale unconstrained optimization. Optim. Methods Softw. 32 (2017), 534-551. DOI 10.1080/10556788.2016.1225211 | MR 3630455 | Zbl 1368.49057
[4] Andrei, N.: New conjugate gradient algorithms based on self-scaling memoryless Broyden-Fletcher-Goldfarb-Shanno method. Calcolo 57 (2020), Article ID 17, 27 pages. DOI 10.1007/s10092-020-00365-7 | MR 4102811 | Zbl 1445.90100
[5] Babaie-Kafaki, S.: On optimality of the parameters of self-scaling memoryless quasi-Newton updating formulae. J. Optim. Theory Appl. 167 (2015), 91-101. DOI 10.1007/s10957-015-0724-x | MR 3395207 | Zbl 1327.90394
[6] Byrd, R. H., Nocedal, J.: A tool for the analysis of quasi-Newton methods with application to unconstrained minimization. SIAM J. Numer. Anal. 26 (1989), 727-739. DOI 10.1137/0726042 | MR 0997665 | Zbl 0676.65061
[7] Dai, Y.-H., Kou, C.-X.: A nonlinear conjugate gradient algorithm with an optimal property and an improved Wolfe line search. SIAM J. Optim. 23 (2013), 296-320. DOI 10.1137/100813026 | MR 3033109 | Zbl 1266.49065
[8] Bouter, M. L. de Leeuw den, Gijzen, M. B. van, Remis, R. F.: CG variants for general-form regularization with an application to low-field MRI. Numerical Mathematics and Advanced Applications: ENUMATH 2019. Lecture Notes in Computational Science and Engineering 139. Springer, Cham (2019), 673-681. DOI 10.1007/978-3-030-55874-1_66 | MR 4266546 | Zbl 1475.65038
[9] Dennis, J. E., Jr., Wolkowicz, H.: Sizing and least-change secant methods. SIAM J. Numer. Anal. 30 (1993), 1291-1314. DOI 10.1137/0730067 | MR 1239822 | Zbl 0802.65081
[10] Dolan, E. D., Moré, J. J.: Benchmarking optimization software with performance profiles. Math. Program. 91 (2002), 201-213. DOI 10.1007/s101070100263 | MR 1875515 | Zbl 1049.90004
[11] Esmaeili, H., Shabani, S., Kimiaei, M.: A new generalized shrinkage conjugate gradient method for sparse recovery. Calcolo 56 (2019), Article ID 1, 38 pages. DOI 10.1007/s10092-018-0296-x | MR 3882971 | Zbl 1461.65147
[12] Gould, N. I. M., Orban, D., Toint, P. L.: CUTEr and SifDec: A constrained and unconstrained testing environment, revisited. ACM Trans. Math. Softw. 29 (2003), 373-394. DOI 10.1145/962437.962439 | Zbl 1068.90526
[13] Hager, W. W., Zhang, H.: Algorithm 851: CG_DESCENT, a conjugate gradient method with guaranteed descent. ACM Trans. Math. Softw. 32 (2006), 113-137. DOI 10.1145/1132973.1132979 | MR 2272354 | Zbl 1346.90816
[14] Kaporin, I. E.: New convergence results and preconditioning strategies for the conjugate gradient method. Numer. Linear Algebra Appl. 1 (1994), 179-210. DOI 10.1002/nla.1680010208 | MR 1277801 | Zbl 0837.65027
[15] Kratzer, D., Parter, S. V., Steuerwalt, M.: Block splittings for the conjugate gradient method. Comput. Fluids 11 (1983), 255-279. DOI 10.1016/0045-7930(83)90015-4 | MR 0726692 | Zbl 0526.76003
[16] Li, W., Liu, Y., Yang, J., Wu, W.: A new conjugate gradient method with smoothing $L_{1/2}$ regularization based on a modified secant equation for training neural networks. Neural Process. Lett. 48 (2018), 955-978. DOI 10.1007/s11063-017-9737-9
[17] Nezhadhosein, S.: A modified descent spectral conjugate gradient method for unconstrained optimization. Iran. J. Sci. Technol., Trans. A, Sci. 45 (2021), 209-220. DOI 10.1007/s40995-020-01012-0 | MR 4208137
[18] Nocedal, J., Wright, S. J.: Numerical Optimization. Springer Series in Operations Research and Financial Engineering. Springer, New York (2006). DOI 10.1007/978-0-387-40065-5 | MR 2244940 | Zbl 1104.65059
[19] Perry, A.: A class of conjugate gradient algorithms with a two-step variable metric memory. Discussion Paper 269 (1977), 16 pages. Available at https://EconPapers.repec.org/RePEc:nwu:cmsems:269
[20] Shanno, D. F.: Conjugate gradient methods with inexact searches. Math. Oper. Res. 3 (1978), 244-256. DOI 10.1287/moor.3.3.244 | MR 0506662 | Zbl 0399.90077
[21] Sun, M., Liu, J., Wang, Y.: Two improved conjugate gradient methods with application in compressive sensing and motion control. Math. Probl. Eng. 2020 (2020), Article ID 9175496, 11 pages. DOI 10.1155/2020/9175496 | MR 4099297 | Zbl 07347959
[22] Watkins, D. S.: Fundamentals of Matrix Computations. Pure and Applied Mathematics. John Wiley & Sons, New York (2002). DOI 10.1002/0471249718 | MR 1899577 | Zbl 1005.65027
[23] Winther, R.: Some superlinear convergence results for the conjugate gradient method. SIAM J. Numer. Anal. 17 (1980), 14-17. DOI 10.1137/0717002 | MR 0559456 | Zbl 0447.65021
[24] Wolfe, P.: Convergence conditions for ascent methods. SIAM Rev. 11 (1969), 226-235. DOI 10.1137/1011036 | MR 0250453 | Zbl 0177.20603
[25] Wolfe, P.: Convergence conditions for ascent methods. II: Some corrections. SIAM Rev. 13 (1971), 185-188. DOI 10.1137/1013035 | MR 0288943 | Zbl 0216.26901