Abstract
We investigate an algorithm of gradient type with a backward inertial step in connection with the minimization of a nonconvex differentiable function. We show that the generated sequences converge to a critical point of the objective function, if a regularization of the objective function satisfies the Kurdyka-Łojasiewicz property. Further, we provide convergence rates for the generated sequences and the objective function values formulated in terms of the Łojasiewicz exponent. Finally, some numerical experiments are presented in order to compare our numerical scheme with some algorithms well known in the literature.
Authors
Cristian Daniel Alecsa
Tiberiu Popoviciu Institute of Numerical Analysis, Romanian Academy
Department of Mathematics, Babes-Bolyai University, Cluj-Napoca, Romania
Szilárd Csaba László
Technical University of Cluj-Napoca, Romania
Adrian Viorel
Technical University of Cluj-Napoca, Romania
Keywords
Optimization; Unconstrained optimization problems; Inertial algorithm; Nonconvex optimization; Kurdyka-Łojasiewicz inequality; Convergence rate; Discrete Lyapunov energy functions.
Paper coordinates
Numer. Algor., 84 (2020), pp. 485–512. doi: 10.1007/s11075-019-00765-z
References
[1] H. Attouch, J. Bolte, On the convergence of the proximal algorithm for nonsmooth functions involving analytic features, Mathematical Programming 116(1-2) Series B, 5-16, 2009
[2] H. Attouch, J. Bolte, P. Redont, A. Soubeyran, Proximal alternating minimization and projection methods for nonconvex problems: an approach based on the Kurdyka-Łojasiewicz inequality, Mathematics of Operations Research 35(2), 438-457, 2010
[3] H. Attouch, J. Bolte, B.F. Svaiter, Convergence of descent methods for semi-algebraic and tame problems: proximal algorithms, forward-backward splitting, and regularized Gauss-Seidel methods, Mathematical Programming 137(1-2) Series A, 91-129, 2013
[4] H. Attouch, Z. Chbani, J. Peypouquet, P. Redont, Fast convergence of inertial dynamics and algorithms with asymptotic vanishing viscosity, Mathematical Programming 168(1-2) Series B, 123-175, 2018
[5] P. Bégout, J. Bolte, M.A. Jendoubi, On damped second-order gradient systems, Journal of Differential Equations 259, 3115-3143, 2015
[6] J. Bolte, S. Sabach, M. Teboulle, Proximal alternating linearized minimization for nonconvex and nonsmooth problems, Mathematical Programming 146(1-2) Series A, 459-494, 2014
[7] J. Bolte, A. Daniilidis, A. Lewis, The Łojasiewicz inequality for nonsmooth subanalytic functions with applications to subgradient dynamical systems, SIAM Journal on Optimization 17(4), 1205-1223, 2006
[8] J. Bolte, A. Daniilidis, A. Lewis, M. Shiota, Clarke subgradients of stratifiable functions, SIAM Journal on Optimization 18(2), 556-572, 2007
[9] J. Bolte, A. Daniilidis, O. Ley, L. Mazet, Characterizations of Łojasiewicz inequalities: subgradient flows, talweg, convexity, Transactions of the American Mathematical Society 362(6), 3319-3363, 2010
[10] R.I. Boţ, E.R. Csetnek, S.C. László, Approaching nonsmooth nonconvex minimization through second-order proximal-gradient dynamical systems, Journal of Evolution Equations 18(3), 1291-1318, 2018
[11] R.I. Boţ, E.R. Csetnek, S.C. László, An inertial forward-backward algorithm for minimizing the sum of two non-convex functions, EURO Journal on Computational Optimization 4(1), 3-25, 2016
[12] R.I. Boţ, E.R. Csetnek, S.C. László, A second order dynamical approach with variable damping to nonconvex smooth minimization, Applicable Analysis, 2018, https://doi.org/10.1080/00036811.2018.1495330
[13] P. Frankel, G. Garrigos, J. Peypouquet, Splitting methods with variable metric for Kurdyka-Łojasiewicz functions and general convergence rates, Journal of Optimization Theory and Applications 165(3), 874-900, 2015
[14] K. Kurdyka, On gradients of functions definable in o-minimal structures, Annales de l'Institut Fourier (Grenoble) 48(3), 769-783, 1998
[15] S.C. László, Convergence rates for an inertial algorithm of gradient type associated to a smooth nonconvex minimization, https://arxiv.org/abs/1807.00387
[16] G. Li, T.K. Pong, Calculus of the exponent of Kurdyka-Łojasiewicz inequality and its applications to linear convergence of first-order methods, Foundations of Computational Mathematics, 1-34, 2018
[17] S. Łojasiewicz, Une propriété topologique des sous-ensembles analytiques réels, Les Équations aux Dérivées Partielles, Éditions du Centre National de la Recherche Scientifique, Paris, 87-89, 1963
[18] Y.E. Nesterov, A method for solving the convex programming problem with convergence rate O(1/k²), (Russian) Dokl. Akad. Nauk SSSR 269(3), 543-547, 1983
[19] Y. Nesterov, Introductory Lectures on Convex Optimization: A Basic Course, Kluwer Academic Publishers, Dordrecht, 2004
[20] B.T. Polyak, Some methods of speeding up the convergence of iteration methods, U.S.S.R. Computational Mathematics and Mathematical Physics 4(5), 1-17, 1964
[21] R.T. Rockafellar, R.J.-B. Wets, Variational Analysis, Fundamental Principles of Mathematical Sciences 317, Springer-Verlag, Berlin, 1998
[22] W. Su, S. Boyd, E.J. Candès, A differential equation for modeling Nesterov's accelerated gradient method: theory and insights, Journal of Machine Learning Research 17, 1-43, 2016
Paper (preprint) in HTML form
A gradient-type algorithm with backward inertial steps associated to a nonconvex minimization problem
Mathematics Subject Classification (2010) 90C26 · 90C30
1 Introduction and Preliminaries
Gradient-type algorithms have a long history, going back at least to Cauchy (1847), as well as a wealth of applications. Solving linear systems, Cauchy's original motivation, is perhaps the most obvious application, but many of today's hot topics in machine learning or image processing also lead to optimization problems that are treated from an algorithmic perspective and rely on gradient-type algorithms.
The original gradient descent algorithm
$$x_{n+1} = x_n - s\nabla g(x_n),$$
which is precisely an explicit Euler method applied to the gradient flow
$$\dot{x}(t) = -\nabla g(x(t)),$$
does not achieve very good convergence rates, and much research has been dedicated to accelerating convergence.
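As a concrete illustration (ours, not from the paper), the scheme above takes only a few lines; the quadratic objective and step size below are sample choices:

```python
import numpy as np

def gradient_descent(grad, x0, s, n_iter):
    """Explicit Euler discretization of the gradient flow x'(t) = -grad(x(t)):
    x_{n+1} = x_n - s * grad(x_n)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        x = x - s * grad(x)
    return x

# Sample objective g(x) = ||x||^2 / 2 with gradient x (minimizer: the origin).
x_gd = gradient_descent(lambda x: x, x0=[4.0, -2.0], s=0.1, n_iter=200)
```

With step size $s = 0.1$, each iteration contracts the iterate by the factor $0.9$, so the iterates approach the origin geometrically.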
Based on the analogy to mechanical systems, e.g., to the movement, with friction, of a heavy ball in a potential well defined by the smooth convex objective function $g$ with $L_g$-Lipschitz continuous gradient, Polyak [26] provided the seminal idea for achieving acceleration, namely the addition of inertial (momentum) terms to the gradient algorithm. His two-step iterative method, the so-called heavy ball method, takes the following form: For the initial values $x_0, x_1 \in \mathbb{R}^m$, let
$$x_{n+1} = x_n - s\nabla g(x_n) + \beta_n(x_n - x_{n-1}), \tag{1}$$
where $\beta_n \in [0,1)$ is an inertial parameter and $s > 0$ is a step-size parameter. Recently, in [29], a convergence rate of order $o(1/n)$ for the function values has been obtained for the heavy ball algorithm, provided $g$ is coercive, the inertial parameter $\beta_n$ is a nonincreasing sequence, and the step-size parameter satisfies an appropriate upper bound in terms of $\beta_n$ and $L_g$ (see also [18] for an ergodic rate). More precisely, in [29], it is shown that under the previous assumptions one has $g(x_n) - \min g = o(1/n)$.
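A minimal sketch of the heavy-ball iteration; the constant inertial parameter and step size below are our sample values, not the ones analyzed in [29]:

```python
import numpy as np

def heavy_ball(grad, x0, s, beta, n_iter):
    """Polyak's heavy-ball method:
    x_{n+1} = x_n - s * grad(x_n) + beta * (x_n - x_{n-1})."""
    x_prev = x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        x_prev, x = x, x - s * grad(x) + beta * (x - x_prev)
    return x

# Sample run on g(x) = ||x||^2 / 2 (gradient: identity map).
x_hb = heavy_ball(lambda x: x, x0=[4.0, -2.0], s=0.1, beta=0.5, n_iter=400)
```

On this quadratic the two-step recurrence has spectral radius $\sqrt{0.5} \approx 0.71$ per iteration, markedly faster than the $0.9$ contraction of plain gradient descent with the same step size.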
It is worthwhile mentioning that the forward-backward algorithm studied in [12] in a full nonconvex setting reduces to Polyak's heavy ball method if the nonsmooth term vanishes; hence, it can be viewed as an extension of the heavy ball method to the case when the objective function is possibly nonconvex, but still has Lipschitz continuous gradient with Lipschitz constant $L_g$. Indeed, Algorithm 1 from [12], when the nonsmooth term is zero, has the form: For the initial values $x_0, x_1 \in \mathbb{R}^m$, let:
(2)
where and for all . In this particular case, convergence of the generated sequence to a critical point of the objective function can be shown under the assumption that a regularization of
the objective function satisfies the Kurdyka-Łojasiewicz property, further and satisfy:
(3) |
Note that (3) implies ; hence, for all . If and are positive numbers such that , then by choosing , relation (3) is satisfied.
Probably the most acclaimed inertial algorithm is Nesterov's accelerated gradient method, which in its particular form: For the initial values $x_0, x_1 \in \mathbb{R}^m$, let
$$y_n = x_n + \frac{n-1}{n+2}(x_n - x_{n-1}), \qquad x_{n+1} = y_n - s\nabla g(y_n), \tag{4}$$
for a convex $g$ with $L_g$-Lipschitz continuous gradient and step size $s \le 1/L_g$, exhibits an improved convergence rate of $\mathcal{O}(1/n^2)$ for the function values (see [15, 23]), and which, as highlighted by Su, Boyd, and Candès [28] (see also [5]), can be seen as the discrete counterpart of the second-order differential equation
$$\ddot{x}(t) + \frac{3}{t}\dot{x}(t) + \nabla g(x(t)) = 0.$$
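A compact sketch of Nesterov's scheme with the classical momentum coefficient $(n-1)/(n+2)$; the objective and step size are again our sample choices:

```python
import numpy as np

def nesterov(grad, x0, s, n_iter):
    """Nesterov's accelerated gradient:
    y_n = x_n + (n - 1)/(n + 2) * (x_n - x_{n-1}),  x_{n+1} = y_n - s * grad(y_n)."""
    x_prev = x = np.asarray(x0, dtype=float)
    for n in range(1, n_iter + 1):
        y = x + (n - 1) / (n + 2) * (x - x_prev)
        x_prev, x = x, y - s * grad(y)
    return x

# Sample run on g(x) = ||x||^2 / 2 with step size s = 0.1 <= 1/L_g = 1.
x_nag = nesterov(lambda x: x, x0=[4.0, -2.0], s=0.1, n_iter=500)
```

The extrapolated point $y_n$ is where the gradient is evaluated, which distinguishes the scheme from the heavy ball method, where the gradient is taken at $x_n$ itself.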
In this respect, it may be useful to recall that until recently little was known about the efficiency of Nesterov’s accelerated gradient method outside a convex setting. However, in [20], a Nesterov-like method, differing from the original only by a multiplicative coefficient, has been studied and convergence rates have been provided for the very general case when a regularization of the objective function has the KL property. More precisely, in [20], the following algorithm was considered.
For the initial values and let:
where and . Unfortunately, for technical reasons, one cannot allow ; therefore, one does not have full equivalence between Algorithm (5) and Algorithm (4). However, what is lost at the inertial parameter is gained at the step size, since for one may have . Convergence of the sequences generated by Algorithm (5) was obtained under the assumption that the regularization of the objective function , namely , is a KL function.
In this paper, we deal with the optimization problem:
$$\inf_{x \in \mathbb{R}^m} g(x), \tag{P}$$
where $g : \mathbb{R}^m \to \mathbb{R}$ is a Fréchet differentiable function with $L_g$-Lipschitz continuous gradient, i.e., there exists $L_g \ge 0$ such that $\|\nabla g(x) - \nabla g(y)\| \le L_g \|x - y\|$ for all $x, y \in \mathbb{R}^m$, and we associate to (P) the following inertial algorithm of gradient type.
Consider the starting points , and for every let:
where we assume that
Remark 1 Observe that the inertial parameter becomes negative after a number of iterations, and this can be viewed as taking a backward inertial step in our algorithm. Of course, this also shows that after a number of iterations is a convex combination of and (see [16] for similar constructions), that is:
Another novelty of Algorithm (6) is that it allows a variable step size. Moreover, it can easily be verified that whenever one may take .
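The exact coefficients of Algorithm (6) did not survive extraction, so the toy scheme below is entirely our own parametrization; it only illustrates the qualitative feature described in Remark 1, namely an inertial coefficient that starts positive and becomes negative (a backward inertial step) after finitely many iterations:

```python
import numpy as np

def backward_inertial(grad, x0, s, beta, alpha, n_iter):
    """Gradient step with the illustrative inertial coefficient
    beta_n = beta * (alpha - n) / (alpha + n),
    which is positive for n < alpha and negative ('backward') afterwards.
    This choice of beta_n is NOT the paper's coefficient, only a toy analogue."""
    x_prev = x = np.asarray(x0, dtype=float)
    for n in range(1, n_iter + 1):
        beta_n = beta * (alpha - n) / (alpha + n)
        x_prev, x = x, x - s * grad(x) + beta_n * (x - x_prev)
    return x

# Sample run on g(x) = ||x||^2 / 2; beta_n turns negative after n = 10.
x_bi = backward_inertial(lambda x: x, x0=[4.0, -2.0], s=0.1,
                         beta=0.3, alpha=10, n_iter=500)
```

For these sample values the limiting recurrence has spectral radius below one, so the iterates still converge to the minimizer despite the sign change of the inertial coefficient.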
We emphasize that the analysis of the proposed algorithm (6) is intimately related to the properties of the following regularization of the objective function, namely:
(7) |
In the remainder of this section, we introduce the necessary apparatus of notions and results that we will need in our forthcoming analysis.
For a differentiable function $g : \mathbb{R}^m \to \mathbb{R}$, we denote by $\operatorname{crit}(g) = \{x \in \mathbb{R}^m : \nabla g(x) = 0\}$ the set of critical points of $g$. The following so-called descent lemma (see [24]) will play an essential role in our forthcoming analysis.
Lemma 2 Let $g : \mathbb{R}^m \to \mathbb{R}$ be Fréchet differentiable with $L_g$-Lipschitz continuous gradient. Then:
$$g(y) \le g(x) + \langle \nabla g(x), y - x \rangle + \frac{L_g}{2}\|y - x\|^2 \quad \text{for all } x, y \in \mathbb{R}^m.$$
Furthermore, the set of cluster points of a given sequence $(x_n)_{n \in \mathbb{N}}$ will be denoted by $\omega((x_n)_{n \in \mathbb{N}})$. At the same time, the distance function to a set $A \subseteq \mathbb{R}^m$ is defined for $x \in \mathbb{R}^m$ as $\operatorname{dist}(x, A) = \inf_{y \in A} \|x - y\|$.
Our convergence result relies on the concept of a KL function. For $\eta \in (0, +\infty]$, we denote by $\Theta_\eta$ the class of concave and continuous functions $\varphi : [0, \eta) \to [0, +\infty)$ such that $\varphi(0) = 0$, $\varphi$ is continuously differentiable on $(0, \eta)$, continuous at $0$, and $\varphi'(s) > 0$ for all $s \in (0, \eta)$.
Definition 1 (Kurdyka-Łojasiewicz property) Let $g : \mathbb{R}^m \to \mathbb{R}$ be a differentiable function. We say that $g$ satisfies the Kurdyka-Łojasiewicz (KL) property at $\bar{x} \in \mathbb{R}^m$ if there exist $\eta \in (0, +\infty]$, a neighborhood $U$ of $\bar{x}$, and a function $\varphi \in \Theta_\eta$ such that for all $x$ in the intersection
$$U \cap \{x \in \mathbb{R}^m : g(\bar{x}) < g(x) < g(\bar{x}) + \eta\}$$
the following inequality holds:
$$\varphi'(g(x) - g(\bar{x}))\,\|\nabla g(x)\| \ge 1.$$
If $g$ satisfies the KL property at each point of $\mathbb{R}^m$, then $g$ is called a KL function.
The function $\varphi$ is called a desingularizing function (see for instance [6]). The origins of this notion go back to the pioneering work of Łojasiewicz [22], where it is proved that for a real-analytic function $g$ and a critical point $\bar{x}$ (that is, $\nabla g(\bar{x}) = 0$), there exists $\theta \in [1/2, 1)$ such that the function $|g - g(\bar{x})|^\theta \|\nabla g\|^{-1}$ is bounded around $\bar{x}$. This corresponds to the situation when $\varphi(s) = K s^{1-\theta}$, where $K > 0$ is a given constant, and leads to the following definition.
Definition 2 (for which we refer to [2, 8, 22]) A differentiable function $g : \mathbb{R}^m \to \mathbb{R}$ has the Łojasiewicz property with exponent $\theta \in [0, 1)$ at $\bar{x} \in \operatorname{crit}(g)$ if there exist $K, \epsilon > 0$ such that:
$$|g(x) - g(\bar{x})|^\theta \le K \|\nabla g(x)\| \tag{8}$$
for every $x \in \mathbb{R}^m$ with $\|x - \bar{x}\| < \epsilon$.
In the above definition, for $\theta = 0$, we adopt the convention $0^0 = 0$, so that if $\theta = 0$ and $|g(x) - g(\bar{x})|^0 = 0$, then $g(x) = g(\bar{x})$ (see [2]).
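To make Definition 2 concrete, here is a small numerical check (our own example): for $g(x) = \|x\|^2/2$, whose only critical point is the origin, inequality (8) holds with exponent $\theta = 1/2$ and constant $K = 1/\sqrt{2}$, since $|g(x)|^{1/2} = \|x\|/\sqrt{2}$ and $\|\nabla g(x)\| = \|x\|$:

```python
import numpy as np

# Verify |g(x) - g(0)|^(1/2) <= K * ||grad g(x)|| for g(x) = ||x||^2 / 2
# with K = 1/sqrt(2), on random points near the critical point 0.
rng = np.random.default_rng(0)
pts = rng.uniform(-1.0, 1.0, size=(1000, 3))
g_vals = 0.5 * np.sum(pts**2, axis=1)        # g(x) - g(0)
grad_norms = np.linalg.norm(pts, axis=1)     # ||grad g(x)|| = ||x||
lojasiewicz_holds = bool(
    np.all(np.sqrt(g_vals) <= grad_norms / np.sqrt(2.0) + 1e-12))
```

For this quadratic the inequality is in fact an equality, which is why the exponent $\theta = 1/2$ is sharp here.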
The result of Łojasiewicz allows the interpretation of the KL property as a reparametrization of the function values in order to avoid flatness around the critical points. Kurdyka [19] extended this property to differentiable functions definable in an o-minimal structure. Further extensions to the nonsmooth setting can be found in [3, 8-10].
To the class of KL functions belong semi-algebraic, real sub-analytic, semiconvex, uniformly convex, and convex functions satisfying a growth condition. We refer the reader to the works cited above and the references therein for more details regarding all of these classes, together with illustrating examples.
Finally, an important role in our convergence analysis will be played by the following uniformized KL property given in [7, Lemma 6].
Lemma 3 Let $\Omega \subseteq \mathbb{R}^m$ be a compact set and let $g : \mathbb{R}^m \to \mathbb{R}$ be a differentiable function. Assume that $g$ is constant on $\Omega$ and satisfies the KL property at each point of $\Omega$. Then, there exist $\epsilon > 0$, $\eta > 0$ and $\varphi \in \Theta_\eta$ such that for all $\bar{x} \in \Omega$ and for all $x$ in the intersection
$$\{x \in \mathbb{R}^m : \operatorname{dist}(x, \Omega) < \epsilon\} \cap \{x \in \mathbb{R}^m : g(\bar{x}) < g(x) < g(\bar{x}) + \eta\} \tag{9}$$
the following inequality holds:
$$\varphi'(g(x) - g(\bar{x}))\,\|\nabla g(x)\| \ge 1. \tag{10}$$
The outline of the paper is the following. In Section 2, we give a sufficient condition that ensures the decrease property of the regularization in the iterates, and which also ensures that the iterates gap belongs to $\ell^2$. Further, using these results, we show that the set of cluster points of the iterates is included in the set of critical points of the objective function. Finally, by means of the KL property of the regularization, we obtain that the iterates gap belongs to $\ell^1$. This implies the convergence of the iterates (see also [4, 7, 12, 20]). In Section 3, we obtain several convergence rates both for the sequences generated by the numerical scheme (6) and for the function values, in terms of the Łojasiewicz exponent of the objective and of its regularization, respectively (see [14, 17, 20] and also [1] for convergence rates under geometrical conditions). Finally, in Section 4, we present some numerical experiments which show that our algorithm, in many cases, behaves better than well-known algorithms from the literature.
2 Convergence results
We start to investigate the convergence of the proposed algorithm by showing that is decreasing along certain sequences built upon the iterates generated by (6).
Theorem 4 Let and be the sequences generated by the numerical scheme (6) and for all consider the sequences:
and
Then, there exists such that:
(i) The sequence is decreasing and for all .
Assume further that is bounded from below. Then,
(ii) The sequence is convergent;
(iii) .
Proof By applying the descent Lemma 2 to , we have:
Further, after rewriting the first equation in (6) as and taking the inner product with , we obtain:
the descent inequality becomes:
(11) | |||
Further,
and
hence:
(12) |
Thus, we have:
and
Replacing the above equalities in (12) gives:
The above inequality motivates the introduction of the following notations:
and
(13) |
for all .
In terms of these notations, after using the equality:
we can write:
(14) |
Consequently, we have:
Now, since as and we have:
Hence, there exists and such that for all one has:
which, in view of (15), shows (i); that is, the sequence is decreasing for .
By using (15) again, we obtain:
for all , or, more conveniently, that:
(16) |
for all . Let . Summing up the latter relations gives:
which leads to:
(17) |
Now, taking into account that is bounded from below, after letting we obtain:
which proves (iii).
This also shows that:
hence
But then, using again the fact that is bounded from below, we have that the sequence is bounded from below and also decreasing (see (i)) for ; hence, there exists:
Remark 5 By introducing the sequence:
(18) |
one can easily observe that the statements of Theorem 4 can be expressed in terms of the regularization of the objective function since for all .
An interesting fact is that for the sequence to be decreasing one does not need the boundedness of the objective function , but only its regularity, as can be seen in the proof of Theorem 4. The energy decay is thus a structural property of the algorithm (6) and only the existence of the limit requires the boundedness of the objective function.
Remark 6 Observe that conclusion (iii) in the hypotheses of Theorem 4 assures that the sequence , in particular that:
(19) |
Lemma 7 In the framework of the optimization problem (P), assume that the objective function is bounded from below and consider the sequences and generated by the numerical algorithm (6) and let be defined by (18). Then, the following statements are valid:
(i) The sets of cluster points of and coincide and are contained in the set of critical points of g, i.e.:
(ii)
Proof (i) We start by proving and . Bearing in mind that and that the sequences , and are convergent, the conclusion is quite straightforward. Indeed, if and is a subsequence such that , then:
and
imply that the sequences and all converge to the same element . The reverse inclusions follow in a very similar manner from the definitions of and .
In order to prove that , we use the fact that is a continuous operator. So, passing to the limit in and taking into account that , we have:
and finally, as , we obtain:
For proving the statement (ii), we rely on a direct computation yielding:
(20) |
which, in turn, gives
and allows us to apply (i) to obtain the desired conclusion.
Some direct consequences of Theorem 4 (ii) and Lemma 7 are the following.
Fact 8 In the setting of Lemma 7, let . It follows that crit and:
Consequently,
is finite and constant on the set .
The arguments behind the proofs of the following two facts are the same as those in Lemma 13 from [20].
Fact 9 If the assumptions from Lemma 7 hold true and if the sequence is bounded, then the following conclusions hold:
(i) is nonempty and compact,
(ii) .
Remark 10 We emphasize that if is coercive, that is , then is bounded from below and , the sequences generated by (6), are bounded.
Indeed, notice that is bounded from below, being a continuous and coercive function (see [27]). Note that according to Theorem 4 the sequence is convergent hence is bounded. Consequently, from (17), it follows that is contained for every , ( is defined in the hypothesis of Theorem 4), in a lower level set of , which is bounded. Since is bounded, taking into account (19), it follows that is also bounded.
Now, based on the conclusions of Lemma 7, we present a result which will be crucial later on. For our next result, $\|\cdot\|_1$ will denote the 1-norm and $\|\cdot\|$ will represent the 2-norm on the linear space $\mathbb{R}^m$.
Lemma 11 Let , and be as in all the previous results, with the mapping bounded from below. Then, the following gradient inequalities hold true:
(21) |
and
(22) |
Proof First of all, note that from our numerical scheme (6) we have . In terms of the 1-norm on $\mathbb{R}^m$, we have:
which proves the desired inequality.
Now, with respect to the Euclidean norm, similar arguments yield:
completing the proof.
Our main result concerning the convergence of the sequence generated by the algorithm (6) to a critical point of the objective function is the following.
Theorem 12 Consider the sequences generated by the algorithm (6) and let the objective function be bounded from below. If the sequence is bounded and is a function, then:
(23) |
and there exists an element such that .
Proof Consider from the set under the assumptions of Lemma 7. It follows that . Also, using Fact 8, we get that . Furthermore, we consider two cases:
I. By using from Theorem 4, assume that there exists , with , such that . Then, since is a decreasing sequence, it follows that:
Now, using (16), we get that for every we have the following inequality:
So, the sequence is constant and the conclusion holds true.
II. Now, we deal with the case when , for every .
So, consider the set . From Fact 9, we have that the set is nonempty and compact. Also, Fact 8 assures that the mapping is constant on . From the hypotheses of the theorem, we have that is a function. So, according to Lemma 3, there exist and a function , such that for all points ( ) from the set:
one has that:
On the other hand, using Fact 9, we obtain that . This means that there exists an index for which the following holds:
Let us introduce the notation:
Because
and since
then there exists another index , such that:
Taking we get that for each it follows:
Since the function is concave, we have:
Thus, the following relation takes place for each :
On one hand, combining the inequality (16) and (21), it follows that for every
(24) |
On the other hand, we know that the sequences and are convergent, and , hence and are bounded. This shows that there exists and there exists , such that:
Thus, the inequality (24) becomes:
(25) |
for every . This implies that for each , the following inequality holds:
From the well-known arithmetical-geometrical inequality, it follows that:
Therefore, we obtain:
Consequently, we have:
(26) |
for every , with . Now, by summing up the latter inequality from to , we get that:
We now use the fact that . In this setting, by letting and by using (19), we obtain:
It implies that:
so the first part of the proof is done.
At the same time, the sequence , defined by:
is Cauchy. Thus, for every , there exists a positive integer , such that for each and for all , one has:
Furthermore,
So, the sequence is Cauchy, hence convergent, i.e., there exists , such that:
Thus, by using (i) of Lemma 7, it follows that:
which leads to the conclusion of the second part of the present theorem.
Remark 13 Since the class of semi-algebraic functions is closed under addition (see for example [7]), and is semi-algebraic, the conclusion of the previous theorem holds if the condition that is a KL function is replaced by the assumption that is semi-algebraic.
Note that, according to Remark 10, the conclusion of Theorem 12 remains valid if we replace in its hypotheses the conditions that is bounded from below and is bounded by the condition that is coercive.
Finally, observe that under the assumptions of Theorem 12, we have and
3 Convergence rates
In the following theorem, we provide convergence rates for the sequences generated by (6), and also for the function values, in terms of the Łojasiewicz exponent (see also [2, 8, 14, 20]). More precisely, we obtain finite, linear, and sublinear convergence rates, depending on whether the Łojasiewicz exponent is $0$, belongs to $(0, 1/2]$, or belongs to $(1/2, 1)$, respectively. Note that the forthcoming results remain valid if one replaces in their hypotheses the conditions that the objective is bounded from below and the iterate sequence is bounded by the condition that the objective is coercive.
The following lemma was established in [14] and will be crucial in obtaining our convergence rates.
Lemma 14 ([14], Lemma 15) Let be a monotonically decreasing positive sequence converging to 0. Assume further that there exist natural numbers and such that for every one has:
(27)
where is some constant and . Then, the following statements are true:
(i) If , then converges in finite time;
(ii) If , then there exists and , such that for every
(iii) If , then there exists , such that for every
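Inequality (27) itself did not survive extraction; in the KL literature it typically takes the form $\varepsilon_n^{2\theta} \le C(\varepsilon_n - \varepsilon_{n+1})$. The toy simulation below (our own reconstruction of that typical shape, not the paper's exact statement) iterates the corresponding equality-case update $\varepsilon_{n+1} = \max(\varepsilon_n - c\,\varepsilon_n^{2\theta}, 0)$ and exhibits the three regimes (i)-(iii):

```python
def decay_sequence(theta, c=0.1, eps0=1.0, n_steps=2000):
    """Iterate eps_{k+1} = max(eps_k - c * eps_k^(2*theta), 0), the equality
    case of a recurrence of the form eps_k^(2*theta) <= C (eps_k - eps_{k+1})."""
    eps, history = eps0, [eps0]
    for _ in range(n_steps):
        eps = max(eps - c * eps ** (2 * theta), 0.0)
        history.append(eps)
    return history

h_finite = decay_sequence(theta=0.0)    # (i): hits exactly 0 in finitely many steps
h_linear = decay_sequence(theta=0.5)    # (ii): eps_{k+1} = (1 - c) eps_k, geometric
h_sublin = decay_sequence(theta=0.75)   # (iii): slow sublinear decay, stays positive
```

For $\theta = 1/2$ the update is exactly geometric with ratio $1 - c$, while for $\theta \in (1/2, 1)$ the sequence decays only polynomially, of order $n^{-1/(2\theta - 1)}$.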
In the proof of the following theorem, we use Lemma 14 (see also [2]).
Theorem 15 In the settings of problem (P) consider the sequences generated by Algorithm (6). Assume that is bounded from below and that is bounded. Suppose that
fulfills the Łojasiewicz property with Łojasiewicz constant and Łojasiewicz exponent and let . Then, the following statements hold true:
If $\theta = 0$, then:
(a) the sequences and converge in a finite number of steps;
If $\theta \in (0, 1/2]$, then there exist and such that:
(a1) for every ,
(a2) for every ,
(a3) for every ,
(a4) for all ;
If $\theta \in (1/2, 1)$, then there exist and such that:
(b1) for all ,
(b2) for all ,
(b3) for all ,
(b4) for all .
Proof We start by employing the ideas from the proof of Theorem 12, namely if there exists , with , for which one has that:
then it follows that the sequence is constant. This leads to the fact that the sequence is also constant. Furthermore,
That is, if the regularized energy is constant after a certain number of iterations, the conclusions follow in a straightforward way.
Now, we can easily assume that:
In order to simplify notations, we will use
Our analysis aims to apply Lemma 14 for
based on three previously established fundamental relations:
(a) The energy decay relation (16), for every
(b) The energy-gradient estimate (22), for every
where
(c) The Łojasiewicz inequality (8), for every
where is defined by the Łojasiewicz property of at ( ) such that for one has
for all .
By combining these three inequalities, one reaches
and we are led to a nonlinear second-order difference inequality
(28) |
for every .
Using the fact that the positive sequence is decreasing we have that for all . Further, since the sequences and converge and have positive limit, there exists such that for all one has
In the view of these observations, (28) becomes
(29) |
for every , where .
Now we can apply Lemma 14 by observing that (29) is nothing else than (27) in Lemma 14, with and . Hence, by taking into account that for all , that is, in the conclusion of Lemma 14 (ii) one has , we have:
(p0) If $\theta = 0$, then converges in finite time;
(p1) If $\theta \in (0, 1/2]$, then there exist and , such that for every
(p2) If $\theta \in (1/2, 1)$, then there exists , such that for every
(a). We treat first the case $\theta = 0$. Then, according to (p0), converges in finite time. But then for $n$ big enough; hence, (16) implies that for $n$ big enough; consequently, for $n$ big enough, thus converges in finite time. The above results show immediately that also converges in finite time.
Assume now that .
(a1). According to (p1), there exist and , such that for every one has:
(30) |
Hence,
(31) |
for all , where .
(a2). In order to give an upper bound for the difference , we consider the following chain of inequalities based upon Lemma 2:
Here, using the inequality:
leads, after some simplifications, to:
By combining the inequality (16) with the fact that the sequence is decreasing and converges to , one obtains:
(32) |
From (30), we have for all , hence:
This means that for every one has:
Since the sequence is convergent to , we can choose:
and we have
(33) |
(a3). We continue the proof by establishing an estimate for . By the triangle inequality and by summing up (26) from to , (where was defined in the proof of Theorem 12), one has:
so, letting gives:
Recall, however, that the desingularizing function is hence,
(34) |
for every , where .
Further, since converges to 0 one has for big enough, hence holds for , if is big enough. By using (30), we conclude that there exists such that:
(35) |
where .
(a4). Finally, we conclude this part of the proof by deducing an upper bound for . The following inequalities hold true:
for all . Let . Then,
(36) |
Now, if we take , then (31), (33), (35), and (36) lead to the conclusions (a1)-(a4).
Finally, assume that .
(b1). According to (p2), there exists , such that for every one has
(37) |
Consequently,
for every . Hence, we have
(38) |
for every , where .
The other claims now follow quite easily.
(b2). Indeed, note that (32) holds for every , hence:
Therefore, one obtains:
for every . Let . Then
(39) |
for every .
(b3). For proving (b3), we use (34) again, and we have that for all it holds
Let . Then,
(40) |
for all .
(b4). The final estimate is a straightforward consequence of the definition of and the above estimates. Indeed, for all one has:
Let . Then,
(41) |
Now, if we take , then (38), (39), (40), and (41) lead to the conclusions (b1)-(b4).
Remark 16 According to [21], has the Łojasiewicz property with Łojasiewicz exponent , whenever has the Łojasiewicz property with Łojasiewicz exponent . Therefore, one obtains the same convergence rates if in the hypotheses of Theorem 15 one assumes that has the Łojasiewicz property with Łojasiewicz exponent .
4 Numerical experiments
The aim of this section is to support the analytic results of the previous sections by numerical experiments and to highlight some interesting features of the generic algorithm (6).
4.1 Comparing Algorithm (6) with some algorithms from the literature by using different step sizes
In our first experiment, let us consider the convex function:
Based on the boundedness of its Hessian, we infer that the Lipschitz constant of its gradient is . Obviously, is strongly convex and its global minimum is .
In order to give a better perspective on the advantages and disadvantages of algorithm (6) for different choices of step sizes and inertial coefficients, in our first numerical experiment, we compare the following:
(a) The proposed algorithm (6) with inertial parameter , which shows that , and constant step size
(b) The proposed algorithm (6) with inertial parameter and increasing step size ;
(c) The proposed algorithm (6) with inertial parameter and decreasing step size ;
(d) Polyak’s algorithm (1) with inertial parameter and constant step size ;
(e) Polyak’s algorithm (1) with decreasing inertial parameter and increasing step size ;
(f) Nesterov's algorithm (4) with maximal admissible step size ;
(g) The Nesterov-like algorithm (5) with inertial parameter , and step size .
The choices of inertial coefficients and step sizes are motivated by theoretical results in [18, 29] and [20]. We consider the starting points and run the simulations until the error attains the value . These results are shown in Fig. 1, where the horizontal axis measures the number of iterations and the vertical axis shows the error . The experiment depicted in Fig. 1 shows that Algorithm (6) has the best behavior when we choose a decreasing step size (red square in Fig. 1). This instance outperforms those obtained with the same Algorithm (6) but with constant step size (red star in Fig. 1), and even more so for increasing step sizes (red circle in Fig. 1). Further, it should be noted that Algorithm (6), in all its instances, outperforms Algorithm (5) (green line in Fig. 1), Algorithm (1) with a constant step size (yellow line in Fig. 1) or variable step size (black line in Fig. 1), and also Nesterov's Algorithm (4) (blue line in Fig. 1).
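The experiment cannot be reproduced exactly here, since the test function and parameter values were lost in extraction; the sketch below (our own stand-in, on the strongly convex quadratic $g(x) = \|x\|^2/2$ with sample parameters) merely illustrates how such an iterations-to-tolerance comparison is set up:

```python
import numpy as np

def iterations_to_tolerance(update, x0, tol=1e-5, max_iter=10_000):
    """Run x_{n+1} = update(n, x_n, x_{n-1}) until ||x_n|| < tol
    (the minimizer of the sample objective below is the origin)."""
    x_prev = x = np.asarray(x0, dtype=float)
    for n in range(1, max_iter + 1):
        x_prev, x = x, update(n, x, x_prev)
        if np.linalg.norm(x) < tol:
            return n
    return max_iter

grad = lambda x: x                       # gradient of g(x) = ||x||^2 / 2
s, beta, x0 = 0.1, 0.5, [4.0, -2.0]      # sample step size and inertia

n_gd = iterations_to_tolerance(lambda n, x, xp: x - s * grad(x), x0)
n_hb = iterations_to_tolerance(
    lambda n, x, xp: x - s * grad(x) + beta * (x - xp), x0)
```

With these sample values the inertial scheme needs markedly fewer iterations than plain gradient descent, mirroring the qualitative ranking reported in Fig. 1.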
4.2 The analysis of the behavior of Algorithm (6) via different inertial values and step sizes
In the second set of numerical experiments, we analyze the behavior of our algorithm with different inertial values and different step sizes in a nonconvex setting. Our experiments suggest that, in order to obtain faster convergence, one should use in Algorithm (6) a decreasing step size and a sequence of inertial parameters whose limit is as close to 0 as possible (see Fig. 2).
First, consider the Rastrigin function (see [25]):
which is nonconvex. For the initial values , we run Algorithm (6), with the constant step size (yellow circle in Fig. 2a), then with decreasing step size (green arrow in Fig. 2a) and then with increasing step size (red star in Fig. 2a). Meanwhile, the inertial parameter is taken to be with simulations running until the error attains . The results are shown in Fig. 2a, where the horizontal axis measures the number of iterations and the vertical axis shows the error in terms of iterates.
Next, consider the convex quadratic function together with initial values and an inertial parameter . The three instances of our algorithm: with constant step size (yellow circle in Fig. 2b), decreasing step size (green arrow in Fig. 2b) and finally with nondecreasing step size (red star in Fig. 2b),
are compared and results are shown in Fig. 2b, where the horizontal axis measures the number of iterations and the vertical axis shows the error in terms of iterates.
We also consider different initial values for Algorithm (6), namely (0.9, 0.9), together with a fixed step size and the inertial parameters (red circle in Fig. 2c), (yellow star in Fig. 2c), and (green arrow in Fig. 2c). The results when the objective function is the Rastrigin function are shown in Fig. 2c.
Finally, we consider the same inertial values as before for Algorithm (6), but we take the convex objective function and the fixed step size (see Fig. 2d).
4.3 Comparing Algorithm (6) with known algorithms by using some test functions for optimization
Since Algorithm (6) is new in the literature, it is worthwhile to compare it with known algorithms using some so-called test functions for optimization (see [25]). In these experiments, we run the algorithms until the error attains the value . The results are shown in Fig. 3a-d, where the horizontal axis measures the number of iterations and the vertical axis shows the error .
First, consider Beale's function:
We compare Algorithm (6) with inertial parameter (red star in Fig. 3a), with Algorithm (2) with inertial parameter (green square in Fig. 3a), and Algorithm (4) (blue circle in Fig. 3a), by taking the same step size and initial value . While Algorithm (6) and Algorithm (2) show a similar behavior for the Beale function, both outperform Algorithm (4) (see Fig. 3a).
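The Beale formula does not survive in this excerpt; its standard form, with global minimum 0 at (3, 0.5), is:

```python
def beale(x, y):
    # Standard Beale test function; nonconvex, with global minimum 0 at (3, 0.5).
    return ((1.5 - x + x * y) ** 2
            + (2.25 - x + x * y ** 2) ** 2
            + (2.625 - x + x * y ** 3) ** 2)
```

Its sharp, narrow valley near the minimizer is what makes the step-size and inertial choices delicate for gradient-type methods.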
Consider next the Rosenbrock Function:
We compare Algorithm (6) with inertial parameter (red star in Fig. 3b), with Algorithm (2) with inertial parameter (green square in Fig. 3b), and Algorithm (4) (blue circle in Fig. 3b), by taking the same step size and initial value (see Fig. 3b). Note that, also for the Rosenbrock function, Algorithm (6) and Algorithm (2) behave similarly; however, in contrast with the oscillations in the error terms of the iterates of Algorithm (2), Algorithm (6) shows an almost linearly decreasing trend.
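For reference, since its formula is omitted in this excerpt, the standard two-dimensional Rosenbrock function and its gradient are:

```python
def rosenbrock(x, y, a=1.0, b=100.0):
    # Standard Rosenbrock "banana" function; global minimum 0 at (a, a^2).
    return (a - x) ** 2 + b * (y - x ** 2) ** 2

def rosenbrock_grad(x, y, a=1.0, b=100.0):
    # Partial derivatives of the Rosenbrock function.
    dx = -2 * (a - x) - 4 * b * x * (y - x ** 2)
    dy = 2 * b * (y - x ** 2)
    return dx, dy
```

The long curved valley of this function typically induces the oscillations mentioned above for momentum-type iterations.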
We are also interested in the quadratic function of the form:
We compare Algorithm (6) with inertial parameter (red star in Fig. 3c), with Algorithm (2) with inertial parameter (green square in Fig. 3c), and Algorithm (4) (blue circle in Fig. 3c), by taking the same step size and initial value . As Fig. 3c shows, in this case Algorithm (6) clearly outperforms Algorithm (2) and Algorithm (4).
Finally, for the logistic regression with -regularization, we consider the cost function:
with . Further,
and
We compare Algorithm (6) with inertial parameter (red star in Fig. 3d), with Algorithm (2) with inertial parameter (green square in Fig. 3d), and Algorithm (4) (blue circle in Fig. 3d), by taking the same step size and the initial value . Also here, Algorithm (6) outperforms Algorithm (2) and Algorithm (4) (see Fig. 3d).
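Since the cost function, the data, and the exact regularizer are not legible in this excerpt, the sketch below assumes a plain l2-regularized logistic regression cost; the data matrix `X`, labels `y`, and weight `lam` are hypothetical.

```python
import numpy as np

def logistic_cost(theta, X, y, lam=0.1):
    # l2-regularized logistic loss (hypothetical stand-in for the paper's cost):
    # mean_i [log(1 + exp(x_i^T theta)) - y_i * x_i^T theta] + (lam/2)*||theta||^2
    z = X @ theta
    return np.mean(np.logaddexp(0.0, z) - y * z) + 0.5 * lam * (theta @ theta)

def logistic_grad(theta, X, y, lam=0.1):
    # Gradient: X^T (sigmoid(X theta) - y) / m + lam * theta
    p = 1.0 / (1.0 + np.exp(-(X @ theta)))
    return X.T @ (p - y) / len(y) + lam * theta
```

With lam > 0 this cost is smooth and strongly convex, so gradient-type schemes such as Algorithm (6) apply directly to it.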
4.4 Switching between positive and negative inertial parameter values
Finally, the last set of numerical experiments concerns the minimization of the nonconvex, coercive function:

Observe that this function has two global minima at and and a local maximum at .
The experiments that we present in what follows emphasize the importance of the fact that the inertial parameter in Algorithm (6), though having a strictly negative limit, may have a finite number of positive terms.
Indeed, by taking the same starting points and constant step size , according to Fig. 4a, Algorithm (6) with inertial parameter (red star in Fig. 4a) seems to converge faster, after a certain number of iterates, than Algorithm (2) with inertial parameter (black circle in Fig. 4a). Here, we ran the algorithms until the absolute value of the gradient of the objective function at the iterates attained the value . The results are shown in Fig. 4a, where the horizontal axis measures the number of iterations and the vertical axis shows the energy error , where in this case is the appropriate minimum 1. However, the two algorithms show a similar behavior concerning the scaled error , where is the number of iterations and is the step size (see Fig. 4b).
Now, for the initial values (which are very close to the local maximum of the objective function), Algorithm (2) (black circle in Fig. 4c, d) clearly outperforms Algorithm (6) (red star in Fig. 4c, d), both for the energy error (Fig. 4c) and the scaled error (Fig. 4d).
Nevertheless, the very general structure of the generic Algorithm (6) allows for much flexibility, as only the limit of the sequence ( ) is prescribed. So one can profit by taking the inertial parameter in Algorithm (6). Then is positive for the first 50 iterates, and this helps Algorithm (6) to outperform Algorithm (2) with inertial parameter , even for the initial values 0.00001 (see Fig. 4e for the energy error and Fig. 4f for the scaled error , where the graphics corresponding to Algorithm (6) are depicted in red and those corresponding to Algorithm (2) in black).
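A concrete inertial sequence with this sign-switching behavior is, for instance, β_n = c/n + ℓ with c = 10.1 and limit ℓ = −0.2: it is positive exactly for the first 50 iterates and tends to a strictly negative limit. The constants here are illustrative only; the paper's actual sequence is not reproduced in this excerpt.

```python
def beta(n, c=10.1, limit=-0.2):
    # Hypothetical inertial sequence: c/n + limit is positive for n <= 50
    # (since 10.1/n > 0.2 iff n < 50.5) and tends to the negative limit -0.2.
    return c / n + limit
```

Any choice of c > 0 with c/n + ℓ changing sign after finitely many terms fits the assumptions, since only the (negative) limit of the sequence is prescribed.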
References
1. Aujol, J.-F., Dossal, Ch., Rondepierre, A.: Optimal convergence rates for Nesterov acceleration. arXiv:1805.05719
2. Attouch, H., Bolte, J.: On the convergence of the proximal algorithm for nonsmooth functions involving analytic features. Math. Program. 116(1-2), 5-16 (2009)
3. Attouch, H., Bolte, J., Redont, P., Soubeyran, A.: Proximal alternating minimization and projection methods for nonconvex problems: an approach based on the Kurdyka-Łojasiewicz inequality. Math. Oper. Res. 35(2), 438-457 (2010)
4. Attouch, H., Bolte, J., Svaiter, B.F.: Convergence of descent methods for semi-algebraic and tame problems: proximal algorithms, forward-backward splitting, and regularized Gauss-Seidel methods. Math. Program. 137(1-2), 91-129 (2013)
5. Attouch, H., Chbani, Z., Peypouquet, J., Redont, P.: Fast convergence of inertial dynamics and algorithms with asymptotic vanishing viscosity. Math. Program. 168(1-2), 123-175 (2018)
6. Bégout, P., Bolte, J., Jendoubi, M.A.: On damped second-order gradient systems. J. Differ. Equ. 259, 3115-3143 (2015)
7. Bolte, J., Sabach, S., Teboulle, M.: Proximal alternating linearized minimization for nonconvex and nonsmooth problems. Math. Program. 146(1-2), 459-494 (2014)
8. Bolte, J., Daniilidis, A., Lewis, A.: The Łojasiewicz inequality for nonsmooth subanalytic functions with applications to subgradient dynamical systems. SIAM J. Optim. 17(4), 1205-1223 (2006)
9. Bolte, J., Daniilidis, A., Lewis, A., Shiota, M.: Clarke subgradients of stratifiable functions. SIAM J. Optim. 18(2), 556-572 (2007)
10. Bolte, J., Daniilidis, A., Ley, O., Mazet, L.: Characterizations of Łojasiewicz inequalities: subgradient flows, talweg, convexity. Trans. Am. Math. Soc. 362(6), 3319-3363 (2010)
11. Boţ, R.I., Csetnek, E.R., László, S.C.: Approaching nonsmooth nonconvex minimization through second-order proximal-gradient dynamical systems. J. Evol. Equ. 18(3), 1291-1318 (2018)
12. Boţ, R.I., Csetnek, E.R., László, S.C.: An inertial forward-backward algorithm for minimizing the sum of two non-convex functions. EURO J. Comput. Optim. 4(1), 3-25 (2016)
13. Boţ, R.I., Csetnek, E.R., László, S.C.: A second order dynamical approach with variable damping to nonconvex smooth minimization. Appl. Anal. https://doi.org/10.1080/00036811.2018.1495330 (2018)
14. Boţ, R.I., Nguyen, D.-K.: The proximal alternating direction method of multipliers in the nonconvex setting: convergence analysis and rates. arXiv:1801.01994
15. Chambolle, A., Dossal, Ch.: On the convergence of the iterates of the fast iterative shrinkage/thresholding algorithm. J. Optim. Theory Appl. 166(3), 968-982 (2015)
16. Combettes, P.L., Glaudin, L.E.: Quasinonexpansive iterations on the affine hull of orbits: from Mann's mean value algorithm to inertial methods. SIAM J. Optim. 27(4), 2356-2380 (2017)
17. Frankel, P., Garrigos, G., Peypouquet, J.: Splitting methods with variable metric for Kurdyka-Łojasiewicz functions and general convergence rates. J. Optim. Theory Appl. 165(3), 874-900 (2015)
18. Ghadimi, E., Feyzmahdavian, H.R., Johansson, M.: Global convergence of the heavy-ball method for convex optimization. In: 2015 IEEE European Control Conference (ECC), pp. 310-315 (2015)
19. Kurdyka, K.: On gradients of functions definable in o-minimal structures. Ann. Inst. Fourier (Grenoble) 48(3), 769-783 (1998)
20. László, S.C.: Convergence rates for an inertial algorithm of gradient type associated to a smooth nonconvex minimization. arXiv:1811.09616
21. Li, G., Pong, T.K.: Calculus of the exponent of Kurdyka-Łojasiewicz inequality and its applications to linear convergence of first-order methods. Found. Comput. Math. 2018, 1-34 (2018)
22. Łojasiewicz, S.: Une propriété topologique des sous-ensembles analytiques réels. In: Les Équations aux Dérivées Partielles, pp. 87-89. Éditions du Centre National de la Recherche Scientifique, Paris (1963)
23. Nesterov, Y.E.: A method for solving the convex programming problem with convergence rate O(1/k^2) (Russian). Dokl. Akad. Nauk SSSR 269(3), 543-547 (1983)
24. Nesterov, Y.: Introductory Lectures on Convex Optimization: A Basic Course. Kluwer Academic Publishers, Dordrecht (2004)
25. Pohlheim, H.: Examples of objective functions. Documentation for Genetic and Evolutionary Algorithms for use with MATLAB: GEATbx version 3.7, http://www.geatbx.com
26. Polyak, B.T.: Some methods of speeding up the convergence of iteration methods. U.S.S.R. Comput. Math. Math. Phys. 4(5), 1-17 (1964)
27. Rockafellar, R.T., Wets, R.J.-B.: Variational Analysis. Fundamental Principles of Mathematical Sciences, vol. 317. Springer, Berlin (1998)
28. Su, W., Boyd, S., Candes, E.J.: A differential equation for modeling Nesterov's accelerated gradient method: theory and insights. J. Mach. Learn. Res. 17, 1-43 (2016)
29. Sun, T., Yin, P., Li, D., Huang, C., Guan, L., Jiang, H.: Non-ergodic convergence analysis of heavy-ball algorithms. arXiv:1811.01777