Abstract
In this paper we deal with a general second order continuous dynamical system associated to a convex minimization problem with a Fréchet differentiable objective function.
We show that inertial algorithms, such as Nesterov’s algorithm, can be obtained via the natural explicit discretization from our dynamical system.
Our dynamical system can be viewed as a perturbed version of the heavy ball method with vanishing damping; however, the perturbation is made in the argument of the gradient of the objective function.
This perturbation seems to have a smoothing effect on the energy error and eliminates the oscillations obtained for this error in the case of the heavy ball method with vanishing damping, as some numerical experiments show.
We prove that the value of the objective function along a generated trajectory converges with a rate of order $\mathcal{O}(1/t^2)$ to the global minimum of the objective function.
Moreover, we obtain that a trajectory generated by the dynamical system converges to a minimum point of the objective function.
Authors
Cristian Daniel Alecsa
Department of Mathematics, Babeş-Bolyai University, Cluj-Napoca, Romania
Tiberiu Popoviciu Institute of Numerical Analysis, Romanian Academy, Cluj-Napoca, Romania
Szilárd Csaba László
Technical University of Cluj-Napoca, Department of Mathematics, Cluj-Napoca, Romania
Titus Pinta
Mathematical Institute, University of Oxford, Oxford, England
Keywords
Unconstrained optimization problems; convex optimization; heavy ball method; continuous second order dynamical system; convergence rate; inertial algorithm; Lyapunov energy functions.
Paper coordinates
C.-D. Alecsa, S.C. Laszlo, T. Pinta, An extension of the second order dynamical system that models Nesterov’s convex gradient method, Appl. Math. Optim., 84 (2021), pp. 1687–1716. doi: 10.1007/s00245-020-09692-1
About this paper
Print ISSN
0095-4616
Online ISSN
1432-0606
Paper (preprint) in HTML form
An extension of the second order dynamical system that models Nesterov's convex gradient method
This work was supported by a grant of Ministry of Research and Innovation, CNCS - UEFISCDI, project number PN-III-P1-1.1-TE-2016-0266.
Abstract. In this paper we deal with a general second order continuous dynamical system associated to a convex minimization problem with a Fréchet differentiable objective function. We show that inertial algorithms, such as Nesterov's algorithm, can be obtained via the natural explicit discretization from our dynamical system. Our dynamical system can be viewed as a perturbed version of the heavy ball method with vanishing damping; however, the perturbation is made in the argument of the gradient of the objective function. This perturbation seems to have a smoothing effect on the energy error and eliminates the oscillations obtained for this error in the case of the heavy ball method with vanishing damping, as some numerical experiments show. We prove that the value of the objective function along a generated trajectory converges with a rate of order $\mathcal{O}(1/t^2)$ to the global minimum of the objective function. Moreover, we obtain that a trajectory generated by the dynamical system converges to a minimizer of the objective function.
Key Words. convex optimization, heavy ball method, continuous second order dynamical system, convergence rate, inertial algorithm
AMS subject classification. 34G20, 47J25, 90C25, 90C30, 65K10
1 Introduction
Since Su, Boyd and Candès in [33] showed that Nesterov's accelerated convex gradient method has as exact limit the second order differential equation that governs the heavy ball system with vanishing damping, that is,

(1) $\ddot{x}(t)+\frac{\alpha}{t}\dot{x}(t)+\nabla g(x(t))=0,$

with $\alpha=3$, the latter system has been intensively studied in the literature in connection with the minimization problem $\min_{x\in\mathbb{R}^m} g(x)$. Here $g:\mathbb{R}^m\longrightarrow\mathbb{R}$ is a convex Fréchet differentiable function with Lipschitz continuous gradient.
In [33] the authors proved that $g(x(t))-\min g=\mathcal{O}(1/t^2)$ for every $\alpha\ge 3$; however, they did not show the convergence of a generated trajectory to a minimizer of the objective function $g$.
In [10], Attouch, Chbani, Peypouquet and Redont considered the case $\alpha>3$ in (1), and showed that the generated trajectory converges to a minimizer of $g$ as $t\to+\infty$. Actually, in [10] the authors considered the perturbed version of the heavy ball system with vanishing damping, that is,

(2) $\ddot{x}(t)+\frac{\alpha}{t}\dot{x}(t)+\nabla g(x(t))=\epsilon(t),$

where $\epsilon(t)$ is a small perturbation term that satisfies $\int_{t_0}^{+\infty}t\|\epsilon(t)\|\,dt<+\infty$. Besides the convergence of a generated trajectory to a minimizer of $g$, they showed that also in this case the convergence rate of the objective function along the trajectory, that is $g(x(t))-\min g$, is of order $\mathcal{O}(1/t^2)$.
Another perturbed version of (1) was studied by Attouch, Peypouquet and Redont in [12]. They assumed that the objective $g$ is twice continuously differentiable and the perturbation of their system is made at the damping term. More precisely, they studied the dynamical system with Hessian driven damping

(3) $\ddot{x}(t)+\frac{\alpha}{t}\dot{x}(t)+\beta\nabla^2 g(x(t))\dot{x}(t)+\nabla g(x(t))=0,$

where $\alpha>0$ and $\beta>0$. In case $\alpha>3$ they showed the convergence of a generated trajectory to a minimizer of $g$. Moreover, they obtained that in this case the convergence rate of the objective function along the trajectory, that is $g(x(t))-\min g$, is of order $\mathcal{O}(1/t^2)$.
Further, Attouch, Chbani and Riahi in [8] studied the subcritical case $\alpha\le 3$, and they proved that in this case the convergence rate of the objective function along the trajectory generated by (1), i.e. $g(x(t))-\min g$, is of order $\mathcal{O}\left(t^{-\frac{2\alpha}{3}}\right)$.
Another approach is due to Aujol, Dossal and Rondepierre [2], who assumed that, besides convexity, the objective in (1) satisfies some geometrical conditions, such as the Łojasiewicz property. The importance of their results obtained in [2] is underlined by the fact that applying the classical Nesterov scheme on a convex objective function, without studying its geometrical properties, may lead to suboptimal algorithms.
It is worth mentioning the work of Aujol and Dossal [1], who did not assume the convexity of $g$, but the convexity of a power of $g-g(x^*)$, whose exponent is strongly related to the damping parameter $\alpha$, $x^*$ being a global minimizer of $g$. Under these assumptions, they obtained some general convergence rates and also the convergence of the generated trajectories of (1). In the convex case their results reduce to the results obtained in [10, 12, 8].
However, the convergence of the trajectories generated by the continuous heavy ball system with vanishing damping (1), in the general case when $g$ is nonconvex, is still an open question. Some important steps in this direction have been made in [20] (see also [18]), where convergence of the trajectories of a perturbed system has been obtained in a nonconvex setting. More precisely, in [20] the following system is considered:

(4) $\ddot{x}(t)+\left(\gamma+\frac{\alpha}{t}\right)\dot{x}(t)+\nabla g(x(t))=0,$

where $\gamma>0$ and $\alpha\in\mathbb{R}$. Note that here $\alpha$ can take nonpositive values. For $\alpha=0$ we recover the dynamical system studied in [16]. According to [20], the trajectory generated by the dynamical system (4) converges to a critical point of $g$ if a regularization of $g$ satisfies the Kurdyka-Łojasiewicz property.
Further results concerning the heavy ball method and its extensions can be found in [6, 7, 11, 14, 22, 23].
1.1 An extension of the heavy ball method and the Nesterov type algorithms obtained via explicit discretization
What one can notice concerning the heavy ball system and its variants is that, despite the result of Su et al. [33], this system will never yield, through the natural implicit/explicit discretization, the Nesterov algorithm. This is due to the fact that the gradient of $g$ is evaluated at $x(t)$, which via discretization becomes $\nabla g(x_n)$ (or $\nabla g(x_{n+1})$), and never has the form $\nabla g\left(x_n+\alpha_n(x_n-x_{n-1})\right)$ that Nesterov's gradient method requires. Another observation is that, using the same approach as in [33], one can show, see [26], that (1) (and also (4)) models, besides Nesterov's algorithm, other algorithms too. In this paper we overcome the deficiency emphasized above by introducing a dynamical system that via explicit discretization leads to inertial algorithms of gradient type. To this end, let us consider the optimization problem
(5) $\min_{x\in\mathbb{R}^m} g(x),$

where $g:\mathbb{R}^m\longrightarrow\mathbb{R}$ is a convex Fréchet differentiable function with $L_g$-Lipschitz continuous gradient, i.e. there exists $L_g\ge 0$ such that $\|\nabla g(x)-\nabla g(y)\|\le L_g\|x-y\|$ for all $x,y\in\mathbb{R}^m$. In connection with the optimization problem (5) we introduce the second order dynamical system

(6) $\begin{cases}\ddot{x}(t)+\dfrac{\alpha}{t}\dot{x}(t)+\nabla g\left(x(t)+\left(\gamma+\dfrac{\beta}{t}\right)\dot{x}(t)\right)=0\\[2mm] x(t_0)=u_0,\ \dot{x}(t_0)=v_0,\end{cases}$

where $t_0>0$, $u_0,v_0\in\mathbb{R}^m$, $\alpha>0$, $\beta\in\mathbb{R}$ and $\gamma\ge 0$.
Remark 1
The connection of (6) with the heavy ball system with vanishing damping (1) is obvious: the latter can be obtained from (6) for $\gamma=\beta=0$. The study of the dynamical system (6) in connection with the optimization problem (5) is motivated by the following facts:
1. The dynamical system (6) leads via explicit discretization to inertial algorithms; in particular, Nesterov's algorithm can be obtained via this natural discretization.
2. A generated trajectory and the objective function value along this trajectory in general have a better convergence behaviour than a trajectory generated by the heavy ball system with vanishing damping (1), as some numerical examples show.
3. The same numerical experiments reveal that the perturbation term in the argument of the gradient of the objective function has a smoothing effect and annihilates the oscillations obtained, in case of the dynamical system (1), for the errors $g(x(t))-\min g$ and $\|x(t)-x^*\|$, where $x^*$ is a minimizer of $g$.
4. A trajectory generated by the dynamical system (6) ensures the convergence rate of order $\mathcal{O}(1/t^2)$ for the decay $g\left(x(t)+\left(\gamma+\frac{\beta}{t}\right)\dot{x}(t)\right)-\min g$, provided it holds that $\gamma>0$, $\beta\in\mathbb{R}$ or $\gamma=0$, $\beta\ge 0$.
5. A trajectory generated by the dynamical system (6) ensures the same convergence rate of order $\mathcal{O}(1/t^2)$ for the decay $g(x(t))-\min g$ as the heavy ball method with vanishing damping, for the cases $\gamma>0$, $\beta\in\mathbb{R}$ and $\gamma=0$, $\beta\ge 0$.
6. The convergence of a generated trajectory to a minimizer of the objective function can be obtained in case $\gamma>0$ and $\beta\in\mathbb{R}$, and also in the case $\gamma=0$ and $\beta>0$.
Remark 2
Nevertheless, in case $\gamma=0$ and $\beta<0$ the dynamical system (6) can generate periodic solutions, hence the convergence of a generated trajectory to a minimizer of the objective is hopeless. To illustrate this fact, for $\beta=-\alpha$ consider the strongly convex objective function $g:\mathbb{R}\longrightarrow\mathbb{R}$, $g(x)=\frac{x^2}{2}$. Then, taking into account that $\nabla g(x)=x$, the dynamical system (6) becomes

(7) $\ddot{x}(t)+\frac{\alpha}{t}\dot{x}(t)+x(t)-\frac{\alpha}{t}\dot{x}(t)=0,\ x(t_0)=u_0,\ \dot{x}(t_0)=v_0,$ that is, $\ddot{x}(t)+x(t)=0.$

Now, the periodic function $x(t)=\sin t$ is a solution of (7) (for suitable initial data), consequently the limit $\lim_{t\to+\infty}x(t)$ does not exist.
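This can also be checked numerically. Below is a minimal Python sketch (under the assumptions of this remark: $g(x)=\frac{x^2}{2}$, $\gamma=0$, $\beta=-\alpha$, so the damping term and the perturbation cancel; the solver and all numerical values are illustrative):

```python
import numpy as np
from scipy.integrate import solve_ivp

alpha = 3.0  # with gamma = 0 and beta = -alpha the damping cancels

# For g(x) = x^2/2 one has grad g(x) = x, hence the right hand side of (6)
# reduces to x'' = -x, i.e. the undamped oscillator (7).
def rhs(t, X):
    x, v = X
    return [v, -alpha / t * v - (x + (-alpha / t) * v)]

t0 = 1.0
sol = solve_ivp(rhs, (t0, 50.0), [np.sin(t0), np.cos(t0)], max_step=0.01)
# The trajectory stays on x(t) = sin(t): no convergence to the minimizer 0.
print(np.max(np.abs(sol.y[0] - np.sin(sol.t))))  # ~0 up to solver tolerance
```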
We emphasize that, despite the novelty of the dynamical system (6), its formulation is natural, since its explicit discretization leads to inertial gradient methods; in particular, the famous Polyak and Nesterov numerical schemes can be obtained from (6). For other inertial algorithms of gradient type we refer to [17, 19, 26].
Indeed, the explicit discretization of (6), with the constant stepsize $h$, $t_n=nh$ and $x_n=x(t_n)$, leads to

$\frac{x_{n+1}-2x_n+x_{n-1}}{h^2}+\frac{\alpha}{nh}\cdot\frac{x_n-x_{n-1}}{h}+\nabla g\left(x_n+\left(\gamma+\frac{\beta}{nh}\right)\frac{x_n-x_{n-1}}{h}\right)=0.$

Equivalently, the latter equation can be written as

(8) $x_{n+1}=x_n+\left(1-\frac{\alpha}{n}\right)(x_n-x_{n-1})-h^2\nabla g\left(x_n+\left(\frac{\gamma}{h}+\frac{\beta}{nh}\right)(x_n-x_{n-1})\right).$

Now, setting $y_n=x_n+\left(\frac{\gamma}{h}+\frac{\beta}{nh}\right)(x_n-x_{n-1})$ and denoting the constants $\frac{\gamma}{h}$ and $\frac{\beta}{h}$ still with $\gamma$ and $\beta$, we get the following general inertial algorithm:

Let $x_0,x_1\in\mathbb{R}^m$ and for all $n\ge 1$ consider the sequences

(9) $\begin{cases}y_n=x_n+\left(\gamma+\dfrac{\beta}{n}\right)(x_n-x_{n-1})\\[2mm] x_{n+1}=x_n+\left(1-\dfrac{\alpha}{n}\right)(x_n-x_{n-1})-h^2\nabla g(y_n).\end{cases}$
However, from a practical point of view it is more convenient to work with the following equivalent formulation: let $x_0,x_1\in\mathbb{R}^m$ and for all $n\ge 1$ consider the sequences

(10) $\begin{cases}y_n=x_n+\left(\gamma+\dfrac{\beta}{n}\right)(x_n-x_{n-1})\\[2mm] x_{n+1}=y_n+\left(\bar{\gamma}+\dfrac{\bar{\beta}}{n}\right)(x_n-x_{n-1})-h^2\nabla g(y_n),\end{cases}$

where $\bar{\gamma}=1-\gamma$ and $\bar{\beta}=-(\alpha+\beta)$.
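A minimal Python sketch of the scheme (9) may be helpful at this point (the update rule is the one reconstructed above from the explicit discretization; the quadratic test objective and all parameter values are illustrative assumptions, not the paper's experiments):

```python
import numpy as np

def inertial_gradient(grad_g, x0, x1, alpha, beta, gamma, h, n_iter):
    """Scheme (9): y_n = x_n + (gamma + beta/n) (x_n - x_{n-1}),
    x_{n+1} = x_n + (1 - alpha/n) (x_n - x_{n-1}) - h^2 grad_g(y_n)."""
    x_prev, x = np.asarray(x0, float), np.asarray(x1, float)
    for n in range(1, n_iter + 1):
        d = x - x_prev
        y = x + (gamma + beta / n) * d
        x_prev, x = x, x + (1.0 - alpha / n) * d - h**2 * grad_g(y)
    return x

# Toy run: minimize g(x) = ||x||^2 / 2, so grad g(x) = x and argmin g = {0}.
x_last = inertial_gradient(lambda x: x, [5.0, -3.0], [5.0, -3.0],
                           alpha=3.1, beta=1.0, gamma=0.0, h=0.1, n_iter=2000)
print(x_last)  # approaches the minimizer (0, 0)
```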
Remark 3
Observe that for particular choices of the parameters the algorithm (10) contains Nesterov type schemes: for instance, for $\gamma=1$ and $\beta=-\alpha$ one has $\bar{\gamma}=\bar{\beta}=0$, and (10) reduces to $y_n=x_n+\frac{n-\alpha}{n}(x_n-x_{n-1})$, $x_{n+1}=y_n-h^2\nabla g(y_n)$.
Remark 4
Independently of us, very recently, a system similar to (6) was studied by Muehlebach and Jordan in [27], and they showed that Nesterov's accelerated gradient method can be obtained from their system via a semi-implicit Euler discretization scheme. They considered a constant damping instead of $\frac{\alpha}{t}$, and the perturbation in the argument of the gradient in their system does not contain the term $\frac{\beta}{t}$. Further, they also treated an ODE which is equivalent to the particular case of the governing ODE from (6) obtained for $\beta=0$. However, the freedom of controlling the parameters $\alpha$, $\beta$ and $\gamma$ in (6) is essential, as the next numerical experiments show.
1.2 Some numerical experiments
In this section we consider two numerical experiments for the trajectories generated by the dynamical system (6) for a strongly convex and for a convex but not strongly convex objective function.
Everywhere in the following numerical experiments we consider the continuous time dynamical system (6), solved numerically with a Runge-Kutta 4-5 (ode45) adaptive method in MATLAB. We solved the dynamical system with ode45 on a finite time interval, and the plots in Fig. 1 - Fig. 2 show the energy error $g(x(t))-\min g$ on the left and the iterate error $\|x(t)-x^*\|$ on the right.
We show the evolution of the two errors with respect to different values of $\alpha$, $\beta$ and $\gamma$, including the case $\beta=\gamma=0$ that yields the heavy ball with friction. One can observe that the best choice is not $\beta=\gamma=0$, which is the case of the heavy ball system with vanishing damping (1).
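Since the concrete test functions are not reproduced here, the following Python sketch is only an analogue of this setup (scipy's RK45 in place of MATLAB's ode45, and an assumed strongly convex quadratic as a stand-in objective):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Stand-in objective g(x1, x2) = (x1^2 + 5 x2^2) / 2, with minimizer x* = (0, 0).
def grad_g(x):
    return np.array([x[0], 5.0 * x[1]])

def system(t, X, alpha, beta, gamma):
    """First order reformulation of (6) for X = (x, x')."""
    x, v = X[:2], X[2:]
    a = -alpha / t * v - grad_g(x + (gamma + beta / t) * v)
    return np.concatenate([v, a])

t0, T = 1.0, 30.0
u0, v0 = np.array([2.0, -1.0]), np.zeros(2)
for (alpha, beta, gamma) in [(3.0, 0.0, 0.0), (3.1, 1.0, 0.0), (3.1, 1.0, 1.0)]:
    sol = solve_ivp(system, (t0, T), np.concatenate([u0, v0]),
                    args=(alpha, beta, gamma), method='RK45', max_step=0.01)
    x_T = sol.y[:2, -1]
    energy_error = 0.5 * (x_T[0]**2 + 5.0 * x_T[1]**2)  # g(x(T)) - min g
    print(alpha, beta, gamma, energy_error)
```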
1. In the first experiment we consider a strongly convex objective function $g:\mathbb{R}^2\longrightarrow\mathbb{R}$ with Lipschitz continuous gradient, whose unique minimizer is $x^*$, with minimal value $g(x^*)=\min g$.
We compare the convergence behaviour of the generated trajectories of (6) by taking into account the following instances.
| $\alpha$ | $\beta$ | $\gamma$ | Color |
| --- | --- | --- | --- |
| 3 | 0 | 0 | blue |
| 3.1 | 1 | 0 | red |
| 3.1 | 0 | 0.5 | yellow |
| 3.1 | 1 | 1 | purple |
| 3.1 | -1 | 1 | green |
The results are depicted in Figure 1 for two choices of the starting points $u_0$ and $v_0$, respectively.

Figure 1. Energy error $g(x(t))-\min g$ (left) and iterate error $\|x(t)-x^*\|$ (right) for the parameter instances in the table above.
2. In the next experiment we consider a convex, but not strongly convex, objective function $g:\mathbb{R}^2\longrightarrow\mathbb{R}$ with Lipschitz continuous gradient, whose unique minimizer is $x^*$, with minimal value $g(x^*)=\min g$.
We compare the convergence behaviour of the generated trajectories of (6) by taking into account the following instances.
| $\alpha$ | $\beta$ | $\gamma$ | Color |
| --- | --- | --- | --- |
| 3.1 | 0 | 0 | blue |
| 3.1 | 2 | 0 | red |
| 3.1 | 0 | 1 | yellow |
| 3.1 | 0.5 | 0.5 | purple |
| 3.1 | -0.5 | 1 | green |
The results are depicted in Figure 2 for two choices of the starting points $u_0$ and $v_0$, respectively.

Figure 2. Energy error $g(x(t))-\min g$ (left) and iterate error $\|x(t)-x^*\|$ (right) for the parameter instances in the table above.
Remark 5
Observe that in all cases the trajectories generated by the dynamical system (6) have a better behaviour than the trajectories generated by the heavy ball system with vanishing damping (1). As we have emphasized before, it seems that the perturbation in the argument of the gradient of the objective function has a smoothing effect. The choice of the parameters $\beta$ and $\gamma$ will be validated by the theoretical results from Section 3, where we show that in case $\gamma>0$, $\beta\in\mathbb{R}$, and also in the case $\gamma=0$, $\beta\ge 0$, the energy error $g(x(t))-\min g$ is of order $\mathcal{O}(1/t^2)$, just as in the case of the heavy ball method. Further, for the values $\gamma>0$, $\beta\in\mathbb{R}$ and for $\gamma=0$, $\beta>0$ we are able to show that a generated trajectory converges to a minimizer of the objective function $g$.
1.3 The organization of the paper
The outline of the paper is the following. In the next section we show the existence and uniqueness of the trajectories generated by the system (6). Further, we show that the third order derivative $\dddot{x}$ exists almost everywhere on $[t_0,+\infty)$, and we give some estimates of the third order derivative in terms of the velocity and the acceleration. We do not assume the convexity of the objective function in these results; however, it seems that the assumption that $g$ has Lipschitz continuous gradient is essential in obtaining existence and uniqueness of the generated trajectories of the dynamical system (6). In Section 3 we deal with the convergence analysis of the generated trajectories. We introduce a general energy functional, which will play the role of a Lyapunov function associated with the dynamical system (6). We show a rate of order $\mathcal{O}(1/t^2)$ for the decay $g\left(x(t)+\left(\gamma+\frac{\beta}{t}\right)\dot{x}(t)\right)-\min g$, and further that the error $g(x(t))-\min g$ also has a rate of order $\mathcal{O}(1/t^2)$. Finally, we show the convergence of the generated trajectories to a minimizer of the objective function $g$. In Section 4 we conclude our paper and we present some possible related future research.
2 Existence and uniqueness
2.1 On strong global solutions
The first step toward our existence and uniqueness result obtained in the present section concerns the definition of a strong global solution of the dynamical system (6).
Definition 1
We call the function $x:[t_0,+\infty)\longrightarrow\mathbb{R}^m$ a strong global solution of the dynamical system (6) if it satisfies the following properties:
(i) $x$ and $\dot{x}$ are locally absolutely continuous on the interval $[t_0,+\infty)$;
(ii) $\ddot{x}(t)+\frac{\alpha}{t}\dot{x}(t)+\nabla g\left(x(t)+\left(\gamma+\frac{\beta}{t}\right)\dot{x}(t)\right)=0$ for almost every $t\ge t_0$;
(iii) $x(t_0)=u_0$ and $\dot{x}(t_0)=v_0$.
For brevity reasons, we recall that a mapping $y:[t_0,+\infty)\longrightarrow\mathbb{R}^m$ is called locally absolutely continuous if it is absolutely continuous on every compact interval $[t_0,T]$, where $T>t_0$. Further, we have the following equivalent characterizations for an absolutely continuous function $y:[t_0,T]\longrightarrow\mathbb{R}^m$ (see, for instance, [13, 5]):
(a) there exists an integrable function $\varphi:[t_0,T]\longrightarrow\mathbb{R}^m$ such that $y(t)=y(t_0)+\int_{t_0}^{t}\varphi(s)\,ds$ for all $t\in[t_0,T]$;
(b) $y$ is a continuous function and its distributional derivative is Lebesgue integrable on the interval $[t_0,T]$;
(c) for every $\varepsilon>0$ there exists $\eta>0$ such that for every finite family of pairwise disjoint intervals $(a_k,b_k)\subseteq[t_0,T]$, the following implication is valid: $\sum_k|b_k-a_k|<\eta\ \Longrightarrow\ \sum_k\|y(b_k)-y(a_k)\|<\varepsilon$.
Remark 6
Let $y:[t_0,+\infty)\longrightarrow\mathbb{R}^m$ be a locally absolutely continuous function. Then $y$ is differentiable almost everywhere and its derivative coincides with its distributional derivative almost everywhere. On the other hand, we have the equality $\dot{y}(t)=\varphi(t)$ for almost every $t$, where $\varphi$ is the integrable function defined at the integration formula (a).
The first result of the present section concerns the existence and uniqueness of the trajectories generated by the dynamical system (6). We prove existence and uniqueness of a strong global solution of (6) by making use of the Cauchy-Lipschitz-Picard Theorem for absolutely continuous trajectories (see for example [25, Proposition 6.2.1], [32, Theorem 54]). The key argument is that one can rewrite (6) as a particular first order dynamical system in a suitably chosen product space (see also [6, 9, 20, 21]).
Theorem 7
Let $u_0,v_0\in\mathbb{R}^m$. Then, the dynamical system (6) admits a unique strong global solution.
Proof.
By making use of the notation $X(t)=(x(t),\dot{x}(t))$, the system (6) can be rewritten as a first order dynamical system:

(11) $\begin{cases}\dot{X}(t)=F(t,X(t))\\ X(t_0)=(u_0,v_0),\end{cases}$

where $F:[t_0,+\infty)\times\mathbb{R}^m\times\mathbb{R}^m\longrightarrow\mathbb{R}^m\times\mathbb{R}^m$, $F(t,(u,v))=\left(v,\ -\dfrac{\alpha}{t}v-\nabla g\left(u+\left(\gamma+\dfrac{\beta}{t}\right)v\right)\right).$
The existence and uniqueness of a strong global solution of (6) follows according to the Cauchy-Lipschitz-Picard Theorem applied to the first order dynamical system (11). In order to prove the existence and uniqueness of the trajectories generated by (11) we show the following:
(I) For every $t\ge t_0$, the mapping $F(t,\cdot)$ is $L(t)$-Lipschitz continuous and $L(\cdot)$ is locally integrable on $[t_0,+\infty)$.
(II) For all $(u,v)\in\mathbb{R}^m\times\mathbb{R}^m$, one has $F(\cdot,(u,v))\in L^1_{\mathrm{loc}}([t_0,+\infty),\mathbb{R}^m\times\mathbb{R}^m)$.
Let us prove (I). Let $t\ge t_0$ be fixed and consider the pairs $(u_1,v_1)$ and $(u_2,v_2)$ from $\mathbb{R}^m\times\mathbb{R}^m$. Using the Lipschitz continuity of $\nabla g$ and the obvious inequality $\|v_1-v_2\|\le\|u_1-u_2\|+\|v_1-v_2\|$, we make the following estimates:

$\|F(t,(u_1,v_1))-F(t,(u_2,v_2))\|\le\|v_1-v_2\|+\frac{\alpha}{t}\|v_1-v_2\|+L_g\left(\|u_1-u_2\|+\left|\gamma+\frac{\beta}{t}\right|\|v_1-v_2\|\right).$

By employing the notation $L(t)=1+\dfrac{\alpha}{t}+L_g\left(1+\left|\gamma+\dfrac{\beta}{t}\right|\right)$, we have that

$\|F(t,(u_1,v_1))-F(t,(u_2,v_2))\|\le L(t)\left(\|u_1-u_2\|+\|v_1-v_2\|\right).$

Obviously, the function $t\mapsto L(t)$ is continuous on $[t_0,+\infty)$, hence it is integrable on $[t_0,T]$ for all $T>t_0$.
For proving (II), consider a fixed pair of elements $(u,v)\in\mathbb{R}^m\times\mathbb{R}^m$ and let $T>t_0$. We consider the following estimate:

$\int_{t_0}^{T}\|F(t,(u,v))\|\,dt\le\int_{t_0}^{T}\left(\|v\|+\frac{\alpha}{t}\|v\|+\left\|\nabla g\left(u+\left(\gamma+\frac{\beta}{t}\right)v\right)\right\|\right)dt,$

and the conclusion follows by the continuity of the integrand on $[t_0,T]$. $\blacksquare$
Remark 8
Note that we did not use the convexity assumption imposed on $g$ in the proof of Theorem 7. However, we emphasize that, according to the proof of Theorem 7, the assumption that $g$ has a Lipschitz continuous gradient is essential in order to obtain existence and uniqueness of the trajectories generated by the dynamical system (6).
2.2 On the third order derivatives
In this section we show that the third order derivative of a strong global solution of the dynamical system (6) exists almost everywhere on $[t_0,+\infty)$. Further, we give an upper bound estimate for the third order derivative in terms of the velocity and the acceleration of a strong global solution of (6). For simplicity, in the proofs of the following results we employ the 1-norm on $\mathbb{R}^m\times\mathbb{R}^m$, defined as $\|(u,v)\|_1=\|u\|+\|v\|$ for all $(u,v)\in\mathbb{R}^m\times\mathbb{R}^m$. Obviously, one has $\|(u,v)\|\le\|(u,v)\|_1\le\sqrt{2}\,\|(u,v)\|$.
Proposition 9
For the starting points $u_0,v_0\in\mathbb{R}^m$, let $x$ be the unique strong global solution of the dynamical system (6). Then $\ddot{x}$ is locally absolutely continuous on $[t_0,+\infty)$; consequently, the third order derivative $\dddot{x}$ exists almost everywhere on $[t_0,+\infty)$.
Proof.
We show that $\ddot{x}$ is locally absolutely continuous, hence by Remark 6 the derivative $\dddot{x}$ exists almost everywhere on $[t_0,+\infty)$.
Let $T>t_0$ and $s,t\in[t_0,T]$. From (6) we consider the following chain of inequalities:

$\|\ddot{x}(s)-\ddot{x}(t)\|\le\left\|\frac{\alpha}{s}\dot{x}(s)-\frac{\alpha}{t}\dot{x}(t)\right\|+\left\|\nabla g\left(x(s)+\left(\gamma+\frac{\beta}{s}\right)\dot{x}(s)\right)-\nabla g\left(x(t)+\left(\gamma+\frac{\beta}{t}\right)\dot{x}(t)\right)\right\|,$

and by using the $L_g$-Lipschitz continuity of $\nabla g$ we obtain

$\|\ddot{x}(s)-\ddot{x}(t)\|\le\left\|\frac{\alpha}{s}\dot{x}(s)-\frac{\alpha}{t}\dot{x}(t)\right\|+L_g\|x(s)-x(t)\|+L_g\left\|\left(\gamma+\frac{\beta}{s}\right)\dot{x}(s)-\left(\gamma+\frac{\beta}{t}\right)\dot{x}(t)\right\|.$

By the fact that $x$ is the strong global solution of the dynamical system (6), it follows that $x$ and $\dot{x}$ are absolutely continuous on the interval $[t_0,T]$. Moreover, the functions $t\mapsto\frac{\alpha}{t}$ and $t\mapsto\gamma+\frac{\beta}{t}$ are Lipschitz continuous on $[t_0,T]$, hence the functions $t\mapsto\frac{\alpha}{t}\dot{x}(t)$ and $t\mapsto\left(\gamma+\frac{\beta}{t}\right)\dot{x}(t)$ are also absolutely continuous on $[t_0,T]$. Let $\varepsilon>0$. Then, there exists $\eta>0$ such that for every finite family of pairwise disjoint intervals $(a_k,b_k)\subseteq[t_0,T]$ satisfying $\sum_k|b_k-a_k|<\eta$, each of the three terms above, summed along the family, is less than $\frac{\varepsilon}{3}$.
Summing all up, we obtain $\sum_k\|\ddot{x}(b_k)-\ddot{x}(a_k)\|<\varepsilon$; consequently, $\ddot{x}$ is absolutely continuous on $[t_0,T]$ and the conclusion follows. $\blacksquare$
Concerning an upper bound estimate of the third order derivative the following result holds.
Lemma 10
For the initial values $u_0,v_0\in\mathbb{R}^m$, consider the unique strong global solution $x$ of the second-order dynamical system (6). Then there exists $K>0$ such that for almost every $t\in[t_0,+\infty)$ we have:

(12) $\|\dddot{x}(t)\|\le K\left(\|\dot{x}(t)\|+\|\ddot{x}(t)\|\right).$
Proof.
For $t\in[t_0,+\infty)$ and $s>0$, from (6) and the $L_g$-Lipschitz continuity of $\nabla g$ we consider the following inequalities:

$\|\ddot{x}(t+s)-\ddot{x}(t)\|\le\frac{\alpha s}{t(t+s)}\|\dot{x}(t+s)\|+\frac{\alpha}{t}\|\dot{x}(t+s)-\dot{x}(t)\|+L_g\|x(t+s)-x(t)\|+L_g\frac{|\beta|s}{t(t+s)}\|\dot{x}(t+s)\|+L_g\left|\gamma+\frac{\beta}{t}\right|\|\dot{x}(t+s)-\dot{x}(t)\|.$

Now, dividing by $s$ and taking the limit $s\to 0$, it follows that for almost every $t\ge t_0$

$\|\dddot{x}(t)\|\le\frac{\alpha}{t^2}\|\dot{x}(t)\|+\frac{\alpha}{t}\|\ddot{x}(t)\|+L_g\|\dot{x}(t)\|+L_g\frac{|\beta|}{t^2}\|\dot{x}(t)\|+L_g\left|\gamma+\frac{\beta}{t}\right|\|\ddot{x}(t)\|.$

Consequently,

$\|\dddot{x}(t)\|\le\left(\frac{\alpha}{t_0^2}+L_g+\frac{L_g|\beta|}{t_0^2}\right)\|\dot{x}(t)\|+\left(\frac{\alpha}{t_0}+L_g\left(\gamma+\frac{|\beta|}{t_0}\right)\right)\|\ddot{x}(t)\|.$

Finally, (12) follows, where $K=\max\left(\frac{\alpha}{t_0^2}+L_g+\frac{L_g|\beta|}{t_0^2},\ \frac{\alpha}{t_0}+L_g\left(\gamma+\frac{|\beta|}{t_0}\right)\right)$. $\blacksquare$
3 Convergence analysis
3.1 On a general energy functional associated to the dynamical system (6)
In order to obtain convergence rates for the function values along the trajectories generated by the dynamical system (6), we need to introduce an appropriate energy functional, which will play the role of a Lyapunov function. The form of such an energy functional associated with the heavy ball system with vanishing damping and its extensions is well known in the literature, see for instance [10], [12], [1] and [2]. However, the novelty of the dynamical system (6), compared with the extended/perturbed variants of the heavy ball system studied in the above mentioned papers, consists in the fact that in system (6) the perturbation is carried out in the argument of the gradient of the objective function. This seems to be a new approach in the literature, therefore the previously mentioned energy functionals are not suitable for a valuable convergence analysis of the dynamical system (6). Hence, let us denote by $\min g$ the minimal value of $g$ and by $\operatorname*{argmin} g$ its set of minimizers, and assume that $\operatorname*{argmin} g\neq\emptyset$. Further, let $x^*\in\operatorname*{argmin} g$. In connection with the dynamical system (6), we introduce the general energy functional

(13) $\mathcal{E}(t)=a(t)\left(g\left(x(t)+\left(\gamma+\frac{\beta}{t}\right)\dot{x}(t)\right)-\min g\right)+\frac{1}{2}\left\|b(t)(x(t)-x^*)+c(t)\dot{x}(t)\right\|^2+\frac{d(t)}{2}\|x(t)-x^*\|^2,$

built upon nonnegative functions $a,b,c,d:[t_0,+\infty)\longrightarrow[0,+\infty)$, which can be seen as an extension of the energy function studied in [10] in connection with the heavy ball system with vanishing damping.
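For orientation, we recall the shape of the Lyapunov function used in [10] for the unperturbed heavy ball system (1); the functional (13) extends it. The computation below is standard and is included here only as a reference point (the notation $\mathcal{E}_{HB}$ is introduced here for comparison purposes):

```latex
\mathcal{E}_{HB}(t) \;=\; t^{2}\bigl(g(x(t))-\min g\bigr)
  \;+\; \tfrac{1}{2}\,\bigl\| (\alpha-1)\bigl(x(t)-x^{*}\bigr) + t\,\dot{x}(t) \bigr\|^{2},
\qquad
\dot{\mathcal{E}}_{HB}(t) \;\le\; (3-\alpha)\,t\,\bigl(g(x(t))-\min g\bigr) \;\le\; 0,
```

so that $\mathcal{E}_{HB}$ is nonincreasing along the trajectories of (1) whenever $\alpha\ge 3$, and consequently $g(x(t))-\min g\le\mathcal{E}_{HB}(t_0)/t^2$.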
Our purpose is to define the nonnegative functions $a$, $b$, $c$ and $d$ such that $\dot{\mathcal{E}}(t)\le 0$, that is, the function $\mathcal{E}$ is non-increasing after some $t_1\ge t_0$. Indeed, if $\mathcal{E}$ is non-increasing for $t\ge t_1$, then

$a(t)\left(g\left(x(t)+\left(\gamma+\frac{\beta}{t}\right)\dot{x}(t)\right)-\min g\right)\le\mathcal{E}(t)\le\mathcal{E}(t_1),$

in other words

$g\left(x(t)+\left(\gamma+\frac{\beta}{t}\right)\dot{x}(t)\right)-\min g\le\frac{\mathcal{E}(t_1)}{a(t)}.$
In what follows we derive the conditions which must be imposed on the functions $a$, $b$, $c$ and $d$ in order to obtain $\dot{\mathcal{E}}(t)\le 0$ for every $t\ge t_1$. Differentiating $\mathcal{E}$ along the trajectory and replacing $\ddot{x}(t)$ by means of (6), that is,

$\ddot{x}(t)=-\frac{\alpha}{t}\dot{x}(t)-\nabla g\left(x(t)+\left(\gamma+\frac{\beta}{t}\right)\dot{x}(t)\right),$

and using the convexity of $g$, that is, the gradient inequality

$g(x^*)\ge g(y)+\langle\nabla g(y),x^*-y\rangle\ \mbox{ for all }y\in\mathbb{R}^m,$

we obtain that, in order to have $\dot{\mathcal{E}}(t)\le 0$ for all $t\ge t_1$, one must assume that the functions $a$, $b$, $c$ and $d$ satisfy, for all $t\ge t_1$, a system of six inequalities, in what follows denoted by (16)-(21).
Remark 11
Observe that (17) implies that $\gamma+\frac{\beta}{t}\ge 0$ for all $t\ge t_1$, and this shows that in the dynamical system (6) one must have $\beta\ge 0$ whenever $\gamma=0$. Further, (19) is satisfied whenever $b$ and $c$ are constant functions. Moreover, for $t$ large enough the remaining conditions are verified by the particular choices of the functions $a$, $b$, $c$ and $d$ made in the proofs below; in particular, from (18) one obtains the admissible growth of $a$.
3.2 Error estimates for the objective function values
In this section we obtain a convergence rate of order $\mathcal{O}(1/t^2)$ for the difference $g\left(x(t)+\left(\gamma+\frac{\beta}{t}\right)\dot{x}(t)\right)-\min g$, where $x$ is the unique strong global solution of (6). From here we are able to show that $g(x(t))-\min g$ also has a convergence rate of order $\mathcal{O}(1/t^2)$. However, just as in the case of the heavy ball system with vanishing damping, in order to obtain these rates it is necessary to assume $\alpha>3$ in our system (6). We have the following result.
Theorem 12
Let $x$ be the unique strong global solution of (6) and assume that $\alpha>3$. Assume further that $\gamma>0$ and $\beta\in\mathbb{R}$.
Then, there exist $t_1\ge t_0$ and $K>0$ such that

(22) $g\left(x(t)+\left(\gamma+\frac{\beta}{t}\right)\dot{x}(t)\right)-\min g\le\frac{K}{t^2}\ \mbox{ for all }t\ge t_1.$

Further,

(23) $\int_{t_1}^{+\infty}t\left(g\left(x(t)+\left(\gamma+\frac{\beta}{t}\right)\dot{x}(t)\right)-\min g\right)dt<+\infty$

and

(24) $\int_{t_1}^{+\infty}t\,\|\dot{x}(t)\|^2\,dt<+\infty.$
Proof.
Consider the energy functional (13), with the functions $a$, $b$, $c$ and $d$ chosen such that $a(t)$ behaves like $t^2$ and $b$ and $c$ are constant.
According to Remark 11 we have that (19) holds, and the conditions (18)-(21) are satisfied with equality for every $t\ge t_0$ by an appropriate choice of the function $d$.
Obviously, if $t$ is big enough one has $\frac{\beta}{t}>-\gamma$, and since $\gamma>0$ it holds that $\gamma+\frac{\beta}{t}>0$ for $t$ big enough, even if $\beta<0$. Hence, there exists $t'\ge t_0$ such that (17) is satisfied for every $t\ge t'$.
Now, $\alpha>3$, and by taking into account the growth of $a$, we obtain that there exists $t''\ge t_0$ such that (16) holds for every $t\ge t''$.
Let $t_1=\max(t',t'')$. We conclude that $\dot{\mathcal{E}}(t)\le 0$ for all $t\ge t_1$, hence $\mathcal{E}$ is nonincreasing, i.e.

$\mathcal{E}(t)\le\mathcal{E}(t_1)\ \mbox{ for all }t\ge t_1.$

But obviously there exists $K>0$ such that $a(t)\ge\frac{t^2}{K}$ for all $t\ge t_1$, and by denoting $y(t)=x(t)+\left(\gamma+\frac{\beta}{t}\right)\dot{x}(t)$ we obtain

$g(y(t))-\min g\le\frac{\mathcal{E}(t_1)}{a(t)}\le\frac{K\,\mathcal{E}(t_1)}{t^2},$

which proves (22).
Next we prove (23). Since $\alpha>3$, there exist $t_2\ge t_1$ and $C>0$ such that

$C\,t\left(g(y(t))-\min g\right)\le-\dot{\mathcal{E}}(t)\ \mbox{ for all }t\ge t_2.$

Hence,

$C\int_{t_2}^{T}t\left(g(y(t))-\min g\right)dt\le\mathcal{E}(t_2)-\mathcal{E}(T)\le\mathcal{E}(t_2),$

and by integrating from $t_2$ to $T$ we get the above bound for every $T>t_2$. Now, by letting $T\to+\infty$ and taking into account that $\mathcal{E}$ is nonincreasing and nonnegative, the conclusion follows.
For proving (24), observe that there exists $C'>0$ such that $C'\,t\,\|\dot{x}(t)\|^2\le-\dot{\mathcal{E}}(t)$ for all $t\ge t_2$. Consequently,

$C'\int_{t_2}^{T}t\,\|\dot{x}(t)\|^2\,dt\le\mathcal{E}(t_2)-\mathcal{E}(T)\le\mathcal{E}(t_2).$

Integrating from $t_2$ to $T$ and letting $T\to+\infty$, we obtain the desired conclusion. $\blacksquare$
Remark 13
Remark 14
Concerning the case $\gamma=0$, note that (17) is satisfied if one assumes that $\beta\ge 0$, but not for $\beta<0$. Moreover, in this case (16) is also satisfied, since one has $\gamma+\frac{\beta}{t}=\frac{\beta}{t}\ge 0$. Hence, also in the case $\gamma=0$, $\beta\ge 0$, (22) in the conclusion of Theorem 12 holds.
Moreover, if we assume $\beta>0$, then (24) remains valid in this case too.
Next we show that the error $g(x(t))-\min g$ is also of order $\mathcal{O}(1/t^2)$. For obtaining this result we need the Descent Lemma, see [28].

Lemma 15
Let $g:\mathbb{R}^m\longrightarrow\mathbb{R}$ be a Fréchet differentiable function with $L_g$-Lipschitz continuous gradient. Then one has

$g(y)\le g(x)+\langle\nabla g(x),y-x\rangle+\frac{L_g}{2}\|y-x\|^2\ \mbox{ for all }x,y\in\mathbb{R}^m.$
Theorem 16
Let $x$ be the unique strong global solution of (6) and assume that $\alpha>3$. If $\gamma>0$ and $\beta\in\mathbb{R}$, then the trajectory $x$ is bounded and there exist $t_2\ge t_0$ and $K_1>0$ such that

(25) $\|\dot{x}(t)\|\le\frac{K_1}{t}\ \mbox{ for all }t\ge t_2.$

Further,

(26) $\int_{t_2}^{+\infty}t\left(g(x(t))-\min g\right)dt<+\infty,$

and there exist $t_3\ge t_2$ and $K_2>0$ such that

(27) $g(x(t))-\min g\le\frac{K_2}{t^2}\ \mbox{ for all }t\ge t_3.$
Proof.
For $t\ge t_0$ let $y(t)=x(t)+\left(\gamma+\frac{\beta}{t}\right)\dot{x}(t)$ and consider the energy functional (13) with the same choices of the functions $a$, $b$, $c$ and $d$ as in the proof of Theorem 12.
According to Remark 11, there exists $t_1\ge t_0$ such that the conditions (16)-(21) are satisfied for all $t\ge t_1$.
From the definition of $\mathcal{E}$ one has

$\frac{1}{2}\left\|b(t)(x(t)-x^*)+c(t)\dot{x}(t)\right\|^2\le\mathcal{E}(t)$

and

$\frac{d(t)}{2}\|x(t)-x^*\|^2\le\mathcal{E}(t).$

By using the fact that $\mathcal{E}$ is nonincreasing on the interval $[t_1,+\infty)$, the latter inequality assures that the trajectory $x$ is bounded; combining this with the former inequality and with the choice of $b$ and $c$, we obtain (25).
Now, by using the inequality of Lemma 15 with the points $y(t)$ and $x(t)$, and taking into account that $x(t)-y(t)=-\left(\gamma+\frac{\beta}{t}\right)\dot{x}(t)$, we get

$g(x(t))\le g(y(t))+\left(\gamma+\frac{\beta}{t}\right)\|\nabla g(y(t))\|\,\|\dot{x}(t)\|+\frac{L_g}{2}\left(\gamma+\frac{\beta}{t}\right)^2\|\dot{x}(t)\|^2,$

hence for all $t\ge t_1$ one has

$g(x(t))-\min g\le\left(g(y(t))-\min g\right)+\left(\gamma+\frac{\beta}{t}\right)\|\nabla g(y(t))\|\,\|\dot{x}(t)\|+\frac{L_g}{2}\left(\gamma+\frac{\beta}{t}\right)^2\|\dot{x}(t)\|^2,$

where the first term on the right hand side is controlled by (22).
By multiplying the latter inequality by $t$, integrating from $t_1$ to $T$, using (23), (24) and the Cauchy-Schwarz inequality, and by letting $T\to+\infty$, we obtain (26).
Now, since $g$ is convex with $L_g$-Lipschitz continuous gradient, one has $\|\nabla g(y(t))\|^2\le 2L_g\left(g(y(t))-\min g\right)$; hence, according to (22), there exist $t_2'\ge t_1$ and $K'>0$ such that $\|\nabla g(y(t))\|\le\frac{K'}{t}$ for all $t\ge t_2'$.
Further, (25) assures that there exist $t_3'\ge t_2'$ and $K''>0$ such that

(28) $\left(\gamma+\frac{\beta}{t}\right)\|\nabla g(y(t))\|\,\|\dot{x}(t)\|\le\frac{K''}{t^2}\ \mbox{ for all }t\ge t_3'.$

Obviously, by (25), the last term above is of order $\mathcal{O}(1/t^2)$ as well; hence (28) assures that there exist $t_4'\ge t_3'$ and $K'''>0$ such that

(29) $g(x(t))-\min g\le\left(g(y(t))-\min g\right)+\frac{K'''}{t^2}\ \mbox{ for all }t\ge t_4'.$

Now, by adding (22) and (29), we get that there exist $t_3\ge t_4'$ and $K_2>0$ such that

$g(x(t))-\min g\le\frac{K_2}{t^2}\ \mbox{ for all }t\ge t_3,$

that is, (27) holds. $\blacksquare$
Remark 17
Remark 18
3.3 The convergence of the generated trajectories
In this section we show the convergence of the generated trajectories to a minimizer of the objective function $g$. The main tool that we use in order to attain this goal is the following continuous version of the Opial Lemma (see [34, 30, 10]).
Lemma 21
Let $H$ be a separable Hilbert space and $S\subseteq H$, with $S\neq\emptyset$. Further, consider a given map $x:[t_0,+\infty)\longrightarrow H$. Suppose that the following conditions are satisfied:
(i) for every $z\in S$, the limit $\lim_{t\to+\infty}\|x(t)-z\|$ exists;
(ii) every weak sequential limit point of the map $x$ belongs to $S$.
Then, $x(t)$ converges weakly to a point of $S$ as $t\to+\infty$.
Remark 22
In the setting of the proof concerning the convergence of the generated trajectories, we consider the set $S$ to be $\operatorname*{argmin} g$. Moreover, the Hilbert space is $\mathbb{R}^m$, and this implies that we actually deduce strong convergence of a strong global solution of the dynamical system (6) to a minimizer of the objective function $g$.
Consider now the set of limit points of the trajectory $x$, that is,

$\omega(x)=\left\{\bar{x}\in\mathbb{R}^m:\ \exists\, t_n\to+\infty \mbox{ such that } x(t_n)\to\bar{x} \mbox{ as }n\to+\infty\right\}.$

We show that $\omega(x)\subseteq\operatorname*{argmin} g$. We emphasize that, since $g$ is convex, one has $\operatorname*{argmin} g=\{x\in\mathbb{R}^m:\ \nabla g(x)=0\}$.
We have the following result.
Lemma 23
Let $x$ be the unique strong global solution of (6) and assume that $\alpha>3$. If $\gamma>0$ and $\beta\in\mathbb{R}$, then the following statements hold:
(i) $\lim_{t\to+\infty}\dot{x}(t)=0$;
(ii) $\omega(x)\subseteq\operatorname*{argmin} g$.
Proof.
Indeed, (33) assures that $\lim_{t\to+\infty}\dot{x}(t)=0$, which immediately proves (i).
For proving (ii), consider $\bar{x}\in\omega(x)$. Then, there exists a sequence $t_n\to+\infty$ such that $\lim_{n\to+\infty}x(t_n)=\bar{x}$. Now, since $g$ is continuous and $\lim_{t\to+\infty}g(x(t))=\min g$ by Theorem 16, one has $g(\bar{x})=\lim_{n\to+\infty}g(x(t_n))=\min g$, hence $\bar{x}\in\operatorname*{argmin} g$. $\blacksquare$
Remark 24
Lemma 25
(Lemma A.4 [10]) Let $t_0>0$, and let $F:[t_0,+\infty)\longrightarrow\mathbb{R}$ be a continuously differentiable function which is bounded from below. Assume that

$\dot{F}(t)\le h(t)$

for almost every $t\ge t_0$ and some nonnegative function $h\in L^1([t_0,+\infty))$. Then, the positive part of $\dot{F}$ belongs to $L^1([t_0,+\infty))$ and the limit $\lim_{t\to+\infty}F(t)$ exists.
Now we can prove the following.
Lemma 26
Let $x$ be the unique strong global solution of (6) and assume that $\alpha>3$. If $\gamma>0$ and $\beta\in\mathbb{R}$, then for every $x^*\in\operatorname*{argmin} g$ there exists the limit $\lim_{t\to+\infty}\|x(t)-x^*\|$.
Proof.
Let $x^*\in\operatorname*{argmin} g$ and define the function $q:[t_0,+\infty)\longrightarrow\mathbb{R}$, $q(t)=\frac{1}{2}\|x(t)-x^*\|^2$. Using the chain rule with respect to the differentiation of $q$, we obtain that

$\dot{q}(t)=\langle x(t)-x^*,\dot{x}(t)\rangle$

and that

$\ddot{q}(t)=\|\dot{x}(t)\|^2+\langle x(t)-x^*,\ddot{x}(t)\rangle.$

Let us denote $y(t)=x(t)+\left(\gamma+\frac{\beta}{t}\right)\dot{x}(t)$. Using the dynamical system (6), one has that

$\ddot{q}(t)+\frac{\alpha}{t}\dot{q}(t)=\|\dot{x}(t)\|^2+\left\langle x(t)-x^*,\ \ddot{x}(t)+\frac{\alpha}{t}\dot{x}(t)\right\rangle=\|\dot{x}(t)\|^2-\langle x(t)-x^*,\nabla g(y(t))\rangle.$

This means that

(34) $\ddot{q}(t)+\frac{\alpha}{t}\dot{q}(t)=\|\dot{x}(t)\|^2-\langle y(t)-x^*,\nabla g(y(t))\rangle+\left(\gamma+\frac{\beta}{t}\right)\langle\dot{x}(t),\nabla g(y(t))\rangle.$

By the convexity of $g$ the mapping $\nabla g$ is monotone, and taking into account that $\nabla g(x^*)=0$, we obtain

$\langle y(t)-x^*,\nabla g(y(t))\rangle\ge 0,$

hence by using (6) the inequality (34) becomes

$\ddot{q}(t)+\frac{\alpha}{t}\dot{q}(t)\le\|\dot{x}(t)\|^2-\left(\gamma+\frac{\beta}{t}\right)\left\langle\dot{x}(t),\ \ddot{x}(t)+\frac{\alpha}{t}\dot{x}(t)\right\rangle.$

Consequently, one has

(35) $\ddot{q}(t)+\frac{\alpha}{t}\dot{q}(t)\le\left(1-\frac{\alpha\gamma}{t}-\frac{\alpha\beta}{t^2}\right)\|\dot{x}(t)\|^2-\left(\gamma+\frac{\beta}{t}\right)\langle\dot{x}(t),\ddot{x}(t)\rangle.$

From here, by using the estimates of Section 3.2 and Lemma 25, one obtains that the limit $\lim_{t\to+\infty}q(t)$ exists, and the conclusion follows. $\blacksquare$
Now, we present the main result of this subsection regarding the convergence of the solution of the dynamical system (6) as $t\to+\infty$.

Theorem 27
Let $x$ be the unique strong global solution of the dynamical system (6) and assume that $\alpha>3$. If $\gamma>0$ and $\beta\in\mathbb{R}$, then $x(t)$ converges to a point in $\operatorname*{argmin} g$ as $t\to+\infty$.
Proof.
The conclusion follows from the continuous Opial Lemma (Lemma 21), applied with $H=\mathbb{R}^m$ and $S=\operatorname*{argmin} g$: condition (i) holds by Lemma 26, while condition (ii) holds by Lemma 23 (ii). $\blacksquare$
Remark 28
As it was expected, Theorem 27 remains true if in its hypothesis we assume only that $\gamma=0$ and $\beta>0$. Indeed, note that under these assumptions the conclusion of Lemma 26 holds, since (35) from its proof becomes

$\ddot{q}(t)+\frac{\alpha}{t}\dot{q}(t)\le\left(1-\frac{\alpha\beta}{t^2}\right)\|\dot{x}(t)\|^2-\frac{\beta}{t}\langle\dot{x}(t),\ddot{x}(t)\rangle,$

and according to Remark 20 the right hand side can be controlled along the trajectory. Moreover, it is obvious that there exists $t'\ge t_0$ such that $1-\frac{\alpha\beta}{t^2}$ is nonnegative for every $t\ge t'$.
This fact, combined with Remark 24, leads to the desired conclusion.
4 Conclusions
In this paper we study a second order dynamical system which can be viewed as an extension of the heavy ball system with vanishing damping. This dynamical system is actually a perturbed version of the heavy ball system with vanishing damping, but the perturbation is made in the argument of the gradient of the objective function. Numerical experiments show that this perturbation brings a smoothing effect in the behaviour of the energy error $g(x(t))-\min g$ and also in the behaviour of the absolute error $\|x(t)-x^*\|$, where $x$ is a trajectory generated by our dynamical system and $x^*$ is a minimizer of the objective $g$. Another novelty of our system is that via explicit discretization it leads to inertial algorithms. A related future research direction is the convergence analysis of Algorithm (10), since this algorithm contains as a particular case the famous Nesterov algorithm. However, since Algorithm (10) allows different inertial steps, its area of applicability can be considerably wider.
We have shown existence and uniqueness of the trajectories generated by our dynamical system even for the case when the objective function is non-convex. Further, we treated the cases when the energy error $g(x(t))-\min g$ is of order $\mathcal{O}(1/t^2)$, and we obtained the convergence of a generated trajectory to a minimizer of the objective function $g$. Another related research direction is the convergence analysis of the generated trajectories in the case when the objective function is possibly non-convex. This would be a novelty in the literature even for the case $\gamma=\beta=0$, that is, for the case of the heavy ball system with vanishing damping.
We underline that the dynamical system (6) can easily be extended to proximal-gradient dynamical systems (see [18, 21] and the references therein). Indeed, let $f:\mathbb{R}^m\longrightarrow\mathbb{R}\cup\{+\infty\}$ be a proper, convex and lower semicontinuous function and let $g:\mathbb{R}^m\longrightarrow\mathbb{R}$ be a possibly non-convex smooth function with Lipschitz continuous gradient.
We recall that the proximal point operator of the convex function $f$ is defined as

$\operatorname{prox}_{\lambda f}:\mathbb{R}^m\longrightarrow\mathbb{R}^m,\qquad \operatorname{prox}_{\lambda f}(x)=\operatorname*{argmin}_{y\in\mathbb{R}^m}\left\{f(y)+\frac{1}{2\lambda}\|y-x\|^2\right\},\quad\lambda>0.$

Consider the optimization problem

$\min_{x\in\mathbb{R}^m}\ f(x)+g(x).$

One can associate to this optimization problem the following second order proximal-gradient dynamical system:

(39) $\ddot{x}(t)+\frac{\alpha}{t}\dot{x}(t)+x(t)+\left(\gamma+\frac{\beta}{t}\right)\dot{x}(t)-\operatorname{prox}_{\lambda f}\left(x(t)+\left(\gamma+\frac{\beta}{t}\right)\dot{x}(t)-\lambda\nabla g\left(x(t)+\left(\gamma+\frac{\beta}{t}\right)\dot{x}(t)\right)\right)=0,$

where $\lambda>0$, $x(t_0)=u_0$ and $\dot{x}(t_0)=v_0$.
The discretization of the dynamical system (39) leads to proximal-gradient inertial algorithms, similar to the modified FISTA algorithm studied by Chambolle and Dossal in [24] (see also [15]).
Indeed, the explicit discretization of (39) with respect to the time variable $t$, with step size $h$, $t_n=nh$ and initial points $x_0,x_1\in\mathbb{R}^m$, yields the iterative scheme

$\frac{x_{n+1}-2x_n+x_{n-1}}{h^2}+\frac{\alpha}{nh}\cdot\frac{x_n-x_{n-1}}{h}+y_n-\operatorname{prox}_{\lambda f}\left(y_n-\lambda\nabla g(y_n)\right)=0,\qquad y_n=x_n+\left(\frac{\gamma}{h}+\frac{\beta}{nh}\right)(x_n-x_{n-1}).$

The latter can be expressed as

(40) $x_{n+1}=x_n+\left(1-\frac{\alpha}{n}\right)(x_n-x_{n-1})-h^2 y_n+h^2\operatorname{prox}_{\lambda f}\left(y_n-\lambda\nabla g(y_n)\right).$

Now, the simplest case is obtained by taking in (40) the constant step size $h=1$. By denoting the constants $\frac{\gamma}{h}$ and $\frac{\beta}{h}$ still with $\gamma$ and $\beta$, the Algorithm (40) becomes: for $x_0,x_1\in\mathbb{R}^m$ and for every $n\ge 1$ consider

(41) $\begin{cases}y_n=x_n+\left(\gamma+\dfrac{\beta}{n}\right)(x_n-x_{n-1})\\[2mm] x_{n+1}=\operatorname{prox}_{\lambda f}\left(y_n-\lambda\nabla g(y_n)\right)+\left(\bar{\gamma}+\dfrac{\bar{\beta}}{n}\right)(x_n-x_{n-1}),\end{cases}$

where $\bar{\gamma}=1-\gamma$ and $\bar{\beta}=-(\alpha+\beta)$. The convergence of the sequences generated by Algorithm (41) to a critical point of the objective function $f+g$ would open the gate for the study of FISTA type algorithms with nonidentical inertial terms.
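A minimal Python sketch of a scheme of type (41), for the lasso-type instance $f(x)=0.1\|x\|_1$, $g(x)=\frac{1}{2}\|Ax-b\|^2$ (the update follows the reconstructed form of (41) above, with the Nesterov-like choice $\gamma=1$, $\beta=-\alpha$; all data and parameter values are illustrative assumptions):

```python
import numpy as np

def soft_threshold(z, tau):
    # Closed form of prox_{tau ||.||_1}.
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def inertial_prox_grad(grad_g, prox_f, x0, x1, alpha, beta, gamma, lam, n_iter):
    """Scheme (41): y_n = x_n + (gamma + beta/n)(x_n - x_{n-1}),
    x_{n+1} = prox_{lam f}(y_n - lam grad_g(y_n))
              + (1 - gamma - (alpha + beta)/n)(x_n - x_{n-1})."""
    x_prev, x = np.asarray(x0, float), np.asarray(x1, float)
    for n in range(1, n_iter + 1):
        d = x - x_prev
        y = x + (gamma + beta / n) * d
        x_prev, x = x, (prox_f(y - lam * grad_g(y), lam)
                        + (1.0 - gamma - (alpha + beta) / n) * d)
    return x

rng = np.random.default_rng(0)
A, b = rng.standard_normal((20, 5)), rng.standard_normal(20)
lam = 1.0 / np.linalg.norm(A.T @ A, 2)          # step size <= 1 / L_g
x = inertial_prox_grad(lambda u: A.T @ (A @ u - b),
                       lambda z, t: soft_threshold(z, 0.1 * t),
                       np.zeros(5), np.zeros(5),
                       alpha=3.1, beta=-3.1, gamma=1.0, lam=lam, n_iter=500)
print(x)  # a sparse approximate minimizer of f + g
```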
References
- [1] J.F. Aujol, Ch. Dossal, Optimal rate of convergence of an ODE associated to the Fast Gradient Descent schemes for $b>0$, HAL preprint, https://hal.inria.fr/hal-01547251v2/document
- [2] J.F. Aujol, Ch. Dossal, A. Rondepierre, Optimal convergence rates for Nesterov acceleration, arxiv.org/abs/1805.05719
- [3] F. Alvarez, H. Attouch, An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping, Set-Valued Analysis, 9, 3-11, 2001
- [4] V. Apidopoulos, J.F. Aujol, Ch. Dossal, Convergence rate of inertial Forward–Backward algorithm beyond Nesterov’s rule, Math. Program. (2018). https://doi.org/10.1007/s10107-018-1350-9
- [5] B. Abbas, H. Attouch, B.F. Svaiter, Newton-like dynamics and forward-backward methods for structured monotone inclusions in Hilbert spaces, Journal of Optimization Theory and its Applications 161(2), 331-360, 2014
- [6] F. Alvarez, H. Attouch, J. Bolte, P. Redont, A second-order gradient-like dissipative dynamical system with Hessian-driven damping. Application to optimization and mechanics, Journal de Mathématiques Pures et Appliquées (9) 81(8), 747-779, 2002
- [7] H. Attouch, Z. Chbani, Fast inertial dynamics and FISTA algorithms in convex optimization. Perturbation aspects, arXiv:1507.01367, 2015
- [8] H. Attouch, Z. Chbani, H. Riahi, Rate of convergence of the Nesterov accelerated gradient method in the subcritical case $\alpha\le 3$, ESAIM: COCV (2017). https://doi.org/10.1051/cocv/2017083
- [9] H. Attouch, Z. Chbani, H. Riahi, Fast convex optimization via time scaling of damped inertial gradient dynamics, https://hal.archives-ouvertes.fr/hal-02138954
- [10] H. Attouch, Z. Chbani, J. Peypouquet, P. Redont, Fast convergence of inertial dynamics and algorithms with asymptotic vanishing viscosity, Math. Program. 168(1-2) Ser. B, 123-175, 2018
- [11] H. Attouch, X. Goudou, P. Redont, The heavy ball with friction method, I. The continuous dynamical system: global exploration of the local minima of a real-valued function by asymptotic analysis of a dissipative dynamical system, Communications in Contemporary Mathematics 2, 1-34, 2000
- [12] H. Attouch, J. Peypouquet, P. Redont, Fast convex optimization via inertial dynamics with Hessian driven damping, J. Differential Equations, 261(10), 5734-5783, 2016
- [13] H. Attouch, B.F. Svaiter, A continuous dynamical Newton-like approach to solving monotone inclusions, SIAM Journal on Control and Optimization 49(2), 574-598, 2011
- [14] M. Balti, R. May, Asymptotic for the perturbed heavy ball system with vanishing damping term, arXiv:1609.00135, 2016
- [15] A. Beck, M. Teboulle, A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems, SIAM Journal on Imaging Sciences, 2(1), 183-202, 2009
- [16] P. Bégout, J. Bolte, M.A. Jendoubi, On damped second-order gradient systems, Journal of Differential Equations, (259), 3115-3143, 2015
- [17] R.I. Boţ, E.R. Csetnek, S.C. László, An inertial forward-backward algorithm for minimizing the sum of two non-convex functions, Euro Journal on Computational Optimization, 4(1), 3-25, 2016
- [18] R.I. Boţ, E.R. Csetnek, S.C. László, Approaching nonsmooth nonconvex minimization through second-order proximal-gradient dynamical systems, Journal of Evolution Equations, 18(3), 1291-1318, 2018
- [19] C. Alecsa, S.C. László, A. Viorel, A gradient type algorithm with backward inertial steps associated to a nonconvex minimization problem, https://www.researchgate.net/publication/331832475
- [20] R.I. Boţ, E.R. Csetnek, S.C. László, A second order dynamical approach with variable damping to nonconvex smooth minimization, Applicable Analysis (2018). https://doi.org/10.1080/00036811.2018.1495330
- [21] R.I. Boţ, E.R. Csetnek, S.C. László, A primal-dual dynamical approach to structured convex minimization problems, https://arxiv.org/abs/1905.08290
- [22] A. Cabot, H. Engler, S. Gadat, On the long time behavior of second order differential equations with asymptotically small dissipation, Trans. Amer. Math. Soc., 361, 5983-6017, 2009
- [23] A. Cabot, H. Engler, S. Gadat, Second order differential equations with asymptotically small dissipation and piecewise flat potentials, Electronic Journal of Differential Equations, 17, 33-38, 2009
- [24] A. Chambolle, Ch. Dossal, On the convergence of the iterates of the ”fast iterative shrinkage/thresholding algorithm”, J. Optim. Theory Appl., 166(3), 968-982, 2015
- [25] A. Haraux, Systèmes Dynamiques Dissipatifs et Applications, Recherches en Mathématiques Appliquées 17, Masson, Paris, 1991
- [26] S.C. László, Convergence rates for an inertial algorithm of gradient type associated to a smooth nonconvex minimization, https://arxiv.org/pdf/1811.09616.pdf
- [27] M. Muehlebach, M. I. Jordan, A Dynamical Systems Perspective on Nesterov Acceleration, arXiv:1905.07436v1
- [28] Y. Nesterov , Introductory lectures on convex optimization: a basic course. Kluwer Academic Publishers, Dordrecht, 2004
- [29] Y.E. Nesterov, A method for solving the convex programming problem with convergence rate $O(1/k^2)$, (Russian) Dokl. Akad. Nauk SSSR, 269(3), 543-547, 1983
- [30] Z. Opial, Weak convergence of the sequence of successive approximations for nonexpansive mappings, Bull. Amer. Math. Soc., 73, 591-597, 1967
- [31] B. T. Polyak, Some methods of speeding up the convergence of iteration methods, U.S.S.R. Comput. Math. Math. Phys., 4(5), 1-17, 1964
- [32] E.D. Sontag, Mathematical control theory. Deterministic finite-dimensional systems, Second edition, Texts in Applied Mathematics 6, Springer-Verlag, New York, 1998
- [33] W. Su, S. Boyd, E.J. Candes, A differential equation for modeling Nesterov’s accelerated gradient method: theory and insights, Journal of Machine Learning Research, 17, 1-43, 2016
- [34] R.E. Bruck, Asymptotic convergence of nonlinear contraction semigroups in Hilbert spaces, J. Funct. Anal., 18 , 15-26, 1975