Abstract
The paper deals with fractional optimization problems where the objective function (a ratio of two functions) is defined on a Cartesian product of two real normed spaces X and Y. Within this framework, we are interested in determining the so-called partial minimizers, i.e., points in \(X\times Y\) such that each of their variables minimizes the objective function with respect to that variable when the other one is held fixed. While any global minimizer is obviously a partial minimizer, the reverse implication holds true only under additional assumptions (e.g., separability properties of the involved functions). By exploiting the particularities of the objective function, we deliver a Dinkelbach type algorithm for computing partial minimizers of fractional optimization problems. Further assumptions on the involved spaces and functions, such as Lipschitz-type continuity, partial Fréchet differentiability, and coercivity, enable us to establish the convergence of our algorithm to a partial minimizer.
Authors
Christian Günther
Alexandru Orzan
Faculty of Mathematics and Computer Science, Babeş-Bolyai University, Cluj-Napoca, Romania; Department of Mathematics, Technical University of Cluj-Napoca, Cluj-Napoca, Romania
Radu Precup
Babeş-Bolyai University, Cluj-Napoca, Romania &
Tiberiu Popoviciu Institute of Numerical Analysis, Romanian Academy
Keywords
Fractional optimization; Dinkelbach type algorithm; coercive function; partial minimizer
Paper coordinates
??
About this paper
Journal
Optimization
A Journal of Mathematical Programming and Operations Research
Publisher Name
Taylor and Francis
Paper (preprint) in HTML form
Componentwise Dinkelbach algorithm for nonlinear fractional optimization problems
Abstract
The paper deals with fractional optimization problems where the objective function (a ratio of two functions) is defined on a Cartesian product of two real normed spaces \(X\) and \(Y\). Within this framework, we are interested in determining the so-called partial minimizers, i.e., points in \(X\times Y\) such that each of their variables minimizes the objective function with respect to that variable when the other one is held fixed. While any global minimizer is obviously a partial minimizer, the reverse implication holds true only under additional assumptions (e.g., separability properties of the involved functions). By exploiting the particularities of the objective function, we deliver a Dinkelbach type algorithm for computing partial minimizers of fractional optimization problems. Further assumptions on the involved spaces and functions, such as Lipschitz-type continuity, partial Fréchet differentiability, and coercivity, enable us to establish the convergence of our algorithm to a partial minimizer.
Dedicated to Professor Wolfgang W. Breckner on the occasion of his 80th anniversary.
1 Introduction and Preliminaries
Fractional optimization problems occur in numerous practical applications where the goal is to optimize a ratio-based performance metric (see, e.g., Shen and Yu [1, 2], Elbenani and Ferland [3] and the references therein). Thus they are important in many fields such as economics, industrial planning, medical planning, social policy and other related domains. Moreover, fractional optimization problems have a wide range of mathematical applications (see, e.g., Frenk and Schaible [4], Göpfert et al. [5], Schaible [6, 7, 8]). Within operator theory, for instance, the problem of eigenvalues and eigenvectors reduces to a fractional minimization problem (see, e.g., Precup [9]). Problems of this type also occur in other domains of mathematics, for example in graph theory and game theory (see Stancu-Minasian [10], Stancu-Minasian and Tigan [11]).
Throughout the entire article, let us consider that \(X\) and \(Y\) are two real normed spaces, \(A\subseteq X\) and \(B\subseteq Y\) are nonempty sets and \(f,g\colon A\times B\to\mathbb{R}\) are functions such that \(f/g\) is bounded from below and
\[ g(x,y)>0 \quad\text{for all } (x,y)\in A\times B. \]
The general fractional optimization problem we are interested in is given by
\[ \min_{(x,y)\in A\times B}\ \frac{f(x,y)}{g(x,y)}. \tag{P} \]
In problem (P), one is usually searching for points \((x^*,y^*)\in A\times B\) such that
\[ \frac{f(x^*,y^*)}{g(x^*,y^*)}\le\frac{f(x,y)}{g(x,y)} \quad\text{for all } (x,y)\in A\times B, \]
which means that \((x^*,y^*)\) is a global minimizer. Within this paper, we switch our focus to the problem of finding points \((x^*,y^*)\in A\times B\) with the property that
\[ \frac{f(x^*,y^*)}{g(x^*,y^*)}\le\frac{f(x,y^*)}{g(x,y^*)} \quad\text{for all } x\in A, \tag{1} \]
\[ \frac{f(x^*,y^*)}{g(x^*,y^*)}\le\frac{f(x^*,y)}{g(x^*,y)} \quad\text{for all } y\in B. \tag{2} \]
Points satisfying (1) and (2) are called partial minimizers for problem (P).
Remark 1.1.
Obviously, any global minimizer of the functional \(f/g\) is also a partial minimizer. However, the converse implication does not generally hold true, as shown by the following example: take \(A=B=[-1,1]\), \(g\equiv 1\) and \(f(x,y)=xy\).
It is easy to check that \((0,0)\) is a partial minimizer but not a global one, since \(f(1,-1)/g(1,-1)=-1<0=f(0,0)/g(0,0)\).
When dealing with fractional optimization problems, one famous method for computing global minimizers is the Dinkelbach algorithm (see Dinkelbach [12] and the references [13], [14], [15], [16], [17], [18]).
In order to obtain partial minimizers, we use a modified version of the Dinkelbach algorithm, which relies on solving at each iteration step two parametric problems of the following type:
\[ \min_{x\in A}\ [f(x,y)-\mu\,g(x,y)], \tag{\(P^1_{\mu,y}\)} \]
\[ \min_{y\in B}\ [f(x,y)-\mu\,g(x,y)], \tag{\(P^2_{\mu,x}\)} \]
where \(\mu\in\mathbb{R}\) is a parameter. Our algorithmic procedure generates three sequences \((x_n)\), \((y_n)\) and \((\lambda_n)\), where \(x_{n+1}\) is a solution of the problem \((P^1_{\lambda_n,y_n})\), \(y_{n+1}\) is a solution of \((P^2_{\lambda_n,x_{n+1}})\) and
\[ \lambda_{n+1}=\frac{f(x_{n+1},y_{n+1})}{g(x_{n+1},y_{n+1})}. \]
Under some appropriate conditions on the fractional optimization problem, the convergence of the sequences is ensured and we obtain that the point \((x^*,y^*)\), where \(x^*\) and \(y^*\) are the limits of \((x_n)\) and \((y_n)\), is a partial minimizer of \((P)\), while the limit of \((\lambda_n)\) equals the value \(f(x^*,y^*)/g(x^*,y^*)\).
The paper is structured as follows: in Section 2 we show some relationships between global minimizers and partial minimizers (Proposition 2.1) and we emphasize the particular case when \(f\) and \(g\) have separate variables, showing that in this framework the partial minimum coincides with the global minimum (Proposition 2.2). In Section 3 we first present, for comparison, the original Dinkelbach algorithm (Algorithm 3.1). Next we give our componentwise version of the algorithm (Algorithm 3.2). The final Section 4 is devoted to the convergence of Algorithm 3.2. The monotonicity and convergence of the sequence \((\lambda_n)\) are established by Theorem 4.1, while the convergence of the sequences \((x_n)\) and \((y_n)\) is proved by Theorem 4.4. We conclude the paper with an illustrative example.
2 Relationships between global minimizers and partial minimizers
In what follows, we outline the connections between global minimizers, partial minimizers, and critical points.
Proposition 2.1.
Let \(X\) and \(Y\) be two real normed spaces and \(h\colon X\times Y\to\mathbb{R}\) a function. Then:
(a) Any global minimizer of \(h\) is a partial minimizer, but the converse implication is not generally true, even in the case when the function \(h\) is convex.
(b) If \(h\) is a Fréchet differentiable function, then any partial minimizer is a critical point.
(c) If \(h\) is convex, then \((x^*,y^*)\in X\times Y\) is a global minimizer if and only if the origin belongs to the subdifferential of \(h\) at \((x^*,y^*)\), i.e., \(0\in\partial h(x^*,y^*)\). In particular, if \(h\) is convex and Fréchet differentiable, then any partial minimizer is a global minimizer.
Proof.
The first part of assertion (a) is straightforward. In order to show the second part, one can simply take the function \(h\colon\mathbb{R}^2\to\mathbb{R}\), \(h(x,y)=|x-y|-(x+y)\). By some quick computations using the definition, it can be deduced that \(h\) is convex. Additionally, one can easily check that the point \((0,0)\) is a partial minimizer. However, \(h\) does not have a global minimum, since it is not bounded from below (\(h(t,t)=-2t\to-\infty\) as \(t\to\infty\)).
For assertion (b), recall that for the product space \(X\times Y\), the Fréchet derivative of \(h\) at the point \((x^*,y^*)\) is in fact the couple of partial derivatives \((h'_x(x^*,y^*),h'_y(x^*,y^*))\). Assume now that \((x^*,y^*)\) is a partial minimizer for \(h\). Then we have that
\[ h(x^*,y^*)\le h(x,y^*)\ \text{ for all } x\in X \quad\text{and}\quad h(x^*,y^*)\le h(x^*,y)\ \text{ for all } y\in Y. \]
Thus \(x^*\) is a minimum point for the function \(h(\cdot,y^*)\), whereas \(y^*\) is a minimum point for the function \(h(x^*,\cdot)\). Hence, from the infinite dimensional version of Fermat's theorem and taking into account that \(h\) is Fréchet differentiable, one has that \(h'_x(x^*,y^*)=0\) and \(h'_y(x^*,y^*)=0\). Therefore \((x^*,y^*)\) is indeed a critical point of \(h\). Assertion (c) follows from the classical characterization of global minimizers of convex functions via the subdifferential (see, e.g., Barbu and Precupanu [19]), combined with (b). ∎
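For the reader's convenience, the chain of implications behind the second part of assertion (c) can be condensed as follows (a standard convex-analysis sketch):
\[
(x^*,y^*)\ \text{partial minimizer}
\;\Longrightarrow\; h'_x(x^*,y^*)=0 \text{ and } h'_y(x^*,y^*)=0
\;\Longrightarrow\; h'(x^*,y^*)=0
\;\Longrightarrow\; 0\in\partial h(x^*,y^*)
\;\Longrightarrow\; (x^*,y^*)\ \text{global minimizer},
\]
where the third implication uses that \(\partial h(x^*,y^*)=\{h'(x^*,y^*)\}\) for a convex Fréchet differentiable function \(h\).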
The next result provides a sufficient condition under which partial minimizers are global minimizers for a fractional function \(f/g\).
Proposition 2.2.
If \(f,g\colon A\times B\to\mathbb{R}\) have separate variables in the sense that
\[ f(x,y)=f_1(x)+f_2(y) \quad\text{and}\quad g(x,y)=g_1(x)+g_2(y) \quad\text{for all } (x,y)\in A\times B, \]
then any partial minimizer is a global minimizer of the ratio \(f/g\).
Proof.
Let \((x^*,y^*)\) be a partial minimizer and set \(\lambda^*=f(x^*,y^*)/g(x^*,y^*)\). Since \(g>0\), conditions (1) and (2) are equivalent to
\[ f_1(x)-\lambda^* g_1(x)\ge f_1(x^*)-\lambda^* g_1(x^*)\ \text{ for all } x\in A \quad\text{and}\quad f_2(y)-\lambda^* g_2(y)\ge f_2(y^*)-\lambda^* g_2(y^*)\ \text{ for all } y\in B. \]
Adding these two inequalities and using the identity \(f(x^*,y^*)-\lambda^* g(x^*,y^*)=0\), we obtain \(f(x,y)-\lambda^* g(x,y)\ge 0\), that is, \(f(x,y)/g(x,y)\ge\lambda^*\) for all \((x,y)\in A\times B\). ∎
3 Dinkelbach type algorithms
In this section we are going to state the componentwise Dinkelbach algorithm for calculating partial minimizers of fractional optimization problems. We start by recalling the original Dinkelbach algorithm applied to objective functions depending on two variables, with the aim of providing a better understanding of the componentwise algorithm that follows.
Algorithm 3.1 (Dinkelbach's algorithm).
Step 0 (initialization). Take any point \((x_0,y_0)\in A\times B\), calculate
\[ \lambda_0=\frac{f(x_0,y_0)}{g(x_0,y_0)} \]
and set \(n:=0\).
Step 1 (cycle step for \(n\ge 0\)). Find a point \((x_{n+1},y_{n+1})\in A\times B\) such that
\[ f(x_{n+1},y_{n+1})-\lambda_n g(x_{n+1},y_{n+1})=\min_{(x,y)\in A\times B}\ [f(x,y)-\lambda_n g(x,y)], \]
calculate
\[ \lambda_{n+1}=\frac{f(x_{n+1},y_{n+1})}{g(x_{n+1},y_{n+1})}, \]
set \(n:=n+1\) and perform Step 1 again.
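To fix ideas, the following minimal Python sketch mirrors Algorithm 3.1; the joint-minimization oracle argmin_xy, the iteration cap and the stopping tolerance are our own illustrative assumptions, not part of the algorithm as stated.

```python
def dinkelbach(f, g, argmin_xy, x0, y0, max_iter=100, tol=1e-10):
    """Sketch of Algorithm 3.1. `argmin_xy(lam)` is an assumed oracle
    returning a joint minimizer over A x B of (x, y) -> f(x,y) - lam*g(x,y)."""
    x, y = x0, y0
    lam = f(x, y) / g(x, y)            # Step 0: lambda_0 at the start point
    for _ in range(max_iter):
        x, y = argmin_xy(lam)          # Step 1: solve the parametric problem
        lam_next = f(x, y) / g(x, y)   # update the parameter
        if abs(lam - lam_next) < tol:  # practical stopping rule (ours)
            return x, y, lam_next
        lam = lam_next
    return x, y, lam
```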
Remark 3.1.
It is known that, under certain assumptions on the objective function, both sequences \((\lambda_n)\) and \(((x_n,y_n))\) generated by the Dinkelbach algorithm are convergent, say to \(\lambda^*\) and \((x^*,y^*)\), and the point \((x^*,y^*)\) is a global minimizer of \(f/g\) on \(A\times B\) (see Dinkelbach [12]).
We now give our modified version of the Dinkelbach algorithm for computing partial minimizers of fractional optimization problems.
Algorithm 3.2 (Componentwise Dinkelbach algorithm).
Step 0 (initialization). Take any point \((x_0,y_0)\in A\times B\), calculate
\[ \lambda_0=\frac{f(x_0,y_0)}{g(x_0,y_0)} \]
and set \(n:=0\).
Step 1 (cycle step). Find a point \(x_{n+1}\in A\) such that
\[ f(x_{n+1},y_n)-\lambda_n g(x_{n+1},y_n)=\min_{x\in A}\ [f(x,y_n)-\lambda_n g(x,y_n)], \tag{5} \]
then find \(y_{n+1}\in B\) such that
\[ f(x_{n+1},y_{n+1})-\lambda_n g(x_{n+1},y_{n+1})=\min_{y\in B}\ [f(x_{n+1},y)-\lambda_n g(x_{n+1},y)], \tag{6} \]
next calculate
\[ \lambda_{n+1}=\frac{f(x_{n+1},y_{n+1})}{g(x_{n+1},y_{n+1})}, \tag{7} \]
set \(n:=n+1\) and perform Step 1 again.
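A corresponding sketch for Algorithm 3.2 is given below; here argmin_x and argmin_y stand for assumed inner solvers of the parametric subproblems (5) and (6), and the stopping rule is again our own practical addition.

```python
def componentwise_dinkelbach(f, g, argmin_x, argmin_y, x0, y0,
                             max_iter=100, tol=1e-10):
    """Sketch of Algorithm 3.2.
    argmin_x(y, lam): minimizer over A of x -> f(x,y) - lam*g(x,y)  (problem (5))
    argmin_y(x, lam): minimizer over B of y -> f(x,y) - lam*g(x,y)  (problem (6))"""
    x, y = x0, y0
    lam = f(x, y) / g(x, y)              # Step 0: lambda_0
    for _ in range(max_iter):
        x = argmin_x(y, lam)             # (5): minimize in x with y_n fixed
        y = argmin_y(x, lam)             # (6): minimize in y with x_{n+1} fixed
        lam_next = f(x, y) / g(x, y)     # (7): new parameter value
        if abs(lam - lam_next) < tol:    # practical stopping rule (ours)
            return x, y, lam_next
        lam = lam_next
    return x, y, lam
```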
Remark 3.2.
Our componentwise algorithm (Algorithm 3.2) for computing partial minimizers of fractional optimization problems combines ideas of the original Dinkelbach method (Algorithm 3.1) and the so-called alternating direction method (see, e.g., Boţ and Csetnek [21], Boţ, Dao and Li [22], Boyd et al. [23], Geissler et al. [24], Goldstein et al. [25], Kleinert and Schmidt [26]). However, one has to notice that compared to the alternating direction method, in our approach the objective function keeps on changing at every iteration due to the sequence \((\lambda_n)\).
In the particular case when partial minimizers coincide with global minimizers (see Section 2), the componentwise algorithm will give an alternative to the original Dinkelbach procedure (Algorithm 3.1) for computing global minimizers.
In view of Proposition 2.2, for the case when the functions \(f\) and \(g\) have separate variables, our componentwise algorithm requires at each step to separately solve the minimization problems
\[ \min_{x\in A}\ [f_1(x)-\lambda_n g_1(x)] \quad\text{and}\quad \min_{y\in B}\ [f_2(y)-\lambda_n g_2(y)], \]
instead of the global minimization problem
\[ \min_{(x,y)\in A\times B}\ [f(x,y)-\lambda_n g(x,y)] \]
required by the original Dinkelbach algorithm.
Remark 3.3.
The implementation of Algorithm 3.2 can be combined with other well-known fractional programming techniques. Notably, Boţ and Csetnek [21] introduced modifications of the Dinkelbach algorithm for certain classes of fractional programming problems. Instead of completely solving the scalarized problems in each iteration, these modifications take only one iterative step of a numerical method. This approach has been further developed in [22]. Furthermore, Algorithm 3.2 could also be adapted following the approach presented in Orzan and Precup [15]. This adaptation would involve replacing the exact solutions of the minimization problems at each iterative step with approximate solutions, introducing a predefined error tolerance in the process.
Remark 3.4.
It is worth noticing that the most problematic step when applying the algorithm is solving the minimization problems given by (5) and (6). Certainly, from an efficiency perspective, these problems should be easier to solve than the original fractional problem. In many cases this can be guaranteed if we consider the following frameworks:
• Convex-Concave Fractional Programs: Assume that \(A\subseteq X\) and \(B\subseteq Y\) are convex sets, \(f\) is nonnegative and convex with respect to (w.r.t.) each variable and \(g\) is concave w.r.t. each variable. Then the minimization problems at hand turn into classical convex problems.
• Convex-Convex Fractional Programs: Consider that \(A\subseteq X\) and \(B\subseteq Y\) are convex sets, \(f\) is nonnegative and convex w.r.t. each variable and \(g\) is also convex w.r.t. each variable. Then the objective functions involved in the minimization problems mentioned above are differences of convex functions. Such problems are known in the literature as DC optimization problems and can be solved by using specific techniques (see, e.g., Hiriart-Urruty [27], Le Thi and Pham [28, 29], de Oliveira [30], Singer [31, Chapter 8], Toland [32]).
The above remark provides some conditions under which the structure of the minimization problems involved in the algorithm is simpler to deal with. The following proposition states existence results for the solutions of problems (5) and (6). Some other conditions of generalized convexity type (see, e.g., Breckner and Kassay [33]) might also be considered.
Proposition 3.1.
The minimization problems (5) and (6) admit solutions provided that one of the following conditions is satisfied:
(a) \(A\), \(B\) are compact sets, \(f\) is lower semicontinuous (l.s.c.) in each variable and \(g\) is continuous in each variable.
(b) \(X\) and \(Y\) are finite-dimensional normed spaces, \(f\) is l.s.c. in each variable, coercive in \(x\) (i.e., \(f(x,y)\to\infty\) as \(\|x\|\to\infty\), for every \(y\)), coercive in \(y\) (i.e., \(f(x,y)\to\infty\) as \(\|y\|\to\infty\), for every \(x\)) and \(g\) is continuous in each variable.
(c) \(X\), \(Y\) are reflexive Banach spaces, \(f\) is both l.s.c. and convex in each variable, coercive in \(x\), coercive in \(y\) and nonnegative, whereas \(g\) is both upper semicontinuous and concave in each variable.
Proof.
For (a) the result follows by the classical Weierstrass theorem. Assertion (b) follows easily from our assumptions, taking into account the compactness of the unit ball and the Weierstrass theorem. Assertion (c) follows by Zălinescu [34, Theorem 2.5.1]. ∎
4 Convergence of the componentwise Dinkelbach algorithm
In this section, we present the results concerning the convergence of the sequences given by our Algorithm 3.2.
Theorem 4.1.
The sequence \((\lambda_n)\) given by Algorithm 3.2 is nonincreasing and convergent to a real number \(\lambda^*\).
Proof.
Let \(n\in\mathbb{N}\). The inequality \(\lambda_{n+1}\le\lambda_n\) is equivalent to \(f(x_{n+1},y_{n+1})\le\lambda_n g(x_{n+1},y_{n+1})\) and, since \(g\) is positive, to the inequality
\[ f(x_{n+1},y_{n+1})-\lambda_n g(x_{n+1},y_{n+1})\le 0 \]
that we have to prove. Using the definition of the point \(x_{n+1}\), we have
\[ f(x_{n+1},y_n)-\lambda_n g(x_{n+1},y_n)\le f(x_n,y_n)-\lambda_n g(x_n,y_n)=0. \]
By doing the same thing for \(y_{n+1}\) we obtain
\[ f(x_{n+1},y_{n+1})-\lambda_n g(x_{n+1},y_{n+1})\le f(x_{n+1},y_n)-\lambda_n g(x_{n+1},y_n). \]
These two inequalities imply the desired result. Thus \((\lambda_n)\) is nonincreasing and since it is also bounded from below, it converges to some \(\lambda^*\in\mathbb{R}\). ∎
Theorem 4.2.
Assume that \(A\) and \(B\) are closed subsets of the normed spaces \(X\) and \(Y\), respectively, the functions \(f\) and \(g\) are continuous, and
\[ \text{the sequences } (x_n) \text{ and } (y_n) \text{ given by Algorithm 3.2 are convergent.} \tag{8} \]
Then the following assertions hold true:
(a) One has \(f(x_{n+1},y_{n+1})-\lambda_{n+1}\,g(x_{n+1},y_{n+1})=0\) for all \(n\in\mathbb{N}\).
(b) Denoting
\[ x^*=\lim_{n\to\infty}x_n \quad\text{and}\quad y^*=\lim_{n\to\infty}y_n, \]
one has
\[ \lambda^*=\frac{f(x^*,y^*)}{g(x^*,y^*)}. \]
(c) \((x^*,y^*)\) is a partial minimizer on \(A\times B\) of \(f/g\).
Proof.
Conclusions (a) and (b) follow from (7). To prove (c) we start from
\[ f(x_{n+1},y_n)-\lambda_n g(x_{n+1},y_n)\le f(x,y_n)-\lambda_n g(x,y_n) \]
for every \(x\in A\). By letting \(n\) go to infinity, we obtain that
\[ f(x^*,y^*)-\lambda^* g(x^*,y^*)\le f(x,y^*)-\lambda^* g(x,y^*), \]
which is equivalent to (1). Similarly, for any \(y\in B\) one has
\[ f(x_{n+1},y_{n+1})-\lambda_n g(x_{n+1},y_{n+1})\le f(x_{n+1},y)-\lambda_n g(x_{n+1},y), \]
whence, since \(f(x^*,y^*)-\lambda^* g(x^*,y^*)=0\), we get
\[ 0\le f(x^*,y)-\lambda^* g(x^*,y), \]
that is (2). Hence we conclude that \((x^*,y^*)\) is a partial minimizer. ∎
In order to ensure that condition (8) takes place, we first proceed to guarantee the boundedness of one of the sequences \((x_n)\) and \((y_n)\). To this aim, we are going to assume that \(A=X\) and \(B=Y\) are entire normed spaces and that the following hypothesis holds true:
(H1) The functional \(f\) is coercive in \(x\) uniformly w.r.t. \(y\), i.e., for any \(M>0\), there is \(R>0\) such that \(f(x,y)\ge M\) for \(\|x\|\ge R\) and all \(y\in Y\).
Remark 4.1.
One can easily prove that if the functional \(f\) is coercive in \(x\) uniformly w.r.t. \(y\), then it is also coercive in \(x\) (as defined in (b) of Proposition 3.1). The converse, however, is not generally true (take, for instance, \(f(x,y)=x^2/(1+y^2)\) on \(\mathbb{R}^2\) and consider the points from the first bisector to reach a contradiction).
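Indeed, for this function the failure of uniformity along the first bisector can be seen directly:
\[ f(t,t)=\frac{t^2}{1+t^2}\le 1 \quad\text{for all } t\in\mathbb{R}, \]
so for \(M>1\) no radius \(R>0\) can guarantee \(f(x,y)\ge M\) whenever \(\|x\|\ge R\) uniformly w.r.t. \(y\), although for each fixed \(y\) the function \(f(\cdot,y)\) is coercive.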
Proposition 4.3.
Under assumption (H1), the sequence \((x_n)\) is bounded.
Proof.
Since the sequence \((\lambda_n)\) and the function \(g\) are bounded, there exists \(C>0\) such that
\[ f(x_{n+1},y_{n+1})\le\lambda_n\,g(x_{n+1},y_{n+1})\le C \quad\text{for all } n\in\mathbb{N}. \tag{9} \]
Consequently, if we suppose that the sequence \((x_n)\) is unbounded, then by the coercivity condition (H1), we can find some unbounded subsequence of \((f(x_n,y_n))\) and arrive at a contradiction with (9). ∎
In the following part of the section we proceed with the essential step of ensuring the convergence of the sequences \((x_n)\) and \((y_n)\), which will conclude the convergence of our Algorithm 3.2. In order to achieve this, we consider that \(X\) and \(Y\) are Hilbert spaces, \(A=X\), \(B=Y\), and \(f,g\colon X\times Y\to\mathbb{R}\) are functionals whose partial (Fréchet) derivatives (see, e.g., Precup [35]) satisfy some Lipschitz continuity or monotonicity conditions.
We denote by \(f'_x\), \(f'_y\) and \(g'_x\), \(g'_y\) the partial Fréchet derivatives of \(f\) and \(g\), and we assume that
(H2) The derivatives \(g'_x\) and \(g'_y\) are bounded on \(X\times Y\).
Now we define the operators
\[ F_1(x,y)=f'_x(x,y)-\lambda^*\,g'_x(x,y), \qquad F_2(x,y)=f'_y(x,y)-\lambda^*\,g'_y(x,y), \]
where
\[ \lambda^*=\lim_{n\to\infty}\lambda_n. \tag{10} \]
Furthermore we also assume that
(H3) There exist some positive constants \(\alpha_1\), \(\alpha_2\), \(\beta_1\), \(\beta_2\) such that
\[ \langle F_1(x,y)-F_1(\bar{x},y),\,x-\bar{x}\rangle\ge\alpha_1\|x-\bar{x}\|^2, \qquad \|F_1(x,y)-F_1(x,\bar{y})\|\le\beta_1\|y-\bar{y}\|, \]
\[ \langle F_2(x,y)-F_2(x,\bar{y}),\,y-\bar{y}\rangle\ge\alpha_2\|y-\bar{y}\|^2, \qquad \|F_2(x,y)-F_2(\bar{x},y)\|\le\beta_2\|x-\bar{x}\|, \]
for all \(x,\bar{x}\in X\) and \(y,\bar{y}\in Y\).
We denote
\[ k=\frac{\beta_1\beta_2}{\alpha_1\alpha_2} \]
and we assume that
(H4) \(k<1\).
We now present a sufficient condition for the convergence of the sequences \((x_n)\) and \((y_n)\), reminiscent of Banach's fixed point theorem.
Theorem 4.4.
Under the assumptions (H1)-(H4), the sequences \((x_n)\) and \((y_n)\) are convergent.
Proof.
By the definition of \(x_{n+1}\) and Fermat's theorem we have that
\[ f'_x(x_{n+1},y_n)-\lambda_n\,g'_x(x_{n+1},y_n)=0, \]
hence
\[ F_1(x_{n+1},y_n)=a_n, \tag{11} \]
where \(a_n=(\lambda_n-\lambda^*)\,g'_x(x_{n+1},y_n)\). In a similar manner we obtain for \(y_{n+1}\) that
\[ F_2(x_{n+1},y_{n+1})=b_n, \tag{12} \]
where \(b_n=(\lambda_n-\lambda^*)\,g'_y(x_{n+1},y_{n+1})\). We note that, in virtue of (H2) and Theorem 4.1, the sequences \((a_n)\) and \((b_n)\) are convergent to zero. From relation (11), written for the indices \(n\) and \(n-1\), we obtain that
\[ F_1(x_{n+1},y_n)-F_1(x_n,y_{n-1})=a_n-a_{n-1}, \]
which is equivalent to
\[ \langle F_1(x_{n+1},y_n)-F_1(x_n,y_n),\,x_{n+1}-x_n\rangle+\langle F_1(x_n,y_n)-F_1(x_n,y_{n-1}),\,x_{n+1}-x_n\rangle=\langle a_n-a_{n-1},\,x_{n+1}-x_n\rangle. \tag{13} \]
From the hypothesis (H3) and the Cauchy-Schwarz inequality, relation (13) yields
\[ \alpha_1\|x_{n+1}-x_n\|^2\le\bigl(\beta_1\|y_n-y_{n-1}\|+\|a_n-a_{n-1}\|\bigr)\|x_{n+1}-x_n\|, \]
and so
\[ \|x_{n+1}-x_n\|\le\frac{\beta_1}{\alpha_1}\,\|y_n-y_{n-1}\|+\frac{1}{\alpha_1}\,\|a_n-a_{n-1}\|. \tag{14} \]
By making use of relation (12), it can be deduced in a similar manner that
\[ \|y_{n+1}-y_n\|\le\frac{\beta_2}{\alpha_2}\,\|x_{n+1}-x_n\|+\frac{1}{\alpha_2}\,\|b_n-b_{n-1}\|. \tag{15} \]
Using relation (15) with \(n-1\) in the role of \(n\) and combining it with (14), we are led to
\[ \|x_{n+1}-x_n\|\le k\,\|x_n-x_{n-1}\|+c_n, \tag{16} \]
where \(k=\beta_1\beta_2/(\alpha_1\alpha_2)\) and
\[ c_n=\frac{\beta_1}{\alpha_1\alpha_2}\,\|b_{n-1}-b_{n-2}\|+\frac{1}{\alpha_1}\,\|a_n-a_{n-1}\| \]
is convergent to zero. By making use of [36, Lemma 3.2] and assumption (H4), we obtain that \(\|x_{n+p}-x_n\|\to 0\) as \(n\to\infty\), uniformly in \(p\), hence \((x_n)\) is a Cauchy sequence, therefore convergent. Taking into account relation (15), we obtain that \((y_n)\) is also convergent and the conclusion follows. ∎
Remark 4.2.
Combining Theorem 4.4 with Theorem 4.2, we see that under the assumptions (H1)-(H4), together with the continuity of \(f\) and \(g\), the sequences \((x_n)\) and \((y_n)\) converge to some \(x^*\) and \(y^*\), and the point \((x^*,y^*)\) is a partial minimizer of \(f/g\).
We are now going to put our results to the test by providing a simple example.
Example 4.5.
Let \(X=Y=\mathbb{R}\), \(A=B=\mathbb{R}\) and \(f,g\colon\mathbb{R}^2\to\mathbb{R}\) given by
\[ f(x,y)=x^2+y^2+1, \qquad g(x,y)=(2+\sin x)+(2+\sin y). \]
One can easily check that \(f\) and \(g\) satisfy condition (b) from Proposition 3.1, hence the existence of the sequences \((x_n)\) and \((y_n)\) is ensured. We start with \((x_0,y_0)=(0,0)\), so that \(\lambda_0=1/4\). Assumptions (H1) and (H2) can be verified immediately, since \(f(x,y)\ge x^2\), \(f(x,y)\ge y^2\), and \(g'_x(x,y)=\cos x\), \(g'_y(x,y)=\cos y\) are bounded. After some simple computations, we have that
\[ \langle F_1(x,y)-F_1(\bar{x},y),\,x-\bar{x}\rangle=2(x-\bar{x})^2-\lambda^*(\cos x-\cos\bar{x})(x-\bar{x})\ge(2-\lambda^*)(x-\bar{x})^2, \]
where the constants must satisfy \(1/6\le\lambda_n\le\lambda_0=1/4\) and hence \(2-\lambda^*\ge 7/4\). Thus we have
\[ \langle F_1(x,y)-F_1(\bar{x},y),\,x-\bar{x}\rangle\ge\frac{7}{4}\,(x-\bar{x})^2 \quad\text{and}\quad \|F_1(x,y)-F_1(x,\bar{y})\|=0 \]
for all \(x,\bar{x},y,\bar{y}\in\mathbb{R}\), and similarly for \(F_2\), hence assumption (H3) is also valid with \(\alpha_1=\alpha_2=7/4\). Consequently we have that \(k=\beta_1\beta_2/(\alpha_1\alpha_2)\). If we choose \(\beta_1=1\), \(\beta_2=1\), then \(k=16/49<1\), so (H4) is also satisfied and according to Remark 4.2 there exists \((x^*,y^*)\in X\times Y\) which is a partial minimizer for \(f/g\). Notice that the functions \(f\) and \(g\) mentioned above have separate variables and, as a consequence, we obtain, via Proposition 2.2, that the point \((x^*,y^*)\) is also a global minimizer.
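As a possible numerical illustration (our own sketch, reusing the componentwise_dinkelbach function from Section 3 and assuming SciPy's scalar minimizer as the inner solver for the subproblems (5) and (6)):

```python
import math
from scipy.optimize import minimize_scalar

# Instance from Example 4.5 (separate variables, g > 0 and bounded).
f = lambda x, y: x**2 + y**2 + 1.0
g = lambda x, y: (2.0 + math.sin(x)) + (2.0 + math.sin(y))

# Inner solvers for (5) and (6): for 1/6 <= lam <= 1/4 the objectives are
# coercive with a unique critical point, so Brent's method is adequate.
argmin_x = lambda y, lam: minimize_scalar(lambda x: f(x, y) - lam * g(x, y)).x
argmin_y = lambda x, lam: minimize_scalar(lambda y: f(x, y) - lam * g(x, y)).x

x_star, y_star, lam_star = componentwise_dinkelbach(
    f, g, argmin_x, argmin_y, x0=0.0, y0=0.0)
print(x_star, y_star, lam_star)  # lam_star approximates min f/g (global, by Proposition 2.2)
```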
Acknowledgement
The authors are very grateful to the anonymous referees for their valuable comments that helped us improve the manuscript.
References
- [1] Shen K, Yu W. Fractional programming for communication systems–Part I: Power control and beamforming. IEEE Transactions on Signal Processing 2018;66(10):2616–2630.
- [2] Shen K, Yu W. Fractional programming for communication systems–Part II: Uplink scheduling via matching. IEEE Transactions on Signal Processing 2018;66(10):2631–2644.
- [3] Elbenani B, Ferland JA. Cell formation problem solved exactly with the Dinkelbach algorithm. Report CIRRELT-2012-07, University of Montreal. 2012.
- [4] Frenk JBG, Schaible S. Fractional programming. In: Hadjisavvas N, Komlósi S, Schaible S, editors. Handbook of generalized convexity and generalized monotonicity. New York (NY): Springer; 2005. p. 335–386.
- [5] Göpfert A, Riahi H, Tammer C, et al. Variational methods in partially ordered spaces. New York (NY): Springer; 2006.
- [6] Schaible S. A survey of fractional programming. In: Schaible S, Ziemba WT, editors. Generalized concavity in optimization and economics. New York (NY): Academic Press; 1981. p. 417-440.
- [7] Schaible S. Bibliography in fractional programming. Math. Methods Oper. Res. 1982;26:211–241.
- [8] Schaible S, Shi J. Fractional programming: The sum-of-ratios case. Optim. Methods Softw. 2003;18:219–229.
- [9] Precup R. Linear and semilinear partial differential equations. Berlin (DE): De Gruyter; 2013.
- [10] Stancu-Minasian IM. Fractional programming. Theory, methods and applications. Dordrecht (NL): Kluwer; 1997.
- [11] Stancu-Minasian IM, Tigan S. Continuous time linear-fractional programming. The minimum-risk approach. RAIRO Oper. Res. 2000;34:397–409.
- [12] Dinkelbach W. On nonlinear fractional programming. Manage. Sci. 1967;13:492–498.
- [13] Crouzeix JP, Ferland JA. Algorithms for generalized fractional programming. Math. Program. 1991;52:191–207.
- [14] Crouzeix JP, Ferland JA, Nguyen VH. Revisiting Dinkelbach-type algorithms for generalized fractional programs. OPSEARCH. 2008;45:97–110.
- [15] Orzan A, Precup R. Dinkelbach type approximation algorithms for nonlinear fractional optimization problems. Numerical Functional Analysis and Optimization. 2023; 44(9):954–969. DOI: 10.1080/01630563.2023.2217893
- [16] Ródenas RG, López ML, Verastegui D. Extensions of Dinkelbach’s algorithm for solving non-linear fractional programming problems. Top 1999;7:33–70.
- [17] Shi J. A combined algorithm for fractional programming. Ann. Oper. Res. 2001;103:135–147.
- [18] Tammer K, Tammer C, Ohlendorf E. Multicriterial fractional optimization. Berlin (DE): Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät II, Institut für Mathematik; 2005.
- [19] Barbu V, Precupanu T. Convexity and optimization in Banach spaces. Alphen aan de Rijn (NL): Sijthoff and Noordhoff; 1978.
- [20] Cambini A, Martein L. Generalized convexity and optimization: Theory and applications. Berlin (DE): Springer-Verlag; 2009.
- [21] Boţ RI, Csetnek ER. Proximal-gradient algorithms for fractional programming. Optimization. 2017;66(8):1383–1396.
- [22] Boţ RI, Dao MN, Li G. Extrapolated proximal subgradient algorithms for nonconvex and nonsmooth fractional programs. Mathematics of Operations Research. 2021;47(3):2415–2443.
- [23] Boyd S, Parikh N, Chu E, Peleato B, Eckstein J. Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 2011; 3(1):1–122.
- [24] Geissler B, Morsi A, Schewe L, et al. Penalty alternating direction methods for mixed-integer optimization: A new view on feasibility pumps. SIAM J. Optim. 2017;27(3):1611–1636.
- [25] Goldstein T, O’Donoghue B, Setzer S, Baraniuk R. Fast alternating direction optimization methods. SIAM J. Imaging Sci. 2014;7(3):1588–1623.
- [26] Kleinert T, Schmidt M. Computing feasible points of bilevel problems with a penalty alternating direction method. INFORMS J. Comput. 2021;33(1):198–215.
- [27] Hiriart-Urruty JB. Generalized differentiability / duality and optimization for problems dealing with differences of convex functions. In: Ponstein J, editors. Convexity and duality in optimization. Lecture notes in economics and mathematical systems, vol 256. Berlin, Heidelberg (DE): Springer; 1985. p. 37–70.
- [28] Le Thi HA, Pham DT. The DC (difference of convex functions) programming and DCA revisited with DC models of real world nonconvex optimization problems. Annals of Operations Research. 2005;133:23–46.
- [29] Le Thi HA, Pham DT. Open issues and recent advances in DC programming and DCA. Journal of Global Optimization. 2023. https://doi.org/10.1007/s10898-023-01272-1.
- [30] de Oliveira W. The ABC of DC programming. Set-Valued and Variational Analysis. 2020;28:679–706.
- [31] Singer I. Duality for nonconvex approximation and optimization. New York (NY): Springer; 2006.
- [32] Toland JF. A duality principle for non-convex optimisation and the calculus of variations. Arch. Rational Mech. Anal. 1979;71:41–61.
- [33] Breckner WW, Kassay G. A systematization of convexity concepts for sets and functions. J. Convex Anal. 1997;4:109–127.
- [34] Zălinescu C. Convex analysis in general vector spaces. Singapore (SG): World Scientific Publishing; 2002.
- [35] Precup R. Methods in nonlinear integral equations. Dordrecht (NL): Springer; 2002.
- [36] Precup R. Nash-type equilibria and periodic solutions to nonvariational systems. Adv. Nonlinear Anal. 2014;3:197–207.