Abstract
In this paper we establish some approximation versions of the classical Dinkelbach algorithm for nonlinear fractional optimization problems in Banach spaces. We start by examining what happens if, at any step of the algorithm, the point intended to be a minimizer can only be determined with a given error. Next we assume that the step error tends to zero as the algorithm advances. The last version of the algorithm we present makes use of Ekeland’s variational principle for generating the sequence of minimizer-like points. In the final part of the article we give some results that yield a Palais-Smale type compactness condition guaranteeing the convergence of our Dinkelbach-Ekeland algorithm.
Authors
Alexandru Orzan
Faculty of Mathematics and Computer Science, Babes-Bolyai University, Cluj-Napoca, Romania;
Department of Mathematics, Technical University of Cluj-Napoca, Cluj-Napoca, Romania
Radu Precup
Faculty of Mathematics and Computer Science and Institute of Advanced Studies in Science and Technology, Babes–Bolyai University, Cluj-Napoca, Romania;
Tiberiu Popoviciu Institute of Numerical Analysis, Romanian Academy, Cluj-Napoca, Romania
Keywords
Dinkelbach algorithm; Ekeland principle; fractional optimization; Palais-Smale condition.
Paper coordinates
A. Orzan, R. Precup, Dinkelbach type approximation algorithms for nonlinear fractional optimization problems, Numerical Functional Analysis and Optimization, 44 (2023) no. 9, pp. 954–969. https://doi.org/10.1080/01630563.2023.2217893
About this paper
Journal
Numerical Functional Analysis and Optimization
Publisher Name
Taylor and Francis Ltd.
Paper (preprint) in HTML form
Dinkelbach type approximation algorithms for nonlinear fractional optimization problems
Abstract.
In this paper we establish some approximation versions of the classical Dinkelbach algorithm for nonlinear fractional optimization problems in Banach spaces. We start by examining what happens if, at any step of the algorithm, the point intended to be a minimizer can only be determined with a given error. Next we assume that the step error tends to zero as the algorithm advances. The last version of the algorithm we present makes use of Ekeland’s variational principle for generating the sequence of minimizer-like points. In the final part of the article we give some results that yield a Palais-Smale type compactness condition guaranteeing the convergence of our Dinkelbach-Ekeland algorithm.
Key words and phrases:
Fractional optimization, Dinkelbach algorithm, Ekeland principle, Palais-Smale condition
1991 Mathematics Subject Classification:
65K10, 49M37, 90C30
1. Introduction
Fractional optimization problems arise in the modeling of many real-world processes where the objective is to optimize a certain indicator expressed as a ratio. Examples can be given from financial and corporate planning, production planning, health care and hospital planning, social project development, etc. (see, e.g., [1, 4, 9, 19] and the references therein). Examples of fractional optimization problems can also be identified in mathematics. For instance, the problem of eigenvalues and eigenvectors of a positive-definite self-adjoint operator $A$ reduces to the minimization of the Rayleigh quotient
$$\frac{(Ax,x)}{\left\|x\right\|^{2}}.$$
The minimal value is the first eigenvalue and the minimum point is a corresponding eigenvector (see, e.g., [12, Section 3.11]). Other examples come from graph theory and game theory (see [19]). For more research papers on this subject, we refer the reader to [5, 10, 15, 16] and also to some more recent work in this field [2, 3, 6, 14, 17, 18, 21].
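As a quick numerical illustration of this classical fact (our sketch, with an arbitrarily chosen symmetric positive definite matrix standing in for the operator), one may compare a crude minimization of the Rayleigh quotient with the smallest eigenvalue computed directly:

```python
import numpy as np

# Sketch: the first eigenvalue of a symmetric positive definite matrix A is
# the minimum of the Rayleigh quotient (Au, u)/|u|^2 over nonzero u.
A = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])          # arbitrary SPD example

rng = np.random.default_rng(0)
U = rng.normal(size=(200000, 3))                # random directions...
U /= np.linalg.norm(U, axis=1, keepdims=True)   # ...normalized to |u| = 1
rayleigh = np.einsum('ij,jk,ik->i', U, A, U)    # (Au, u) for each unit u

print(rayleigh.min())               # crude minimum of the Rayleigh quotient
print(np.linalg.eigvalsh(A)[0])     # first eigenvalue: 2 - sqrt(2) ≈ 0.5858
```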
The general form of a fractional optimization problem reads as follows:
$$\min_{x\in X}\frac{f(x)}{g(x)},$$
where $f$ and $g$ are two given functionals defined on a set $X$ and $g(x)>0$ for all $x\in X.$ Solving the problem means to obtain a pair $(\lambda^{*},x^{*}),$ where $\lambda^{*}$ is the minimal value of the ratio $f/g$ and $x^{*}$ is a minimum point, i.e.,
$$\lambda^{*}=\frac{f(x^{*})}{g(x^{*})}=\min_{x\in X}\frac{f(x)}{g(x)}.$$
Besides the direct minimization methods, there are known some specific techniques which reduce the problem to an optimization one concerning a non-ratio functional. Such a technique was introduced by Dinkelbach [7] and is based on the parametric optimization problem
$$\min_{x\in X}\left(f(x)-\lambda g(x)\right).$$
It consists in generating a sequence $(\lambda_{n})$ of values of the parameter $\lambda$ which ultimately gives the minimum of the ratio $f/g,$ and an accompanying sequence $(x_{n})$ which in the limit gives a desired minimum point. At each step $n,$ once $\lambda_{n}$ has been found, it is assumed that the functional $f-\lambda_{n}g$ reaches its infimum at some point $x_{n}.$ This is the aspect that limits the applicability of the method, otherwise useful under general conditions on the nonlinear functionals $f$ and $g.$
This paper focuses exactly on this issue. Our first aim is to adapt Dinkelbach’s algorithm to the case where at each step we only consider an $\varepsilon$-approximation of the minimum point. It is proved that the modified algorithm converges to an approximate solution with $\varepsilon$-dependent accuracy. Under some topological assumptions on $X$ and a compactness condition, it is shown that the ratio $f/g$ attains the limit value of the algorithm. Next we prove that if the error accepted at any step decreases to zero, then the algorithm converges to the infimum of $f/g,$ which is attained under the compactness assumption. Our third aim is to give sufficient conditions for the compactness assumption. Here Ekeland’s variational principle proves to be useful for a more special choice of the approximation points, which makes it possible to involve a Palais-Smale (PS) type compactness condition. Achieving the (PS) condition is also discussed.
2. Preliminaries
To inform the reader and for comparison with the results presented in the next section, we recall Dinkelbach’s algorithm and the main stages of the proof of its convergence.
Let $X$ be a nonempty arbitrary set and let $f,g\colon X\rightarrow\mathbb{R}$ be two functionals such that $f$ is bounded from below and
$$0<m\leq g(x)\leq M<\infty\quad\text{for all }x\in X.$$
The problem is to find the minimum on $X$ of the ratio $f/g$ and a point in $X$ at which it is reached.
Algorithm 2.1 (Dinkelbach’s algorithm).
- Step 0 (initialization): Take any point $x_{0}\in X,$ calculate $\lambda_{1}=\dfrac{f(x_{0})}{g(x_{0})}$ and set $n:=1.$
- Step $n$ (cycle step, for $n\geq1$): Find a point $x_{n}\in X$ such that
$$f(x_{n})-\lambda_{n}g(x_{n})=\inf_{x\in X}\left(f(x)-\lambda_{n}g(x)\right),$$
calculate
$$\lambda_{n+1}=\frac{f(x_{n})}{g(x_{n})}$$
and perform Step $n+1.$
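For a concrete illustration of the iteration, here is a minimal Python sketch of Algorithm 2.1; the toy functionals and the grid stand-in for the exact inner minimization are our choices, not part of the original text:

```python
import numpy as np

# Minimal sketch of Dinkelbach's algorithm (Algorithm 2.1) on a toy problem:
# minimize (x^2 + 1)/(x + 2) over X = [0, 4].
f = lambda x: x**2 + 1.0
g = lambda x: x + 2.0               # satisfies 0 < m <= g <= M on X

X = np.linspace(0.0, 4.0, 200001)   # fine grid standing in for the set X

x_n = 4.0                           # Step 0: any starting point x0 in X
lam = f(x_n) / g(x_n)               # lambda_1 = f(x0)/g(x0)
for n in range(50):
    # Step n: exact inner minimization of f - lambda_n*g (here by grid search)
    x_n = X[np.argmin(f(X) - lam * g(X))]
    lam_next = f(x_n) / g(x_n)      # lambda_{n+1}
    if lam - lam_next < 1e-12:      # (lambda_n) decreases; stop when stationary
        break
    lam = lam_next

print(lam, x_n)  # approx. 0.4721 = 2*sqrt(5) - 4 = min f/g, at x* = sqrt(5) - 2
```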
Theorem 2.2.
Dinkelbach’s algorithm is convergent and if $\lambda^{*}=\lim_{n\rightarrow\infty}\lambda_{n}$ and $x^{*}\in X$ is such that
$$f(x^{*})-\lambda^{*}g(x^{*})=\inf_{x\in X}\left(f(x)-\lambda^{*}g(x)\right),$$
then
$$\lambda^{*}=\frac{f(x^{*})}{g(x^{*})}=\min_{x\in X}\frac{f(x)}{g(x)}.\tag{2.1}$$
Proof.
(a) The sequence $(\lambda_{n})$ is decreasing. Indeed, the inequality $\lambda_{n+1}\leq\lambda_{n}$ is equivalent to $f(x_{n})-\lambda_{n}g(x_{n})\leq0,$ which is true since
$$f(x_{n})-\lambda_{n}g(x_{n})=\inf_{x\in X}\left(f(x)-\lambda_{n}g(x)\right)\leq f(x_{n-1})-\lambda_{n}g(x_{n-1})=0.$$
In addition, the boundedness conditions on $f$ and $g$ guarantee that $(\lambda_{n})$ is bounded. Hence it has a finite limit $\lambda^{*}.$
(b) Since $(\lambda_{n})$ is decreasing and $g$ is positive, it follows that the sequence $\mu_{n}:=\inf_{x\in X}\left(f(x)-\lambda_{n}g(x)\right)$ is increasing. In addition, it is bounded from above by $0,$ as follows from
$$\mu_{n}=f(x_{n})-\lambda_{n}g(x_{n})\leq f(x_{n})-\lambda_{n+1}g(x_{n})=0.$$
Hence $(\mu_{n})$ has a limit. Next, from
$$\mu_{n}=f(x_{n})-\lambda_{n}g(x_{n})=\left(\lambda_{n+1}-\lambda_{n}\right)g(x_{n})\quad\text{and}\quad m\leq g(x_{n})\leq M,$$
passing to the limit gives $\mu_{n}\rightarrow0.$
(c) One has $\inf_{x\in X}\left(f(x)-\lambda^{*}g(x)\right)=0.$ This follows from
$$\mu_{n}\leq\inf_{x\in X}\left(f(x)-\lambda^{*}g(x)\right)\leq f(x_{n})-\lambda^{*}g(x_{n})=\left(\lambda_{n+1}-\lambda^{*}\right)g(x_{n})\leq\left(\lambda_{n+1}-\lambda^{*}\right)M\rightarrow0.$$
(d) Next it is shown that $\lambda^{*}=\inf_{x\in X}f(x)/g(x).$ From (c), one has $\lambda^{*}\leq\inf_{x\in X}f(x)/g(x).$ Assume however that the strict inequality holds. Then $\lambda^{*}+\sigma\leq\inf_{x\in X}f(x)/g(x)$ for some $\sigma>0,$ and consequently there exists an index $n_{0}$ with $\lambda_{n}\leq\lambda^{*}+\sigma/2$ for all $n\geq n_{0}.$ One has
$$\mu_{n}=\inf_{x\in X}\left(f(x)-\lambda_{n}g(x)\right)\geq\inf_{x\in X}\left(\lambda^{*}+\sigma-\lambda_{n}\right)g(x)\geq\frac{\sigma}{2}\,m>0\quad(n\geq n_{0}),$$
whence passing to the limit, using the conclusion in (b), gives the false result $0\geq\sigma m/2>0.$ Therefore the equality holds.
(e) Now (c) and (d) lead to (2.1): by the hypothesis on $x^{*}$ and (c), $f(x^{*})=\lambda^{*}g(x^{*}),$ that is $f(x^{*})/g(x^{*})=\lambda^{*},$ while (d) shows that this common value equals $\min_{x\in X}f(x)/g(x).$
Notice that the above algorithm and theorem do not involve any structure of the set $X.$ However, the general version of Dinkelbach’s algorithm is based on the assumption that the minimization problem for $f-\lambda g$ has a solution for every value $\lambda$ of the parameter. In this way the minimization problem is transferred from the ratio $f/g$ to the non-ratio functionals $f-\lambda g,$ for which it is assumed that there are simpler methods of solving, as is the case where $f$ and $g$ are affine or convex/concave functions.
We conclude this Preliminaries section by recalling the Ekeland variational principle and some of its consequences (see, e.g., [8, 11]), which, as stated in the Introduction, will prove to be of great importance for our work. We also recall the Palais-Smale (compactness) condition, denoted for simplicity by (PS).
Theorem 2.3 (Ekeland).
Let $(X,d)$ be a complete metric space and let $F\colon X\rightarrow\mathbb{R}\cup\{+\infty\}$ be a lower semicontinuous function bounded from below. Then, given $\varepsilon>0$ and $x_{0}\in X$ with $F(x_{0})\leq\inf_{X}F+\varepsilon,$ there exists a point $x_{\varepsilon}\in X$ such that
$$F(x_{\varepsilon})\leq F(x_{0})$$
and
$$F(x)>F(x_{\varepsilon})-\varepsilon\,d(x,x_{\varepsilon})\quad\text{for all }x\in X,\ x\neq x_{\varepsilon}.$$
Corollary 2.4.
Let $X$ be a Banach space and $F\colon X\rightarrow\mathbb{R}$ a $C^{1}$ functional bounded from below. Then for every $\varepsilon>0$ there exists an element $x_{\varepsilon}\in X$ such that
$$F(x_{\varepsilon})\leq\inf_{X}F+\varepsilon\quad\text{and}\quad\left\|F^{\prime}(x_{\varepsilon})\right\|\leq\varepsilon.$$
A $C^{1}$ functional $F$ defined on a Banach space $X$ is said to satisfy the (PS) condition if any sequence $(x_{n})$ with
$$\left(F(x_{n})\right)\ \text{convergent and}\ F^{\prime}(x_{n})\rightarrow0\ \text{as}\ n\rightarrow\infty$$
has a convergent subsequence.
Theorem 2.5.
Let $X$ be a Banach space and $F\colon X\rightarrow\mathbb{R}$ be a $C^{1}$ functional bounded from below that satisfies the (PS) condition. Then there exists a point $x^{*}\in X$ with
$$F(x^{*})=\inf_{X}F\quad\text{and}\quad F^{\prime}(x^{*})=0.$$
3. Approximation Dinkelbach algorithms
Dinkelbach’s algorithm requires that at each step an exact minimum point can be obtained. However, this requirement is almost impossible to fulfill in practice. In what follows we investigate the impact of approximation errors on the iterative algorithm, which follows similar steps to those of Dinkelbach’s procedure.
3.1. Dinkelbach algorithm with a fixed error
Let the conditions of Dinkelbach’s algorithm hold and let $\varepsilon>0$ be fixed. Assume that at any step of the algorithm, the point desired to be a minimum point can only be determined with an error of at most $\varepsilon.$ Thus Dinkelbach’s algorithm is modified as follows:
Algorithm 3.1 (Dinkelbach algorithm with a fixed error).
- Step 0 (initialization): Take any point $x_{0}\in X,$ calculate $\lambda_{1}=\dfrac{f(x_{0})}{g(x_{0})}$ and set $n:=1.$
- Step $n$ (cycle step, for $n\geq1$): Find a point $x_{n}\in X$ such that
$$f(x_{n})-\lambda_{n}g(x_{n})\leq\min\left\{0,\ \inf_{x\in X}\left(f(x)-\lambda_{n}g(x)\right)+\varepsilon\right\},\tag{3.1}$$
calculate
$$\lambda_{n+1}=\frac{f(x_{n})}{g(x_{n})}$$
and perform Step $n+1.$
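The following sketch (ours; the random search routine is only a hypothetical stand-in for any procedure producing a point satisfying (3.1)) illustrates the fixed-error iteration and the $\varepsilon/m$-accuracy predicted by Theorem 3.2 below:

```python
import numpy as np

# Sketch of the fixed-error variant (Algorithm 3.1) on the same toy problem:
# the inner step only asks for a point satisfying the relaxed condition (3.1).
f = lambda x: x**2 + 1.0
g = lambda x: x + 2.0
X = np.linspace(0.0, 4.0, 200001)
rng = np.random.default_rng(1)

def step_point(lam, eps):
    """Return any x in X with f(x) - lam*g(x) <= min{0, inf + eps}, i.e. (3.1).
    Picking at random among admissible points mimics an inexact solver."""
    h = f(X) - lam * g(X)
    bound = min(0.0, h.min() + eps)
    return rng.choice(X[h <= bound])

eps, x_n = 1e-3, 4.0
lam = f(x_n) / g(x_n)
for n in range(100):
    x_n = step_point(lam, eps)
    lam_next = f(x_n) / g(x_n)      # decreasing, by (3.1)
    if lam - lam_next < 1e-12:
        break
    lam = lam_next

print(lam)  # within eps/m = 5e-4 of min f/g ≈ 0.4721, in line with Theorem 3.2
```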
Theorem 3.2.
If $\lambda=\lim_{n\rightarrow\infty}\lambda_{n}$ and $\bar{x}\in X$ is such that
$$f(\bar{x})-\lambda g(\bar{x})\leq\inf_{x\in X}\left(f(x)-\lambda g(x)\right)+\varepsilon,$$
then
$$\inf_{x\in X}\frac{f(x)}{g(x)}\leq\lambda\leq\inf_{x\in X}\frac{f(x)}{g(x)}+\frac{\varepsilon}{m}\tag{3.2}$$
and
$$\lambda-\frac{\varepsilon}{m}\leq\frac{f(\bar{x})}{g(\bar{x})}\leq\lambda+\frac{\varepsilon}{m}.\tag{3.3}$$
Proof.
From the definition, the sequence $(\lambda_{n})$ is decreasing and, since $f/g$ is bounded from below, it is also bounded. Hence it has a finite limit $\lambda.$ Since any $\lambda_{n}$ is a value at some point of the ratio $f/g,$ one has $\lambda_{n}\geq\inf_{x\in X}f(x)/g(x)$ and then
$$\lambda\geq\inf_{x\in X}\frac{f(x)}{g(x)}.\tag{3.4}$$
Now we prove that
$$\lambda\leq\inf_{x\in X}\frac{f(x)}{g(x)}+\frac{\varepsilon}{m}.\tag{3.5}$$
Assuming the contrary, we can find $\sigma>0$ and $\bar{y}\in X$ with
$$\frac{f(\bar{y})}{g(\bar{y})}<\lambda-\frac{\varepsilon}{m}-\sigma.$$
Hence
$$f(\bar{y})-\lambda_{n}g(\bar{y})\leq f(\bar{y})-\lambda g(\bar{y})<-\left(\frac{\varepsilon}{m}+\sigma\right)g(\bar{y})\leq-\varepsilon-\sigma m.$$
Here we used $\lambda_{n}\geq\lambda$ and $g(\bar{y})\geq m.$ Consequently, by (3.1),
$$\left(\lambda_{n+1}-\lambda_{n}\right)g(x_{n})=f(x_{n})-\lambda_{n}g(x_{n})\leq f(\bar{y})-\lambda_{n}g(\bar{y})+\varepsilon<-\sigma m,$$
that is $\lambda_{n}-\lambda_{n+1}>\sigma m/M$ for every $n.$ Letting $n$ go to infinity gives $0\geq\sigma m/M,$ whence, since $\sigma m/M>0,$ we arrive at the false result $0>0.$ Thus (3.5) holds.
Therefore (3.4) and (3.5) show that (3.2) is true.
Clearly the second inequality in (3.2) implies the first inequality in (3.3). To prove the second inequality of (3.3), we first show that
$$\inf_{x\in X}\left(f(x)-\lambda g(x)\right)\leq0.\tag{3.6}$$
Two cases are possible:
Case 1: There is an index $n_{0}$ such that for every $n\geq n_{0}$ one has $\lambda_{n+1}=\lambda_{n}.$ Let $n_{0}$ be the smallest index having this property. Clearly one has $\lambda_{n}=\lambda$ for all $n\geq n_{0},$ whence
$$f(x_{n})-\lambda g(x_{n})=\left(\lambda_{n+1}-\lambda\right)g(x_{n})=0\quad(n\geq n_{0}).$$
Hence (3.6) holds, as wished.
Case 2, contrary to Case 1: For every index $n$ there exists $k\geq n$ with $\lambda_{k+1}<\lambda_{k}.$ Thus, in this case, we have a subsequence $(x_{n_{k}})$ of $(x_{n})$ with
$$\frac{f(x_{n_{k}})}{g(x_{n_{k}})}=\lambda_{n_{k}+1}<\lambda_{n_{k}}\tag{3.7}$$
for all $k.$ Then
$$\inf_{x\in X}\left(f(x)-\lambda g(x)\right)\leq f(x_{n_{k}})-\lambda g(x_{n_{k}})=\left(\lambda_{n_{k}+1}-\lambda\right)g(x_{n_{k}})\leq\left(\lambda_{n_{k}+1}-\lambda\right)M,$$
and letting $k\rightarrow\infty$ again gives (3.6).
Finally, by the hypothesis on $\bar{x}$ and (3.6),
$$f(\bar{x})-\lambda g(\bar{x})\leq\inf_{x\in X}\left(f(x)-\lambda g(x)\right)+\varepsilon\leq\varepsilon,$$
whence $\dfrac{f(\bar{x})}{g(\bar{x})}\leq\lambda+\dfrac{\varepsilon}{g(\bar{x})}\leq\lambda+\dfrac{\varepsilon}{m},$ which is the second inequality in (3.3).
According to Theorem 3.2, $\lambda$ differs from $\inf_{x\in X}f(x)/g(x)$ by at most $\varepsilon/m,$ and $\bar{x}$ is a $2\varepsilon/m$-approximation minimum point of $f/g.$
Next we try to see how the ratio $f/g$ reaches the value $\lambda.$ To this aim we need a topological structure on $X,$ the continuity of $f$ and $g,$ and a compactness condition.
Theorem 3.3.
Assume in addition that $X$ is a metric space, $f$ and $g$ are continuous on $X$ and that the following compactness condition holds:
- (C): any sequence $(y_{k})$ of elements of $X$ for which the sequence $\left(f(y_{k})/g(y_{k})\right)$ is convergent has a subsequence convergent in $X.$
Then there exists a point $x^{*}\in X$ (either a term or an accumulation point of the sequence $(x_{n})$ generated by the approximation Dinkelbach algorithm) such that
$$\frac{f(x^{*})}{g(x^{*})}=\lambda.$$
Proof.
Refer to the cases mentioned in the proof of the previous theorem.
In Case 1, the statement is obvious since $f(x_{n_{0}})/g(x_{n_{0}})=\lambda_{n_{0}+1}=\lambda.$
In Case 2, since $\left(f(x_{n_{k}})/g(x_{n_{k}})\right)=\left(\lambda_{n_{k}+1}\right)$ is convergent, according to our compactness assumption (C), the sequence $(x_{n_{k}})$ has a subsequence that converges to a certain element $x^{*}\in X.$ Going to the limit with the corresponding subsequence in (3.7), we find that $f(x^{*})/g(x^{*})=\lambda.$
Remark 3.2.
It is clear that condition (C) is satisfied if $X$ is a compact metric space.
3.2. Dinkelbach algorithm with errors decreasing to zero
Let $(\varepsilon_{n})$ be a decreasing to zero sequence of positive numbers, where for each $n,$ $\varepsilon_{n}$ is the admissible error at step $n$ of the approximation Dinkelbach type algorithm from above. Hence instead of (3.1) we now consider
$$f(x_{n})-\lambda_{n}g(x_{n})\leq\min\left\{0,\ \inf_{x\in X}\left(f(x)-\lambda_{n}g(x)\right)+\varepsilon_{n}\right\}.$$
In this case, $\lambda$ equals the infimum of $f/g,$ as shown by the next result.
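Continuing the previous sketch (it reuses f, g, X and step_point from there), the only change is a per-step error schedule $\varepsilon_{n}\downarrow0$:

```python
# Continuation of the previous sketch: per-step errors eps_n decreasing to zero.
x_n = 4.0
lam = f(x_n) / g(x_n)
for n in range(60):
    x_n = step_point(lam, eps=2.0**(-n))  # (3.1) with eps replaced by eps_n
    lam = f(x_n) / g(x_n)                 # still decreasing

print(lam)  # now approaches min f/g = 2*sqrt(5) - 4 itself (up to grid resolution)
```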
Theorem 3.4.
If $\lambda=\lim_{n\rightarrow\infty}\lambda_{n}$ and $(x_{n})$ is the sequence generated by the algorithm, then
$$\lambda=\inf_{x\in X}\frac{f(x)}{g(x)}\tag{3.8}$$
and
$$\lim_{n\rightarrow\infty}\frac{f(x_{n})}{g(x_{n})}=\lambda.$$
Proof.
The argument is the same as for Theorem 3.2: inequality (3.4) gives $\lambda\geq\inf_{x\in X}f(x)/g(x),$ while the reasoning that proved (3.5), applied at step $n$ with $\varepsilon$ replaced by $\varepsilon_{n}$ and letting $n\rightarrow\infty,$ yields $\lambda\leq\inf_{x\in X}f(x)/g(x).$ Finally, $f(x_{n})/g(x_{n})=\lambda_{n+1}\rightarrow\lambda.$
We note that the result in Theorem 3.3 remains valid in this context as well. More exactly we have
Theorem 3.5.
Under the assumptions of Theorem 3.3, if $(\lambda_{n})$ and $(x_{n})$ are the sequences given by the Dinkelbach algorithm with errors decreasing to zero and $\lambda=\lim_{n\rightarrow\infty}\lambda_{n},$ then there exists a point $x^{*}\in X$ (either a term or an accumulation point of the sequence $(x_{n})$) such that
$$\frac{f(x^{*})}{g(x^{*})}=\lambda=\min_{x\in X}\frac{f(x)}{g(x)}.\tag{3.9}$$
In addition,
$$\lim_{n\rightarrow\infty}\inf_{x\in X}\left(f(x)-\lambda_{n}g(x)\right)=0.\tag{3.10}$$
Proof.
The first part follows from Theorems 3.3 and 3.4. To prove (3.10), denote
$$\mu_{n}:=\inf_{x\in X}\left(f(x)-\lambda_{n}g(x)\right).$$
Since the sequence $(\lambda_{n})$ is decreasing, one has that $(\mu_{n})$ is increasing. In addition, from
$$\mu_{n}\leq f(x_{n-1})-\lambda_{n}g(x_{n-1})=0$$
we have $\mu_{n}\leq0,$ hence $(\mu_{n})$ is bounded from above. Consequently, the sequence $(\mu_{n})$ is convergent. Let $\mu:=\lim_{n\rightarrow\infty}\mu_{n}.$ Obviously $\mu\leq0.$ Suppose that $\mu<0.$ Then there exists $\sigma>0$ such that $\mu_{n}\leq-\sigma$ for all $n.$ Since $g$ is bounded between $m$ and $M$ and $(\lambda_{n})$ is convergent to $\lambda,$ it follows that there exists $n_{0}$ such that for any $n\geq n_{0}$ we have
$$\left(\lambda_{n}-\lambda\right)M<\frac{\sigma}{2}.\tag{3.11}$$
Let $n\geq n_{0}$ be an arbitrary number. Since $\mu_{n}\leq-\sigma,$ we get the existence of a point $y_{n}\in X$ with
$$f(y_{n})-\lambda_{n}g(y_{n})<-\frac{3\sigma}{4}.$$
The last relation can be written as
$$f(y_{n})-\lambda g(y_{n})<\left(\lambda_{n}-\lambda\right)g(y_{n})-\frac{3\sigma}{4}.$$
Now taking into account relation (3.11), we obtain
$$f(y_{n})-\lambda g(y_{n})<\frac{\sigma}{2}-\frac{3\sigma}{4}=-\frac{\sigma}{4},$$
which implies
$$\frac{f(y_{n})}{g(y_{n})}<\lambda-\frac{\sigma}{4M},$$
whence $\inf_{x\in X}f(x)/g(x)<\lambda,$ a contradiction with (3.8). Thus $\mu=0$ and the conclusion follows.
Our next goal is to fulfill assumption (C) for noncompact sets $X.$ We get closer to this goal if we have additional properties of the sequence $(x_{n}).$ Such properties are guaranteed by Ekeland’s principle, which, under differentiability conditions on $f$ and $g,$ makes it possible to invoke the Palais-Smale condition.
4. Dinkelbach-Ekeland approximation algorithm
4.1. Dinkelbach-Ekeland algorithm
We are now going to present a modified version of Dinkelbach’s algorithm. More precisely, while we generally operate in the same way as above, we also make use of Ekeland’s variational principle to generate the points of the sequence $(x_{n}).$ To this aim, let $X$ be a Banach space and $f,g\colon X\rightarrow\mathbb{R}$ two $C^{1}$ functionals such that $f$ is bounded from below and
$$0<m\leq g(x)\leq M<\infty\quad\text{for all }x\in X.$$
As above, we are interested in finding the minimum value on $X$ of the ratio $f/g$ and a point in $X$ at which it is attained.
Let $(\varepsilon_{n})$ be a decreasing to zero sequence of positive real numbers. The Dinkelbach-Ekeland algorithm is described below:
Algorithm 4.1 (Dinkelbach-Ekeland algorithm).
- Step 0 (initialization): Take any point $x_{0}\in X,$ calculate $\lambda_{1}=\dfrac{f(x_{0})}{g(x_{0})}$ and set $n:=1.$
- Step $n$ (cycle step, for $n\geq1$): By using Ekeland’s principle, more exactly Corollary 2.4 applied to the functional $f-\lambda_{n}g,$ we find a point $x_{n}\in X$ such that
$$f(x_{n})-\lambda_{n}g(x_{n})\leq\inf_{x\in X}\left(f(x)-\lambda_{n}g(x)\right)+\varepsilon_{n}\quad\text{and}\quad\left\|f^{\prime}(x_{n})-\lambda_{n}g^{\prime}(x_{n})\right\|\leq\varepsilon_{n},\tag{4.1}$$
then calculate
$$\lambda_{n+1}=\frac{f(x_{n})}{g(x_{n})}$$
and perform Step $n+1.$
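As an illustration (our toy smooth functionals on $X=\mathbb{R}$; gradient descent merely stands in for the point furnished by Corollary 2.4), the iteration can be sketched as follows:

```python
import numpy as np

# Sketch of the Dinkelbach-Ekeland iteration (Algorithm 4.1) on X = R, with
# illustrative functionals; gradient descent stands in for the point given by
# Corollary 2.4, i.e. it returns x_n with F_n(x_n) close to inf F_n and
# |F_n'(x_n)| <= eps_n, where F_n = f - lambda_n * g.
f  = lambda x: (x - 1.0)**2 + 1.0          # C^1, bounded from below, coercive
df = lambda x: 2.0 * (x - 1.0)
g  = lambda x: 2.0 + 1.0 / (1.0 + x**2)    # 2 < g <= 3 on R
dg = lambda x: -2.0 * x / (1.0 + x**2)**2

def ekeland_point(lam, eps, x):
    """Descend F = f - lam*g from x until |F'| <= eps (stand-in for (4.1))."""
    dF = lambda x: df(x) - lam * dg(x)
    while abs(dF(x)) > eps:
        x -= 0.2 * dF(x)                   # F is strongly convex for these lam
    return x

x_n = 0.0
lam = f(x_n) / g(x_n)                      # lambda_1 = 2/3
for n in range(1, 30):
    x_n = ekeland_point(lam, eps=2.0**(-n), x=x_n)
    lam = f(x_n) / g(x_n)

print(lam, x_n)  # lambda ≈ 0.3957 ≈ min f/g, attained near x* ≈ 0.89
```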
Theorem 4.2.
Under the above assumptions, if $(\lambda_{n})$ and $(x_{n})$ are the sequences given by the algorithm and $\lambda=\lim_{n\rightarrow\infty}\lambda_{n},$ then
$$f(x_{n})-\lambda g(x_{n})\rightarrow0\quad\text{and}\quad f^{\prime}(x_{n})-\lambda g^{\prime}(x_{n})\rightarrow0\quad\text{as }n\rightarrow\infty.\tag{4.2}$$
If in addition the functional $f-\lambda g$ satisfies the (PS) condition, then there exists a point $x^{*}\in X$ (either a term or an accumulation point of the sequence $(x_{n})$) such that
$$\frac{f(x^{*})}{g(x^{*})}=\lambda=\min_{x\in X}\frac{f(x)}{g(x)}.$$
Proof.
By using the notations from Theorem 3.5, relation (4.1) implies
$$\mu_{n}\leq f(x_{n})-\lambda_{n}g(x_{n})\leq\mu_{n}+\varepsilon_{n},$$
which, since $\varepsilon_{n}\rightarrow0,$ leads to $f(x_{n})-\lambda_{n}g(x_{n})-\mu_{n}\rightarrow0,$ and by (3.10) we obtain
$$f(x_{n})-\lambda_{n}g(x_{n})\rightarrow0.$$
Furthermore, we have
$$f(x_{n})-\lambda g(x_{n})=f(x_{n})-\lambda_{n}g(x_{n})+\left(\lambda_{n}-\lambda\right)g(x_{n})$$
and since $\left(g(x_{n})\right)$ is bounded and $\lambda_{n}\rightarrow\lambda,$ we obtain $\left(\lambda_{n}-\lambda\right)g(x_{n})\rightarrow0,$ consequently $f(x_{n})-\lambda g(x_{n})\rightarrow0.$ Similarly, from $\left\|f^{\prime}(x_{n})-\lambda_{n}g^{\prime}(x_{n})\right\|\leq\varepsilon_{n}$ and the boundedness of $\left(g^{\prime}(x_{n})\right)$ we obtain $f^{\prime}(x_{n})-\lambda g^{\prime}(x_{n})\rightarrow0.$ Thus (4.2) holds.
Next, assume that the (PS) condition holds for the functional $f-\lambda g.$
Case 1: $(\lambda_{n})$ has a subsequence $(\lambda_{n_{k}})$ such that $\lambda_{n_{k}+1}<\lambda_{n_{k}}$ for all $k.$ Then $(x_{n_{k}})$ satisfies relations (4.2) and, since the functional $f-\lambda g$ satisfies the (PS) condition, it follows that there exists a subsequence of $(x_{n_{k}})$ convergent to a point $x^{*}$ with
$$f(x^{*})-\lambda g(x^{*})=0,$$
and, by using Theorem 3.4, obtained by letting the errors go to zero in relation (3.2), we have $\lambda=\inf_{x\in X}f(x)/g(x),$ which, since $f(x^{*})/g(x^{*})=\lambda,$ leads to the conclusion.
Case 2: There exists an index $n_{0}$ such that $\lambda_{n+1}=\lambda_{n}$ for all $n\geq n_{0}.$ Then $f(x_{n_{0}})/g(x_{n_{0}})=\lambda$ and by Theorem 3.4 we have the same conclusion.
4.2. Sufficient conditions for the (PS) requirement
Here we provide some sufficient conditions for a functional $F$ to satisfy the (PS) condition. They involve properties of the operator $I-F^{\prime}$ that are well-known in nonlinear analysis.
Lemma 4.3.
Let $X$ be a Hilbert space and $F\colon X\rightarrow\mathbb{R}$ a $C^{1}$ functional (here $F^{\prime}$ is identified with the gradient of $F$ via Riesz’ representation theorem) having the following two properties:
- (i): any sequence $(x_{n})$ for which $\left(F(x_{n})\right)$ converges is bounded;
- (ii): the operator $N:=I-F^{\prime}$ is completely continuous.
Then $F$ satisfies the (PS) condition.
Proof.
Let $(x_{n})$ be any sequence of points from $X$ such that
$$\left(F(x_{n})\right)\ \text{is convergent and}\ F^{\prime}(x_{n})\rightarrow0.\tag{4.3}$$
We shall prove that $(x_{n})$ has a convergent subsequence. From (4.3) and assumption (i), it follows that the sequence $(x_{n})$ is bounded. Also from (4.3), we deduce that the set $\{F^{\prime}(x_{n}):n\geq1\}$ is relatively compact. Next, from (ii), the set $\{N(x_{n}):n\geq1\}$ is relatively compact, and from the equality $x_{n}=F^{\prime}(x_{n})+N(x_{n})$ it follows that the set $\{x_{n}:n\geq1\},$ as a sum of two relatively compact sets, is relatively compact too. Then the sequence $(x_{n})$ has a convergent subsequence.
A stronger condition on $F$ guarantees that any sequence satisfying (4.3) is entirely convergent (we say that $F$ satisfies strongly the (PS) condition).
Lemma 4.4.
Let $X$ be a Hilbert space and $F\colon X\rightarrow\mathbb{R}$ a $C^{1}$ functional such that
- (ii*): the operator $N:=I-F^{\prime}$ is a contraction on $X.$
Then $F$ satisfies strongly the (PS) condition.
Proof.
Let $(x_{n})$ be any sequence satisfying (4.3) and let $k<1$ be the contraction constant of $N.$ From $x_{n}=F^{\prime}(x_{n})+N(x_{n})$ we obtain
$$\left\|x_{n}-x_{p}\right\|\leq\left\|F^{\prime}(x_{n})\right\|+\left\|F^{\prime}(x_{p})\right\|+k\left\|x_{n}-x_{p}\right\|,$$
whence
$$\left\|x_{n}-x_{p}\right\|\leq\frac{1}{1-k}\left(\left\|F^{\prime}(x_{n})\right\|+\left\|F^{\prime}(x_{p})\right\|\right)\rightarrow0\quad\text{as }n,p\rightarrow\infty.$$
Hence $(x_{n})$ is a Cauchy sequence and thus convergent.
Coming back to functionals of the form $f-\lambda g,$ as in our Dinkelbach-Ekeland algorithm, we have the following result.
Proposition 4.5.
Let $X$ be a Hilbert space and $f,g\colon X\rightarrow\mathbb{R}$ be $C^{1}$ functionals such that there exist constants $m,M$ with $0<m\leq g(x)\leq M$ for all $x\in X.$ The following statements hold:
- (a): If $f$ is coercive and the operators $I-f^{\prime}$ and $g^{\prime}$ are completely continuous, then for each $\lambda\in\mathbb{R}$ the functional $f-\lambda g$ satisfies the (PS) condition.
- (b): If the operators $I-f^{\prime}$ and $g^{\prime}$ are $a$- respectively $b$-Lipschitz continuous, with $a,b>0,$ then the functional $f-\lambda g$ satisfies strongly the (PS) condition for every $\lambda\geq0$ with $a+\lambda b<1.$
Proof.
Let $\lambda$ be as in the statement and let $(x_{n})$ be any sequence satisfying (4.3) for $F:=f-\lambda g.$
(a) The coercivity of $f$ and the boundedness of $g$ guarantee that $f-\lambda g$ is coercive. Then, since the limit of $\left(F(x_{n})\right)$ is finite, one has that $(x_{n})$ is bounded. Thus $F$ satisfies condition (i) from Lemma 4.3. In addition, the operator $I-F^{\prime}=\left(I-f^{\prime}\right)+\lambda g^{\prime}$ is completely continuous as a sum of the completely continuous operators $I-f^{\prime}$ and $\lambda g^{\prime}.$ Thus $F$ also satisfies condition (ii) of Lemma 4.3. The conclusion now follows from Lemma 4.3.
(b) It is easily seen that the operator $I-F^{\prime}=\left(I-f^{\prime}\right)+\lambda g^{\prime}$ is a contraction if $a+\lambda b<1.$ Then Lemma 4.4 gives the result.
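A quick numerical check of statement (b), with illustrative functionals of our choosing whose operators $I-f^{\prime}$ and $g^{\prime}$ are Lipschitz by construction:

```python
import numpy as np

# Numerical check of Proposition 4.5 (b) with illustrative functionals on R^2:
#   f(x) = |x|^2/2 + a*cos(x1)  =>  I - f' is a-Lipschitz,
#   g(x) = 2 + b*cos(x2)        =>  g' is b-Lipschitz and 2 - b <= g <= 2 + b.
# Then N = I - (f - lam*g)' should be a contraction whenever a + lam*b < 1.
a, b, lam = 0.4, 0.3, 1.5                  # a + lam*b = 0.85 < 1

def N(x):
    Fp = x - np.array([a * np.sin(x[0]), 0.0]) \
           + lam * np.array([0.0, b * np.sin(x[1])])   # F' = f' - lam*g'
    return x - Fp

rng = np.random.default_rng(2)
ratios = []
for _ in range(10000):
    x, y = rng.normal(size=2), rng.normal(size=2)
    ratios.append(np.linalg.norm(N(x) - N(y)) / np.linalg.norm(x - y))

print(max(ratios))  # empirically < 1 (about 0.45), within the bound a + lam*b
```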
As regards the Dinkelbach-Ekeland algorithm we have the following final result on its convergence.
Theorem 4.6.
Let $X$ be a Hilbert space, let the conditions of Theorem 4.2 hold, let $(\lambda_{n})$ and $(x_{n})$ be the sequences given by Algorithm 4.1 and $\lambda=\lim_{n\rightarrow\infty}\lambda_{n}.$
- (a): If $f$ is coercive and the operators $I-f^{\prime}$ and $g^{\prime}$ are completely continuous, then there exists a point $x^{*}\in X$ (either a term or an accumulation point of the sequence $(x_{n})$) such that
$$\frac{f(x^{*})}{g(x^{*})}=\lambda=\min_{x\in X}\frac{f(x)}{g(x)}.\tag{4.5}$$
- (b): If the operators $I-f^{\prime}$ and $g^{\prime}$ are $a$- respectively $b$-Lipschitz continuous, with $f\geq0$ (so that $0\leq\lambda\leq\lambda_{1}$) and
$$a+\lambda_{1}b<1,\tag{4.6}$$
where $\lambda_{1}=f(x_{0})/g(x_{0}),$ then the sequence $(x_{n})$ converges to some $x^{*}$ and (4.5) holds.
Proof.
The conclusions follow by combining Theorem 4.2 with Proposition 4.5. In case (a), Proposition 4.5 (a) shows that $f-\lambda g$ satisfies the (PS) condition. In case (b), since $0\leq\lambda\leq\lambda_{1},$ condition (4.6) gives $a+\lambda b<1,$ so by Proposition 4.5 (b) the functional $f-\lambda g$ satisfies strongly the (PS) condition; hence the whole sequence $(x_{n}),$ which satisfies (4.2), is convergent, and (4.5) holds with $x^{*}=\lim_{n\rightarrow\infty}x_{n}.$
Acknowledgement. The authors would like to especially thank the reviewer for the suggestion of a possible continuation of the research and the related bibliographic indications.
References
- [1] Avriel, M., Diewert, W.E., Schaible, S., Zang, I. (1988). Generalized Concavity. New York, NY: Plenum Press.
- [2] Boţ, R.I., Csetnek, E.R. (2017). Proximal-gradient algorithms for fractional programming. Optimization 66:1383-1396.
- [3] Boţ, R.I., Dao, M.N., Li, G. (2022). Extrapolated proximal subgradient algorithms for nonconvex and nonsmooth fractional programs. Math. Oper. Res. 47:2415-2443.
- [4] Cambini, A., Martein, L. (2009). Generalized Convexity and Optimization: Theory and Applications. Berlin: Springer.
- [5] Crouzeix, J.P., Ferland, J.A. (1991). Algorithms for generalized fractional programming. Math. Program. 52: 191-207.
- [6] Crouzeix, J.P., Ferland, J.A., Nguyen, V.H. (2008). Revisiting Dinkelbach-type algorithms for generalized fractional programs. Operational Research Society of India 45: 96-110.
- [7] Dinkelbach, W. (1967). On nonlinear fractional programming. Management Sci. 13:492-498.
- [8] Ekeland, I. (1974). On the variational principle. J. Math. Anal. Appl. 47:324-353.
- [9] Elbenani, B., Ferland, J.A. (2012). Cell formation problem solved exactly with the Dinkelbach algorithm. CIRRELT-2012-07, University of Montreal.
- [10] Frenk, J.B.G., Schaible, S. (2005). Fractional programming. In: Hadjisavvas, N., Komlósi, S., Schaible, S., Eds. Handbook of Generalized Convexity and Generalized Monotonicity, Vol. 76, Berlin: Springer, pp. 335-386.
- [11] Frigon, M. (2011). On some generalizations of Ekeland’s principle and inward contractions in gauge spaces. J. Fixed Point Theory Appl. 10:279-298.
- [12] Precup, R. (2013). Linear and Semilinear Partial Differential Equations. Berlin: De Gruyter.
- [13] Precup, R. (2014). Nash-type equilibria and periodic solutions to nonvariational systems. Adv. Nonlinear Anal. 3:197-207.
- [14] Ródenas, R.G., López M.L., Verastegui, D. (1999). Extensions of Dinkelbach’s algorithm for solving non-linear fractional programming problems. Top 7:33-70.
- [15] Schaible, S. (1981). A survey of fractional programming. In: Schaible, S., Ziemba, W.T., Eds. Generalized Concavity in Optimization and Economics. New York: Academic Press, pp. 417-440.
- [16] Schaible, S. (1982). Bibliography in fractional programming. Math. Methods Oper. Res. 26:211-241.
- [17] Schaible, S., Shi, J. (2003). Fractional programming: the sum-of-ratios case. Optim. Methods Softw. 18:219-229.
- [18] Shi, J. (2001). A Combined algorithm for fractional programming. Ann. Oper. Res. 103:135-147.
- [19] Stancu-Minasian, I.M. (1997). Fractional Programming. Dordrecht: Kluwer.
- [20] Stancu-Minasian I.M., Tigan, S. (2000). Continuous time linear-fractional programming. The minimum-risk approach. RAIRO Oper. Res. 34:397-409.
- [21] Tammer, K., Tammer, Chr., Ohlendorf, E. (2005). Multicriterial Fractional Optimization, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät II, Institut für Mathematik, preprint.