Dinkelbach Type Approximation Algorithms for Nonlinear Fractional Optimization Problems

Abstract


In this paper we establish some approximation versions of the classical Dinkelbach algorithm for nonlinear fractional optimization problems in Banach spaces. We start by examining what occurs if, at any step of the algorithm, the point intended to be a minimizer can only be determined with a given error. Next we assume that the step error tends to zero as the algorithm advances. The last version of the algorithm we present makes use of Ekeland’s variational principle for generating the sequence of minimizer-like points. In the final part of the article we deliver some results in order to achieve a Palais-Smale type compactness condition that guarantees the convergence of our Dinkelbach-Ekeland algorithm.

Authors

Alexandru Orzan
Faculty of Mathematics and Computer Science, Babes-Bolyai University, Cluj-Napoca, Romania;
Department of Mathematics, Technical University of Cluj-Napoca, Cluj-Napoca, Romania

Radu Precup
Faculty of Mathematics and Computer Science and Institute of Advanced Studies in Science and Technology, Babes–Bolyai University, Cluj-Napoca, Romania;
Tiberiu Popoviciu Institute of Numerical Analysis, Romanian Academy, Cluj-Napoca, Romania

Keywords

Dinkelbach algorithm; Ekeland principle; fractional optimization; Palais-Smale condition.

Paper coordinates

A. Orzan, R. Precup, Dinkelbach type approximation algorithms for nonlinear fractional optimization problems, Numerical Functional Analysis and Optimization, 44 (2023) no. 9, pp. 954–969. https://doi.org/10.1080/01630563.2023.2217893


About this paper

Journal

Numerical Functional Analysis and Optimization

Publisher Name

Taylor and Francis Ltd.


Paper (preprint) in HTML form

Dinkelbach type approximation algorithms for nonlinear fractional optimization problems


Alexandru Orzan, Radu Precup

A. Orzan, Faculty of Mathematics and Computer Science, Babeş–Bolyai University, 400084 Cluj-Napoca, Romania & Department of Mathematics, Technical University of Cluj-Napoca, 400027 Cluj-Napoca, Romania; alexandru.orzan@ubbcluj.ro, alexandru.orzan@math.utcluj.ro

R. Precup, Faculty of Mathematics and Computer Science and Institute of Advanced Studies in Science and Technology, Babeş–Bolyai University, 400084 Cluj-Napoca, Romania & Tiberiu Popoviciu Institute of Numerical Analysis, Romanian Academy, P.O. Box 68-1, 400110 Cluj-Napoca, Romania; r.precup@math.ubbcluj.ro
Abstract.

In this paper we establish some approximation versions of the classical Dinkelbach algorithm for nonlinear fractional optimization problems in Banach spaces. We start by examining what occurs if, at any step of the algorithm, the point intended to be a minimizer can only be determined with a given error. Next we assume that the step error tends to zero as the algorithm advances. The last version of the algorithm we present makes use of Ekeland’s variational principle for generating the sequence of minimizer-like points. In the final part of the article we deliver some results in order to achieve a Palais-Smale type compactness condition that guarantees the convergence of our Dinkelbach-Ekeland algorithm.

Key words and phrases:
Fractional optimization, Dinkelbach algorithm, Ekeland principle, Palais-Smale condition
1991 Mathematics Subject Classification:
65K10, 49M37, 90C30

1. Introduction

Fractional optimization problems arise in the modeling of many real-world processes where the objective is to optimize a certain indicator expressed as a ratio. Examples can be given from financial and corporate planning, production planning, health care and hospital planning, social project development etc. (see, e.g., [1, 4, 9, 19] and the references therein). Examples of fractional optimization problems can also be identified in mathematics. For instance, the problem of eigenvalues and eigenvectors of a positive-definite self-adjoint operator L reduces to the minimization of the ratio

(Lx, x)/|x|^2, x ≠ 0.

The minimal value is the first eigenvalue and the minimum point is a corresponding eigenvector (see, e.g., [12, Section 3.11]). Other examples come from graph theory and game theory (see [19]). For more research papers on this subject, we refer the reader to the titles [5, 10, 15, 16] and also to some more recent works in this field [2, 3, 6, 14, 17, 18, 21].
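As a quick numerical check of this example (our own illustration, not part of the paper), one can verify for a random symmetric positive-definite matrix L that the smallest value of the Rayleigh quotient (Lx, x)/|x|^2 is the first eigenvalue, attained at a corresponding eigenvector:

```python
# Illustrative check (assumed setup): L is a random symmetric
# positive-definite matrix; np.linalg.eigh returns its eigenvalues in
# increasing order with orthonormal eigenvectors.
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
L = M @ M.T + 4.0 * np.eye(4)            # symmetric positive definite

def rayleigh(x):
    return (L @ x) @ x / (x @ x)         # (Lx, x) / |x|^2

w, V = np.linalg.eigh(L)
lam1, v1 = w[0], V[:, 0]                 # first eigenvalue / eigenvector

print(np.isclose(rayleigh(v1), lam1))    # True: the quotient at v1 is lam1
samples = rng.standard_normal((1000, 4))
print(all(rayleigh(x) >= lam1 - 1e-10 for x in samples))  # True: lam1 is minimal
```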

The general form of a fractional problem reads as follows

A(x)/B(x) → min, x ∈ D,

where A and B are two given functionals defined on a set D and B(x) ≠ 0 for all x ∈ D. Solving the problem means to obtain a pair (λ*, x*), where λ* is the minimal value of A/B and x* is a minimum point, i.e.,

λ* = A(x*)/B(x*) = min_D A/B.

Besides the direct minimization methods, some specific techniques are known which reduce the problem to an optimization one concerning a non-ratio functional. Such a technique was introduced by Dinkelbach [7] and is based on the parametric optimization problem

A(x) - λB(x) → min, x ∈ D.

It consists in generating a sequence (λ_k) of values of the parameter which ultimately gives the minimum of the ratio A/B, and an accompanying sequence (x_k) which in the limit gives a desired minimum point. At each step k, λ_{k-1} being found, it is assumed that the functional A - λ_{k-1}B attains its infimum at some point x_k. This is the aspect that limits the applicability of the method, which is otherwise useful under general conditions on the nonlinear functionals A and B.

This paper focuses exactly on this issue. Our first aim is to adapt Dinkelbach’s algorithm to the case where at each step, we only consider an ε-approximation of the minimum point. It is proved that the modified algorithm converges to an approximate solution (λ~, x~) with ε-dependent accuracy. Under some topological assumptions on D, A, B and a compactness condition, it is shown that the functional A/B attains the value λ~. Next we prove that if the error ε_k accepted at any step decreases to zero, then the algorithm converges to (λ*, x~), where λ* is the infimum of A/B, which is attained under the compactness assumption. Our third aim is to give sufficient conditions for the compactness assumption. Here Ekeland’s variational principle proves to be useful for a more special choice of the points x_k, which makes it possible to involve a Palais-Smale (PS) type compactness condition. Achieving the (PS) condition is also discussed.

2. Preliminaries

For informing the reader and for comparison with the results that will be presented in the next section, we recall Dinkelbach’s algorithm and the main stages of the proof of its convergence.

Let D be a nonempty arbitrary set, A : D → ℝ bounded from below and B : D → ℝ with

0 < c ≤ B(x) ≤ C for all x ∈ D.

The problem is to find λ*, the minimum on D of the ratio A/B, and x*, a point in D at which λ* is reached.

Algorithm 2.1 (Dinkelbach’s algorithm).
Step 0 (initialization):

Take any point x_0 ∈ D, calculate λ_0 = A(x_0)/B(x_0) and set k = 1.

Step k (cycle step for k ≥ 1):

Find a point x_k ∈ D such that

A(x_k) - λ_{k-1}B(x_k) = m_k := min_D (A - λ_{k-1}B),

calculate

λ_k = A(x_k)/B(x_k)

and perform Step k+1.
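When D is a finite set, the minimization in Step k can be carried out exactly by enumeration, and the algorithm fits in a few lines. The functionals A, B and the grid D below are our own illustrative choices, not taken from the paper:

```python
# A sketch of Dinkelbach's algorithm (Algorithm 2.1) over a finite grid,
# with the Step-k minimization done exactly by enumeration.
import numpy as np

def dinkelbach(A, B, D, tol=1e-12, max_iter=100):
    x = D[0]                              # Step 0: any starting point
    lam = A(x) / B(x)
    for _ in range(max_iter):
        vals = A(D) - lam * B(D)          # Step k: minimize A - lam*B on D
        i = int(np.argmin(vals))
        m_k, x = vals[i], D[i]
        lam = A(x) / B(x)                 # lambda_k = A(x_k)/B(x_k)
        if abs(m_k) < tol:                # m_k -> 0 signals optimality
            break
    return lam, x

D = np.linspace(0.0, 2.0, 2001)           # finite search set
A = lambda x: (x - 1.0) ** 2 + 1.0        # bounded from below
B = lambda x: 2.0 + np.sin(x)             # 2 <= B <= 3 on [0, 2]
lam_star, x_star = dinkelbach(A, B, D)
```

Here lam_star coincides with the minimum of A/B over the grid, as Theorem 2.2 asserts.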

Theorem 2.2.

Dinkelbach’s algorithm is convergent and if λ* = lim λ_k and x* ∈ D is such that

A(x*) - λ*B(x*) = min_D (A - λ*B),

then

(2.1) λ* = A(x*)/B(x*) = min_D A/B.
Proof.

(a) The sequence (λ_k) is decreasing. Indeed, the inequality λ_k ≤ λ_{k-1} is equivalent to A(x_k) - λ_{k-1}B(x_k) ≤ 0, which is true since

A(x_k) - λ_{k-1}B(x_k) = m_k ≤ A(x_{k-1}) - λ_{k-1}B(x_{k-1}) = 0.

In addition, the boundedness conditions on A and B guarantee that (λ_k) is bounded. Hence it has a finite limit λ*.

(b) Since (λ_k) is decreasing and B is positive, it follows that the sequence (m_k) is increasing. In addition, it is bounded from above by 0, as follows from

m_k = min_D (A - λ_{k-1}B) ≤ A(x_{k-1}) - λ_{k-1}B(x_{k-1}) = 0.

Hence (mk) has a limit. Next from

0 = A(x_k) - λ_kB(x_k) = A(x_k) - λ_{k-1}B(x_k) + (λ_{k-1} - λ_k)B(x_k)
  = m_k + (λ_{k-1} - λ_k)B(x_k),

passing to the limit gives m_k → 0.

(c) One has λA(x)B(x). This follows from

A(x*) - λ*B(x*) ≤ A(x_k) - λ*B(x_k)
  = A(x_k) - λ_kB(x_k) + (λ_k - λ*)B(x_k)
  = (λ_k - λ*)B(x_k) → 0.

(d) Next it is shown that λ* = inf_D A/B. From (c), one has λ* ≥ inf_D A/B. Assume however that the strict inequality holds. Then for some ε > 0, λ* - ε > inf_D A/B and consequently there exists an x̄ ∈ D with λ* - ε ≥ A(x̄)/B(x̄). One has

0 ≥ A(x̄) - (λ* - ε)B(x̄)
  = A(x̄) - λ_{k-1}B(x̄) + (λ_{k-1} - λ*)B(x̄) + εB(x̄)
  ≥ m_k + (λ_{k-1} - λ*)B(x̄) + εB(x̄),

whence passing to the limit, using the conclusion in (b), gives the false result 0 ≥ εB(x̄). Therefore the equality holds.

(e) Now (c) and (d) lead to (2.1).    

Notice that the above algorithm and theorem do not involve any structuring of the set D. However, the general version of Dinkelbach’s algorithm is based on the assumption that the minimization problem for A - λB has a solution for every λ ∈ [λ_1, λ_0], where λ_1 = inf_D (A/B). In this way the minimization problem is transferred from A/B to A - λB, for which it is assumed that there are simpler methods of solving, as is the case where A and B are affine or convex/concave functions.

We conclude this Preliminaries section by recalling the Ekeland variational principle and some of its consequences (see, e.g., [8, 11]), which, as stated in the Introduction, will prove to be of great importance for our work. We also recall the Palais-Smale (compactness) condition, denoted for simplicity by (PS).

Theorem 2.3 (Ekeland).

Let (X, d) be a complete metric space and let E : X → ℝ be a lower semicontinuous function bounded from below. Then given ε > 0 and u_0 ∈ X, there exists a point u ∈ X such that

E(u) ≤ E(v) + ε d(u, v) for all v ∈ X

and

E(u) ≤ E(u_0) - ε d(u, u_0).
Corollary 2.4.

Let (X, |·|) be a Banach space and E : X → ℝ a C1 functional bounded from below. Then for every ε > 0, there exists an element u ∈ X such that

E(u) ≤ inf_X E + ε, |E′(u)| ≤ ε.

A C1 functional E defined on a Banach space is said to satisfy the (PS) condition if any sequence (xk) with

E(x_k) → c (c ∈ ℝ) and E′(x_k) → 0

has a convergent subsequence.

Theorem 2.5.

Let (X, |·|) be a Banach space and E : X → ℝ a C1 functional bounded from below that satisfies the (PS) condition. Then there exists a point x ∈ X with

E(x) = inf_X E and E′(x) = 0.

3. Approximation Dinkelbach algorithms

Dinkelbach’s algorithm requires that at each step k, an exact minimum point can be obtained. However, this requirement is almost impossible to fulfill. In what follows we investigate the impact of approximation errors on the iterative algorithm, which follows similar steps to those of Dinkelbach’s procedure.

3.1. Dinkelbach algorithm with a fixed error

Let the conditions of Dinkelbach’s algorithm hold and let ε > 0 be fixed. Assume that at any step of the algorithm, the point x_k desired to be a point of minimum can only be determined with an error of at most ε. Thus Dinkelbach’s algorithm is modified as follows:

Algorithm 3.1 (Dinkelbach algorithm with a fixed error).
Step 0 (initialization):

Take any point x_0 ∈ D, calculate λ_0 = A(x_0)/B(x_0) and set k = 1.

Step k (cycle step for k ≥ 1):

Find a point x_k ∈ D such that

(3.1) A(x_k) - λ_{k-1}B(x_k) ≤ inf_D (A - λ_{k-1}B) + ε,

calculate

λ_k = min{A(x_k)/B(x_k), λ_{k-1}}

and perform Step k+1.
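The effect of the fixed error can be simulated by letting Step k return an arbitrary point whose parametric value is within ε of the exact infimum, as allowed by (3.1). The problem data below are our own illustrative choices, not from the paper:

```python
# A sketch of Algorithm 3.1: the inexact inner step is simulated by
# picking a random point among all points satisfying condition (3.1).
import numpy as np

def dinkelbach_fixed_error(A, B, D, eps, n_steps=50, seed=1):
    rng = np.random.default_rng(seed)
    x = D[0]
    lam = A(x) / B(x)                     # Step 0
    for _ in range(n_steps):
        vals = A(D) - lam * B(D)
        # any point within eps of the infimum is admissible, per (3.1)
        admissible = np.flatnonzero(vals <= vals.min() + eps)
        x = D[rng.choice(admissible)]
        lam = min(A(x) / B(x), lam)       # modified update for lambda_k
    return lam

D = np.linspace(0.0, 2.0, 2001)
A = lambda x: (x - 1.0) ** 2 + 1.0
B = lambda x: 2.0 + np.sin(x)             # c = 2 <= B <= 3 = C
eps = 1e-3
lam_tilde = dinkelbach_fixed_error(A, B, D, eps)
inf_ratio = float(np.min(A(D) / B(D)))
```

In agreement with Theorem 3.2 below, lam_tilde lands between inf_ratio and inf_ratio + eps/c (with c = 2 here), but in general it no longer reaches the infimum exactly.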

Theorem 3.2.

If λ~ = lim λ_k and x~ ∈ D is such that

A(x~) - λ~B(x~) ≤ inf_D (A - λ~B) + ε,

then

(3.2) inf_D A/B ≤ λ~ ≤ inf_D A/B + ε/c

and

(3.3) λ~ - ε/c ≤ A(x~)/B(x~) ≤ λ~ + ε/c.
Proof.

By definition, the sequence (λ_k) is decreasing and, since A/B is bounded from below, it is also bounded. Hence it has a finite limit λ~. Since any λ_k is a value of A/B at some point, one has λ_k ≥ inf A/B and then

(3.4) λ~ ≥ inf_D A/B.

Now we prove that

(3.5) λ~ ≤ inf_D A/B + ε/c.

Assuming the contrary, we can find δ > 0 and x̄ ∈ D with

λ~ - δ ≥ A(x̄)/B(x̄) + ε/c.

Hence

0 ≥ A(x̄) - (λ~ - δ - ε/c)B(x̄)
  = A(x̄) - λ_{k-1}B(x̄) + (λ_{k-1} - λ~)B(x̄) + (δ + ε/c)B(x̄)
  ≥ inf_D (A - λ_{k-1}B) + (λ_{k-1} - λ~)B(x̄) + (δ + ε/c)B(x̄)
  ≥ A(x_k) - λ_{k-1}B(x_k) - ε + (λ_{k-1} - λ~)B(x̄) + (δ + ε/c)B(x̄).

Here

A(x_k) - λ_{k-1}B(x_k) = (A(x_k)/B(x_k) - λ_{k-1})B(x_k) ≥ (λ_k - λ_{k-1})C.

Consequently

0 ≥ (λ_k - λ_{k-1})C - ε + (λ_{k-1} - λ~)B(x̄) + (δ + ε/c)B(x̄).

Letting k go to infinity gives

0 ≥ -ε + (δ + ε/c)B(x̄),

whence, since B(x̄)/c ≥ 1, we reach the false result 0 ≥ δB(x̄). Thus (3.5) holds.

Therefore (3.4) and (3.5) show that

inf_D A/B ≤ λ~ ≤ inf_D A/B + ε/c.

Clearly the second inequality implies the first inequality in (3.3). To prove the second inequality of (3.3), we first show that A(x~) - λ~B(x~) ≤ ε.

Two cases are possible:

Case 1: There is an index j ≥ 1 such that for every k ≥ j, one has λ_k = λ_{k-1}. Let j_0 be the smallest index having this property and denote k_0 = j_0 - 1. Clearly one has

λ_{k_0} = λ_{k_0+1} = λ_{k_0+2} = ⋯,

whence λ~ = λ_{k_0}. There are two subcases:

(a) k0=0. Then

λ_{k_0} = A(x_0)/B(x_0);

(b) k_0 ≥ 1. Then, since j_0 is the smallest index with the above property, we have λ_{k_0} < λ_{k_0-1}, whence

λ_{k_0} = min{A(x_{k_0})/B(x_{k_0}), λ_{k_0-1}} = A(x_{k_0})/B(x_{k_0}).

Hence in both subcases (a) and (b), we have λ_{k_0} = A(x_{k_0})/B(x_{k_0}). Then

A(x~) - λ~B(x~) = A(x~) - λ_{k_0}B(x~) ≤ inf_D (A - λ_{k_0}B) + ε
  ≤ A(x_{k_0}) - λ_{k_0}B(x_{k_0}) + ε = ε.

Hence

(3.6) A(x~)/B(x~) ≤ λ~ + ε/c,

as wished.

Case 2, contrary to Case 1: For every index j ≥ 1, there exists k_j ≥ j with λ_{k_j} < λ_{k_j - 1}. Thus, in this case, we have a subsequence (x_{k_j}) of (x_k) with

(3.7) λ_{k_j} = A(x_{k_j})/B(x_{k_j})

for all j. Then

A(x~) - λ~B(x~) ≤ inf_D (A - λ~B) + ε
  ≤ A(x_{k_j}) - λ_{k_j}B(x_{k_j}) + (λ_{k_j} - λ~)B(x_{k_j}) + ε
  = (λ_{k_j} - λ~)B(x_{k_j}) + ε

and letting j → ∞ again gives A(x~) - λ~B(x~) ≤ ε, whence (3.6).

Remark 3.1.

Theorem 3.2 reduces to Theorem 2.2 for ε = 0, when at every step, A(x_k)/B(x_k) ≤ A(x_{k-1})/B(x_{k-1}) and so λ_k = A(x_k)/B(x_k).

According to Theorem 3.2, λ~ differs from inf_D A/B by at most ε/c and x~ is a 2ε/c-approximate minimum point of A/B.

Next we examine how the functional A/B attains the value λ~. To this aim we need a topological structure on D, the continuity of A and B, and a compactness condition.

Theorem 3.3.

Assume in addition that D is a metric space, A and B are continuous on D and that the following compactness condition holds:

(C):

any sequence (y_k) of elements of D for which the sequence (A(y_k)/B(y_k)) is convergent has a subsequence convergent in D.

Then there exists a point x~ in the closure of {x_k} (either a term or an accumulation point of the sequence (x_k) generated by the approximate Dinkelbach algorithm) such that λ~ = A(x~)/B(x~).

Proof.

Refer to the cases mentioned in the proof of the previous theorem.

In Case 1, the statement is obvious since

λ~ = λ_{k_0} = A(x_{k_0})/B(x_{k_0}).

In Case 2, since λ_{k_j} → λ~, according to our compactness assumption (C), the sequence (x_{k_j}) has a subsequence that converges to a certain element x~ ∈ D. Going to the limit with the corresponding subsequence in (3.7), we find that

λ~ = A(x~)/B(x~).

   

Remark 3.2.

It is clear that condition (C) is satisfied if D is a compact metric space.

3.2. Dinkelbach algorithm with errors decreasing to zero

Let (ε_k) be a sequence of numbers decreasing to zero, where for each k, ε_k is the admissible error at step k of the approximation Dinkelbach type algorithm from above. Hence instead of (3.1) we now consider

A(x_k) - λ_{k-1}B(x_k) ≤ inf_D (A - λ_{k-1}B) + ε_k.

In this case, λ~ equals the infimum of A/B, as shown by the next result.
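On an illustrative grid problem (our own choice, not from the paper), letting the admissible error shrink at each step removes the residual ε/c gap of Theorem 3.2, and the iterates reach the infimum of A/B itself:

```python
# A sketch of the Dinkelbach algorithm with step errors eps_k = 2**(-k)
# decreasing to zero; the inexact step accepts any point within eps_k of
# the parametric infimum.
import numpy as np

def dinkelbach_vanishing_error(A, B, D, n_steps=60, seed=1):
    rng = np.random.default_rng(seed)
    x = D[0]
    lam = A(x) / B(x)
    for k in range(1, n_steps + 1):
        eps_k = 2.0 ** (-k)               # admissible error at step k
        vals = A(D) - lam * B(D)
        admissible = np.flatnonzero(vals <= vals.min() + eps_k)
        x = D[rng.choice(admissible)]
        lam = min(A(x) / B(x), lam)
    return lam

D = np.linspace(0.0, 2.0, 2001)
A = lambda x: (x - 1.0) ** 2 + 1.0
B = lambda x: 2.0 + np.sin(x)
lam = dinkelbach_vanishing_error(A, B, D)
inf_ratio = float(np.min(A(D) / B(D)))    # lam agrees with this infimum
```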

Theorem 3.4.

If λ~ = lim λ_k and x~ ∈ D is such that

A(x~) - λ~B(x~) ≤ inf_D (A - λ~B) + ε,

then

(3.8) λ~ = λ* = inf_D A/B

and

A(x~)/B(x~) ≤ inf_D A/B + ε/c.
Proof.

For the proof we only need to show, instead of (3.5), the following inequality

(3.9) λ~ ≤ inf_D A/B.

Assuming by contradiction that the strict opposite inequality holds, we can find δ > 0 and x̄ ∈ D such that

λ~ - δ ≥ A(x̄)/B(x̄).

Then

0 ≥ A(x̄) - (λ~ - δ)B(x̄)
  = A(x̄) - λ_{k-1}B(x̄) + (λ_{k-1} - λ~)B(x̄) + δB(x̄)
  ≥ inf_D (A - λ_{k-1}B) + (λ_{k-1} - λ~)B(x̄) + δB(x̄)
  ≥ A(x_k) - λ_{k-1}B(x_k) - ε_k + (λ_{k-1} - λ~)B(x̄) + δB(x̄)
  ≥ (λ_k - λ_{k-1})C - ε_k + (λ_{k-1} - λ~)B(x̄) + δB(x̄).

Letting k go to infinity gives the false result 0 ≥ δB(x̄). Thus (3.9) holds and yields (3.8).

We note that the result in Theorem 3.3 remains valid in this context as well. More exactly we have

Theorem 3.5.

Under the assumptions of Theorem 3.3, if (λ_k) and (x_k) are sequences given by the Dinkelbach algorithm with errors decreasing to zero and λ* = lim λ_k, then there exists a point x* in the closure of {x_k} (either a term or an accumulation point of the sequence (x_k)) such that

λ* = A(x*)/B(x*) = min_D A/B.

In addition

(3.10) inf_D (A - λ_kB) → inf_D (A - λ*B) = 0 as k → ∞.
Proof.

The first part follows from Theorems 3.3 and 3.4. To prove (3.10), denote

l_k = inf_D (A - λ_kB) and l = inf_D (A - λ*B).

Since the sequence (λ_k) is decreasing, one has that (l_k) is increasing. In addition, from λ* ≤ λ_k, we have A - λ_kB ≤ A - λ*B, hence l_k ≤ l. Consequently, the sequence (l_k) is convergent. Let l~ = lim_k l_k. Obviously l~ ≤ l. Suppose that l~ < l. Then there exists δ > 0 such that l~ + δ ≤ l. Let ϵ ∈ (0, δ/2). Since B is bounded and positive and (λ_k) is convergent to λ*, it follows that there exists k_ϵ such that for any k ≥ k_ϵ we have

(3.11) (λ_k - λ*)B ≤ ϵ.

Let k be an arbitrary number with k ≥ k_ϵ. Since l_k = inf_D (A - λ_kB), we get the existence of a point x_ϵ ∈ D with

A(x_ϵ) - λ_kB(x_ϵ) ≤ l_k + ϵ.

The last relation can be written as

A(x_ϵ) - λ*B(x_ϵ) - (λ_k - λ*)B(x_ϵ) ≤ l_k + ϵ,

or equivalently as

A(x_ϵ) - λ*B(x_ϵ) ≤ l_k + ϵ + (λ_k - λ*)B(x_ϵ).

Now taking into account relation (3.11), we obtain

l ≤ A(x_ϵ) - λ*B(x_ϵ) ≤ l_k + 2ϵ,

which implies

l_k + 2ϵ ≥ l ≥ l~ + δ ≥ l_k + δ,

whence δ ≤ 2ϵ, a contradiction with the choice of ϵ. Thus l~ = l and the conclusion follows.

Our next goal is to fulfil assumption (C) for noncompact sets D. We get closer to this goal if we have additional properties of the sequence (x_k). Such a property is guaranteed by Ekeland’s principle, which, under differentiability conditions on A and B, makes it possible to invoke the Palais-Smale condition.

4. Dinkelbach-Ekeland approximation algorithm

4.1. Dinkelbach-Ekeland algorithm

We are now going to present a modified version of Dinkelbach’s algorithm. More precisely, while we generally operate in the same way as above, we also make use of Ekeland’s variational principle to generate the points of the sequence (x_k). To this aim, let (X, |·|) be a Banach space and A, B : X → ℝ two C1 functionals such that A is bounded from below and

0 < c ≤ B(x) ≤ C for all x ∈ X.

As above, we are interested in finding λ*, the minimum value on X of the ratio A/B, and x*, a point in X at which λ* is attained.

Let (εk) be a decreasing sequence of positive real numbers. The Dinkelbach-Ekeland algorithm is described below:

Algorithm 4.1 (Dinkelbach-Ekeland algorithm).
Step 0 (initialization):

Take any point x_0 ∈ X, calculate λ_0 = A(x_0)/B(x_0) and set k = 1.

Step k (cycle step for k ≥ 1):

By using Ekeland’s principle, more exactly Corollary 2.4, we find a point x_k ∈ X such that

(4.1) A(x_k) - λ_{k-1}B(x_k) ≤ inf_X (A - λ_{k-1}B) + ε_k,
|A′(x_k) - λ_{k-1}B′(x_k)| ≤ ε_k,

then calculate

λ_k = min{A(x_k)/B(x_k), λ_{k-1}}

and perform Step k+1.
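On X = ℝ, with smooth illustrative functionals of our own choosing (picked so that each A - λ_kB is strictly convex), the two requirements in (4.1) can be met in practice by running gradient descent on A - λ_{k-1}B until the derivative falls below ε_k; Corollary 2.4 guarantees that such points exist, and the descent loop below is just one heuristic way of producing them:

```python
# A sketch of the Dinkelbach-Ekeland iteration (Algorithm 4.1) on R,
# with illustrative functionals A, B (not taken from the paper).
import numpy as np

A  = lambda x: (x - 1.0) ** 2 + 1.0      # C^1 and bounded from below
dA = lambda x: 2.0 * (x - 1.0)
B  = lambda x: 2.0 + np.sin(x)           # 1 <= B <= 3 on all of R
dB = lambda x: np.cos(x)

x, lam = 0.0, A(0.0) / B(0.0)            # Step 0: lambda_0 = 1
for k in range(1, 40):
    eps_k = 0.5 ** k
    # F = A - lam*B has F'' = 2 + lam*sin >= 2 - lam > 0 since lam <= 1,
    # so gradient descent with step 1/3 drives |F'(x)| below eps_k; by
    # convexity the value condition in (4.1) then holds as well.
    dF = lambda t: dA(t) - lam * dB(t)
    while abs(dF(x)) > eps_k:
        x -= dF(x) / 3.0
    lam = min(A(x) / B(x), lam)          # lambda_k
```

In this run the values λ_k decrease to the global minimum of A/B over ℝ (approximately 0.349, attained near x ≈ 1.08), in line with Theorem 4.2.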

Theorem 4.2.

Under the above assumptions, if (λ_k) and (x_k) are the sequences given by the algorithm and λ~ = lim λ_k, then

(4.2) A(x_k) - λ~B(x_k) → inf_X (A - λ~B) and A′(x_k) - λ~B′(x_k) → 0.

If in addition the functional A - λ~B satisfies the (PS) condition, then there exists a point x* in the closure of {x_k} (either a term or an accumulation point of the sequence (x_k)) such that

λ~ = λ* = A(x*)/B(x*) = min_X A/B.
Proof.

By using the notations from Theorem 3.5, relation (4.1) implies

l_k ≤ A(x_{k+1}) - λ_kB(x_{k+1}) ≤ l_k + ε_{k+1},

which, since ε_k → 0, leads to

lim (A(x_{k+1}) - λ_kB(x_{k+1})) = lim l_k,

and by (3.10) we obtain

lim (A(x_{k+1}) - λ_kB(x_{k+1})) = inf_X (A - λ~B) = 0.

Furthermore we have

lim (A(x_{k+1}) - λ~B(x_{k+1})) = lim (A(x_{k+1}) - λ_kB(x_{k+1}) + (λ~ - λ_k)B(x_{k+1})),

and since B is bounded and λ_k → λ~, we obtain (λ~ - λ_k)B(x_{k+1}) → 0, consequently

lim (A(x_{k+1}) - λ~B(x_{k+1})) = inf_X (A - λ~B).

Because

|A′(x_k) - λ_{k-1}B′(x_k)| = |A′(x_k) - λ~B′(x_k) + (λ~ - λ_{k-1})B′(x_k)| ≤ ε_k,

we obtain similarly that

lim (A′(x_k) - λ~B′(x_k)) = 0.

Hence the sequence (xk) has the properties from (4.2).

Next, assume that the (PS) condition holds for the functional A - λ~B.

Case 1: (λ_k) has a subsequence (λ_{k_j}) such that λ_{k_j} = A(x_{k_j})/B(x_{k_j}). Then (x_{k_j}) satisfies relations (4.2) and, since the functional A - λ~B satisfies the (PS) condition, via Theorem 2.5 it follows that there exists a subsequence (x_ν) of (x_{k_j}) convergent to a point x* ∈ X with

A(x*) - λ~B(x*) = inf_X (A - λ~B) and A′(x*) - λ~B′(x*) = 0,

and by using Theorem 3.2 and letting ε := ε_k in relation (3.2), we have

inf_X A/B ≤ λ~ ≤ inf_X A/B + ε_k/c,

which, since ε_k → 0, leads to

λ~ = λ* = A(x*)/B(x*) = min_X A/B.

Case 2: There exists an index k_0 such that λ_k = λ_{k_0} for all k ≥ k_0 and λ_{k_0-1} > λ_{k_0}. Then λ~ = λ_{k_0} = A(x_{k_0})/B(x_{k_0}), and by Theorem 3.2 we have the same conclusion.

4.2. Sufficient conditions for the (PS) requirement

Here we provide some sufficient conditions for a functional F to satisfy the (PS) condition. They require some topological properties on F that are well-known in nonlinear analysis.

Lemma 4.3.

Let (X, |·|) be a Hilbert space and F : X → ℝ a C1 functional having the following two properties:

(i):

any sequence (x_k) for which (F(x_k)) converges is bounded;

(ii):

the operator N := I - F′ is completely continuous.

Then F satisfies the (PS) condition.

Proof.

Let (xk) be any sequence of points from X such that

(4.3) F(x_k) → c and F′(x_k) → 0,

where c ∈ ℝ. We shall prove that (x_k) has a convergent subsequence. From (4.3) and assumption (i), it follows that the sequence (x_k) is bounded. Also from (4.3), we deduce that the set {F′(x_k)} is relatively compact. Next from (ii), the set {N(x_k)} is relatively compact and from the equality x_k = N(x_k) + F′(x_k) it follows that the set {x_k}, as a subset of the sum of two relatively compact sets, is relatively compact too. Then the sequence (x_k) has a convergent subsequence.

A stronger topological condition on F guarantees that any sequence (x_k) satisfying F′(x_k) → 0 is entirely convergent (F satisfies strongly the (PS) condition).

Lemma 4.4.

Let (X, |·|) be a Hilbert space and F : X → ℝ a C1 functional such that

(ii*):

the operator N = I - F′ is a contraction on X.

Then F satisfies strongly the (PS) condition.

Proof.

Let (x_k) be any sequence of points from X satisfying (4.3). Since x_k = N(x_k) + F′(x_k), it follows that for any integer p ≥ 1 we have

|x_{k+p} - x_k| ≤ |N(x_{k+p}) - N(x_k)| + |F′(x_{k+p}) - F′(x_k)|.

Let L be the Lipschitz constant (L<1) of the operator N. Then

|N(x_{k+p}) - N(x_k)| ≤ L|x_{k+p} - x_k|,

which leads to

(4.4) |x_{k+p} - x_k| ≤ (1/(1 - L))|F′(x_{k+p}) - F′(x_k)|.

From (4.3) and (4.4) we conclude that (xk) is a Cauchy sequence, consequently the statement is proved.    

Coming back to functionals of the form A - λB as in our Dinkelbach-Ekeland algorithm, we have the following result.

Proposition 4.5.

Let (X, |·|) be a Hilbert space and A, B : X → ℝ be C1 functionals such that there exist constants c, C with 0 < c ≤ B ≤ C. The following statements hold:

(a):

If A is coercive and the operators I - A′ and B′ are completely continuous, then for each λ ∈ ℝ, the functional A - λB satisfies the (PS) condition.

(b):

If the operators I - A′ and B′ are L_A- respectively L_B-Lipschitz continuous with L_A < 1, then the functional A - λB satisfies strongly the (PS) condition for every λ with |λ| < (1 - L_A)/L_B.

Proof.

Let F = A - λB and let (x_k) be any sequence satisfying (4.3).

(a) The coercivity of A and the boundedness of B guarantee that F is coercive. Then, since the limit of F(x_k) is finite, one has that (x_k) is bounded. Thus F satisfies condition (i) from Lemma 4.3. In addition, the operator I - F′ = I - A′ + λB′ is completely continuous as a sum of the completely continuous operators I - A′ and λB′. Thus F also satisfies condition (ii) of Lemma 4.3. The conclusion now follows from Lemma 4.3.

(b) It is easily seen that the operator I - A′ + λB′ is a contraction if L_A + |λ|L_B < 1. Then Lemma 4.4 gives the result.

As regards the Dinkelbach-Ekeland algorithm we have the following final result on its convergence.

Theorem 4.6.

Let (X, |·|) be a Hilbert space, let the conditions of Theorem 4.2 hold, let (λ_k) and (x_k) be the sequences given by Algorithm 4.1 and λ~ = lim λ_k.

(a):

If A is coercive and the operators I - A′ and B′ are completely continuous, then there exists a point x* in the closure of {x_k} (either a term or an accumulation point of the sequence (x_k)) such that

(4.5) λ~ = λ* = A(x*)/B(x*) = min_X A/B.
(b):

If the operators I - A′ and B′ are L_A- respectively L_B-Lipschitz continuous with L_A < 1 and

(4.6) max{λ_0, -λ_1} < (1 - L_A)/L_B,

where λ_1 = inf_X A/B, then the sequence (x_k) converges to some x* and (4.5) holds.

Proof.

(a) The result immediately follows from Theorem 4.2 and Proposition 4.5 (a).

(b) Clearly one has λ_1 ≤ λ_k ≤ λ_0 for all k. It follows that λ_1 ≤ λ~ ≤ λ_0 and so |λ~| ≤ max{λ_0, -λ_1}. This, in view of (4.6), gives |λ~| < (1 - L_A)/L_B. Thus Proposition 4.5 (b) applies to λ = λ~. Notice that the entire sequence (x_k) converges since A′(x_k) - λ~B′(x_k) → 0.


Acknowledgement. The authors would like to especially thank the reviewer for the suggestion of a possible continuation of the research and the related bibliographic indications.

References

  • [1] Avriel, M., Diewert, W.E., Schaible, S., Zang, I. (1988). Generalized Concavity. New York, NY: Plenum Press.
  • [2] Boţ, R.I., Csetnek, E.R. (2017). Proximal-gradient algorithms for fractional programming. Optimization 66:1383-1396.
  • [3] Boţ, R.I., Dao, M.N., Li, G. (2022). Extrapolated proximal subgradient algorithms for nonconvex and nonsmooth fractional programs. Math. Oper. Res. 47:2415-2443.
  • [4] Cambini, A., Martein, L. (2009). Generalized Convexity and Optimization: Theory and Applications. Berlin: Springer.
  • [5] Crouzeix, J.P., Ferland, J.A. (1991). Algorithms for generalized fractional programming. Math. Program. 52: 191-207.
  • [6] Crouzeix, J.P., Ferland, J.A., Nguyen, V.H. (2008). Revisiting Dinkelbach-type algorithms for generalized fractional programs. Operational Research Society of India 45: 96-110.
  • [7] Dinkelbach, W. (1967). On nonlinear fractional programming. Management Sci. 13:492-498.
  • [8] Ekeland, I. (1974). On the variational principle. J. Math. Anal. Appl. 47:324-353.
  • [9] Elbenani, B., Ferland, J.A. (2012). Cell formation problem solved exactly with the Dinkelbach algorithm. CIRRELT-2012-07, University of Montreal.
  • [10] Frenk, J.B.G., Schaible, S. (2005). Fractional programming. In: Hadjisavvas, N., Komlósi, S., Schaible, S.S., Eds. Handbook of Generalized Convexity and Generalized Monotonicity, Vol. 76, Berlin: Springer, pp. 335-386.
  • [11] Frigon, M. (2011). On some generalizations of Ekeland’s principle and inward contractions in gauge spaces. J. Fixed Point Theory Appl. 10:279-298.
  • [12] Precup, R. (2013). Linear and Semilinear Partial Differential Equations. Berlin: De Gruyter.
  • [13] Precup, R. (2014). Nash-type equilibria and periodic solutions to nonvariational systems. Adv. Nonlinear Anal. 3:197-207.
  • [14] Ródenas, R.G., López M.L., Verastegui, D. (1999). Extensions of Dinkelbach’s algorithm for solving non-linear fractional programming problems. Top 7:33-70.
  • [15] Schaible, S. (1981). A survey of fractional programming. In: Schaible, S., Ziemba, W.T., Eds. Generalized Concavity in Optimization and Economics. New-York: Academic Press, pp. 417-440.
  • [16] Schaible, S. (1982). Bibliography in fractional programming. Math. Methods Oper. Res. 26:211-241.
  • [17] Schaible, S., Shi, J. (2003). Fractional programming: the sum-of-ratios case. Optim. Methods Softw. 18:219-229.
  • [18] Shi, J. (2001). A Combined algorithm for fractional programming. Ann. Oper. Res. 103:135-147.
  • [19] Stancu-Minasian, I.M. (1997). Fractional Programming. Dordrecht: Kluwer.
  • [20] Stancu-Minasian I.M., Tigan, S. (2000). Continuous time linear-fractional programming. The minimum-risk approach. RAIRO Oper. Res. 34:397-409.
  • [21] Tammer, K., Tammer, Chr., Ohlendorf, E. (2005). Multicriterial Fractional Optimization, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät II, Institut für Mathematik, preprint.

