Componentwise Dinkelbach algorithm for nonlinear fractional optimization problems

Abstract

The paper deals with fractional optimization problems where the objective function (a ratio of two functions) is defined on a Cartesian product of two real normed spaces X and Y. Within this framework, we are interested in determining so-called partial minimizers, i.e., points in \(X\times Y\) with the property that each of their variables minimizes the objective function with respect to that variable when the other one is fixed. While any global minimizer is obviously a partial minimizer, the reverse implication holds true only under additional assumptions (e.g., separability properties of the involved functions). By exploiting the particularities of the objective function, we deliver a Dinkelbach type algorithm for computing partial minimizers of fractional optimization problems. Further assumptions on the involved spaces and functions, such as Lipschitz-type continuity, partial Fréchet differentiability, and coercivity, enable us to establish the convergence of our algorithm to a partial minimizer.

Authors

Christian Günther
Institut für Angewandte Mathematik, Leibniz University Hannover, Hannover, Germany

Alexandru Orzan
Faculty of Mathematics and Computer Science, Babeş-Bolyai University, Cluj-Napoca, Romania; Department of Mathematics, Technical University of Cluj-Napoca, Cluj-Napoca, Romania

Radu Precup
Babes-Bolyai University, Cluj-Napoca, Romania &
Tiberiu Popoviciu Institute of Numerical Analysis, Romanian Academy

Keywords

Fractional optimization; Dinkelbach type algorithm; coercive function; partial minimizer

Paper coordinates

PDF


About this paper

Journal

Optimization
A Journal of Mathematical Programming and Operations Research

Publisher Name

Taylor and Francis


Paper (preprint) in HTML form

Componentwise Dinkelbach algorithm for nonlinear fractional optimization problems

Christian Günther^a, Alexandru Orzan^b and Radu Precup^c. CONTACT: A. Orzan. Email: alexandru.orzan@ubbcluj.ro, alexandru.orzan@math.utcluj.ro. ^a Leibniz University Hannover, Institut für Angewandte Mathematik, Welfengarten 1, 30167 Hannover, Germany; ^b Babeş-Bolyai University, Faculty of Mathematics and Computer Science, 400084 Cluj-Napoca & Technical University of Cluj-Napoca, Department of Mathematics, 400027 Cluj-Napoca, Romania; ^c Babeş-Bolyai University, Faculty of Mathematics and Computer Science, 400084 Cluj-Napoca & Institute of Advanced Studies in Science and Technology, 400048 Cluj-Napoca & Tiberiu Popoviciu Institute of Numerical Analysis, Romanian Academy, 400110 Cluj-Napoca, Romania.
Abstract

The paper deals with fractional optimization problems where the objective function (a ratio of two functions) is defined on a Cartesian product of two real normed spaces X and Y. Within this framework, we are interested in determining so-called partial minimizers, i.e., points in \(X\times Y\) with the property that each of their variables minimizes the objective function with respect to that variable when the other one is fixed. While any global minimizer is obviously a partial minimizer, the reverse implication holds true only under additional assumptions (e.g., separability properties of the involved functions). By exploiting the particularities of the objective function, we deliver a Dinkelbach type algorithm for computing partial minimizers of fractional optimization problems. Further assumptions on the involved spaces and functions, such as Lipschitz-type continuity, partial Fréchet differentiability, and coercivity, enable us to establish the convergence of our algorithm to a partial minimizer.

Keywords: Fractional optimization; Dinkelbach type algorithm; coercive function; partial minimizer
Mathematics Subject Classification: 90C32; 49M37; 46N10

Dedicated to Professor Wolfgang W. Breckner on the occasion of his 80th anniversary.

1 Introduction and Preliminaries

Fractional optimization problems occur in numerous practical applications where the goal is to optimize a ratio-based performance metric (see, e.g., Shen and Yu [1, 2], Elbenani and Ferland [3] and the references therein). Thus they are important in many fields such as economics, industrial planning, medical planning, social policy and other related domains. Moreover, fractional optimization problems have a wide range of mathematical applications (see, e.g., Frenk and Schaible [4], Göpfert et al. [5], Schaible [6, 7, 8]). Within operator theory, for instance, the problem of eigenvalues and eigenvectors reduces to a fractional minimization problem (see, e.g., Precup [9]). Problems of this type also occur in other domains of mathematics, for example in graph theory and game theory (see Stancu-Minasian [10], Stancu-Minasian and Tigan [11]).

Throughout the entire article, let us consider that \((X,|\cdot|_X)\) and \((Y,|\cdot|_Y)\) are two real normed spaces, \(D_1\subseteq X\), \(D_2\subseteq Y\) are nonempty sets and \(A,B:D_1\times D_2\to\mathbb{R}\) are functions such that \(A\) is bounded from below and

\[0<c\le B(x,y)\le C\quad\text{for all }(x,y)\in D_1\times D_2.\]

The general fractional optimization problem we are interested in is given by

\[\frac{A(x,y)}{B(x,y)}\to\min,\quad(x,y)\in D_1\times D_2.\tag{P}\]

In problem (P), one usually searches for points \((x^*,y^*)\in D_1\times D_2\) such that

\[\frac{A(x^*,y^*)}{B(x^*,y^*)}=\min_{(x,y)\in D_1\times D_2}\frac{A(x,y)}{B(x,y)},\]

which means that \((x^*,y^*)\) is a global minimizer. Within this paper, we switch our focus to the problem of finding points \((x^*,y^*)\in D_1\times D_2\) with the property that

\[\frac{A(x^*,y^*)}{B(x^*,y^*)}=\min_{x\in D_1}\frac{A(x,y^*)}{B(x,y^*)},\tag{1}\]

\[\frac{A(x^*,y^*)}{B(x^*,y^*)}=\min_{y\in D_2}\frac{A(x^*,y)}{B(x^*,y)}.\tag{2}\]

Points satisfying (1) and (2) are called partial minimizers for problem (P).

Remark 1.1.

Obviously, any global minimizer of the functional \(A/B\) is also a partial minimizer. However, the converse implication does not generally hold true, as shown by the following example:

\[A(x,y)=x^2+y^2-3xy,\quad B(x,y)=x^2+y^2+1,\quad\text{with }D_1=D_2=[0,1].\]

It is easy to check that \((0,0)\) is a partial minimizer but not a global one, since \(A(1,1)/B(1,1)=-1/3<0=A(0,0)/B(0,0)\).
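As a quick numerical sanity check (our own illustration, not part of the paper), one can sample the ratio on a grid of \([0,1]\):

```python
import numpy as np

def A(x, y):
    return x**2 + y**2 - 3 * x * y

def B(x, y):
    return x**2 + y**2 + 1

def ratio(x, y):
    return A(x, y) / B(x, y)

t = np.linspace(0.0, 1.0, 201)

# Partial minimality at (0, 0): with one variable fixed at 0,
# the ratio reduces to t**2 / (t**2 + 1), minimized at t = 0.
assert all(ratio(x, 0.0) >= ratio(0.0, 0.0) for x in t)
assert all(ratio(0.0, y) >= ratio(0.0, 0.0) for y in t)

# Not a global minimizer: the ratio is negative at (1, 1).
print(ratio(1.0, 1.0))  # prints -1/3 (approximately -0.3333)
```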

When dealing with fractional optimization problems, one famous method for computing global minimizers is the Dinkelbach algorithm (see Dinkelbach [12] and the references [13], [14], [15], [16], [17], [18]).

In order to obtain partial minimizers, we use a modified version of the Dinkelbach algorithm, which relies on solving at each iteration step two parametric problems of the following type

\[A(x,y)-\lambda B(x,y)\to\min,\quad x\in D_1,\tag{$P_y(\lambda)$}\]

\[A(x,y)-\lambda B(x,y)\to\min,\quad y\in D_2,\tag{$P_x(\lambda)$}\]

where \(\lambda\in\mathbb{R}\) is a parameter. Our algorithmic procedure generates three sequences \((\lambda_k)\), \((x_k)\) and \((y_k)\), where \(x_k\) is a solution of the problem \((P_{y_{k-1}}(\lambda_{k-1}))\), \(y_k\) is a solution of \((P_{x_k}(\lambda_{k-1}))\) and

\[\lambda_k=\frac{A(x_k,y_k)}{B(x_k,y_k)}.\]

Under appropriate conditions on the fractional optimization problem, the convergence of the sequences is ensured and we obtain that the point \((x^*,y^*)\), where \(x^*\) and \(y^*\) are the limits of \((x_k)\) and \((y_k)\), is a partial minimizer of \(A/B\), while the limit of \((\lambda_k)\) equals the value \(A(x^*,y^*)/B(x^*,y^*)\).

The paper is structured as follows: in Section 2 we show some relationships between global minimizers and partial minimizers (Proposition 2.1) and we emphasize the particular case when \(A(x,y)\) and \(B(x,y)\) have separate variables, showing that in this framework the partial minimum coincides with the global minimum (Proposition 2.2). In Section 3 we first present, for comparison, the original Dinkelbach algorithm (Algorithm 3.1). Next we give our componentwise version of the algorithm (Algorithm 3.2). The last Section 4 is devoted to the convergence of Algorithm 3.2. The monotonicity and convergence of the sequence \((\lambda_k)\) are established by Theorem 4.1, while the convergence of the sequences \((x_k)\) and \((y_k)\) is proved by Theorem 4.4. We conclude the paper with an illustrative example.

2 Relationships between global minimizers and partial minimizers

In what follows, we outline the connections between global minimizers, partial minimizers, and critical points.

Proposition 2.1.

Let \(X\) and \(Y\) be two real normed spaces and \(f:X\times Y\to\mathbb{R}\) a function. Then:

  • 1

    Any global minimizer of \(f\) is a partial minimizer, but the converse implication is not generally true, even in the case when the function is convex.

  • 2

    If \(f\) is a \(C^1\) Fréchet differentiable function, then any partial minimizer is a critical point.

  • 3

    If \(f\) is convex, then \((x,y)\) is a global minimizer if and only if the origin belongs to the subdifferential of \(f\) at \((x,y)\), i.e., \(0\in\partial f(x,y)\). In particular, if \(f\) is convex and \(C^1\) Fréchet differentiable, then any partial minimizer is a global minimizer.

Proof.

The first part of assertion 1 is straightforward. In order to show the second part, one can simply take the function \(f:\mathbb{R}^2\to\mathbb{R}\), \(f(x,y)=2|x-y|-y\). By some quick computations using the definition, it can be deduced that \(f\) is convex. Additionally, one can easily check that the point \((0,0)\) is a partial minimizer. However, \(f\) does not have a global minimum, since it is not bounded from below (\(f(x,x)=-x\to-\infty\) as \(x\to+\infty\)).

For assertion 2, recall that for the product space \(X\times Y\), the Fréchet derivative of \(f\) at the point \((x,y)\) is in fact the couple of partial derivatives \((f_x(x,y),f_y(x,y))\). Assume now that \((x^*,y^*)\) is a partial minimizer for \(f\). Then we have that

\[f(x^*,y^*)\le f(x,y^*)\quad\text{for all }x\in X,\]

\[f(x^*,y^*)\le f(x^*,y)\quad\text{for all }y\in Y.\]

Thus \(x^*\) is a minimum point for the function \(f(\cdot,y^*)\), whereas \(y^*\) is a minimum point for the function \(f(x^*,\cdot)\). Hence, from the infinite dimensional version of Fermat's theorem and taking into account that \(f\) is Fréchet differentiable, one has \(f_x(x^*,y^*)=0\), \(f_y(x^*,y^*)=0\). Therefore \((x^*,y^*)\) is indeed a critical point of \(f\).

The first part of assertion 3 is well known in convex optimization (see, e.g., [19, p. 91] for infinite dimensional spaces and [20, Theorem 3.2.10] for the finite dimensional case). The last assertion is now a simple consequence of 2. ∎

The next result provides a sufficient condition under which the partial minimizers are global minimizers of a fractional function \(A/B\).

Proposition 2.2.

If \(A,B:D_1\times D_2\to\mathbb{R}\) have separate variables in the sense that

\[A(x,y)=A_1(x)+A_2(y),\qquad B(x,y)=B_1(x)+B_2(y),\]

then any partial minimizer is a global minimizer of the ratio \(A(x,y)/B(x,y)\).

Proof.

Indeed, if \((x^*,y^*)\) is a partial minimizer, we have that

\[\frac{A_1(x^*)+A_2(y^*)}{B_1(x^*)+B_2(y^*)}\le\frac{A_1(x)+A_2(y^*)}{B_1(x)+B_2(y^*)}\quad\text{for all }x\in D_1\tag{3}\]

and

\[\frac{A_1(x^*)+A_2(y^*)}{B_1(x^*)+B_2(y^*)}\le\frac{A_1(x^*)+A_2(y)}{B_1(x^*)+B_2(y)}\quad\text{for all }y\in D_2.\tag{4}\]

After a few computations we have that relation (3) is equivalent to

\[A_1(x^*)B_1(x)+A_1(x^*)B_2(y^*)+A_2(y^*)B_1(x)\le A_1(x)B_1(x^*)+A_1(x)B_2(y^*)+A_2(y^*)B_1(x^*).\]

In a similar way, we arrive at the fact that relation (4) is equivalent to

\[A_1(x^*)B_2(y)+A_2(y^*)B_1(x^*)+A_2(y^*)B_2(y)\le A_1(x^*)B_2(y^*)+A_2(y)B_1(x^*)+A_2(y)B_2(y^*).\]

By adding the last two relations and removing the similar terms, we obtain that

\[A_1(x^*)B_1(x)+A_2(y^*)B_1(x)+A_1(x^*)B_2(y)+A_2(y^*)B_2(y)\le A_1(x)B_1(x^*)+A_1(x)B_2(y^*)+A_2(y)B_1(x^*)+A_2(y)B_2(y^*),\]

which leads to

\[\frac{A_1(x^*)+A_2(y^*)}{B_1(x^*)+B_2(y^*)}\le\frac{A_1(x)+A_2(y)}{B_1(x)+B_2(y)}\quad\text{for all }(x,y)\in D_1\times D_2.\]

∎

3 Dinkelbach type algorithms

In this section we state the componentwise Dinkelbach algorithm for calculating partial minimizers of fractional optimization problems. We start by recalling the original Dinkelbach algorithm applied to objective functions \(A/B\) depending on two variables, with the aim of providing a better understanding of our componentwise algorithm that follows.

Algorithm 3.1 (Dinkelbach’s algorithm).
Step 0 (initialization)

Take any point \((x_0,y_0)\in D_1\times D_2\), calculate

\[\lambda_0:=\frac{A(x_0,y_0)}{B(x_0,y_0)}\]

and set \(k=1\).

Step k (cycle step for \(k\ge 1\))

Find a point \((x_k,y_k)\in D_1\times D_2\) such that

\[A(x_k,y_k)-\lambda_{k-1}B(x_k,y_k)=\min_{D_1\times D_2}\,(A-\lambda_{k-1}B),\]

calculate

\[\lambda_k=\frac{A(x_k,y_k)}{B(x_k,y_k)}\]

and perform Step \(k+1\).

Remark 3.1.

It is known that, under certain assumptions on the objective function, both sequences \((x_k)\) and \((y_k)\) generated by the Dinkelbach algorithm are convergent to \(x^*\) and \(y^*\), and the point \((x^*,y^*)\) is a global minimizer of \(A/B\) on \(D_1\times D_2\) (see Dinkelbach [12]).

We now give our modified version of the Dinkelbach algorithm for computing partial minimizers of fractional optimization problems.

Algorithm 3.2 (Componentwise Dinkelbach algorithm).
Step 0 (initialization)

Take any point \((x_0,y_0)\in D_1\times D_2\), calculate

\[\lambda_0:=\frac{A(x_0,y_0)}{B(x_0,y_0)}\]

and set \(k=1\).

Step k (cycle step)

Find a point \(x_k\in D_1\) such that

\[\min_{x\in D_1}\left[A(x,y_{k-1})-\lambda_{k-1}B(x,y_{k-1})\right]=A(x_k,y_{k-1})-\lambda_{k-1}B(x_k,y_{k-1}),\tag{5}\]

then find \(y_k\in D_2\) such that

\[\min_{y\in D_2}\left[A(x_k,y)-\lambda_{k-1}B(x_k,y)\right]=A(x_k,y_k)-\lambda_{k-1}B(x_k,y_k),\tag{6}\]

next calculate

\[\lambda_k:=\frac{A(x_k,y_k)}{B(x_k,y_k)}\tag{7}\]

and perform Step \(k+1\).
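To make the iteration concrete, the following minimal Python sketch (ours, not from the paper) implements Algorithm 3.2. It assumes user-supplied routines argmin_x and argmin_y that solve the parametric subproblems (5) and (6); the stagnation-based stopping rule and all names are our own choices, since the algorithm as stated above runs indefinitely.

```python
def componentwise_dinkelbach(A, B, argmin_x, argmin_y, x0, y0,
                             tol=1e-10, max_iter=100):
    """Sketch of Algorithm 3.2 (componentwise Dinkelbach).

    A, B      : callables A(x, y), B(x, y), with 0 < c <= B <= C
    argmin_x  : solves min_x A(x, y) - lam * B(x, y) for fixed (y, lam)
    argmin_y  : solves min_y A(x, y) - lam * B(x, y) for fixed (x, lam)
    """
    x, y = x0, y0
    lam = A(x, y) / B(x, y)              # Step 0: lambda_0
    for _ in range(max_iter):
        x = argmin_x(y, lam)             # subproblem (5)
        y = argmin_y(x, lam)             # subproblem (6)
        lam_new = A(x, y) / B(x, y)      # update (7)
        if abs(lam_new - lam) <= tol:    # heuristic stop: lambda_k stagnates
            return x, y, lam_new
        lam = lam_new
    return x, y, lam
```

When \(A\) and \(B\) have separate variables (Proposition 2.2), argmin_x and argmin_y involve only \(A_1-\lambda B_1\) and \(A_2-\lambda B_2\), respectively, which is exactly the simplification discussed below.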

Remark 3.2.

Our componentwise algorithm (Algorithm 3.2) for computing partial minimizers of fractional optimization problems combines ideas of the original Dinkelbach method (Algorithm 3.1) and the so-called alternating direction method (see, e.g., Boţ and Csetnek [21], Boţ, Dao and Li [22], Boyd et al. [23], Geissler et al. [24], Goldstein et al. [25], Kleinert and Schmidt [26]). However, one has to notice that, compared to the alternating direction method, in our approach the objective function \(A-\lambda_{k-1}B\) keeps changing at every iteration due to the sequence \((\lambda_k)\).

In the particular case when partial minimizers coincide with global minimizers (see Section 2), the componentwise algorithm will give an alternative to the original Dinkelbach procedure (Algorithm 3.1) for computing global minimizers.

In view of Proposition 2.2, for the case when the functions \(A\) and \(B\) have separate variables, our componentwise algorithm requires at each step \(k\) to separately solve the minimization problems

\[A_1(\cdot)-\lambda_{k-1}B_1(\cdot)\to\min_{D_1}\quad\text{for obtaining }x_k,\]

\[A_2(\cdot)-\lambda_{k-1}B_2(\cdot)\to\min_{D_2}\quad\text{for obtaining }y_k,\]

instead of the global minimization problem

\[A-\lambda_{k-1}B\to\min_{D_1\times D_2}\quad\text{for obtaining }(x_k,y_k)\]

required by the original Dinkelbach algorithm.

Remark 3.3.

We recommend implementing Algorithm 3.2 in combination with other well-known fractional programming techniques. Notably, Boţ and Csetnek [21] introduced modifications of the Dinkelbach algorithm for certain classes of fractional programming problems. Instead of completely solving the scalarized problems in each iteration, these modifications take only one iterative step of a numerical method. This approach has been further developed in [22]. Furthermore, Algorithm 3.2 could also be adapted following the approach presented in Orzan and Precup [15]. This adaptation would involve replacing the exact solutions of the minimization problems at each iterative step with approximate solutions, introducing a predefined error tolerance in the process.

Remark 3.4.

It is worth noticing that the most problematic step when applying the algorithm is solving the minimization problems given by (5) and (6). Certainly, from an efficiency perspective, these problems should be easier to solve than the original fractional problem. In many cases this can be guaranteed if we consider the following frameworks:

  • Convex-Concave Fractional Programs: Assume that D1, D2 are convex sets, A is nonnegative and convex with respect to (w.r.t.) each variable and B is concave w.r.t. each variable. Then the minimization problems at hand turn into classical convex problems.

  • Convex-Convex Fractional Programs: Consider that D1, D2 are convex sets, A is nonnegative and convex w.r.t. each variable and B is also convex w.r.t. each variable. Then the objective functions involved in the minimization problems mentioned above are differences of convex functions. Such problems are known in the literature as DC optimization problems and can be solved by using specific techniques (see, e.g., Hiriart-Urruty [27], Le Thi and Pham [28, 29], de Oliveira [30], Singer [31, Chapter 8], Toland [32]).

Our last remark provides some conditions under which the structure of the minimization problems involved in the algorithm is simpler to deal with. The following proposition states existence results for the solutions of problems (5) and (6). Some other conditions of generalized convexity type (see, e.g., Breckner and Kassay [33]) might also be considered.

Proposition 3.1.

The minimization problems given by (5) and (6) have solutions if any of the following conditions is fulfilled:

  • (a)

    \(D_1\), \(D_2\) are compact sets, \(A\) is lower semicontinuous (l.s.c.) in each variable and \(B\) is continuous in each variable.

  • (b)

    \(D_1=X\) and \(D_2=Y\) are finite-dimensional normed spaces, \(A\) is l.s.c. in each variable, coercive in \(x\) (i.e., \(\lim_{|x|_X\to\infty}A(x,y)=\infty\) for every \(y\in Y\)), coercive in \(y\) (i.e., \(\lim_{|y|_Y\to\infty}A(x,y)=\infty\) for every \(x\in X\)) and \(B\) is continuous in each variable.

  • (c)

    \(D_1=X\), \(D_2=Y\) are reflexive Banach spaces, \(A\) is both l.s.c. and convex in each variable, coercive in \(x\), coercive in \(y\) and nonnegative, whereas \(B\) is both upper semicontinuous and concave in each variable.

Proof.

For (a) the result follows by the classical Weierstrass theorem. (b) follows easily by our assumptions, taking into account the compactness of the unit ball and the Weierstrass theorem. (c) follows by Zălinescu [34, Theorem 2.5.1]. ∎

4 Convergence of the componentwise Dinkelbach algorithm

In this section, we present the results concerning the convergence of the sequences \((\lambda_k)\), \((x_k)\), \((y_k)\) given by our Algorithm 3.2.

Theorem 4.1.

The sequence \((\lambda_k)\) given by Algorithm 3.2 is nonincreasing and convergent to a real number \(\lambda^*\).

Proof.

Let \(k\ge 1\). The inequality \(\lambda_k\le\lambda_{k-1}\) is equivalent to \(\frac{A(x_k,y_k)}{B(x_k,y_k)}\le\lambda_{k-1}\) and, since \(B\) is positive, to the inequality

\[A(x_k,y_k)-\lambda_{k-1}B(x_k,y_k)\le 0\]

that we have to prove. Using the definition of the point \(y_k\), we have

\[A(x_k,y_k)-\lambda_{k-1}B(x_k,y_k)\le A(x_k,y_{k-1})-\lambda_{k-1}B(x_k,y_{k-1}).\]

By doing the same thing for \(x_k\) we obtain

\[A(x_k,y_{k-1})-\lambda_{k-1}B(x_k,y_{k-1})\le A(x_{k-1},y_{k-1})-\lambda_{k-1}B(x_{k-1},y_{k-1})=0.\]

These two inequalities imply the desired result. Thus \((\lambda_k)\) is nonincreasing and, since it is also bounded from below, it converges to some \(\lambda^*\in\mathbb{R}\). ∎

Theorem 4.2.

Assume that \(D_1,D_2\) are closed subsets of the normed spaces \(X\) and \(Y\), respectively, the operators \(A\) and \(B\) are continuous, and

\[x_k\to x^*,\qquad y_k\to y^*.\tag{8}\]

Then the following assertions hold true:

  • (a)

    One has

    \[\lambda^*=\frac{A(x^*,y^*)}{B(x^*,y^*)}.\]

  • (b)

    Denoting

    \[m_{k,1}:=A(x_k,y_{k-1})-\lambda_{k-1}B(x_k,y_{k-1}),\qquad m_{k,2}:=A(x_k,y_k)-\lambda_{k-1}B(x_k,y_k),\]

    one has

    \[\lim_{k\to\infty}m_{k,1}=\lim_{k\to\infty}m_{k,2}=0.\]

  • (c)

    \((x^*,y^*)\) is a partial minimizer of \(A(x,y)/B(x,y)\) on \(D_1\times D_2\).

Proof.

Conclusions (a) and (b) follow from (7). To prove (c) we start from

\[m_{k,1}\le A(x,y_{k-1})-\lambda_{k-1}B(x,y_{k-1})\]

for every \(x\in D_1\). By letting \(k\) go to infinity, we obtain that

\[0\le A(x,y^*)-\lambda^*B(x,y^*),\]

which is equivalent to \(\lambda^*\le\frac{A(x,y^*)}{B(x,y^*)}\) \((x\in D_1)\). Similarly, for any \(y\in D_2\) one has

\[m_{k,2}\le A(x_k,y)-\lambda_{k-1}B(x_k,y),\]

whence, since \(m_{k,2}\to 0\), we get

\[0\le A(x^*,y)-\lambda^*B(x^*,y),\]

that is, \(\lambda^*\le\frac{A(x^*,y)}{B(x^*,y)}\) \((y\in D_2)\). Hence we conclude that \((x^*,y^*)\) is a partial minimizer. ∎

On our way to ensuring that condition (8) holds, we first guarantee the boundedness of one of the sequences \((x_k)\) and \((y_k)\). To this aim, we assume that \(D_1=X\) and \(D_2=Y\) are the entire normed spaces and that the following hypothesis holds true:

(H1)

The functional \(A(x,y)\) is coercive in \(x\) uniformly w.r.t. \(y\), i.e., for any \(M>0\), there is \(l_M>0\) such that \(A(x,y)\ge M\) for \(|x|_X\ge l_M\) and all \(y\in Y\).

Remark 4.1.

One can easily prove that if the functional \(A\) is coercive in \(x\) uniformly w.r.t. \(y\), then it is also coercive in \(x\) (as defined in (b) of Proposition 3.1). The converse, however, is not generally true (take \(A(x,y)=x^2-y^2\) and consider the points of the first bisector \(x=y\) to reach a contradiction).

Proposition 4.3.

Under assumption (H1), the sequence (xk) is bounded.

Proof.

Since \((\lambda_k)\) and the operator \(B\) are bounded, there exists \(M>0\) such that

\[A(x_k,y_k)\le M\quad\text{for all }k.\tag{9}\]

Consequently, if we suppose that the sequence \((x_k)\) is unbounded, then by the coercivity condition (H1) we can find an unbounded subsequence \((x_{k_p})\) of \((x_k)\) and arrive at a contradiction with (9). ∎

In the following part of the section we proceed with the essential step of ensuring the convergence of the sequences \((x_k)\), \((y_k)\), which will conclude the convergence of our Algorithm 3.2. In order to achieve this, we consider that \(X\) and \(Y\) are Hilbert spaces, \(D_1=X\), \(D_2=Y\), and that \(A\), \(B\) are \(C^1\) functionals whose partial (Fréchet) derivatives (see, e.g., Precup [35]) satisfy suitable Lipschitz continuity or monotonicity conditions.

We denote by \(A_x,A_y,B_x\) and \(B_y\) the partial Fréchet derivatives of \(A\) and \(B\), and we assume that

(H2)

The derivatives \(B_x\) and \(B_y\) are bounded on \(X\times Y\).

Now we define the operators

\[N_{11}(x,y):=c_1x-A_x(x,y),\qquad N_{12}(x,y):=c_1y-A_y(x,y),\]

\[N_{21}(x,y):=c_2x-B_x(x,y),\qquad N_{22}(x,y):=c_2y-B_y(x,y),\]

where

\[c_1\ge\max\{1,\lambda_0\}+\lambda_0c_2\quad\text{and}\quad c_2>0.\tag{10}\]

Furthermore we also assume that

(H3) There exist some positive constants \(a_{ij},b_{ij}\), \(i,j\in\{1,2\}\), such that

\[|N_{11}(x,y)-N_{11}(\bar x,\bar y)|_X\le a_{11}|x-\bar x|_X+a_{12}|y-\bar y|_Y,\]

\[|N_{12}(x,y)-N_{12}(\bar x,\bar y)|_Y\le a_{21}|x-\bar x|_X+a_{22}|y-\bar y|_Y,\]

\[|N_{21}(x,y)-N_{21}(\bar x,\bar y)|_X\le b_{11}|x-\bar x|_X+b_{12}|y-\bar y|_Y,\]

\[|N_{22}(x,y)-N_{22}(\bar x,\bar y)|_Y\le b_{21}|x-\bar x|_X+b_{22}|y-\bar y|_Y\]

for all \(x,\bar x\in X\) and \(y,\bar y\in Y\).

We denote

\[c_{11}:=a_{11}+b_{11},\quad c_{12}:=a_{12}+b_{12},\quad c_{21}:=a_{21}+b_{21},\quad c_{22}:=a_{22}+b_{22},\]

\[a:=\frac{c_{12}c_{21}}{(1-c_{11})(1-c_{22})},\]

and we assume that

(H4) \(c_{11}<1\), \(c_{22}<1\) and \(a<1\).

We now present a sufficient condition for the convergence of the sequences (xk) and (yk), reminiscent of Banach’s fixed point theorem.

Theorem 4.4.

Under the assumptions (H1)-(H4), the sequences (xk) and (yk) are convergent.

Proof.

By the definition of \(x_k\) and Fermat's theorem we have that

\[A_x(x_k,y_{k-1})-\lambda_{k-1}B_x(x_k,y_{k-1})=0,\]

hence

\[A_x(x_k,y_{k-1})-\lambda^*B_x(x_k,y_{k-1})+\alpha_k=0,\tag{11}\]

where \(\alpha_k=(\lambda^*-\lambda_{k-1})B_x(x_k,y_{k-1})\). In a similar manner we obtain for \(y_k\) that

\[A_y(x_k,y_k)-\lambda^*B_y(x_k,y_k)+\beta_k=0,\tag{12}\]

where \(\beta_k=(\lambda^*-\lambda_{k-1})B_y(x_k,y_k)\). We note that, in virtue of (H2), the sequences \((\alpha_k)\) and \((\beta_k)\) converge to zero. From relation (11) we obtain that

\[c_1x_k-N_{11}(x_k,y_{k-1})-\lambda^*c_2x_k+\lambda^*N_{21}(x_k,y_{k-1})+\alpha_k=0,\]

which is equivalent to

\[(c_1-\lambda^*c_2)\,x_k=N_{11}(x_k,y_{k-1})-\lambda^*N_{21}(x_k,y_{k-1})-\alpha_k.\tag{13}\]

Taking \(k+p\) in the role of \(k\) in formula (13), we infer that

\[(c_1-\lambda^*c_2)\,x_{k+p}=N_{11}(x_{k+p},y_{k+p-1})-\lambda^*N_{21}(x_{k+p},y_{k+p-1})-\alpha_{k+p}.\tag{14}\]

From the hypothesis (H3) and by combining relations (13) and (14), we have that

\[|c_1-\lambda^*c_2|\,|x_{k+p}-x_k|_X\le a_{11}|x_{k+p}-x_k|_X+a_{12}|y_{k+p-1}-y_{k-1}|_Y+\lambda^*b_{11}|x_{k+p}-x_k|_X+\lambda^*b_{12}|y_{k+p-1}-y_{k-1}|_Y+|\alpha_{k+p}-\alpha_k|_X.\]

Since \(c_1>\lambda_0c_2\), \(\lambda_0\ge\lambda^*\) and \(c_2>0\), one has \(c_1>\lambda^*c_2\) and so

\[|x_{k+p}-x_k|_X\le\frac{a_{11}}{c_1-\lambda^*c_2}|x_{k+p}-x_k|_X+\frac{a_{12}}{c_1-\lambda^*c_2}|y_{k+p-1}-y_{k-1}|_Y+\frac{\lambda^*b_{11}}{c_1-\lambda^*c_2}|x_{k+p}-x_k|_X+\frac{\lambda^*b_{12}}{c_1-\lambda^*c_2}|y_{k+p-1}-y_{k-1}|_Y+\frac{1}{c_1-\lambda^*c_2}|\alpha_{k+p}-\alpha_k|_X.\]

Taking into account the assumptions imposed on \(c_1\) and \(c_2\) in (10), we obtain that \(c_1\ge\max\{1,\lambda_0\}+\lambda_0c_2\ge(1+c_2)\lambda_0\), which leads to \(\frac{\lambda^*}{c_1-\lambda^*c_2}\le\frac{\lambda_0}{c_1-\lambda_0c_2}\), and consequently we infer that \(\frac{\lambda^*}{c_1-\lambda^*c_2}\le 1\). In addition, since \(c_1\ge 1+\lambda_0c_2\ge 1+\lambda^*c_2\), it follows that \(\frac{1}{c_1-\lambda^*c_2}\le 1\). Hence

\[|x_{k+p}-x_k|_X\le(a_{11}+b_{11})|x_{k+p}-x_k|_X+(a_{12}+b_{12})|y_{k+p-1}-y_{k-1}|_Y+|\alpha_{k+p}-\alpha_k|_X.\tag{15}\]

By making use of relation (12), it can be deduced in a similar manner that

\[|c_1-\lambda^*c_2|\,|y_{k+p}-y_k|_Y\le a_{21}|x_{k+p}-x_k|_X+a_{22}|y_{k+p}-y_k|_Y+\lambda^*b_{21}|x_{k+p}-x_k|_X+\lambda^*b_{22}|y_{k+p}-y_k|_Y+|\beta_{k+p}-\beta_k|_Y,\]

and, based on (10), we obtain that

\[|y_{k+p}-y_k|_Y\le(a_{21}+b_{21})|x_{k+p}-x_k|_X+(a_{22}+b_{22})|y_{k+p}-y_k|_Y+|\beta_{k+p}-\beta_k|_Y.\tag{16}\]

Consequently, relations (15) and (16) become

\[(1-c_{11})|x_{k+p}-x_k|_X\le c_{12}|y_{k+p-1}-y_{k-1}|_Y+|\alpha_{k+p}-\alpha_k|_X,\]

\[(1-c_{22})|y_{k+p}-y_k|_Y\le c_{21}|x_{k+p}-x_k|_X+|\beta_{k+p}-\beta_k|_Y.\]

Hence, due to (H4), we arrive at

\[|x_{k+p}-x_k|_X\le\frac{c_{12}}{1-c_{11}}|y_{k+p-1}-y_{k-1}|_Y+\frac{1}{1-c_{11}}|\alpha_{k+p}-\alpha_k|_X\tag{17}\]

and

\[|y_{k+p}-y_k|_Y\le\frac{c_{21}}{1-c_{22}}|x_{k+p}-x_k|_X+\frac{1}{1-c_{22}}|\beta_{k+p}-\beta_k|_Y.\tag{18}\]

Using relation (18) with \(k-1\) in the role of \(k\), we obtain that

\[|y_{k+p-1}-y_{k-1}|_Y\le\frac{c_{21}}{1-c_{22}}|x_{k+p-1}-x_{k-1}|_X+\frac{1}{1-c_{22}}|\beta_{k+p-1}-\beta_{k-1}|_Y.\tag{19}\]

By combining relations (17) and (19) we are led to

\[|x_{k+p}-x_k|_X\le\frac{c_{12}c_{21}}{(1-c_{11})(1-c_{22})}|x_{k+p-1}-x_{k-1}|_X+\frac{c_{12}}{(1-c_{11})(1-c_{22})}|\beta_{k+p-1}-\beta_{k-1}|_Y+\frac{1}{1-c_{11}}|\alpha_{k+p}-\alpha_k|_X.\]

Consequently, denoting

\[\gamma_{k,p}:=|x_{k+p}-x_k|_X,\qquad b_{k,p}:=\frac{c_{12}}{(1-c_{11})(1-c_{22})}|\beta_{k+p-1}-\beta_{k-1}|_Y+\frac{1}{1-c_{11}}|\alpha_{k+p}-\alpha_k|_X,\]

one can rewrite the last inequality as

\[\gamma_{k,p}\le a\,\gamma_{k-1,p}+b_{k,p}.\]

By making use of [36, Lemma 3.2] and assumption (H4), we obtain that \(\gamma_{k,p}\to 0\) as \(k\to\infty\), uniformly in \(p\); hence \((x_k)\) is a Cauchy sequence, therefore convergent. Taking into account relation (18), we obtain that \((y_k)\) is also convergent, and the conclusion follows. ∎

Remark 4.2.

Taking into account Theorem 4.2, under the assumptions of Theorem 4.4, if \(x^*\) and \(y^*\) are the limits of the sequences \((x_k)\) and \((y_k)\) given by the componentwise Dinkelbach algorithm, then \((x^*,y^*)\) is a partial minimizer for \(A(x,y)/B(x,y)\).

We are now going to put our results to the test by providing a simple example.

Example 4.5.

Let \(X=Y=\mathbb{R}\), \(\theta_1,\theta_2>0\) and \(A,B:\mathbb{R}^2\to\mathbb{R}\) be given by

\[A(x,y)=(x+1)^2+y^2-1\quad\text{and}\quad B(x,y)=1+\frac{\theta_1x^2}{1+x^2}+\frac{\theta_2y^2}{1+y^2}.\]

One can easily check that \(A\) and \(B\) satisfy condition (b) from Proposition 3.1, hence the existence of the sequences \((x_k)\) and \((y_k)\) is ensured. We start with \(\lambda_0=A(0,0)/B(0,0)=0\). Assumptions (H1) and (H2) are quickly verified. After some simple computations, we have that

\[N_{11}(x,y)=(c_1-2)x-2,\qquad N_{12}(x,y)=(c_1-2)y,\]

\[N_{21}(x,y)=c_2x-\frac{2\theta_1x}{(1+x^2)^2},\qquad N_{22}(x,y)=c_2y-\frac{2\theta_2y}{(1+y^2)^2},\]

where the constants \(c_1,c_2\) must satisfy \(c_1\ge 1\) and \(c_2>0\). Thus we have

\[|N_{11}(x,y)-N_{11}(\bar x,\bar y)|\le|c_1-2|\,|x-\bar x|,\]

\[|N_{12}(x,y)-N_{12}(\bar x,\bar y)|\le|c_1-2|\,|y-\bar y|,\]

\[|N_{21}(x,y)-N_{21}(\bar x,\bar y)|\le(c_2+2\theta_1)|x-\bar x|,\]

\[|N_{22}(x,y)-N_{22}(\bar x,\bar y)|\le(c_2+2\theta_2)|y-\bar y|\]

for all \(x,\bar x,y,\bar y\in\mathbb{R}\), hence assumption (H3) is also valid. Consequently we have that

\[c_{11}=|c_1-2|+c_2+2\theta_1,\qquad c_{12}=0,\qquad c_{21}=0,\qquad c_{22}=|c_1-2|+c_2+2\theta_2,\qquad a=0.\]

If we choose \(c_1=2\) and \(c_2=\theta_1=\theta_2=\frac{1}{4}\), then (H4) is also satisfied and, according to Remark 4.2, there exists \((x^*,y^*)\) which is a partial minimizer for \(A(x,y)/B(x,y)\). Notice that the functions \(A\) and \(B\) above have separate variables and, as a consequence, we obtain via Proposition 2.2 that the point \((x^*,y^*)\) is also a global minimizer.
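Although the paper reports no numerical output, this example is easy to run through Algorithm 3.2. The sketch below (our own experiment) solves the one-dimensional subproblems (5) and (6) with scipy.optimize.minimize_scalar; the solver choice, tolerance and iteration cap are assumptions on our part.

```python
from scipy.optimize import minimize_scalar

theta1 = theta2 = 0.25

def A(x, y):
    return (x + 1)**2 + y**2 - 1

def B(x, y):
    return 1 + theta1 * x**2 / (1 + x**2) + theta2 * y**2 / (1 + y**2)

x, y = 0.0, 0.0
lam = A(x, y) / B(x, y)                      # lambda_0 = 0

for _ in range(100):
    # subproblem (5): minimize A(., y) - lam * B(., y) over x
    x = minimize_scalar(lambda t: A(t, y) - lam * B(t, y)).x
    # subproblem (6): minimize A(x, .) - lam * B(x, .) over y
    y = minimize_scalar(lambda t: A(x, t) - lam * B(x, t)).x
    lam_new = A(x, y) / B(x, y)              # update (7)
    if abs(lam_new - lam) < 1e-12:
        break
    lam = lam_new

# (x, y, lam) approximates a partial minimizer and the value of A/B there;
# by Proposition 2.2 it is also a global minimizer for this separable example.
print(x, y, lam)
```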

Acknowledgement

The authors are very grateful to the anonymous referees for their valuable comments that helped us improve the manuscript.

References

  • [1] Shen K, Yu W. Fractional programming for communication systems–Part I: Power control and beamforming. IEEE Transactions on Signal Processing 2018;66(10):2616–2630.
  • [2] Shen K, Yu W. Fractional programming for communication systems–Part II: Uplink scheduling via matching. IEEE Transactions on Signal Processing 2018;66(10):2631–2644.
  • [3] Elbenani B, Ferland JA. Cell formation problem solved exactly with the Dinkelbach algorithm. Report CIRRELT-2012-07, University of Montreal. 2012.
  • [4] Frenk JBG, Schaible S. Fractional programming. In: Hadjisavvas N, Komlósi S, Schaible S, editors. Handbook of generalized convexity and generalized monotonicity. New York (NY): Springer; 2005. p. 335–386.
  • [5] Göpfert A, Riahi H, Tammer C, et al. Variational methods in partially ordered spaces. New York (NY): Springer; 2006.
  • [6] Schaible S. A survey of fractional programming. In: Schaible S, Ziemba WT, editors. Generalized concavity in optimization and economics. New York (NY): Academic Press; 1981. p. 417–440.
  • [7] Schaible S. Bibliography in fractional programming. Math. Methods Oper. Res. 1982;26:211–241.
  • [8] Schaible S, Shi J. Fractional programming: The sum-of-ratios case. Optim. Methods Softw. 2003;18:219–229.
  • [9] Precup R. Linear and semilinear partial differential equations. Berlin (DE): De Gruyter; 2013.
  • [10] Stancu-Minasian IM. Fractional programming. Theory, methods and applications. Dordrecht (NL): Kluwer; 1997.
  • [11] Stancu-Minasian IM, Tigan S. Continuous time linear-fractional programming. The minimum-risk approach. RAIRO Oper. Res. 2000;34:397–409.
  • [12] Dinkelbach W. On nonlinear fractional programming. Manage. Sci. 1967;13:492–498.
  • [13] Crouzeix JP, Ferland JA. Algorithms for generalized fractional programming. Math. Program. 1991;52:191–207.
  • [14] Crouzeix JP, Ferland JA, Nguyen VH. Revisiting Dinkelbach-type algorithms for generalized fractional programs. OPSEARCH. 2008;45:97–110.
  • [15] Orzan A, Precup R. Dinkelbach type approximation algorithms for nonlinear fractional optimization problems. Numer. Funct. Anal. Optim. 2023;44(9):954–969. DOI: 10.1080/01630563.2023.2217893
  • [16] Ródenas RG, López ML, Verastegui D. Extensions of Dinkelbach’s algorithm for solving non-linear fractional programming problems. Top 1999;7:33–70.
  • [17] Shi J. A combined algorithm for fractional programming. Ann. Oper. Res. 2001;103:135–147.
  • [18] Tammer K, Tammer C, Ohlendorf E. Multicriterial fractional optimization. Berlin (DE): Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät II, Institut für Mathematik; 2005.
  • [19] Barbu V, Precupanu T. Convexity and optimization in Banach spaces. Alphen aan de Rijn (NL): Sijthoff and Noordhoff; 1978.
  • [20] Cambini A, Martein L. Generalized convexity and optimization: Theory and applications. Berlin (DE): Springer-Verlag; 2009.
  • [21] Boţ RI, Csetnek ER. Proximal-gradient algorithms for fractional programming. Optimization. 2017;66(8):1383–1396.
  • [22] Boţ RI, Dao MN, Li G. Extrapolated proximal subgradient algorithms for nonconvex and nonsmooth fractional programs. Mathematics of Operations Research. 2021;47(3):2415–2443.
  • [23] Boyd S, Parikh N, Chu E, Peleato B, Eckstein J. Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 2011; 3(1):1–122.
  • [24] Geissler B, Morsi A, Schewe L, et al. Penalty alternating direction methods for mixed-integer optimization: A new view on feasibility pumps. SIAM J. Optim. 2017;27(3):1611–1636.
  • [25] Goldstein T, O’Donoghue B, Setzer S, Baraniuk R. Fast alternating direction optimization methods. SIAM J. Imaging Sci. 2014;7(3):1588–1623.
  • [26] Kleinert T, Schmidt M. Computing feasible points of bilevel problems with a penalty alternating direction method. INFORMS J. Comput. 2021;33(1):198–215.
  • [27] Hiriart-Urruty JB. Generalized differentiability / duality and optimization for problems dealing with differences of convex functions. In: Ponstein J, editor. Convexity and duality in optimization. Lecture notes in economics and mathematical systems, vol 256. Berlin, Heidelberg (DE): Springer; 1985. p. 37–70.
  • [28] Le Thi HA, Pham DT. The DC (difference of convex functions) programming and DCA revisited with DC models of real world nonconvex optimization problems. Annals of Operations Research. 2005;133:23–46.
  • [29] Le Thi HA, Pham DT. Open issues and recent advances in DC programming and DCA. Journal of Global Optimization. 2023. https://doi.org/10.1007/s10898-023-01272-1.
  • [30] de Oliveira W. The ABC of DC programming. Set-Valued and Variational Analysis. 2020;28:679–706.
  • [31] Singer I. Duality for nonconvex approximation and optimization. New York (NY): Springer; 2006.
  • [32] Toland JF. A duality principle for non-convex optimisation and the calculus of variations. Arch. Rational Mech. Anal. 1979;71:41–61.
  • [33] Breckner WW, Kassay G. A systematization of convexity concepts for sets and functions. J. Convex Anal. 1997;4:109–127.
  • [34] Zălinescu C. Convex analysis in general vector spaces. Singapore (SG): World Scientific Publishing; 2002.
  • [35] Precup R. Methods in nonlinear integral equations. Dordrecht (NL): Springer; 2002.
  • [36] Precup R. Nash-type equilibria and periodic solutions to nonvariational systems. Adv. Nonlinear Anal. 2014;3:197–207.

