
IMPROVING COMPLEXITY OF KARMARKAR’S APPROACH FOR LINEAR PROGRAMMING

DJAMEL BENTERKI and MOUSAAB BOUAFIA§

November 2, 2014.
Published online: January 23, 2015.

Laboratoire de Mathématiques Fondamentales et Numériques LMFN, Département de Mathématiques, Faculté des Sciences, Université Sétif-1, Algérie, e-mail: dj_benterki@yahoo.fr.

§LabCAV, Laboratoire de Contrôle Avancé, Département de Mathématiques, Faculté des Mathématiques et de l’Informatique et des Sciences de la Matière, Université de Guelma, Algérie, e-mail: mousaab84@yahoo.fr.

In this paper, we are interested in the performance of Karmarkar's projective algorithm for linear programming. Based on the work of Schrijver, we propose a new displacement step, better than Schrijver's, which leads to a moderate improvement in the behavior of the algorithm. We then show that the algorithm converges after $\frac{n}{1-\log(2)+\frac{nr^2}{10}}\log\left(\frac{c^te_n}{\varepsilon}\right)$ iterations.

This result is confirmed by numerical experiments showing the efficiency of the obtained algorithm, which are presented at the end of the paper.

MSC. 90C05, 65K05

Keywords. Linear programming, interior point method, potential function.

1 Introduction

Karmarkar's algorithm for solving linear optimization problems [1] is the first truly efficient algorithm that solves these problems in polynomial time. The ellipsoid method also runs in polynomial time, but it is ineffective in practice. Using a potential function, one shows that Karmarkar's algorithm converges after $O(nq+n\log n)$ iterations for a displacement step $\alpha_K=\frac{1}{4}$. Since then, many researchers have studied this algorithm in order to further improve its numerical behavior. The two basic elements considered are the displacement direction, which dominates the computational cost at each iteration, and the displacement step, which plays an important role in the speed of convergence. Indeed, by changing the analytical form of the potential function, Padberg [2] was able to reduce the number of iterations to $O(nq)$ for a displacement step $\alpha_P=\frac{1}{2}$. For his part, Schrijver [7], keeping the same Karmarkar potential function, showed that the algorithm converges after $\frac{n}{1-\log(2)}\log\left(\frac{c^te_n}{\varepsilon}\right)$ iterations for a displacement step $\alpha_S=\frac{1}{1+nr}$.

Our work falls within this framework: drawing on the works of Karmarkar [1] and Schrijver [7], we propose a new displacement step to improve the numerical behavior of Karmarkar's algorithm.

The paper is organized as follows. In Section 2, we present the linear program treated by Karmarkar. Section 3 contains the main contribution of our work: we first propose a new displacement step and show that it is better than Schrijver's; we then give a polynomial convergence result that improves on Schrijver's. In Section 4, we present some computational results. Finally, a conclusion is provided in Section 5.

2 Linear programming problem treated by Karmarkar

In his article [ 1 ] , Karmarkar treats the linear programming problem in the following reduced form:

$(PK)\qquad\begin{cases}\min\ c^tx=z^*=0\\ Ax=0\\ x\in S_n=\{x\in\mathbb{R}^n_+:\ e_n^tx=1\},\end{cases}$

where $c\in\mathbb{R}^n$, $A\in\mathbb{R}^{m\times n}$ is a matrix of full rank ($\operatorname{rank}(A)=m<n$) and $e_n=(1,1,\dots,1)^t\in\mathbb{R}^n$.

The vector $x^0=\frac{e_n}{n}\in S_n$ is the center of the simplex $S_n$.

2.1 Karmarkar's transformation on $S_n$

Karmarkar's transformation is defined at each iteration by:

$T_k:\ S_n\to S_n,\qquad x\mapsto y=T_k(x)=\frac{D_k^{-1}x}{e_n^tD_k^{-1}x},$

where $D_k=\operatorname{diag}(x^k)$ is the $n\times n$ diagonal matrix with the components of $x^k$ as diagonal entries.

The transformation $T_k$ is invertible and we have:

$x=T_k^{-1}(y)=\frac{D_ky}{e_n^tD_ky}.$

The problem obtained from $(PK)$ by the transformation $T_k$ is the linear programming problem:

$(PKT)\qquad\begin{cases}\min\ (D_kc)^ty\\ AD_ky=0\\ y\in S_n=\{y\in\mathbb{R}^n_+:\ e_n^ty=1\},\end{cases}$

which can be written as:

$(PKT)\qquad\begin{cases}\min\ (D_kc)^ty\\ \begin{bmatrix}AD_k\\ e_n^t\end{bmatrix}y=\begin{bmatrix}0\\ 1\end{bmatrix}\\ y\geq 0.\end{cases}$

The transformed problem $(PKT)$ is in Karmarkar's reduced form, and the transformation $T_k$ brings the current iterate $x^k$ to the center of the simplex $S_n$, i.e., $T_k(x^k)=\frac{e_n}{n}$.

Before applying the optimality conditions, Karmarkar [1] relaxes the problem $(PKT)$ by replacing the condition $y\geq 0$ by the sphere $s\!\left(\frac{e_n}{n},\alpha r\right)$ (it is easily shown in [2] that if $y\in s\!\left(\frac{e_n}{n},\alpha r\right)$ then $y\geq 0$), with $r=\frac{1}{\sqrt{n(n-1)}}$ and $0<\alpha<1$.

The problem (PKT) becomes:

$(PKT)_r\qquad\begin{cases}\min\ (D_kc)^ty\\ \begin{bmatrix}AD_k\\ e_n^t\end{bmatrix}y=\begin{bmatrix}0\\ 1\end{bmatrix}\\ \left\|y-\frac{e_n}{n}\right\|^2\leq(\alpha r)^2.\end{cases}$

Using the optimality conditions, the optimal solution of the problem (PKT)r is given by:

$y^k=\frac{e_n}{n}-\alpha r\, d_k,$

where $d_k=\frac{P_k}{\|P_k\|}$, and $P_k$ is the projection of the vector $D_kc$ onto the kernel of the constraint matrix $AD_k$.

Returning via the inverse transformation $T_k^{-1}$, a feasible solution $x^{k+1}$ of the original problem $(PK)$ is obtained, such that $x^{k+1}=T_k^{-1}(y^k)=\frac{D_ky^k}{e_n^tD_ky^k}$.
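For readers who wish to experiment, the iteration described above can be summarized in a few lines of code. The following Python/NumPy sketch is ours (the function name karmarkar_iteration and all variable names are our own choices, not taken from [1]); it projects $D_kc$ onto the kernel of the full constraint matrix of $(PKT)$, which contains the row $e_n^t$ in addition to $AD_k$, so that the next point remains in the simplex.

    import numpy as np

    def karmarkar_iteration(A, c, x, alpha):
        # One projective iteration: scaling by D_k = diag(x^k), projection of
        # D_k c onto the kernel of [A D_k; e_n^t], a step of length alpha*r
        # towards -P_k, then the inverse transformation T_k^{-1}.
        n = x.size
        r = 1.0 / np.sqrt(n * (n - 1))
        e = np.ones(n)
        Dk = np.diag(x)
        B = np.vstack([A @ Dk, e])                          # constraint matrix of (PKT)
        Dc = Dk @ c
        Pk = Dc - B.T @ np.linalg.solve(B @ B.T, B @ Dc)    # projection P_k
        dk = Pk / np.linalg.norm(Pk)
        y = e / n - alpha * r * dk                          # solution of (PKT)_r
        return Dk @ y / (e @ (Dk @ y))                      # x^{k+1} = T_k^{-1}(y)

Starting from $x^0=\frac{e_n}{n}$ and calling this function repeatedly with a fixed step $\alpha$ (for instance $\alpha_S$, or the step $\alpha_B$ proposed below) reproduces the scheme analysed in the rest of the paper.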

Remark 1

Initially, Karmarkar took the displacement step $\alpha$ between 0 and 1. This ensures that all iterates remain inside the simplex and allows the polynomial convergence of the algorithm to be established. Numerically, it appears that the larger the displacement step $\alpha$, the faster the algorithm converges. For this reason, several researchers suggest using a variable displacement step obtained by a line search. Unfortunately, this procedure is expensive and there are no convergence results. Our aim is to improve the theoretical results given by Karmarkar, by using a short step that depends on the size of the considered problem. This idea is inspired by the work of R. Naseri & A. Valinejad [5] and Schrijver [7]. For the moment, we give the polynomial convergence theorem of Karmarkar's algorithm.□

2.2 Convergence of Karmarkar’s algorithm

To study the polynomial convergence of the algorithm, Karmarkar used (in the case $z^*=0$) the following potential function:

$f(x)=n\log(c^tx)-\sum_{i=1}^n\log x_i$

defined on the set

$D_x=\{x\in\mathbb{R}^n:\ x>0,\ Ax=0,\ e_n^tx=1\}.$

Theorem 2 Karmarkar’s convergence theorem

[1]

If $0<\alpha_K\leq\frac{1}{4}$, then starting from $x^0=\frac{e_n}{n}$, after $O(nq+n\log n)$ iterations, the algorithm finds a feasible point $x^*$ such that:

(1) $c^tx^*=0$

or

(2) $\frac{c^tx^*}{c^tx^0}\leq 2^{-q}$, where $q$ is a fixed precision.

Remark 3

In his work, Padberg [2] was able to improve the convergence of the classical Karmarkar algorithm by replacing Karmarkar's potential function with:

$h(x)=\frac{c^tx}{\left(\prod_{i=1}^nx_i\right)^{\frac{1}{n}}}$

defined on the set

$D_x=\{x\in\mathbb{R}^n:\ x>0,\ Ax=0,\ e_n^tx=1\}$

and showed that for a displacement step $\alpha_P=\frac{1}{2}$, the algorithm converges after $O(nq)$ iterations. Independently of Padberg, Schrijver [7] showed that for a displacement step $\alpha_S=\frac{1}{1+nr}$, the algorithm converges after $\frac{n}{1-\log 2}\log\left(\frac{c^te_n}{\varepsilon}\right)$ iterations.□
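As an aside (this observation is ours and is not taken from [2] or [7]), Padberg's potential $h$ and Karmarkar's potential $f$ defined above are linked by a simple identity, which makes the two convergence analyses directly comparable:

$h(x)=\frac{c^tx}{\left(\prod_{i=1}^nx_i\right)^{\frac{1}{n}}}=\exp\!\left(\frac{n\log(c^tx)-\sum_{i=1}^n\log x_i}{n}\right)=\exp\!\left(\frac{f(x)}{n}\right).$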

In the next section, we propose a new value of the displacement step, $\alpha_B$, better than Schrijver's, whose aim is to further improve the speed of convergence of Karmarkar's algorithm.

3 Improving convergence of Karmarkar’s algorithm

In this section, we propose a new displacement step $\alpha_B$, better than the one introduced by Schrijver, in order to reduce the number of iterations and accelerate the convergence of Karmarkar's algorithm; we then give the corresponding complexity results. Before that, we present some lemmas that will help the reader follow the results we establish.

Lemma 4

[6] Let $\psi_n$ be the function defined by:

$\psi_n:\ ]-1,+\infty[^n\to\mathbb{R}_+,\qquad x\mapsto\psi_n(x)=e_n^tx-\sum_{i=1}^n\log(1+x_i),$

then for $\|x\|<1$ we have:

$\psi_1(-\|x\|)\leq\psi_n(x).$

Lemma 5

[6] If $x\in D_x$ then:

$c^tx\leq\exp\!\left(\frac{f(x)}{n}\right),$

where $f$ is Karmarkar's potential function.
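A short justification, which is ours and is not part of [6], may help the reader: using the identity $\exp\left(\frac{f(x)}{n}\right)=h(x)$ recalled after Remark 3, Lemma 5 follows from the arithmetic-geometric mean inequality, since $c^tx\geq z^*=0$ for every feasible $x$ and

$\exp\!\left(\frac{f(x)}{n}\right)=\frac{c^tx}{\left(\prod_{i=1}^nx_i\right)^{\frac{1}{n}}}\geq c^tx,\qquad\text{because}\qquad\left(\prod_{i=1}^nx_i\right)^{\frac{1}{n}}\leq\frac{e_n^tx}{n}=\frac{1}{n}\leq 1.$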

More generally, Keraghel [2] showed the following lemma:

Lemma 6

[2] Let $x^k$ be the $k$-th iterate of Karmarkar's algorithm, then:

$\frac{c^tx^k}{c^tx^0}\leq\left(\exp\!\left[f(x^k)-f(x^0)\right]\right)^{\frac{1}{n}},$

where $x^0=\frac{e_n}{n}$.

Lemma 7

[6] At each iteration $k$, the potential function decreases by the quantity $\Delta$:

$f(x^{k+1})-f(x^k)=-\Delta,$

where

$x^{k+1}=T_k^{-1}(y^k)\quad\text{and}\quad \Delta=n\log\!\left(\frac{c^tD_ke_n}{c^tD_ky^k}\right)+\sum_{i=1}^n\log y_i^k.$

Lemma 8

[6] If $0<\alpha<\frac{1}{nr}$ then:

$\Delta\geq\alpha n^2r^2+n\,\psi_1\!\left(-\alpha\frac{r}{R}\right)-\psi_1(-\alpha nr),$

where $R=\sqrt{\frac{n-1}{n}}$.

Based on the previous lemmas, we present the following results.

Lemma 9

For $\alpha_B=\frac{5+nr^2}{5+(5+nr^2)nr}$ and $\alpha_S=\frac{1}{1+nr}$, we have:

a) $\alpha_B>\alpha_S$,

b) $\psi_1(-\alpha_Bnr^2)>\frac{1}{5}\alpha_Bn^2r^4$,

c) $\Delta>\psi_1\!\left(\frac{1}{5}nr(5+nr^2)\right)$.

Proof.

a) Since $5+nr^2>5$, we have

$(5+nr^2)(1+nr)>5+(5+nr^2)nr\ \Longrightarrow\ \frac{5+nr^2}{5+(5+nr^2)nr}>\frac{1}{1+nr},$

which gives:

$\alpha_B>\alpha_S.$

b) By definition, we have:

$\psi_1(-\alpha_Bnr^2)=-\alpha_Bnr^2-\log(1-\alpha_Bnr^2).$

Since $\log(1-x)<-x-\frac{x^2}{2}$ for $0<x<1$, then:

$\psi_1(-\alpha_Bnr^2)>-\alpha_Bnr^2+\alpha_Bnr^2+\frac{\alpha_B^2n^2r^4}{2}=\frac{\alpha_B^2n^2r^4}{2}>\frac{\alpha_Bn^2r^4}{2}\,\alpha_S>\frac{1}{5}\alpha_Bn^2r^4,$

because $\alpha_B>\alpha_S$ by a) and

$\alpha_S=\frac{1}{1+nr}\geq\frac{1}{1+\sqrt{2}}>\frac{2}{5}\qquad\left(\text{since }nr=\sqrt{\tfrac{n}{n-1}}\leq\sqrt{2}\text{ for }n\geq 2\right),$

so:

$\psi_1(-\alpha_Bnr^2)>\frac{1}{5}\alpha_Bn^2r^4.$

c) By Lemma 8 and since $\frac{1}{R}=nr$, we have:

$\Delta\geq\alpha n^2r^2+n\,\psi_1(-\alpha nr^2)-\psi_1(-\alpha nr),\qquad\text{with }0<\alpha<\frac{1}{nr}.$

For $\alpha=\alpha_B=\frac{5+nr^2}{5+(5+nr^2)nr}$ and by b) we get:

$\begin{aligned}
\Delta&>\alpha_Bn^2r^2+n\left(\tfrac{1}{5}\alpha_Bn^2r^4\right)-\psi_1(-\alpha_Bnr)\\
&=\alpha_Bn^2r^2+\tfrac{1}{5}nr^2(\alpha_Bn^2r^2)+\alpha_Bnr+\log(1-\alpha_Bnr)\\
&=\left(n^2r^2+\tfrac{1}{5}nr^2(n^2r^2)+nr\right)\alpha_B+\log(1-\alpha_Bnr)\\
&=\tfrac{1}{5}\left(5n^2r^2+nr^2(n^2r^2)+5nr\right)\frac{5+nr^2}{5+(5+nr^2)nr}+\log\!\left(1-\frac{5+nr^2}{5+(5+nr^2)nr}\,nr\right)\\
&=\tfrac{1}{5}nr\left(5nr+nr^2(nr)+5\right)\frac{5+nr^2}{5+(5+nr^2)nr}+\log\!\left(1-\frac{5+nr^2}{5+(5+nr^2)nr}\,nr\right)\\
&=\tfrac{1}{5}nr\left((5+nr^2)(nr)+5\right)\frac{5+nr^2}{5+(5+nr^2)nr}+\log\!\left(\frac{5}{5+(5+nr^2)nr}\right)\\
&=\tfrac{1}{5}nr(5+nr^2)+\log\!\left(\frac{5}{5+(5+nr^2)nr}\right)\\
&=\tfrac{1}{5}nr(5+nr^2)+\log\!\left(\frac{1}{1+\frac{(5+nr^2)nr}{5}}\right)\\
&=\tfrac{1}{5}nr(5+nr^2)-\log\!\left(1+\tfrac{1}{5}nr(5+nr^2)\right)\\
&=\psi_1\!\left(\tfrac{1}{5}nr(5+nr^2)\right).
\end{aligned}$

Hence:

$\Delta>\psi_1\!\left(\tfrac{1}{5}nr(5+nr^2)\right).$

Remark 10

It is easily shown that $\psi_1$ is increasing on $[0,+\infty[$ and in particular:

$\psi_1\!\left(\tfrac{1}{5}nr(5+nr^2)\right)\geq\psi_1(nr)\geq\psi_1(1)=1-\log(2).$
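Lemma 9 and Remark 10 involve only elementary quantities, so they are easy to check numerically. The following small Python sketch is ours (not part of the original analysis); it evaluates both steps and the values of $\psi_1$ appearing in Remark 10 for a few dimensions:

    import numpy as np

    def psi1(t):
        # psi_1(t) = t - log(1 + t), defined for t > -1
        return t - np.log1p(t)

    for n in (4, 10, 100, 1000):
        r = 1.0 / np.sqrt(n * (n - 1))
        nr = n * r
        alpha_S = 1.0 / (1.0 + nr)
        alpha_B = (5 + n * r**2) / (5 + (5 + n * r**2) * nr)
        assert alpha_B > alpha_S                                        # Lemma 9 a)
        assert psi1(nr * (5 + n * r**2) / 5) >= psi1(nr) >= psi1(1.0)   # Remark 10
        print(n, alpha_S, alpha_B, psi1(1.0) + n * r**2 / 10)

The last printed value, $\psi_1(1)+\frac{nr^2}{10}$, is the guaranteed decrease of the potential per iteration used in the next subsection.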

3.1 Improved displacement step

Recall that for $\alpha_S=\frac{1}{1+nr}$, Schrijver [7] showed that the algorithm converges after $\frac{n}{1-\log(2)}\log\left(\frac{c^te_n}{\varepsilon}\right)$ iterations.

In this section, based on the previous lemmas, we show in the following theorem that the new displacement step $\alpha_B$ reduces the number of iterations required for the convergence of Karmarkar's algorithm.

Theorem 11

If

$\alpha_B=\frac{5+nr^2}{5+(5+nr^2)nr},$

then Karmarkar’s algorithm converges after

$\frac{n}{1-\log(2)+\frac{nr^2}{10}}\log\!\left(\frac{c^te_n}{\varepsilon}\right)$

iterations.

Proof.

By Lemma 9 c) we have:

$\Delta>\psi_1\!\left(\tfrac{1}{5}nr(5+nr^2)\right)=\psi_1\!\left(nr+\tfrac{nr}{5}nr^2\right),$

and since $nr>1$ and $\psi_1$ is increasing, we obtain

(3.1) $\quad\Delta>\psi_1\!\left(1+\tfrac{nr^2}{5}\right),$

and since $\log(1+x)\leq x$, we have:

$\psi_1\!\left(1+\tfrac{nr^2}{5}\right)-\psi_1(1)=\tfrac{nr^2}{5}-\log\!\left(1+\tfrac{nr^2}{10}\right)\geq\tfrac{nr^2}{10}.$

So from (3.1), we get:

(3.2) $\quad\Delta>\psi_1(1)+\frac{nr^2}{10}.$

Also from Lemma 7, we have:

$f(x^k)-f(x^{k-1})=-\Delta,\qquad\forall k\in\mathbb{N}^*,$

using (3.2) we get:

$f(x^i)-f(x^{i-1})<-\left(\psi_1(1)+\frac{nr^2}{10}\right),$

accordingly:

$\sum_{i=1}^k\left(f(x^i)-f(x^{i-1})\right)<-\sum_{i=1}^k\left(\psi_1(1)+\frac{nr^2}{10}\right),$

which gives:

$f(x^k)-f(x^0)<-k\left(\psi_1(1)+\frac{nr^2}{10}\right),\qquad\text{where }x^0=\frac{e_n}{n},$

then:

$f(x^k)<-k\left(\psi_1(1)+\frac{nr^2}{10}\right)+f(x^0),$

where:

$f(x^0)=f\!\left(\frac{e_n}{n}\right)=n\log\!\left(\frac{c^te_n}{n}\right)-\sum_{i=1}^n\log\frac{1}{n}=n\log(c^te_n),$

so:

(3.3) $\quad\frac{f(x^k)}{n}<\frac{-k\left(\psi_1(1)+\frac{nr^2}{10}\right)+n\log(c^te_n)}{n},$

by Lemma 5 and (3.3), we have:

$c^tx^k<\exp\!\left(\frac{-k\left(\psi_1(1)+\frac{nr^2}{10}\right)+n\log(c^te_n)}{n}\right).$

Recall that the algorithm converges when $c^tx^k$ is less than $\varepsilon$ ($\varepsilon>0$ small enough). So it remains to find $k$ satisfying:

$\exp\!\left(\frac{-k\left(\psi_1(1)+\frac{nr^2}{10}\right)+n\log(c^te_n)}{n}\right)\leq\varepsilon,$

which gives:

(3.4) $\quad k\geq\frac{n}{1-\log 2+\frac{nr^2}{10}}\log\!\left(\frac{c^te_n}{\varepsilon}\right).$

Finally, $c^tx^k\leq\varepsilon$ as soon as $k$ satisfies inequality (3.4), hence the result.

Remark 12

The previous Theorem 11 provides a displacement step $\alpha_B$ greater than Schrijver's displacement step $\alpha_S$, which improves both the practical behavior of the algorithm and the theoretical performance results, in the sense that the number of iterations given in this theorem is smaller than the number of iterations given by Schrijver.□
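To give a concrete feeling for Remark 12, the sketch below is ours (the value chosen for $c^te_n$ is arbitrary and only for illustration); it evaluates the two theoretical iteration bounds, $\frac{n}{1-\log 2}\log\frac{c^te_n}{\varepsilon}$ for Schrijver and $\frac{n}{1-\log 2+\frac{nr^2}{10}}\log\frac{c^te_n}{\varepsilon}$ for the new step, together with the two steps themselves:

    import numpy as np

    def bounds_and_steps(n, cten=10.0, eps=1e-6):
        r = 1.0 / np.sqrt(n * (n - 1))
        log_term = np.log(cten / eps)
        k_S = n / (1 - np.log(2)) * log_term                    # Schrijver's bound
        k_B = n / (1 - np.log(2) + n * r**2 / 10) * log_term    # bound of Theorem 11
        alpha_S = 1.0 / (1.0 + n * r)
        alpha_B = (5 + n * r**2) / (5 + (5 + n * r**2) * n * r)
        return k_S, k_B, alpha_S, alpha_B

    for n in (5, 10, 50, 1000):
        print(n, *bounds_and_steps(n))

For small $n$ the new bound is slightly smaller, while for large $n$ both bounds, and both steps (which tend to $\frac{1}{2}$), become nearly identical; this is consistent with the modest gains reported in Table 1 and with the remark made in the conclusion.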

4 Numerical experiments

In this part, we present comparative tests on different numerical examples of linear programming problems taken from the literature [3]. We tested the Ye-Lustig variant of Karmarkar's algorithm (the case where $z^*$ is unknown) with two alternatives: first with the displacement step $\alpha_S$ (Schrijver), then with the displacement step $\alpha_B$ (Bouafia & Benterki). The precision $\varepsilon$ is taken between $10^{-4}$ and $10^{-6}$.

In each case, $k$ denotes the number of iterations required to reach optimality.
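For completeness, we also give a small self-contained Python script (ours) that compares the two steps on a toy problem written directly in the reduced form $(PK)$ with known optimal value $z^*=0$; it is only an illustration and is neither the Ye-Lustig variant nor one of the test problems of [3] used to produce Table 1.

    import numpy as np

    # Toy instance in Karmarkar's reduced form: A e_n = 0 and min c^t x = 0
    # is attained at x = (0, 0, 1/2, 1/2).
    A = np.array([[1.0, -1.0, 1.0, -1.0]])
    c = np.array([1.0, 1.0, 0.0, 0.0])

    def iterations(alpha, eps=1e-6, max_iter=1000):
        n = c.size
        r = 1.0 / np.sqrt(n * (n - 1))
        e = np.ones(n)
        x = e / n                                   # x^0: center of the simplex
        for k in range(1, max_iter + 1):
            Dk = np.diag(x)
            B = np.vstack([A @ Dk, e])              # constraint matrix of (PKT)
            Dc = Dk @ c
            Pk = Dc - B.T @ np.linalg.solve(B @ B.T, B @ Dc)
            y = e / n - alpha * r * Pk / np.linalg.norm(Pk)
            x = Dk @ y / (e @ (Dk @ y))             # x^{k+1} = T_k^{-1}(y)
            if c @ x < eps:
                return k
        return max_iter

    n = c.size
    r = 1.0 / np.sqrt(n * (n - 1))
    alpha_S = 1.0 / (1.0 + n * r)
    alpha_B = (5 + n * r**2) / (5 + (5 + n * r**2) * n * r)
    print("alpha_S:", iterations(alpha_S), "iterations")
    print("alpha_B:", iterations(alpha_B), "iterations")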

Example    (m, n)      Iterations $k$ with $\alpha_S$ (Schrijver)    Iterations $k$ with $\alpha_B$ (Bouafia & Benterki)
1          (3, 4)      18                                            17
2          (3, 5)      26                                            25
3          (4, 7)      36                                            35
4          (5, 9)      47                                            46
5          (5, 11)     46                                            46
6          (6, 12)     48                                            47
7          (16, 26)    71                                            70

Table 1. Comparative table.

5 Conclusion

This study shows that the new displacement step $\alpha_B$ introduces an improvement in the polynomial convergence results and a small reduction in the number of iterations.

Note that when the dimension of the problem becomes large, the displacement steps $\alpha_B$ and $\alpha_S$ both converge to $\frac{1}{2}$. It would probably be interesting to focus much more on the analytical form of the potential function, while preserving all of its good properties, in order to obtain better complexity results.

Acknowledgements

The authors are grateful to the referee, whose detailed comments greatly improved the presentation of the paper.

Bibliography

[1] N. Karmarkar, A new polynomial-time algorithm for linear programming, Combinatorica, 4 (1984), pp. 373-395.

[2] A. Keraghel, Etude adaptative et comparative des principales variantes dans l'algorithme de Karmarkar, Dissertation thesis, Université Joseph Fourier, Grenoble, France, 1989.

[3] A. Keraghel and D. Benterki, Sur les performances de l'algorithme de Karmarkar pour la programmation linéaire, Revue Roumaine des Sciences Techniques - Mécanique Appliquée, 46 (2001) no. 1, pp. 87-96.

[4] I.J. Lustig, A practical approach to Karmarkar's algorithm, Technical Report SOL 85-5, Systems Optimization Laboratory, Department of Operations Research, Stanford University, Stanford, California 94305, 1985.

[5] R. Naseri and A. Valinejad, An extended variant of Karmarkar's interior point algorithm, Applied Mathematics and Computation, 184 (2007), pp. 737-742.

[6] C. Roos, T. Terlaky and J. Vial, Optimization Theory and Algorithm for Linear Programming Optimization, Princeton University, 2001.

[7] A. Schrijver, Theory of Linear and Integer Programming, John Wiley & Sons, New York, 1986.