IMPROVING COMPLEXITY OF KARMARKAR’S APPROACH FOR LINEAR PROGRAMMING

DJAMEL BENTERKI\(^{\ast }\) MOUSAAB BOUAFIA\(^{\S }\)

November 2, 2014.
Published online: January 23, 2015.

\(^{\ast }\)Laboratoire de Mathématiques Fondamentales et Numériques LMFN, Département de Mathématiques, Faculté des Sciences, Université Sétif-1, Algérie, e-mail: dj_benterki@yahoo.fr.

\(^{\S }\)LabCAV, Laboratoire de Contrôle Avancé, Département de Mathématiques, Faculté des Mathématiques et de l’Informatique et des Sciences de la Matière, Université de Guelma, Algérie, e-mail: mousaab84@yahoo.fr.

In this paper, we are interested in the performance of Karmarkar’s projective algorithm for linear programming. Based on the work of Schrijver, we propose a new displacement step, larger than Schrijver’s, which leads to a moderate improvement in the behavior of the algorithm. We then show that the algorithm converges after \(\frac{n}{1-\log \left( 2\right) +(\frac{nr^{2}}{10})}\log \big( \frac{c^{t}e_{n}}{\varepsilon }\big) \) iterations.

This claim is confirmed by numerical experiments showing the efficiency of the obtained algorithm, which are presented at the end of the paper.

MSC. 90C05, 65K05

Keywords. Linear programming, interior point method, potential function.

1 Introduction

Karmarkar’s algorithm for solving linear optimization problems [ 1 ] is the first truly efficient algorithm that solves these problems in polynomial time. The ellipsoid method also runs in polynomial time, but it is ineffective in practice. Using a potential function, one shows that Karmarkar’s algorithm converges after \(O\left( nq+n\log n\right) \) iterations for a displacement step \(\alpha _{K}=\frac{1}{4}\). Since then, many researchers have studied this algorithm in order to further improve its numerical behavior. The two basic elements considered are the direction of displacement, which dominates the computational cost at each iteration, and the displacement step, which plays an important role in the speed of convergence. Indeed, by changing the analytical form of the potential function, Padberg [ 2 ] was able to reduce the number of iterations to \({\mathcal O}\left( nq\right) \) for a displacement step \(\alpha _{P}=\frac{1}{2}\). For his part, Schrijver [ 7 ] , keeping the same Karmarkar potential function, showed that the algorithm converges after \(\frac{n}{1-\log \left( 2\right) }\log \big( \frac{c^{t}e_{n}}{\varepsilon }\big) \) iterations for a displacement step \(\alpha _{S}=\frac{1}{1+nr}\).

Our work falls within this framework: drawing on the works of Karmarkar [ 1 ] and Schrijver [ 7 ] , we propose a new displacement step to improve the numerical behavior of Karmarkar’s algorithm.

The paper is organized as follows. In Section 2, we present the linear program treated by Karmarkar. Section 3 contains the main contribution of our work: we first propose a new displacement step and show that it is better than Schrijver’s, and we then give an improved polynomial convergence result compared to Schrijver’s. In Section 4, we present some computational results. Finally, a conclusion is provided in Section 5.

2 Linear programming problem treated by Karmarkar

In his article [ 1 ] , Karmarkar treats the linear programming problem in the following reduced form:

\[ \left( PK\right) \left\{ \begin{array}[c]{l}\min c^{t}x=z^{\ast }=0\\[1mm] Ax=0\\[1mm] x\in S_{n}=\left\{ x\in \mathbb {R} _{+}^{n}\ ,\ e_{n}^{t}x=1\right\} , \end{array} \right. \]

where \(c\in \mathbb {R} ^{n},\ A\in \mathbb {R} ^{m\times n}\) is a matrix of full rank \(\left( \operatorname {rank}\left( A\right) =m{\lt}n\ \right) \) and \(\ e_{n}=\left( 1,1,....,1\right) ^{t}\in \mathbb {R} ^{n}.\)

The vector \(x^{0}=\frac{e_{n}}{n}\in S_{n},\) is the center of the simplex \(S_{n}\).

2.1 Karmarkar’s Transformation on \(S_{n}\)

Karmarkar’s transformation is defined at each iteration by:

\[ T_{k}:S_{n}\longrightarrow S_{n},\qquad x\mapsto y=T_{k}\left( x\right) =\frac{D_{k}^{-1}x}{e_{n}^{t}D_{k}^{-1}x}, \]

such that \(D_{k}=\operatorname {diag}\left( x^{k}\right) \) is the \(n\times n\) diagonal matrix with the components of \(x^{k}\) as the diagonal entries.

Moreover, the transformation \(T_{k}\) is invertible and we have:

\[ \ x=T_{k}^{-1}\left( y\right) =\frac{D_{k}y}{e_{n}^{t}D_{k}y}. \]

The transformed problem of \(\left( PK\right) \) by the transformation \(T_{k}\) is the linear programming problem:

\[ \left( PKT\right) \left\{ \begin{array}[c]{l}\min \left( D_{k}c\right) ^{t}y\\[1mm] AD_{k}y=0\\[1mm] y\in S_{n}=\left\{ y\in \mathbb {R} _{+}^{n}:e_{n}^{t}y=1\right\} , \end{array} \right. \]

which can be written as:

\[ \left( PKT\right) \left\{ \begin{array}[c]{l}\min \left( D_{k}c\right) ^{t}y\\[2mm] \left[ \begin{array}[c]{c}AD_{k}\\ e_{n}^{t}\end{array} \right] y=\left[ \begin{array}[c]{c}0\\ 1 \end{array} \right] \\[4mm] y\geq 0. \end{array} \right. \]

The transformed problem \(\left( PKT\right) \) is again in Karmarkar’s reduced form, and the transformation \(T_{k}\) maps the current iterate \(x^{k}\) to the center of the simplex \(S_{n},\) i.e., \(T_{k}\left( x^{k}\right) =\frac{e_{n}}{n}.\)
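For illustration, here is a minimal numerical sketch (ours, not part of the paper) of \(T_{k}\) and \(T_{k}^{-1}\) with NumPy; the function names are hypothetical.

```python
import numpy as np

def T(x, xk):
    """Karmarkar's transformation T_k(x) = D_k^{-1} x / (e_n^t D_k^{-1} x), D_k = diag(x^k)."""
    w = x / xk                 # componentwise division implements D_k^{-1} x
    return w / w.sum()         # normalization enforces e_n^t y = 1

def T_inv(y, xk):
    """Inverse transformation T_k^{-1}(y) = D_k y / (e_n^t D_k y)."""
    w = y * xk                 # componentwise product implements D_k y
    return w / w.sum()

xk = np.array([0.1, 0.2, 0.3, 0.4])   # a point of the simplex S_4
print(T(xk, xk))                      # T_k(x^k) = e_n/n = [0.25 0.25 0.25 0.25]
print(T_inv(T(xk, xk), xk))           # recovers x^k
```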

Before applying the optimality conditions, Karmarkar [ 1 ] relaxes the problem \(\left( PKT\right) \) by replacing the condition \(y\geq 0\) by the sphere \(s\left( \frac{e_{n}}{n},\alpha r\right) \) (it is easily shown in [ 2 ] that if \(y\in s\left( \frac{e_{n}}{n},\alpha r\right) \) then \(y\geq 0\)), with \(r=\frac{1}{\sqrt{n\left( n-1\right) }}\) and \(0{\lt}\alpha {\lt}1\).

The problem \(\left( PKT\right) \) becomes:

\[ \left( PKT\right) _{r}\left\{ \begin{array}[c]{l}\min \left( D_{k}c\right) ^{t}y\ \ \ \ \ \ \ \ \ \ \\[2mm] \left[ \begin{array}[c]{c}AD_{k}\\ e_{n}^{t}\end{array} \right] y=\left[ \begin{array}[c]{c}0\\ 1 \end{array} \right] \\[4mm] \left\Vert y-\frac{e_{n}}{n}\right\Vert ^{2}\leq \left( \alpha r\ \right) ^{2}.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \end{array} \right. \]

Using the optimality conditions, the optimal solution of the problem \(\left( PKT\right) _{r}\) is given by:

\[ y^{k}=\frac{e_{n}}{n}-\alpha rd_{k}, \]

where \(d_{k}=\frac{P_{k}}{\left\Vert P_{k}\right\Vert }\), such that \(P_{k}\) is the projection of the vector \(D_{k}c\) onto the kernel of the constraint matrix \(\left[ \begin{array}[c]{c}AD_{k}\\ e_{n}^{t}\end{array} \right] \) of \(\left( PKT\right) _{r}.\)

Applying the inverse transformation \(T_{k}^{-1},\) we obtain a feasible solution \(x^{k+1}\) of the original problem \(\left( PK\right) \), namely \(x^{k+1}=T_{k}^{-1}\left( y^{k}\right) =\frac{D_{k}y^{k}}{e_{n}^{t}D_{k}y^{k}}.\)
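A hedged sketch of one such iteration (our own code, not the authors’ implementation): the projection is taken onto the null space of the full constraint matrix \(\left[ AD_{k};e_{n}^{t}\right] \) of \(\left( PKT\right) _{r}\), and the function name `karmarkar_step` is ours.

```python
import numpy as np

def karmarkar_step(xk, A, c, alpha):
    """One Karmarkar iteration x^k -> x^{k+1} with displacement step alpha (assumes z* = 0)."""
    n = xk.size
    r = 1.0 / np.sqrt(n * (n - 1))              # radius of the inscribed sphere
    Dk = np.diag(xk)
    B = np.vstack([A @ Dk, np.ones(n)])         # constraint matrix [A D_k ; e_n^t]
    v = Dk @ c
    # P_k: orthogonal projection of D_k c onto ker(B)
    Pk = v - B.T @ np.linalg.lstsq(B @ B.T, B @ v, rcond=None)[0]
    dk = Pk / np.linalg.norm(Pk)
    yk = np.ones(n) / n - alpha * r * dk        # minimizer of (PKT)_r over the sphere
    xk1 = (Dk @ yk) / np.sum(Dk @ yk)           # back to (PK) via T_k^{-1}
    return xk1
```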

Remark 1

Initially, Karmarkar took the displacement step \(\alpha \) between \(0\) and \(1\). This ensures that all iterates remain inside the simplex and allows the polynomial convergence of the algorithm to be established. Numerically, it appears that the larger the displacement step \(\alpha \), the faster the algorithm converges. For this reason, several researchers suggest using a variable displacement step computed by a line search. Unfortunately, this procedure is expensive and no convergence results are available for it. Our aim is to improve the theoretical results given by Karmarkar by using a short step that depends on the size of the considered problem. This idea is inspired by the work of R. Naseri and A. Valinejad [ 5 ] and of Schrijver [ 7 ] . For the moment, we recall the polynomial convergence theorem of Karmarkar’s algorithm.□

2.2 Convergence of Karmarkar’s algorithm

To study the polynomial convergence of the algorithm, Karmarkar used (in case \(z^{\ast }=0\)) the following potential function:

\[ f(x)=n\log c^{t}x-{\displaystyle \sum \limits _{i=1}^{n}} \log x_{i} \]

defined on the set

\[ D_{x}=\left\{ x\in \mathbb {R} ^{n}:x{\gt}0,Ax=0,e_{n}^{t}x=1\right\} . \]
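A one-line numerical sketch of this potential function (our illustration, assuming \(x>0\) and \(c^{t}x>0\)):

```python
import numpy as np

def karmarkar_potential(x, c):
    """f(x) = n log(c^t x) - sum_i log(x_i); finite whenever x > 0 and c^t x > 0."""
    return x.size * np.log(c @ x) - np.sum(np.log(x))
```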

Theorem 2 Karmarkar’s convergence theorem

[ 1 ]

If \(0{\lt}\alpha _{K}\leq \frac{1}{4}\), then starting from \(x^{0}=\frac{e_{n}}{n},\) after \({\mathcal O}(nq+n\log n)\) iterations, the algorithm finds a feasible point \(x^{\ast }\) such that:

(1)\(\ c^{t}x^{\ast }=0\)

or

(2) \(\frac{c^{t}x^{\ast }}{c^{t}x^{0}}\leq 2^{-q},\) where \(q\ \)is a fixed precision.

Remark 3

In his work, Padberg [ 2 ] was able to improve the convergence of the classical Karmarkar algorithm by replacing Karmarkar’s potential function with the following one:

\[ h(x)=\tfrac {c^{t}x}{\bigg( {\textstyle \prod \limits _{i=1}^{n}} x_{i}\bigg) ^{\frac{1}{n}}} \]

defined on the set

\[ D_{x}=\left\{ x\in \mathbb {R} ^{n}:x{\gt}0,Ax=0,e_{n}^{t}x=1\right\} \]

and showed that for a displacement step \(\alpha _{P}=\frac{1}{2}\), the algorithm converges after \({\mathcal O}\left( nq\right) \) iterations. In the same direction, and independently of Padberg, Schrijver [ 7 ] showed that for a displacement step \(\alpha _{S}=\frac{1}{1+nr},\) the algorithm converges after \(\frac{n}{1-\log 2}\log \frac{c^{t}e_{n}}{\varepsilon }\) iterations.□
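For completeness, a small sketch of Padberg’s potential (again our own illustration, computed through logarithms to avoid underflow of the product); note that a direct computation gives \(h(x)=\exp \big( \tfrac {f(x)}{n}\big) \), which is consistent with Lemma 5 below.

```python
import numpy as np

def padberg_potential(x, c):
    """h(x) = c^t x / (prod_i x_i)^(1/n), computed through logs for numerical stability."""
    geometric_mean = np.exp(np.mean(np.log(x)))   # (prod_i x_i)^(1/n)
    return (c @ x) / geometric_mean
```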

In the next section, we propose a new displacement step \(\alpha _{B}\), better than Schrijver’s, whose aim is to further improve the speed of convergence of Karmarkar’s algorithm.

3 Improving convergence of Karmarkar’s algorithm

In this section, we propose a new displacement step \(\alpha _{B}\), better than the one introduced by Schrijver, in order to reduce the number of iterations and accelerate the convergence of Karmarkar’s algorithm; we then give the corresponding complexity results. Before that, we present some lemmas that help the reader understand the results we have established.

Lemma 4

[ 6 ] Let \(\psi _{n}\) be a function defined by:

\[ \psi _{n}:\left] -1,+\infty \right[ ^{n}\ \rightarrow \mathbb {R} _{+} \]
\[ \ x\ \ \mapsto \ \psi _{n}(x)=e_{n}^{t}x-{\displaystyle \sum \limits _{i=1}^{n}} \log \left( 1+x_{i}\right) \]

then for \(\ \left\Vert x\right\Vert {\lt}1\ \) we have:

\[ \psi _{1}\left( -\left\Vert x\right\Vert \right) \geq \psi _{n}\left( -x\right) . \]
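A quick numerical illustration of \(\psi _{n}\) and of Lemma 4 (a sketch of ours, under the stated hypotheses \(x_{i}{\gt}-1\) and \(\left\Vert x\right\Vert {\lt}1\)):

```python
import numpy as np

def psi(x):
    """psi_n(x) = e_n^t x - sum_i log(1 + x_i), defined for x_i > -1."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    return np.sum(x) - np.sum(np.log1p(x))

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 0.4, size=5)             # guarantees ||x|| < 1
assert psi(-np.linalg.norm(x)) >= psi(-x)     # Lemma 4: psi_1(-||x||) >= psi_n(-x)
```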

Lemma 5

[ 6 ] If \(x\in D_{x}\) then:

\[ c^{t}x\leq \exp \big( \tfrac {f(x)}{n}\big) \]

where \(f\) is the Karmarkar’s potential function.

More generally, Keraghel [ 2 ] showed the following lemma:

Lemma 6

[ 2 ] Let \(x^{k}\) be the \(k-th\) iterate of Karmarkar’s algorithm, then:

\[ \frac{c^{t}x^{k}}{c^{t}x^{0}}\leq \left( \exp \big[ f(x^{k})-f(x^{0})\big] \right) ^{\frac{1}{n}}, \]

where

\[ x^{0}=\tfrac {e_{n}}{n}. \]

Lemma 7

[ 6 ] At each iteration \(k,\) the potential function decreases by the following quantity:

\[ f(x^{k+1})-f(x^{k})=-\Delta , \]

such that:

\[ x^{k+1}=T_{k}^{-1}( y^{k}) \quad \text{ and }\quad \Delta =n\log \frac{c^{t}D_{k}e_{n}}{c^{t}D_{k}y^{k}}+{\displaystyle \sum \limits _{i=1}^{n}} \log y_{i}^{k} \]

Lemma 8

[ 6 ] If \(0{\lt}\alpha {\lt}\frac{1}{nr}\) then:

\[ \Delta \geq \alpha n^{2}r^{2}+n\psi _{1}\left( -\alpha \tfrac {r}{R}\right) -\psi _{1}(-\alpha nr), \]

where \(R=\sqrt{\frac{n-1}{n}}\)

Based on the previous lemmas, we present the following results.

Lemma 9

For \(\alpha _{B}=\frac{5+nr^{2}}{5+(5+nr^{2})nr}\) and \(\alpha _{S}=\frac{1}{1+nr},\) we have:

a) \(\alpha _{B}{\gt}\alpha _{S},\)

b) \(\psi _{1}\left( -\alpha _{B}nr^{2}\right) {\gt}\frac{1}{5}\alpha _{B}n^{2}r^{4},\)

c) for \(\alpha =\alpha _{B},\) \(\Delta \geq \psi _{1}\left( \frac{1}{5}nr\left( 5+nr^{2}\right) \right) .\)

Proof.

a) Since \(5+nr^{2}{\gt}5,\) then

\[ \left( 5+nr^{2}\right) \left( 1+nr\right) {\gt}5+\left( 5+nr^{2}\right) nr \Leftrightarrow \tfrac {\left( 5+nr^{2}\right) }{5+\left( 5+nr^{2}\right) nr}{\gt}\tfrac {1}{1+nr} \]

which gives:

\[ \alpha _{B}{\gt}\alpha _{S}. \]

b) By definition, we have:

\begin{align} \psi _{1}\left( -\alpha _{B}nr^{2}\right) & =-\alpha _{B}nr^{2}-\log \left( 1-\alpha _{B}nr^{2}\right) ,\end{align}

Since \(\log (1-x){\lt}-x-\frac{x^{2}}{2}\) for \(0{\lt}x{\lt}1\) then:

\begin{align*} \psi _{1}\left( -\alpha _{B}nr^{2}\right) & {\gt}-\alpha _{B}nr^{2}+\alpha _{B}nr^{2}+\frac{\alpha _{B}^{2}n^{2}r^{4}}{2}=\frac{\alpha _{B}^{2}n^{2}r^{4}}{2}\\ & {\gt}\frac{\alpha _{B}\alpha _{S}n^{2}r^{4}}{2}{\gt}\frac{1}{5}\alpha _{B}n^{2}r^{4},\end{align*}

because, by a), \(\alpha _{B}{\gt}\alpha _{S}\) and

\[ \alpha _{S}=\tfrac {1}{1+nr}\geq \tfrac {1}{1+\sqrt{2}}{\gt}\tfrac {2}{5}, \]

so:

\[ \psi _{1}\left( -\alpha _{B}nr^{2}\right) {\gt}\tfrac {1}{5}\alpha _{B}n^{2}r^{4}. \]

c) By Lemma 8 and since \(\frac{1}{R}=nr\) (so that \(\frac{r}{R}=nr^{2}\)), we have:

\[ \Delta \geq \alpha n^{2}r^{2}+n\psi _{1}\left( -\alpha nr^{2}\right) -\psi _{1}\left( -\alpha nr\right) ,\ \ \ \text{with }0{\lt}\alpha {\lt}\tfrac {1}{nr}, \]

for \(\alpha =\alpha _{B}=\frac{5+nr^{2}}{5+(5+nr^{2})nr}\) and by b) we get:

\begin{align*} \Delta & {\gt}\alpha _{B}n^{2}r^{2}+n\left( \tfrac {1}{5}\alpha _{B}n^{2}r^{4}\right) -\psi _{1}\left( -\alpha _{B}nr\right) \\ & =\alpha _{B}n^{2}r^{2}+\tfrac {1}{5}nr^{2}\left( \alpha _{B}n^{2}r^{2}\right) +\alpha _{B}nr\ +\log \left( 1-\alpha _{B}nr\right) \\ & =\left( n^{2}r^{2}+\tfrac {1}{5}nr^{2}\left( n^{2}r^{2}\right) +nr \right) \alpha _{B}+\log \left( 1-\alpha _{B}nr \right) \\ & =\tfrac {1}{5}\left( 5n^{2}r^{2}+nr^{2}\left( n^{2}r^{2}\right) +5nr \right) \tfrac {5+nr^{2}}{5+(5+nr^{2})nr}+\log \big( 1-\tfrac {5+nr^{2}}{5+(5+nr^{2})nr}nr\big) \\ & =\tfrac {1}{5}nr\left( 5nr+nr^{2}\left( nr\right) +5\right) \tfrac {5+nr^{2}}{5+(5+nr^{2})nr}+\log \big( 1-\tfrac {5+nr^{2}}{5+(5+nr^{2})nr}nr\big) \\ & =\tfrac {1}{5}nr \left( (5+nr^{2})\left( nr\right) +5\right) \tfrac {5+nr^{2}}{5+(5+nr^{2})nr}+\log \big( \tfrac {5}{5+(5+nr^{2})nr}\big) \\ & =\tfrac {1}{5}nr\left( 5+nr^{2}\right) +\log \big( \tfrac {5}{5+(5+nr^{2})nr}\big) \\ & =\tfrac {1}{5}nr\left( 5+nr^{2}\right) +\log \big( \tfrac {1}{1+(5+nr^{2})\frac{nr}{5}}\big) \\ & =\tfrac {1}{5}nr\left( 5+nr^{2}\right) -\log (1+\tfrac {1}{5}nr\left( 5+nr^{2}\right) )=\psi _{1}\left( \tfrac {1}{5}nr\left( 5+nr^{2}\right) \right) .\end{align*}

Hence:

\[ \Delta {\gt}\psi _{1}\left( \tfrac {1}{5}nr\left( 5+nr^{2}\right) \right) . \]

Remark 10

It is easily shown that \(\psi _{1}\) is increasing on \(\left[ 0,+\infty \right[ \) and in particular:

\[ \psi _{1}\left( \tfrac {1}{5}nr\left( 5+nr^{2}\right) \right) \geq \psi _{1}\left( nr\right) \geq \psi _{1}\left( 1\right) =1-\log \left( 2\right) . \]
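These quantities are easy to check numerically; the following sketch (ours) evaluates \(\alpha _{S}\), \(\alpha _{B}\) and the lower bound of Lemma 9 c) for a few values of \(n\).

```python
import numpy as np

def psi1(t):
    """psi_1(t) = t - log(1 + t)."""
    return t - np.log1p(t)

for n in (5, 10, 100, 1000):
    r = 1.0 / np.sqrt(n * (n - 1))
    nr, nr2 = n * r, n * r**2
    alpha_S = 1.0 / (1.0 + nr)
    alpha_B = (5.0 + nr2) / (5.0 + (5.0 + nr2) * nr)
    bound = psi1(nr * (5.0 + nr2) / 5.0)          # lower bound on Delta from Lemma 9 c)
    print(n, alpha_B > alpha_S, bound >= 1.0 - np.log(2.0))   # both print True
```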

3.1 Improved displacement step

Recall that for \(\alpha _{S}=\frac{1}{1+nr}\), Schrijver [ 7 ] showed that the algorithm converges after \(\frac{n}{1-\log \left( 2\right) }\log \big( \frac{c^{t}e_{n}}{\varepsilon }\big) \) iterations.

In this section, based on the previous Lemmas, we show in the following theorem that the new displacement step \(\alpha _{B}\) reduces the number of iterations required for the convergence of Karmarkar’s algorithm.

Theorem 11

If

\[ \alpha _{B}=\tfrac {5+nr^{2}}{5+(5+nr^{2})nr}, \]

then Karmarkar’s algorithm converges after

\[ \tfrac {n}{1-\log \left( 2\right) +(\frac{nr^{2}}{10})}\log \big( \tfrac {c^{t}e_{n}}{\varepsilon }\big) \]

iterations.

Proof.

By Lemma 9 c) we have:

\[ \Delta {\gt}\psi _{1}\left( \tfrac {1}{5}nr\left( 5+nr^{2}\right) \right) =\psi _{1}\left( nr+\tfrac {nr}{5}nr^{2}\right) {\gt}\psi _{1}\left( 1+\tfrac {1}{5}nr^{2}\right) \text{,} \]

since \(nr{\gt}1\) and \(\psi _{1}\) is increasing; hence

\begin{equation} \Delta >\psi _{1}\left( 1+\tfrac {1}{5}nr^{2}\right) ,\tag {3.1}\end{equation}

and since \(\log (1+x){\lt}x\) for \(x{\gt}0,\) we have:

\[ \psi _{1}\left( 1+\tfrac {1}{5}nr^{2}\right) -\psi _{1}(1)=\tfrac {1}{5}nr^{2}-\log \big( 1+\tfrac {1}{10}nr^{2}\big) {\gt}\tfrac {nr^{2}}{10}. \]

So from (3.1), we get:

\begin{equation} \Delta >\psi _{1}(1)+\tfrac {nr^{2}}{10}\tag {3.2}\end{equation}

Also from Lemma 7, we have:

\[ f(x^{k})-f(x^{k-1})=-\Delta ,\text{\ \ \ \ }\forall k\in \mathbb {N} ^{\ast }, \]

using (3.2) we get:

\[ f(x^{i})-f(x^{i-1}){\lt}-\left( \psi _{1}(1)+\tfrac {nr^{2}}{10}\right) , \]

accordingly:

\[ {\displaystyle \sum \limits _{i=1}^{k}} (f(x^{i})-f(x^{i-1})){\lt}-{\displaystyle \sum \limits _{i=1}^{k}} \left( \psi _{1}(1)+\tfrac {nr^{2}}{10}\right) , \]

which gives:

\[ f(x^{k})-f(x^{0}){\lt}-k\left( \psi _{1}(1)+\tfrac {nr^{2}}{10}\right) ,\quad \text{where }x^{0}=\tfrac {e_{n}}{n}, \]

then:

\[ f(x^{k}){\lt}-k\left( \psi _{1}(1)+\tfrac {nr^{2}}{10}\right) +f(x^{0}), \]

such that:

\[ f(x^{0})=f(\tfrac {e_{n}}{n})=n\log c^{t}\tfrac {e_{n}}{n}\ - {\displaystyle \sum \limits _{i=1}^{n}} \log \tfrac {1}{n}=n\log c^{t}e_{n}, \]

so:

\begin{equation} \tfrac {f(x^{k})}{n}<\tfrac {-k\big( \psi _{1}(1)+\tfrac {nr^{2}}{10}\big) +n\log c^{t}e_{n}}{n},\tag {3.3}\end{equation}

by Lemma 5 and (3.3), we have:

\[ c^{t}x^{k}{\lt}\exp \Big( \tfrac {-k\big( \psi _{1}(1)+\tfrac {nr^{2}}{10}\big) +n\log c^{t}e_{n}}{n}\Big) . \]

Recall that the algorithm converges when \(c^{t}x^{k}\) is less than \(\varepsilon \) (\(\varepsilon {\gt}0\) small enough). So, it suffices to find \(k\) satisfying:

\[ \exp \Big( \frac{-k\big( \psi _{1}(1)+\frac{nr^{2}}{10}\big) +n\log c^{t}e_{n}}{n}\Big) {\lt}\varepsilon , \]

which gives:

\begin{equation} k>\tfrac {n}{1-\log 2+\frac{nr^{2}}{10}}\log \tfrac {c^{t}e_{n}}{\varepsilon }.\tag {3.4}\end{equation}

Finally, \(c^{t}x^{k}{\lt}\varepsilon \) as soon as \(k\) satisfies inequality (3.4), hence the result.

Remark 12

Theorem 11 gives a displacement step \(\alpha _{B}\) greater than Schrijver’s displacement step \(\alpha _{S}\), which improves both the practical behavior of the algorithm and the theoretical performance results, in the sense that the number of iterations given in this theorem is smaller than the number of iterations given by Schrijver.□
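The two worst-case bounds are easy to compare numerically; the sketch below (our illustration, with an arbitrary value of \(c^{t}e_{n}\)) evaluates Schrijver’s bound and the bound of Theorem 11.

```python
import numpy as np

def iteration_bounds(n, ct_en, eps):
    """Schrijver's bound n/(1 - log 2) * log(c^t e_n / eps) versus
    the bound of Theorem 11, n/(1 - log 2 + n r^2 / 10) * log(c^t e_n / eps)."""
    nr2 = 1.0 / (n - 1)                       # n r^2 with r = 1/sqrt(n(n-1))
    log_term = np.log(ct_en / eps)
    return (n / (1.0 - np.log(2.0)) * log_term,
            n / (1.0 - np.log(2.0) + nr2 / 10.0) * log_term)

print(iteration_bounds(n=10, ct_en=10.0, eps=1e-6))   # the second bound is slightly smaller
```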

4 Numerical experiments

In this section, we present comparative tests on several numerical examples of linear programming problems taken from the literature [ 3 ] . We tested the Ye-Lustig variant of Karmarkar’s algorithm (the case where \(z^{\ast }\) is unknown) with two alternative displacement steps: the step \(\alpha _{S}\) of Schrijver and the step \(\alpha _{B}\) of Bouafia and Benterki. The precision \(\varepsilon \) is taken between \(10^{-4}\) and \(10^{-6}.\)

In each case, \(k\) denotes the number of iterations required to reach optimality.

\[ \begin{array}[c]{cccc} \text{Example} & \left( m,n\right) & \begin{array}[c]{c}\text{Number of iterations}\\ \alpha _{S}\text{ (Schrijver)}\end{array} & \begin{array}[c]{c}\text{Number of iterations}\\ \alpha _{B}\text{ (Bouafia \& Benterki)}\end{array}\\[3mm] 1 & \left( 3,4\right) & k=18 & k=17\\ 2 & \left( 3,5\right) & k=26 & k=25\\ 3 & \left( 4,7\right) & k=36 & k=35\\ 4 & \left( 5,9\right) & k=47 & k=46\\ 5 & \left( 5,11\right) & k=46 & k=46\\ 6 & \left( 6,12\right) & k=48 & k=47\\ 7 & \left( 16,26\right) & k=71 & k=70 \end{array} \]

Table 1 Comparative table
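The test problems of [ 3 ] are not reproduced here. As an illustration only, the following self-contained sketch runs the basic Karmarkar iteration (with \(z^{\ast }=0\) assumed known, not the Ye-Lustig variant used above) on a toy instance of our own, with both displacement steps.

```python
import numpy as np

def karmarkar_iterations(A, c, alpha, eps=1e-6, max_iter=500):
    """Number of basic Karmarkar iterations (z* = 0 known) needed to reach c^t x^k < eps."""
    n = A.shape[1]
    r = 1.0 / np.sqrt(n * (n - 1))
    x = np.ones(n) / n
    for k in range(max_iter):
        if c @ x < eps:
            return k
        Dk = np.diag(x)
        B = np.vstack([A @ Dk, np.ones(n)])
        v = Dk @ c
        Pk = v - B.T @ np.linalg.lstsq(B @ B.T, B @ v, rcond=None)[0]
        y = np.ones(n) / n - alpha * r * Pk / np.linalg.norm(Pk)
        x = (Dk @ y) / np.sum(Dk @ y)
    return max_iter

# Toy instance in Karmarkar's reduced form: A e_n = 0 (so x^0 = e_n/n is feasible)
# and the optimal value is 0, attained at x = (1/2, 1/2, 0, 0).
A = np.array([[1.0, -1.0, 0.0, 0.0]])
c = np.array([0.0, 0.0, 1.0, 1.0])
n = 4
r = 1.0 / np.sqrt(n * (n - 1))
alpha_S = 1.0 / (1.0 + n * r)
alpha_B = (5.0 + n * r**2) / (5.0 + (5.0 + n * r**2) * n * r)
print(karmarkar_iterations(A, c, alpha_S), karmarkar_iterations(A, c, alpha_B))
```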

5 Conclusion

This study shows that the new displacement step \(\alpha _{B}\) introduces an improvement in the polynomial convergence results and a small reduction in the number of iterations.

Note that when the dimension of the problem becomes large, the displacement steps \(\alpha _{B}\) and \(\alpha _{S}\) both converge to \(\frac{1}{2}\). It would probably be interesting to focus more on the analytical form of the potential function, while preserving all of its good properties, in order to obtain better complexity results.
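This limiting behaviour can be observed numerically (a small sketch of ours):

```python
import numpy as np

for n in (10, 100, 1000, 10000):
    r = 1.0 / np.sqrt(n * (n - 1))
    nr, nr2 = n * r, n * r**2
    alpha_S = 1.0 / (1.0 + nr)
    alpha_B = (5.0 + nr2) / (5.0 + (5.0 + nr2) * nr)
    print(n, round(alpha_S, 4), round(alpha_B, 4))    # both approach 1/2 as n grows
```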

Acknowledgements

The authors are grateful to the referee, whose detailed comments greatly improved the presentation of the paper.

Bibliography

[1] N. Karmarkar, A new polynomial-time algorithm for linear programming, Combinatorica, 4 (1984), pp. 373–395.

[2] A. Keraghel, Etude adaptative et comparative des principales variantes dans l’algorithme de Karmarkar, Dissertation thesis, Université Joseph Fourier, Grenoble, France, 1989.

[3] A. Keraghel and D. Benterki, Sur les performances de l’algorithme de Karmarkar pour la programmation linéaire, Revue Roumaine des Sciences Techniques-Mécanique Appliquées, 46 (2001) no. 1, pp. 87–96.

[4] I.J. Lustig, A practical approach to Karmarkar’s algorithm, Technical Report SOL 85-5, Systems Optimization Laboratory, Dept. of Operations Research, Stanford University, Stanford, CA 94305, 1985.

[5] R. Naseri and A. Valinejad, An extended variant of Karmarkar’s interior point algorithm, Applied Mathematics and Computation, 184 (2007), pp. 737–742.

[6] C. Roos, T. Terlaky and J. Vial, Optimization Theory and Algorithm for Linear Programming Optimization, Princeton University, 2001.

[7] A. Schrijver, Theory of Linear and Integer Programming, John Wiley & Sons, New York, 1986.