Extension of primal-dual interior point method
based on a kernel function
for linear fractional problem

Mousaab Bouafia\(^{1}\) Adnan Yassine\(^{2}\)

Received: June 23, 2023; accepted: September 23, 2023; published online: December 22, 2023.

\(^{1}\)University of 8 May 1945 Guelma, BP 401, 24000 Guelma, Algeria. LMAH, FR-CNRS-3335, ISCN, 76600 Le Havre, France, e-mail: bouafia.mousaab@univ-guelma.dz.
\(^{2}\)Normandie University, UNIHAVRE, LMAH, FR-CNRS-3335, ISCN, 76600 Le Havre, France, e-mail: adnan.yassine@univ-lehavre.fr.

Our aim in this work is to extend the primal-dual interior point method based on a kernel function to the linear fractional problem. We apply the techniques of kernel-function-based interior point methods to solve a standard linear fractional program. Relying on the method of Charnes and Cooper [8], we transform the standard linear fractional problem into a linear program. This transformation allows us to define the associated linear program and to solve it efficiently using an appropriate kernel function. To show the efficiency of our approach, we apply our algorithm to the standard linear fractional program used in the numerical tests of A. Bennani et al. [3], and we introduce the linear program associated with this problem. We give three interior point conditions for this example, which depend on the dimension of the problem. We give the optimal solution of each linear program and of each linear fractional program, and we obtain, using the new algorithm, the optimal solutions of both problems. Moreover, numerical results are reported to show the effectiveness of the method.

MSC. 90C05, 90C31, 90C51.

Keywords. linear optimization, recurrent sequence, kernel function, interior point methods, complexity bound.

1 Introduction

Linear programming (LP) is the most famous technique in operations research for finding a global minimum or maximum of a linear objective subject to linear constraints [11]. A linear fractional programming (LFP) problem, in contrast, is one whose objective function is a ratio of two affine functions; such problems are very useful in production planning, financial and corporate planning, and health care and hospital planning. Several methods for solving this problem have been proposed [14]. Linear fractional programming is a special class of fractional programming that can be transformed into a linear programming problem by the method of Charnes and Cooper [8].

Polynomial-time interior point methods (IPMs) for solving linear programming were first proposed by Karmarkar [10]. This method, and the variants developed subsequently, are now called interior-point methods (IPMs). For a survey, we refer to books on the subject, such as Bai et al. [2], Peng et al. [12], Roos et al. [13] and Ye [16]. In order to describe the idea of this paper, we need to recall some ideas underlying new primal-dual IPMs. Kernel functions play an important role in the design and analysis of interior-point methods: they are used not only for determining the search directions but also for measuring the distance between the given iterate and the \(\mu \)-center of the algorithm. Currently, IPMs based on kernel functions are among the most effective methods for solving linear optimization (LO) and other convex optimization problems, and they form a very active research area in mathematical programming [1, 2, 4, 5, 6, 7, 9]. Recently, in 2021, A. Bennani et al. [3] solved (LFP) by the projection method of interior points introduced by Ye–Lustig [15]. The purpose of this work is to present a primal-dual interior-point method based on a kernel function for solving the standard (LFP).

The paper is organized as follows. In section 2 we present our new result: a generic primal-dual \(\mbox{IPM}\) based on a kernel function for solving the standard (LFP). Some interesting and useful properties of the kernel function are provided in section 3. In section 4, numerical examples are presented to illustrate the efficiency of the method. Finally, we close the paper with some remarks and a general conclusion showing the added value of our work.

2 Generic primal-dual \(\mbox{IPMs}\) for \(\mbox{LFP}\)

We consider the linear fractional problem (LFP) in standard format:

\begin{equation} {\rm (LFP)} \; \min \Big\{ \tfrac {c^{T}x+\alpha }{d^{T}x+\beta }:Ax=b, x\geq 0\Big\} , \end{equation}
1

where \(A\in \mathbb {R}^{m\times n}\), \(\operatorname {rank}(A)=m\), \(b\in \mathbb {R}^{m}\), \(c,d\in \mathbb {R}^{n}\), and \( \alpha ,\beta \in \mathbb {R}\) with \(d^{T}x+\beta {\gt}0\). Charnes and Cooper [8] treated (LFP) by introducing the following change of variables:

\[ y=tx \text{ and } t=\tfrac {1}{d^{T}x+\beta }{\gt}0. \]

We have

\begin{equation} \begin{array}{ll} \frac{c^{T}x+\alpha }{d^{T}x+\beta }& =(c^{T}x+\alpha )\frac{1}{d^{T}x+\beta } \\ & =(c^{T}x+\alpha )t \\ & =c^{T}xt+\alpha t\\ & =c^{T}y+\alpha t.\\ \end{array} \end{equation}
2

Now we transform the constraints \( Ax=b \). Replacing \(x\) by \(\tfrac {y}{t}\), we obtain

\begin{equation} Ax=b \Leftrightarrow A\tfrac {y}{t}=b\Leftrightarrow Ay-bt=0. \end{equation}
3

We also have

\begin{equation} t=\tfrac {1}{d^{T}x+\beta } \Leftrightarrow d^{T}xt+\beta t=1\Leftrightarrow d^{T}y+\beta t=1. \end{equation}
4

Then, (LFP) becomes equivalent to the following linear program (P)

\begin{equation} \mbox{(P)}\left\{ \begin{array}{l} \min c^{T}y+\alpha t\\ Ay-bt=0\\ d^{T}y+\beta t=1\\ y,t\geq 0\end{array}\right. \end{equation}
5

which can be written in the following standard form

\begin{equation} \mbox{(P)}\left\{ \begin{array}{l} \min (c^{T},\alpha )\left( \begin{array}{c} y \\ t \\ \end{array} \right)\\ \left( \begin{array}{cc} A & -b \\ d^{T} & \beta \\ \end{array} \right)\left( \begin{array}{c} y \\ t \\ \end{array} \right)=\left( \begin{array}{c} 0 \\ 1 \\ \end{array} \right)\\ (y,t)\geq 0\end{array}\right. \end{equation}
6
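To make the transformation concrete, here is a small self-contained check (a sketch on a tiny hypothetical instance of our own choosing, not taken from the paper): for any feasible \(x\), the pair \(y=tx\), \(t=\tfrac {1}{d^{T}x+\beta }\) is feasible for (P) and attains the same objective value as \(x\) in (LFP).

```python
# Illustrative check of the Charnes-Cooper transformation on a tiny
# hypothetical instance (A, b, c, d, alpha, beta chosen here for illustration).

def lfp_objective(x, c, d, alpha, beta):
    """Fractional objective (c^T x + alpha) / (d^T x + beta)."""
    num = sum(ci * xi for ci, xi in zip(c, x)) + alpha
    den = sum(di * xi for di, xi in zip(d, x)) + beta
    return num / den

def charnes_cooper(x, d, beta):
    """Map a feasible x of (LFP) to the variables (y, t) of (P)."""
    t = 1.0 / (sum(di * xi for di, xi in zip(d, x)) + beta)
    return [t * xi for xi in x], t

# Tiny instance: one constraint x1 + x2 = 2, denominator positive on x >= 0.
A = [[1.0, 1.0]]
b = [2.0]
c = [1.0, 2.0]
d = [1.0, 1.0]
alpha, beta = 2.0, 1.0

x = [1.5, 0.5]                       # feasible: Ax = b, x >= 0
y, t = charnes_cooper(x, d, beta)

# Constraints of (P): Ay - bt = 0 and d^T y + beta t = 1.
r1 = sum(A[0][j] * y[j] for j in range(2)) - b[0] * t
r2 = sum(d[j] * y[j] for j in range(2)) + beta * t - 1.0

# Objective values agree: c^T y + alpha t equals the fractional objective at x.
lin_obj = sum(c[j] * y[j] for j in range(2)) + alpha * t
frac_obj = lfp_objective(x, c, d, alpha, beta)

print(abs(r1) < 1e-12, abs(r2) < 1e-12, abs(lin_obj - frac_obj) < 1e-12)  # True True True
```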

and its dual problem

\begin{equation} \mbox{(D)}\left\{ \begin{array}{l} \max z(m+1)\\ \left( \begin{array}{cc} A^{T} & d \\ -b^{T} & \beta \\ \end{array} \right)z+s=\left( \begin{array}{c} c \\ \alpha \\ \end{array} \right)\\ s\geq 0\end{array}\right. \end{equation}
7

where \(z(m+1)\) is the \((m+1)\)-th component of the vector \(z\in \mathbb {R}^{m+1}\).

To simplify the notation, we set

\begin{equation} \begin{array}{c} B=\left( \begin{array}{cc} A & -b \\ d^{T} & \beta \\ \end{array} \right)\in \mathbb {R}^{(m+1)\times (n+1)}, B^{T}=\left( \begin{array}{cc} A^{T} & d \\ -b^{T} & \beta \\ \end{array} \right)\in \mathbb {R}^{(n+1)\times (m+1)}\\ \textrm{ and }(c^{T},\alpha )=C^{T} \end{array} \end{equation}
8

where \(Y[n]=y\) is the vector of the first \(n\) components of \(Y\) and \(Y(n+1)=t\) is the \((n+1)\)-th component of the vector \(Y\in \mathbb {R}^{n+1}\). Then (P) and (D) take the following forms

\begin{equation} \mbox{(P)} \left\{ \begin{array}{l} \min C^{T}Y \\ BY=\left( \begin{array}{c} 0 \\ 1 \\ \end{array} \right)\\ Y\geq 0\end{array}\right. \end{equation}
9

and

\begin{equation} \mbox{(D)} \left\{ \begin{array}{l} \max z(m+1)\\ B^{T}z+s=C\\ s\geq 0.\end{array}\right. \end{equation}
10

Throughout this paper, we assume that:

\((H1)\): the matrix \(B\) has full row rank, i.e., \( \operatorname {rank} (B) = m+1 \leq n+1;\)

\((H2)\): (P) and (D) satisfy the interior point condition (IPC), i.e., there exists \((Y^{0},z^{0},s^{0})\) such that

\begin{equation} \begin{array}{c} BY^{0}=\left( \begin{array}{c} 0 \\ 1 \\ \end{array} \right),\ Y^{0}>0,\ B^{T}z^{0}+s^{0}=C,\ s^{0}>0. \end{array} \end{equation}
11

It is well known that the triple \((Y, z, s)\) is optimal for (P) and (D) if and only if

\begin{equation} \mbox{(PD)}\left\{ \begin{array}{l} BY=\left( \begin{array}{c} 0 \\ 1 \\ \end{array} \right),\ Y\geq 0, \\ B^{T}z+s=C,\ s\geq 0, \\ Ys=0, \end{array}\right. \end{equation}
12

where the vector \(Ys\) denotes the componentwise product of the vectors \(Y\) and \(s\). The basic idea of primal-dual interior-point algorithms is to replace the third equation in \( (12)\), commonly known as the complementarity condition for (P) and (D), by the parameterized equation \(Ys = \mu e\), with \(\mu {\gt} 0\), where \(e\) denotes the \((n+1)\)-dimensional vector of ones. Thus, we consider the system

\begin{equation} \mbox{(PD)}_{\mu }\left\{ \begin{array}{l} BY=\left( \begin{array}{c} 0 \\ 1 \\ \end{array} \right),\ Y\geq 0, \\ B^{T}z+s=C,\ s\geq 0, \\ Ys=\mu e. \end{array}\right. \end{equation}
13

Due to assumptions \((H1)\) and \((H2)\), system \((13)\) has a unique solution for each \(\mu {\gt} 0\), denoted \((Y_{\mu }, z_{\mu }, s_{\mu })\) and called the \( \mu \)-center; the set of all \(\mu \)-centers forms the central path of (P) and (D). It has been shown that, as \(\mu \) tends to zero, the central path converges to an optimal solution of (P) and (D).

Now, applying Newton’s method to system \((13)\) for computing the search direction

\[ (\Delta Y_{\mu }, \Delta z_{\mu }, \Delta s_{\mu }), \]

leads to the following linear system

\begin{equation} \left\{ \begin{array}{l} B\Delta Y=0, \\ B^{T}\Delta z+\Delta s=0, \\ s\Delta Y+Y\Delta s=\mu e-Ys.\end{array}\right. \end{equation}
14

Let us define the scaled vector \(v\) and the scaled search directions vectors \(d_{Y}\) and \(d_{s}\) as follows

\begin{equation} \begin{array}{ccc} v=\sqrt{\frac{Ys}{\mu }}, & d_{Y}=\frac{v\Delta Y}{Y}, & d_{s}=\frac{v\Delta s}{s}.\end{array} \end{equation}
15

Using \((15)\), system \((14)\) can be rewritten as follows:

\begin{equation} \left\{ \begin{array}{l} \overline{A}d_{Y}=0, \\ \overline{A}^{t}\Delta z+d_{s}=0, \\ d_{Y}+d_{s}=v^{-1}-v=-\nabla \Phi _{c}\left( v\right).\end{array}\right. \end{equation}
16

where \(\overline{A}=\frac{1}{\mu }BV^{-1}\operatorname {diag}(Y)\), \(V=\operatorname {diag}(v)\), and

\begin{equation} \Phi _{c}\left( v\right) =\Phi _{c}\left( Y,s;\mu \right) = \sum _{i=1}^{n}\psi _{c} \left( v_{i}\right), \end{equation}
17

is the proximity barrier function of the classical kernel function \(\psi _{c} \left( t\right)\)

\begin{equation} \psi _{c} \left( t\right) =\tfrac {t^{2}-1}{2}-\log t. \end{equation}
18

The Newton iterate with step size \(\alpha \) is constructed according to

\begin{equation} \begin{array}{ccc} Y_{+}=Y+\alpha \Delta Y, & z_{+}=z+\alpha \Delta z, & s_{+}=s+\alpha \Delta s.\end{array} \end{equation}
19

where the step size \(\alpha \) satisfies \(0{\lt}\alpha \leq 1\).

Now, we can define the norm-based proximity measure \(\delta (v)\) as

\begin{equation} \delta \left( v\right) =\tfrac {1}{2}\left\Vert \nabla \Phi _{c}\left( v\right) \right\Vert =\tfrac {1}{2}\left\Vert d_{Y}+d_{s}\right\Vert . \end{equation}
20
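The quantities above are straightforward to evaluate. The following sketch (pure Python, with a sample vector \(v\) of our own choosing) computes \(\psi _{c}\), \(\Phi _{c}\) and \(\delta \), and checks that both the proximity and the measure vanish exactly at the \(\mu \)-center \(v=e\).

```python
import math

# Illustrative sketch (sample vector v chosen by us): the classical kernel
# psi_c(t) = (t^2 - 1)/2 - log t, the induced proximity Phi_c(v) and the
# measure delta(v) = (1/2)||grad Phi_c(v)|| = (1/2)||v^{-1} - v||.

def psi_c(t):
    return (t * t - 1.0) / 2.0 - math.log(t)

def phi_c(v):
    return sum(psi_c(vi) for vi in v)

def delta(v):
    # grad Phi_c(v) has components psi_c'(v_i) = v_i - 1/v_i
    return 0.5 * math.sqrt(sum((vi - 1.0 / vi) ** 2 for vi in v))

e = [1.0, 1.0, 1.0]
print(phi_c(e), delta(e))        # 0.0 0.0 -- both vanish at the mu-center v = e

v = [0.8, 1.2, 1.5]
print(phi_c(v), delta(v))        # strictly positive away from the mu-center
```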

The generic interior point method (IPM) for the linear fractional problem (LFP) outlined above is summarized in the algorithm shown in figure 1.


Generic Primal-dual \(\mbox{IPMs}\) for \(\mbox{LFP}\)

Input:

a proximity function \(\Phi _{N}(v)\);

a threshold parameter \(\tau {\gt}1\);

an accuracy parameter \(\epsilon {\gt}0\);

a fixed barrier update parameter \(\theta ,0{\lt}\theta {\lt}1\);

a strictly feasible point \((Y^{0},z^{0},s^{0})\) and \(\mu _{0}=1\).

begin

\(\ Y=Y^{0};\) \(z=z^{0};\) \(s=s^{0};\) \(\mu =\mu _{0};\) \(v=\sqrt{\frac{Ys}{\mu }}\).

while \(Y^{T}s \geq \epsilon \) do

begin (outer iteration)

\(\ \ \ \ \mu =\left( 1-\theta \right) \mu \);

while \(\Phi _{N}\left( Y,s;\mu \right) {\gt}\tau \) do

begin (inner iteration)

solve the system \((14)\), with the classical proximity barrier function \(\Phi _{c}\left( v\right)\)

replaced by the new proximity barrier function \(\Phi _{N}\left(v\right)\),

to obtain \(\left( \Delta Y, \Delta z, \Delta s\right) \);

choose a suitable step size \(\alpha \);

\(\ \ \ \ \ Y=Y+\alpha \Delta Y; z=z+\alpha \Delta z; s=s+\alpha \Delta s;\)

\(\ \ \ \ \ v=\sqrt{\frac{Ys}{\mu }}; \)

end (inner iteration)

end (outer iteration)

end.

Figure 1 Generic algorithm.

3 Scheme for analyzing a kernel-function-based algorithm

In this section, we investigate some properties of the kernel function which are essential to our complexity analysis. We call \(\psi _{N} \left( t\right) \) \(:\mathbb {R} _{++}\rightarrow \mathbb {R} _{+}\) a kernel function if \(\psi _{N} \left( t\right) \) is twice differentiable and satisfies the following conditions [ 1 , 9 ] :

\[ \begin{array}{l} \psi _{N}^{\prime }(1)=\psi _{N}(1)=0\text{,} \\ \psi _{N}^{\prime \prime }(t){\gt}0\text{,} \\ \underset {t\rightarrow 0^{+}}{\lim }\psi _{N} \left( t\right) =\underset {t\rightarrow +\infty }{\lim }\psi _{N} \left( t\right) =+\infty \text{.}\end{array} \]
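These conditions can be spot-checked numerically for the classical kernel \(\psi _{c}(t)=\tfrac {t^{2}-1}{2}-\log t\) used above (an illustrative check of ours, with the derivatives written in closed form):

```python
import math

# Illustrative spot-check that the classical kernel psi_c(t) = (t^2-1)/2 - log t
# satisfies the three kernel-function conditions above.

def psi(t):   return (t * t - 1.0) / 2.0 - math.log(t)
def dpsi(t):  return t - 1.0 / t          # psi'
def ddpsi(t): return 1.0 + 1.0 / (t * t)  # psi''

assert psi(1.0) == 0.0 and dpsi(1.0) == 0.0                 # psi(1) = psi'(1) = 0
assert all(ddpsi(t) > 0.0 for t in [0.01, 0.5, 1.0, 10.0])  # psi'' > 0
assert psi(1e-8) > 10.0 and psi(1e4) > 10.0                 # blows up at 0+ and at infinity
print("psi_c satisfies the kernel-function conditions")
```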

We derive complexity bounds for large- and small-update methods. The analysis rests on a few lemmas and on a general scheme for analyzing kernel-function-based algorithms, which we summarize in the following steps [1, 2, 4, 5, 6, 7, 9]:

Step 1: Specify a kernel function \(\psi _{N}\left( t\right)\);

a barrier update parameter \(\theta \), \(0 {\lt} \theta {\lt} 1\);

a threshold parameter \(\tau {\gt}1\); and an accuracy parameter \(\epsilon {\gt}0\).

Step 2: Solve the equation \(\tfrac {-1}{2}\psi _{N}'\left( t\right)=s\) to get \( \rho (s)\), the inverse function of

\[ \tfrac {-1}{2}\psi _{N}'\left( t\right), t\in ]0,1]. \]

If the equation is hard to solve, derive a lower bound for \( \rho (s)\).
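For the classical kernel \(\psi _{c}\), for instance, this equation can be solved in closed form: \(\tfrac {-1}{2}\psi _{c}'(t)=\tfrac {1}{2}(\tfrac {1}{t}-t)=s\) gives \(t^{2}+2st-1=0\), whose root in \(]0,1]\) is \(\rho (s)=\sqrt{1+s^{2}}-s\). A short numerical check of this closed form (ours, for illustration):

```python
import math

# Step 2 worked out for the classical kernel psi_c(t) = (t^2-1)/2 - log t:
# -1/2 psi_c'(t) = (1/t - t)/2 = s on (0,1] gives t^2 + 2 s t - 1 = 0,
# whose positive root is rho(s) = sqrt(1 + s^2) - s.

def neg_half_dpsi(t):
    return (1.0 / t - t) / 2.0

def rho(s):
    return math.sqrt(1.0 + s * s) - s

for s in [0.0, 0.1, 1.0, 5.0, 25.0]:
    t = rho(s)
    assert 0.0 < t <= 1.0
    assert abs(neg_half_dpsi(t) - s) < 1e-6   # rho inverts -psi_c'/2
print("rho(s) = sqrt(1+s^2) - s inverts -psi_c'/2 on (0,1]")
```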

Step 3: Calculate the decrease of \(\Phi _{N}(v)\) during an inner iteration in terms of \(\delta \) for the default step size \(\overset {\sim }{\alpha }\) from

\begin{equation*} \Phi _{N}(v_{+})-\Phi _{N}(v)=f(\overset {\sim }{\alpha })\leq \tfrac {-\delta ^{2}}{\psi _{N}''(\rho (2\delta ))}. \end{equation*}

Step 4: Solve the equation \(\psi _{N}\left( t\right)=s\) to get \(\varrho (s)\), the inverse function of

\[ \psi _{N}\left( t\right), t\geq 1. \]

If the equation is hard to solve, derive lower and upper bounds for \(\varrho (s)\).

Step 5: Derive a lower bound for \(\delta \) in terms of \(\Phi _{N}(v)\) by using

\begin{equation*} \delta (v)\geq \tfrac {1}{2}\psi _{N}'(\varrho (\Phi _{N}(v)) ). \end{equation*}

Step 6: Using the results of step 4 and step 5 find a valid inequality of the form

\begin{equation*} f(\overset {\sim }{\alpha })\leq -\kappa \Phi _{N}(v)^{1-\gamma } \end{equation*}

for some positive constants \(\kappa \) and \(\gamma \), with \( \gamma \in ]0,1]\) as small as possible.

Step 7: Calculate the upper bound of \((\Phi _{N})_{0}\) from

\begin{equation*} (\Phi _{N})_{0}\leq n \psi _{N}(\tfrac {\varrho (\frac{\tau }{n})}{\sqrt{1-\theta }})\leq \tfrac {n}{2}\psi _{N}''(1)(\tfrac {\varrho (\frac{\tau }{n})}{\sqrt{1-\theta }}-1)^{2}. \end{equation*}

Step 8: Derive an upper bound for the total number of iterations by using that

\begin{equation*} \tfrac {(\Phi _{N})^{\gamma }_{0}}{\theta \kappa \gamma }\log (\tfrac {n}{\epsilon }) \end{equation*}

is an upper bound for this number.

Step 9: Set \(\tau = {\mathcal O}(n)\) and \( \theta = \Theta (1)\) to calculate a complexity bound for large-update methods. And set \(\tau = {\mathcal O}(1)\) and \( \theta = \Theta (\frac{1}{\sqrt{n}})\) to calculate a complexity bound for small-update methods.

4 Numerical tests

Consider the following problem

\begin{equation*} \mbox{(LFP)}_{m}\left\{ \begin{array}{l} \min \frac{c^{T}x+2m}{d^{T}x+1}, \\ Ax=b, \\ x\geq 0,\end{array}\right. \end{equation*}

where \( n=2m,\ A\left( i,j\right) =\left\{ \begin{array}{lll} 0 & \mbox{ if }& i\neq j \ \ \mbox{and} \ \ j\neq i+m \\ 1 & \mbox{ if }& i=j \ \ \mbox{or} \ \ j=i+m\end{array}\right.\)

\(c\left( i\right) =2m-1,\) \(c\left( i+m\right) =2m,\) \(b\left( i\right) =2, d(i)=d(i+m)=1,\) for \(i\ =1,\ldots ,m\).

The linear program (P)\(_{m}\) associated with this problem is

\begin{equation*} \mbox{(P)}_{m}\left\{ \begin{array}{l} \min (c^{T},2m)\left( \begin{array}{c} y \\ t \\ \end{array} \right)\\ \left( \begin{array}{cc} A & -b \\ d^{T} & 1 \\ \end{array} \right)\left( \begin{array}{c} y \\ t \\ \end{array} \right)=\left( \begin{array}{c} 0 \\ 1 \\ \end{array} \right)\\ (y,t)\geq 0\end{array}\right. \end{equation*}

For the interior point condition (IPC), we give three interior point conditions, which depend on the dimension of the problem. \(\mbox{IPC}_{1}:\)

\begin{align*} y^{0}(i)& =y^{0}(i+m)=\tfrac {1}{2m+1}, \ z^{0}(i)=0, \ s^{0}(i)=c(i)=2m-1, \\ s^{0}(i+m)& =c(i+m)=2m, \ \mbox{ for } i=1,\ldots ,m, \\ t^{0}& =\tfrac {1}{2m+1}, \ z^{0}(m+1)=0,\ s^{0}(2m+1)=2m. \end{align*}
\[ \begin{array}{c} \mbox{IPC}_{2}: \\ y^{0}(i)=\frac{2}{3(2m+1)}, \ y^{0}(i+m)=\frac{4}{3(2m+1)},\\ z^{0}(i)=0, \ s^{0}(i)=c(i)=2m-1, \ s^{0}(i+m)=c(i+m)=2m, \mbox{ for } i=1,\ldots ,m, \\ t^{0}=\frac{1}{2m+1}, \ z^{0}(m+1)=0,\ s^{0}(2m+1)=2m. \end{array} \]
\[ \begin{array}{c} \mbox{IPC}_{3}: \\ y^{0}(i)=\frac{2-\frac{1}{m}}{2m+1}, \ y^{0}(i+m)=\frac{1}{m(2m+1)},\\ z^{0}(i)=0, \ s^{0}(i)=c(i)=2m-1, \ s^{0}(i+m)=c(i+m)=2m, \mbox{ for } i=1,\ldots ,m, \\ t^{0}=\frac{1}{2m+1}, \ z^{0}(m+1)=0,\ s^{0}(2m+1)=2m. \end{array} \]

The optimal solution of (P)\(_{m}\) is

\[ \begin{array}{c} y^{*}(i)=\tfrac {2}{2m+1}, y^{*}(i+m)=0,\mbox{ for } i\ =1,\ldots ,m. \\ t^{*}=\tfrac {1}{2m+1} \\ \end{array} \]

then the optimal solution of (LFP)\(_{m}\) is

\[ x^{*}(i)=\tfrac {y^{*}(i)}{t^{*}}=2, \ x^{*}(i+m)=\tfrac {y^{*}(i+m)}{t^{*}}=0,\mbox{ for } i=1,\ldots ,m. \]
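The stated optima can be verified directly (an illustrative pure-Python check with \(m=3\), chosen by us): \((y^{*},t^{*})\) is feasible for (P)\(_{m}\), \(x^{*}=y^{*}/t^{*}\) is feasible for (LFP)\(_{m}\), and both objective values equal \(\tfrac {4m^{2}}{2m+1}\).

```python
# Construct the data of (LFP)_m and verify the stated optimal solutions:
# the optimum of (P)_m maps back, via x = y/t, to the optimum of (LFP)_m.

def lfp_instance(m):
    n = 2 * m
    A = [[1.0 if (i == j or j == i + m) else 0.0 for j in range(n)] for i in range(m)]
    b = [2.0] * m
    c = [2.0 * m - 1.0] * m + [2.0 * m] * m
    d = [1.0] * n
    return A, b, c, d

m = 3                                          # illustrative choice
A, b, c, d = lfp_instance(m)
alpha, beta = 2.0 * m, 1.0

# Claimed optima of (P)_m and (LFP)_m.
y = [2.0 / (2 * m + 1)] * m + [0.0] * m
t = 1.0 / (2 * m + 1)
x = [yi / t for yi in y]                       # x*(i) = 2, x*(i+m) = 0

# Feasibility of (P)_m: Ay - bt = 0 and d^T y + beta t = 1.
assert all(abs(sum(A[i][j] * y[j] for j in range(2 * m)) - b[i] * t) < 1e-12
           for i in range(m))
assert abs(sum(d[j] * y[j] for j in range(2 * m)) + beta * t - 1.0) < 1e-12

# Feasibility of (LFP)_m and equality of the two objective values 4m^2/(2m+1).
assert all(abs(sum(A[i][j] * x[j] for j in range(2 * m)) - b[i]) < 1e-12
           for i in range(m))
obj_p = sum(c[j] * y[j] for j in range(2 * m)) + alpha * t
obj_lfp = ((sum(c[j] * x[j] for j in range(2 * m)) + alpha)
           / (sum(d[j] * x[j] for j in range(2 * m)) + beta))
assert abs(obj_p - 4.0 * m * m / (2 * m + 1)) < 1e-12
assert abs(obj_p - obj_lfp) < 1e-12
print("optimal value 4m^2/(2m+1) =", obj_p)
```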

To demonstrate the effectiveness of the interior point method (IPM) for the linear fractional problem (LFP) defined in figure 1, we ran experiments on the above problem on a standard PC. The kernel function underlying the classical central path method is (see [9])

\[ \psi _{c}(t)=\tfrac {t^{2}-1}{2}-\log (t), \]

and for the step size \(\alpha \) we take \(\alpha =\tfrac {1}{1+(2\delta +\sqrt{1+4\delta })^{2}}\), which satisfies \(0{\lt}\alpha {\lt}\overline{\alpha }\). The implementation was written in Dev-Pascal. We have taken \(\epsilon =10^{-4}\), \(\mu ^{0}=1\), \(\theta =\frac{1}{2}\), \(\tau =n\).
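As an illustration, the generic algorithm of figure 1 can be sketched in a few dozen lines. The sketch below (ours, not the authors' Dev-Pascal code) runs the method on \((\mathrm{LFP})_{m}\) with \(m=1\) from the IPC\(_{1}\) starting point, using the classical kernel \(\psi _{c}\), \(\theta =\frac{1}{2}\), \(\tau =n\) and \(\epsilon =10^{-4}\); the fraction-to-boundary cap on the step size and the iteration guards are practical safeguards we add here, not part of the theoretical analysis.

```python
import math

# Sketch of the generic algorithm of figure 1 for (LFP)_m with m = 1, using the
# classical kernel psi_c and the IPC_1 starting point. The fraction-to-boundary
# cap and the iteration guards are practical safeguards added by us.

def solve(M, rhs):
    """Gaussian elimination with partial pivoting for a dense square system."""
    n = len(M)
    M = [row[:] + [r] for row, r in zip(M, rhs)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

# Data of (P)_1: B = [A -b; d^T beta], C = (c, alpha) for m = 1.
B = [[1.0, 1.0, -2.0],
     [1.0, 1.0,  1.0]]
C = [1.0, 2.0, 2.0]
N, M1 = 3, 2                                 # n+1 primal, m+1 dual variables

def newton_step(Y, s, mu):
    """Assemble and solve system (14) for (dY, dz, ds)."""
    size = 2 * N + M1
    K = [[0.0] * size for _ in range(size)]
    rhs = [0.0] * size
    for i in range(M1):                      # B dY = 0
        for j in range(N):
            K[i][j] = B[i][j]
    for i in range(N):                       # B^T dz + ds = 0
        for j in range(M1):
            K[M1 + i][N + j] = B[j][i]
        K[M1 + i][N + M1 + i] = 1.0
    for i in range(N):                       # s dY + Y ds = mu e - Y s
        K[M1 + N + i][i] = s[i]
        K[M1 + N + i][N + M1 + i] = Y[i]
        rhs[M1 + N + i] = mu - Y[i] * s[i]
    sol = solve(K, rhs)
    return sol[:N], sol[N:N + M1], sol[N + M1:]

def phi(Y, s, mu):
    """Proximity Phi_c(Y, s; mu) = sum psi_c(v_i) with v_i^2 = Y_i s_i / mu."""
    return sum((Y[i] * s[i] / mu - 1.0) / 2.0 - 0.5 * math.log(Y[i] * s[i] / mu)
               for i in range(N))

Y = [1.0 / 3.0] * 3                          # IPC_1 starting point
z = [0.0, 0.0]
s = C[:]
mu, theta, tau, eps = 1.0, 0.5, 2.0, 1e-4    # tau = n = 2m = 2

outer = 0
while sum(Y[i] * s[i] for i in range(N)) >= eps and outer < 60:
    outer += 1
    mu *= (1.0 - theta)
    inner = 0
    while phi(Y, s, mu) > tau and inner < 5000:
        inner += 1
        dY, dz, ds = newton_step(Y, s, mu)
        v = [math.sqrt(Y[i] * s[i] / mu) for i in range(N)]
        delta = 0.5 * math.sqrt(sum((1.0 / vi - vi) ** 2 for vi in v))
        a = 1.0 / (1.0 + (2.0 * delta + math.sqrt(1.0 + 4.0 * delta)) ** 2)
        amax = min([-Y[i] / dY[i] for i in range(N) if dY[i] < 0] +
                   [-s[i] / ds[i] for i in range(N) if ds[i] < 0] + [1.0])
        a = min(a, 0.95 * amax)              # fraction-to-boundary safeguard
        Y = [Y[i] + a * dY[i] for i in range(N)]
        z = [z[j] + a * dz[j] for j in range(M1)]
        s = [s[i] + a * ds[i] for i in range(N)]

gap = sum(Y[i] * s[i] for i in range(N))
x = [Y[0] / Y[2], Y[1] / Y[2]]               # recover x = y / t
print("duality gap:", gap)
print("x approx:", x)                        # close to the optimum x* = (2, 0)
```

For \(m=1\) the recovered \(x\) comes out close to the optimum \(x^{*}=(2,0)\) once the duality gap falls below \(\epsilon \).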

In the tables of results, (ex \((m,n)\)) indicates that \(m\) is the number of constraints and \(n\) the number of variables; (Itr O) is the number of outer iterations needed to obtain the optimal solution; (Itr T) is the total number of iterations needed to obtain the optimal solution; (\( \texttt{Med}=\frac{\mbox{total iterations}}{\mbox{outer iterations}}\)) is the average number of inner iterations per outer iteration; and (time (s)) is the computation time. We summarize this numerical study in tables 1, 2 and 3.

\begin{tabular}{ccccc}
ex \((m,n)\) & Itr O & Itr T & Med & time (s) \\
\((5,10)\) & 19 & 665 & 35 & 0.02 \\
\((25,50)\) & 21 & 1730 & 82.38 & 2.19 \\
\((50,100)\) & 22 & 2789 & 126.77 & 25.10 \\
\((75,150)\) & 22 & 3648 & 202.39 & 113.90 \\
\((100,200)\) & 23 & 4655 & 202.39 & 351.83 \\
\end{tabular}

Table 1 Results of using \(IPC_{1}.\)

\begin{tabular}{ccccc}
ex \((m,n)\) & Itr O & Itr T & Med & time (s) \\
\((5,10)\) & 19 & 663 & 34.89 & 0.02 \\
\((25,50)\) & 21 & 1734 & 82.57 & 2.10 \\
\((50,100)\) & 22 & 2810 & 127.73 & 25.48 \\
\((75,150)\) & 22 & 3681 & 167.32 & 116.80 \\
\((100,200)\) & 23 & 4695 & 204.13 & 368.04 \\
\end{tabular}

Table 2 Results of using \(IPC_{2}.\)

\begin{tabular}{ccccc}
ex \((m,n)\) & Itr O & Itr T & Med & time (s) \\
\((5,10)\) & 19 & 676 & 35.58 & 0.02 \\
\((25,50)\) & 21 & 1867 & 88.90 & 2.36 \\
\((50,100)\) & 22 & 3103 & 141.05 & 27.89 \\
\((75,150)\) & 22 & 4122 & 187.36 & 130.00 \\
\((100,200)\) & 23 & 5307 & 230.74 & 401.94 \\
\end{tabular}

Table 3 Results of using \(IPC_{3}.\)

Comments. The numerical experiments carried out show the effectiveness of our method, applied to the problem cited above, on all instances used.

These results also confirm that the differences in computation time come from solving the system \((14)\), since the same linear fractional programming problem is solved under three different interior point conditions.

We also point out that, as the dimension of the problem grows, the computation time increases; the largest times, as shown in the tables, are \(351.83\) s in table 1, \(368.04\) s in table 2, and \(401.94\) s in table 3.

These numerical results consolidate and confirm our theoretical results.

5 Conclusion

We have applied a new approach, which consists in transforming a linear fractional program into a linear program and solving it efficiently with a primal-dual interior point method (IPM) based on a kernel function. Solving the associated linear program yields the optimal solution of the linear fractional program (LFP). The method appears simple enough to solve any linear fractional problem of any dimension and size. To prove the efficiency of the interior point method for the linear fractional problem defined in figure 1, we applied the new algorithm to the standard linear fractional problem used in the tests of [3], with three interior point conditions. We obtained the optimal solutions of each associated linear program and of the linear fractional program. The numerical results of our experimental tests show the effectiveness of our approach and constitute an original contribution towards improving the computational complexity of the problem studied.

Acknowledgements

The authors are very grateful to the anonymous referees for their suggestions and helpful comments, which significantly improved the presentation of this paper.

[1] Y.Q. Bai, M. El Ghami, C. Roos, A comparative study of kernel functions for primal-dual interior point algorithms in linear optimization, SIAM J. Optim., 15 (2004) no. 1, pp. 101–128. https://doi.org/10.1137/S1052623403423114

[2] Y.Q. Bai, C. Roos, A primal-dual interior point method based on a new kernel function with linear growth rate, in: Proceedings of the 9th Australian Optimization Day, Perth, Australia, 2002.

[3] A. Bennani, D. Benterki, H. Grar, Adaptive projection methods for linear fractional programming, RAIRO Oper. Res., 55 (2021), pp. 2383–2392. https://doi.org/10.1051/ro/2020091

[4] M. Bouafia, D. Benterki, A. Yassine, An efficient primal-dual interior point method for linear programming problems based on a new kernel function with a trigonometric barrier term, J. Optim. Theory Appl., 170 (2016), pp. 528–545. https://doi.org/10.1007/s10957-016-0895-0

[5] M. Bouafia, D. Benterki, A. Yassine, An efficient parameterized logarithmic kernel function for linear optimization, Optim. Lett., 12 (2018) no. 5, pp. 1079–1097. https://doi.org/10.1007/s11590-017-1170-5

[6] M. Bouafia, A. Yassine, An efficient twice parameterized trigonometric kernel function for linear optimization, Optim. Eng., 21 (2020), pp. 651–672. https://doi.org/10.1007/s11081-019-09467-w

[7] M. Bouafia, A. Yassine, Complexity analysis of primal-dual interior-point methods for linear optimization based on a new efficient bi-parameterized kernel function with a trigonometric barrier term, RAIRO Oper. Res., 56 (2022), pp. 731–750. https://doi.org/10.1051/ro/2022032

[8] A. Charnes, W.W. Cooper, Programming with linear fractional functionals, Naval Res. Logist. Quart., 9 (1962) nos. 3–4, pp. 181–186. https://doi.org/10.1002/nav.3800090303

[9] M. El Ghami, New primal-dual interior-point methods based on kernel functions, PhD Thesis, TU Delft, The Netherlands, 2005.

[10] N.K. Karmarkar, A new polynomial-time algorithm for linear programming, Combinatorica, 4 (1984), pp. 373–395. https://doi.org/10.1007/BF02579150

[11] A.O. Odior, Mathematics for Science and Engineering Students, 1, 3rd ed., Ambik Press, Benin City, Nigeria, 2003.

[12] J. Peng, C. Roos, T. Terlaky, Self-Regularity: A New Paradigm for Primal-Dual Interior Point Algorithms, Princeton University Press, Princeton, NJ, 2002.

[13] C. Roos, T. Terlaky, J.-Ph. Vial, Theory and Algorithms for Linear Optimization: An Interior Point Approach, John Wiley & Sons, Chichester, UK, 1997.

[14] S.F. Tantawy, A new method for solving linear fractional programming problems, Austral. J. Basic Appl. Sci., 1 (2007) no. 2, pp. 105–108.

[15] I.J. Lustig, A practical approach to Karmarkar's algorithm, Technical Report SOL 85-5, Systems Optimization Laboratory, Department of Operations Research, Stanford University, Stanford, CA, 1985.

[16] Y. Ye, Interior Point Algorithms: Theory and Analysis, John Wiley & Sons, Chichester, UK, 1997.