An infeasible interior point method
for convex quadratic problems

Hayet Roumili\(^\ast \) Nawel Boudjellal\(^\bullet \)

January 25, 2018. Accepted: June 12, 2018. Published online: February 7, 2019.

\(^\ast \)Department of Mathematics, University of Setif, Algeria, e-mail: h_roumili@yahoo.fr.

\(^\bullet \)Department of Mathematics, University of Setif, Algeria, e-mail: nawelboudjellal2016@gmail.com. The work of these authors has been supported by the LMNF Laboratory.

In this paper, we study and implement an infeasible interior point method for convex quadratic problems \(\left(CQP\right)\). The algorithm uses full Newton steps and a suitable proximity measure to trace the central path approximately, and it guarantees that after one feasibility step the new iterate is feasible and sufficiently close to the central path. For the complexity analysis, we adapt the analysis used for linear optimization \(\left( LO\right) \) and linear complementarity problems \(\left( LCP\right) \). We show that the algorithm has the best known iteration bound, namely \(n\log \tfrac {n+1}{\epsilon }\). Finally, to measure the numerical performance of the algorithm, we tested it on convex quadratic and linear problems.

MSC. 90C05. 90C51.

Keywords. Convex quadratic programs, Infeasible interior-point method, Newton step.

1 Introduction

Convex quadratic programs appear in many areas of application; for example, they arise in finance and as subproblems in sequential quadratic programming. Interior point methods (IPMs) are among the most effective methods for solving a wide range of optimization problems. Today, the most popular and robust of them are primal-dual path-following algorithms, owing to their numerical efficiency and theoretical polynomial complexity [2], [3], [7], [12]. Two types of IPMs exist: feasible IPMs and infeasible IPMs (IIPMs). Feasible IPMs start from a strictly feasible point of the problem at hand, while IIPMs start from an arbitrary positive point. Owing to the difficulty of finding a feasible starting point for various problems, the use of IIPMs is unavoidable. Roos [9] proposed the first full-Newton step IIPM for LO. This algorithm starts from strictly feasible iterates on the central paths of intermediate problems produced by suitable perturbations of the LO problem. It then uses so-called feasibility steps that serve to generate strictly feasible iterates for the next perturbed problems. After accomplishing a few centring steps for the new perturbed problem, it obtains strictly feasible iterates close enough to the central path of that problem. This algorithm has been extensively extended to other optimization problems, e.g., [3], [8], [12], [14]. Subsequently, some authors have tried to improve Roos's infeasible algorithm [5], [6], and several improvements have been made to reduce or remove the centring steps [2], [7], [10]. Recently, Mansouri et al. [7] proposed an IIPM for solving LO problems based on a reformulation of the central path. Their algorithm needs no centring step to keep the iterates close to their \(\mu \)-centres; to reach this target, they select a (rather small) fixed default barrier update parameter that returns the iterates to the neighbourhood of the central path without centring. Such a value seems undesirable for practical purposes.

In this paper, we present an IIPM for convex quadratic programs (CQP). The algorithm uses short steps and a suitable proximity measure for approximately tracing the central path. The rest of the paper is organized as follows. In section 2, we introduce the perturbed problems pertaining to the original primal-dual pair. In section 3, the complexity analysis of the algorithm is discussed. In section 4, we deal with the numerical implementation of the infeasible interior point algorithm applied to convex quadratic and linear problems. Section 5 concludes the paper.

2 The perturbed problems

For any \(\mu \) with \(0{\lt}\mu \leq 1\) we consider the perturbed problem \((QP)_{\mu }\) defined by

\[ (QP)_{\mu }\left\{ \begin{array}{l} \min \left( c-\mu \left( c-A^{T}y^{0}+Qx^{0}-s^{0}\right) \right) ^{T}x+\frac{1}{2}x^{T}Qx \\ Ax=b-\mu \left( b-Ax^{0}\right) \\ x\geq 0\end{array}\right. \]

and its dual problem \((QD)_{\mu }\), which is given by

\[ (QD)_{\mu }\left\{ \begin{array}{l} \max \left( b-\mu \left( b-Ax^{0}\right) \right) ^{T}y-\tfrac {1}{2}x^{T}Qx \\ A^{T}y+s-Qx=c-\mu \left( c-A^{T}y^{0}+Qx^{0}-s^{0}\right) \\ s\geq 0,y\in \mathbb {R} ^{m}\end{array}\right. \]

The solution \((x(\mu ),y(\mu ),s(\mu ))\) of \((QP)_{\mu }\) and \((QD)_{\mu }\) converges to the primal-dual optimal solution of \(\left( QP\right) \) and \(\left( QD\right) \) as \(\mu \rightarrow 0\).

We start by choosing \(x^{0}{\gt}0\), \(s^{0}{\gt}0\) and \(y^{0}\) arbitrary such that \(x^{0}s^{0}=\mu e\). If \(\mu =\tfrac {( x^{0}) ^{T}s^{0}}{n}=1\), then \(x^{0}=e\) is a strictly feasible solution of \((QP)_{1}\), and \(y^{0}=0\), \(s^{0}=e\) is a strictly feasible solution of \((QD)_{1}\). We conclude that \((QP)_{\mu }\) and \((QD)_{\mu }\) satisfy the interior point condition \(IPC\).
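For illustration, here is a minimal NumPy sketch (the helper name `perturbed_rhs` and its signature are ours, not the paper's) that forms the initial residuals and the perturbed right-hand sides for a given \(\mu \):

```python
import numpy as np

def perturbed_rhs(A, b, c, Q, mu, x0, y0, s0):
    """Right-hand sides of the constraints of (QP)_mu and (QD)_mu."""
    rb0 = b - A @ x0                   # r_b^0: initial primal residual
    rc0 = c - A.T @ y0 + Q @ x0 - s0   # r_c^0: initial dual residual
    # Ax = b - mu*r_b^0  and  A^T y + s - Qx = c - mu*r_c^0
    return b - mu * rb0, c - mu * rc0

# With x0 = e, y0 = 0, s0 = e and mu = 1, the perturbed right-hand sides
# reduce to A x0 and A^T y0 - Q x0 + s0, so (x0, y0, s0) is strictly feasible.
```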

Lemma 1

The original problems \((QP)\) and \((QD)\) are feasible if and only if for each \(\mu \) satisfying \(0{\lt}\mu \leq 1\) the perturbed problems \((QP)_{\mu }\) and \((QD)_{\mu }\) satisfy the \(IPC\).

Proof.
Let \(\bar{x}\) be a feasible solution of \((QP)\) and \((\bar{y},\bar{s})\) be a feasible solution of \((QD)\). Then \(A\bar{x}=b\) and \(A^{T}\bar{y}-Q\bar{x}+\bar{s}=c\), with \(\bar{x}\geq 0\) and \(\bar{s}\geq 0\). Now let \(0{\lt}\mu \leq 1\), and consider: \(x=(1-\mu )\bar{x}+\mu x^{0},\) \(y=(1-\mu )\bar{y}+\mu y^{0},\) \(s=(1-\mu )\bar{s}+\mu s^{0}.\)

One has

\begin{align*} Ax& =A\left( (1-\mu )\bar{x}+\mu x^{0}\right) =(1-\mu )A\bar{x}+\mu Ax^{0} \\ & =(1-\mu )b+\mu Ax^{0} =b-\mu \left( b-Ax^{0}\right) =b-\mu r_{b}^{0} \end{align*}

showing that \(x\) is feasible for \((QP)_{\mu }\).

Similarly,

\begin{align*} A^{T}y-Qx+s& =(1-\mu )\left( A^{T}\bar{y}-Q\bar{x}+\bar{s}\right) +\mu \left( A^{T}y^{0}-Qx^{0}+s^{0}\right) \\ & =(1-\mu )c+\mu \left( A^{T}y^{0}-Qx^{0}+s^{0}\right) \\ & =c-\mu \left( c-A^{T}y^{0}+Qx^{0}-s^{0}\right) =c-\mu r_{c}^{0} \end{align*}

showing that \((y,s)\) is feasible for \((QD)_{\mu }\). Because \(\mu {\gt}0\) and \(x^{0},s^{0}{\gt}0\), the vectors \(x\) and \(s\) are positive, thus proving that \((QP)_{\mu }\) and \((QD)_{\mu }\) satisfy the \(IPC\). For the converse implication, suppose that \((QP)_{\mu }\) and \((QD)_{\mu }\) satisfy the \(IPC\) for each \(\mu \) satisfying \(0{\lt}\mu \leq 1\). Letting \(\mu \) go to zero, it follows that \((QP)\) and \((QD)\) are feasible. \(\square \)


Let \((QP)\) and \((QD)\) be feasible and \(0{\lt}\mu \leq 1\). Then Lemma 1 implies that the problems \((QP)_{\mu }\) and \((QD)_{\mu }\) satisfy the \(IPC\), hence their central paths exist. This means that the system

\begin{equation} \left\{ \begin{array}{c} b-Ax=\mu r_{b}^{0} \\ c-A^{T}y+Qx-s=\mu r_{c}^{0} \\ xs=\mu e,\qquad x>0,\ s>0\end{array}\right. \tag{1}\end{equation}

has a unique solution, for every \(\mu {\gt}0.\)

In the sequel, this unique solution is denoted by \((x(\mu ),y(\mu ),s(\mu ))\); these are the \(\mu \)-centres of the perturbed problems \((QP)_{\mu }\) and \((QD)_{\mu }\). Note that because \(x^{0}s^{0}=e\), \(x^{0}\) is the \(\mu \)-centre of the perturbed problem \((QP)_{1}\) and \((y^{0},s^{0})\) is the \(\mu \)-centre of \((QD)_{1}\).

To get iterates that are feasible for \((QP)_{\mu ^{+}}\) and \((QD)_{\mu ^{+}}\) we need search directions \(\Delta x,\Delta y\) and \(\Delta s\) such that

\begin{equation} \left\{ \begin{array}{l} A\left( x+\Delta x\right) =b-\mu ^{+}r_{b}^{0} \\ A^{T}\left( y+\Delta y\right) -Q\left( x+\Delta x\right) +\left( s+\Delta s\right)=c-\mu ^{+}r_{c}^{0} \\ \end{array}\right. \tag{2}\end{equation}

where \(\mu ^{+}=\left( 1-\theta \right) \mu \) with \(0{\lt}\theta {\lt}1\).

Therefore, the following system is used to define \(\Delta x,\Delta y\) and \(\Delta s:\)

\begin{equation} \left\{ \begin{array}{l} A\Delta x=\theta \mu r_{b}^{0} \\ A^{T}\Delta y-Q\Delta x+\Delta s=\theta \mu r_{c}^{0} \\ s\Delta x+x\Delta s=\left( 1-\theta \right) \mu e-xs \end{array}\right. \tag{3}\end{equation}
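As an illustration of how the search direction might be computed, the following NumPy sketch (the helper name `feasibility_step` is ours; a plain dense block solve, with no attempt at the usual normal-equation reduction) assembles and solves system (3):

```python
import numpy as np

def feasibility_step(A, Q, x, s, mu, theta, rb0, rc0):
    """Solve system (3) for (dx, dy, ds) as one dense block system:
        [ A    0    0 ] [dx]   [ theta*mu*rb0         ]
        [-Q   A^T   I ] [dy] = [ theta*mu*rc0         ]
        [ S    0    X ] [ds]   [ (1-theta)*mu*e - x*s ]
    with X = diag(x), S = diag(s)."""
    m, n = A.shape
    K = np.zeros((2 * n + m, 2 * n + m))
    K[:m, :n] = A                      # A dx = theta*mu*rb0
    K[m:m + n, :n] = -Q                # -Q dx + A^T dy + ds = theta*mu*rc0
    K[m:m + n, n:n + m] = A.T
    K[m:m + n, n + m:] = np.eye(n)
    K[m + n:, :n] = np.diag(s)         # s dx + x ds = (1-theta)*mu*e - x*s
    K[m + n:, n + m:] = np.diag(x)
    rhs = np.concatenate([theta * mu * rb0,
                          theta * mu * rc0,
                          (1.0 - theta) * mu * np.ones(n) - x * s])
    sol = np.linalg.solve(K, rhs)
    return sol[:n], sol[n:n + m], sol[n + m:]
```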

Now, we introduce a norm-based proximity measure as:

\begin{equation} \delta (x,s,\mu )=\tfrac {1}{2} \left\Vert \mu e-xs\right\Vert \tag{4}\end{equation}

to measure the closeness of the point \((x,y,s)\) to the central path. We assume that an initial point \((x^{0},y^{0},s^{0})\) satisfying \(\delta (x^{0},s^{0},\mu ){\lt}1 \) for a certain \(\mu \) is known.

Algorithm 2
Primal-dual Infeasible IPM algorithm

Input

  - An accuracy parameter \(\varepsilon {\gt}0\).

  - barrier update parameter \(0{\lt}\theta {\lt}1\) .

  - a strictly feasible point \((x^{0},y^{0},s^{0})=\left(e,0,e\right) \) and \(\mu ^{0}=1\)

  such that \( \delta (x^{0},s^{0},\mu ^{0})=0{\lt}1 .\)

begin

\(x=x^{0},y=y^{0},s=s^{0}, \mu =\mu ^{0}.\)

while \(\left\Vert r_{b}\right\Vert +\left\Vert r_{c}\right\Vert +x^{T}s{\gt}\varepsilon \) do

\(\left( \text{where } r_{b}=b-Ax,\ r_{c}=c-A^{T}y+Qx-s\right) \)

\(\qquad \)solve the system \((3)\) to obtain \((\Delta x,\Delta y,\Delta s).\)

  update \(x:=x+\Delta x,\ y:=y+\Delta y,\ s:=s+\Delta s,\)

  update \(\mu :=\left( 1-\theta \right) \mu .\)

end
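For concreteness, here is a minimal sketch of the main loop in NumPy (the function name `iipm` and its signature are illustrative, and it reuses the hypothetical `feasibility_step` helper above; a sketch, not the authors' code):

```python
import numpy as np

def iipm(A, b, c, Q, theta, eps=1e-4, max_iter=20_000):
    """Sketch of Algorithm 2: full-step infeasible IPM for (QP)/(QD)."""
    m, n = A.shape
    x, y, s, mu = np.ones(n), np.zeros(m), np.ones(n), 1.0
    rb0 = b - A @ x                       # initial primal residual r_b^0
    rc0 = c - A.T @ y + Q @ x - s         # initial dual residual r_c^0
    for k in range(max_iter):
        rb = b - A @ x
        rc = c - A.T @ y + Q @ x - s
        if np.linalg.norm(rb) + np.linalg.norm(rc) + x @ s <= eps:
            return x, y, s, k             # eps-solution reached
        dx, dy, ds = feasibility_step(A, Q, x, s, mu, theta, rb0, rc0)
        x, y, s = x + dx, y + dy, s + ds  # one full Newton step
        mu *= 1.0 - theta                 # mu^+ = (1 - theta) * mu
    raise RuntimeError("no eps-solution within max_iter iterations")
```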

3 Complexity analysis

The following notation is useful for studying the complexity of the proposed algorithm.

\begin{equation} \label{eq 5} \upsilon =\sqrt{\tfrac {xs}{\mu }},\quad d_{x}=\tfrac {\upsilon \Delta x}{x},\quad d_{s}=\tfrac {\upsilon \Delta s}{s},\quad d=\sqrt{\tfrac {x}{s}} \tag{5}\end{equation}

In addition, we obtain:

\begin{equation} s\Delta x+x\Delta s=\mu \upsilon \left( d_{x}+d_{s}\right) \tag{6}\end{equation}

and

\begin{equation} \Delta x\Delta s=\mu d_{x}d_{s} \tag{7}\end{equation}

By using these notations, the linear system \((3)\) and the proximity measure \((4)\) become:

\begin{equation} \left\{ \begin{array}{l} \bar{A}d_{x}=\mu ^{-\frac{1}{2}}\theta \mu r_{b}^{0}=\mu ^{\frac{1}{2}}\theta r_{b}^{0} \\ \bar{A}^{T}d_{y}-\bar{Q}d_{x}+d_{s}=\mu ^{-\frac{1}{2}}\theta \mu D r_{c}^{0} =\mu ^{\frac{1}{2}}\theta D r_{c}^{0}\\ d_{x}+d_{s}=\left( 1-\theta \right) \upsilon ^{-1}-\upsilon \end{array}\right. \tag{8}\end{equation}

where

\[ D=\operatorname {diag}\left( d_{i}\right) , \quad \bar{A}=AD,\quad \bar{Q}=DQD, \quad d_{y}=\mu ^{-\frac{1}{2}}\Delta y \]

and

\begin{equation*} \delta (\upsilon )=\tfrac {1}{2} \Vert \upsilon ^{-1}-\upsilon \Vert \end{equation*}

We then have:

\begin{align*} x^{+}s^{+}& =xs+\left( s\Delta x+x\Delta s\right) +\Delta x\Delta s =\left( 1-\theta \right) \mu e+\Delta x\Delta s \\ & =\left( 1-\theta \right) \mu e+\mu d_{x}d_{s}=\mu \left( \left( 1-\theta \right) e+d_{x}d_{s}\right). \end{align*}
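These scaled identities are easy to sanity-check numerically; the following self-contained snippet (random illustrative data, our own variable names) verifies (6), (7) and the product expansion above:

```python
import numpy as np

rng = np.random.default_rng(0)
n, mu = 5, 0.8
x, s = rng.uniform(0.5, 2.0, n), rng.uniform(0.5, 2.0, n)
dx, ds = rng.normal(size=n), rng.normal(size=n)  # arbitrary step components

v = np.sqrt(x * s / mu)                          # scaling (5)
d_x, d_s = v * dx / x, v * ds / s

assert np.allclose(s * dx + x * ds, mu * v * (d_x + d_s))   # identity (6)
assert np.allclose(dx * ds, mu * d_x * d_s)                 # identity (7)
assert np.allclose((x + dx) * (s + ds),
                   x * s + (s * dx + x * ds) + dx * ds)     # expansion of x+ s+
```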

Lemma 3

The iterates \((x^{+},\) \(s^{+})\) are strictly feasible if and only if:

\[ \left( 1-\theta \right) e+ d_{x} d_{s}{\gt}0. \]

Corollary 4

The iterates \((x^{+},\) \(s^{+})\) are strictly feasible if \(\Vert d_{x} d_{s}\Vert _{\infty }{\lt}1-\theta .\)

Proof. By the definition of the \(\infty \)-norm, i.e., \(\left\Vert d_{x} d_{s}\right\Vert _{\infty } = \max \{ \vert d_{x_{i}} d_{s_{i}}\vert : i=1,\ldots ,n\} \), and from the assumption, we have \(\left\vert d_{x_{i}} d_{s_{i}}\right\vert {\lt} 1-\theta \). Equivalently:
\begin{equation*} -\left( 1-\theta \right) {\lt}d_{x_{i}} d_{s_{i}}{\lt}1-\theta \end{equation*}

which implies that:

\begin{equation*} d_{x} d_{s}+\left( 1-\theta \right) e{\gt}0 \end{equation*}

Thus, by Lemma 3, \((x^{+},s^{+})\) are strictly feasible. \(\square \)

Our aim is to find an upper bound for \(\delta (\upsilon ^{+})\) such that the proximity of the new iterates \((x^{+},s^{+})\) is always less than 1. For each iteration we have:
\begin{equation*} \Vert d_{x} d_{s}\Vert _{\infty }\leq \Vert d_{x} d_{s}\Vert \leq \Vert d_{x}\Vert \Vert d_{s}\Vert \leq \tfrac {1}{2}\left( \Vert d_{x}\Vert ^{2}+\Vert d_{s}\Vert ^{2}\right) = w(\upsilon ). \end{equation*}

Corollary 5

If \(w(\upsilon ){\lt}1-\theta \), then \((x^{+},s^{+})\) are strictly feasible.

Proof. We have:
\begin{equation*} \Vert d_{x} d_{s}\Vert _{\infty }\leq w(\upsilon ) {\lt} 1-\theta . \end{equation*}

so, by Corollary 4, \((x^{+},s^{+})\) are strictly feasible. \(\square \)


The following lemma, proved by Roos in 2015 [10], enables us to find an upper bound for \(\delta (\upsilon ^{+})\).

Lemma 6

Let \(a,b\in \mathbb {R} ^{n}\), \(r\in \left( 0,1\right] \), and \(f\left( a,b\right) =\sum \limits _{i=1}^{n}\zeta (a_{i}b_{i})\). If

\[ \left\Vert a\right\Vert ^{2}+\left\Vert b\right\Vert ^{2}\leq 2r^{2}, \]

then

\[ f\left( a,b\right) \leq \left( n-1\right) \zeta (0)+\max \left\{ \zeta (r^{2}),\zeta (-r^{2})\right\} \]

where

\[ \zeta (t)=\tfrac {1+t}{1-\theta }+\tfrac {1-\theta }{1+t}-2=\tfrac {\left( \theta +t\right) ^{2}}{\left( 1-\theta \right) \left( 1+t\right) }\geq 0,\qquad \forall t{\gt}-1,0{\lt}\theta {\lt}1. \]
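Both closed forms of \(\zeta \) and its nonnegativity can be checked numerically; a quick illustrative test (the value \(\theta =0.25\) is an arbitrary choice):

```python
import numpy as np

theta = 0.25
zeta     = lambda t: (1 + t) / (1 - theta) + (1 - theta) / (1 + t) - 2
zeta_alt = lambda t: (theta + t) ** 2 / ((1 - theta) * (1 + t))

t = np.linspace(-0.9, 3.0, 1001)          # any grid with t > -1
assert np.allclose(zeta(t), zeta_alt(t))  # the two expressions agree
assert (zeta_alt(t) >= 0).all()           # zeta >= 0, with minimum 0 at t = -theta
```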

Lemma 7

If \(w(\upsilon ){\lt}1-\theta ,\) then

\begin{equation*} 4\delta (\upsilon ^{+})^{2}\leq (n-1)\zeta (0)+\max \{ \zeta (w(\upsilon )),\zeta (-w(\upsilon ))\} . \end{equation*}

Proof. We have:
\[ \left( \upsilon ^{+}\right) ^{2}=\tfrac {x^{+}s^{+}}{\mu ^{+}}=\tfrac {\mu \left( \left( 1-\theta \right) e+ d_{x} d_{s}\right) }{(1-\theta )\mu }=e+\tfrac { d_{x} d_{s}}{1-\theta }. \]

Then

\begin{align*} 4\delta (\upsilon ^{+})^{2} & =\Vert ( \upsilon ^{+}) ^{-1}-\upsilon ^{+}\Vert ^{2} =\sum \limits _{i=1}^{n}\Big(\tfrac {1}{\upsilon _{i}^{+}}-\upsilon _{i}^{+}\Big)^{2} =\sum \limits _{i=1}^{n}\Big(\tfrac {1}{\left( \upsilon _{i}^{+}\right) ^{2}}+\left( \upsilon _{i}^{+}\right) ^{2}-2\Big) \\ & =\sum \limits _{i=1}^{n}\Big[\tfrac {1-\theta }{(1-\theta )+d_{x_{i}}d_{s_{i}}}+\tfrac {(1-\theta )+d_{x_{i}}d_{s_{i}}}{1-\theta }-2\Big] \\ & =\sum \limits _{i=1}^{n}\zeta (d_{x_{i}}d_{s_{i}}-\theta )\leq \sum \limits _{i=1}^{n}\zeta \left(d_{x_{i}}d_{s_{i}}\right), \end{align*}

since \(\zeta (t)\) is an increasing function.

Then, by Lemma 6 applied with \(2r^{2}=\Vert d_{x}\Vert ^{2}+\Vert d_{s}\Vert ^{2}=2w(\upsilon )\), which is admissible since \(w(\upsilon ){\lt}1-\theta {\lt}1\), we obtain the result:

\begin{equation*} 4\delta (\upsilon ^{+})^{2}\leq (n-1)\zeta (0)+\max \{ \zeta (w(\upsilon )),\zeta (-w(\upsilon ))\} . \ \square \end{equation*}

Corollary 8

If \(\theta =\frac{1}{n}\) then:

\begin{equation*} 4\delta (\upsilon ^{+})^{2}{\lt}4\Rightarrow \delta (\upsilon ^{+}){\lt}1 \end{equation*}

Proof. The result follows by combining Lemma 6 and Lemma 7. \(\square \)

Lemma 9

The following equation and inequalities hold:
i/ \(\Delta x^{T}\Delta s\leq \mu \delta ^{2}\)
ii/ \(x^{+}s^{+}=\mu (1-\theta ) e+\Delta x\Delta s\)
iii/ \(\left( x^{+}\right) ^{T}s^{+}\leq \mu \left( n+\delta ^{2}\right)\)

Proof.

i/

\begin{align*} \Delta x^{T}\Delta s & =( x\upsilon ^{-1}d_{x}) ^{T} ( s\upsilon ^{-1}d_{s}) \\ & =\left( \sqrt{\tfrac {x\mu }{s}}d_{x}\right) ^{T}\left( \sqrt{\tfrac {s\mu }{x}}d_{s}\right) =\mu \left( d_{x}\right) ^{T}d_{s}\leq \mu \delta ^{2}. \end{align*}

ii/

\begin{align*} x^{+}s^{+} & =\left( x+\Delta x\right) \left( s+\Delta s\right) =xs+s\Delta x+x\Delta s+\Delta x\Delta s \\ & =\mu (1-\theta ) e+\Delta x\Delta s \end{align*}

iii/

\begin{align*} (x^{+}) ^{T}s^{+} & =e^{T}( x^{+}s^{+}) =e^{T}( \mu (1-\theta ) e+\Delta x\Delta s) \\ & =\mu (1-\theta ) n+\Delta x^{T}\Delta s =\mu (1-\theta ) n+\mu d_{x}^{T}d_{s} \\ & \leq \mu ( n+\delta ^{2}) \end{align*}

Theorem 10

Let \(\theta =\tfrac {1}{n}\), \(x^{0}=e\), \(y^{0}=0\), \(s^{0}=e\) and \(\mu ^{0}=\tfrac {1}{n}\left( x^{0}\right) ^{T}s^{0}=1\), so that \(\delta (\upsilon ^{0})=0{\lt}1\). Then the full Newton step IIPM algorithm requires at most \(n\log \tfrac {n+1}{ \varepsilon }\) iterations to reach an \(\varepsilon \)-solution of \(\left( QP\right) \) and \(\left( QD\right) .\)

Proof.
Let \(x^{k}\) and \(s^{k}\) be the iterates after \(k\) iterations of Algorithm 2. By Lemma 9 (iii) and \(\delta {\lt}1\), we require

\begin{align*} ( x^{k}) ^{T}s^{k} \leq \mu _{k}( n+\delta ^{2}) \leq \mu _{k}\left( n+1\right) = \left( 1-\theta \right) ^{k}\mu ^{0}\left( n+1\right) {\lt}\varepsilon . \end{align*}

Taking logarithms of both sides, we have:

\begin{align*} \log \left( \left( n+1\right)\left( 1-\theta \right) ^{k}\right) & {\lt}\log \varepsilon \\ \log \left( n+1\right)+k\log \left( 1-\theta \right) & {\lt}\log \varepsilon . \end{align*}

Since \(\log \left( 1-\theta \right) \leq -\theta \), this certainly holds if

\begin{align*} -k\theta {\lt}\log \varepsilon -\log \left( n+1\right) =\log \tfrac {\varepsilon }{n+1}, \end{align*}

that is, if

\begin{align*} k {\gt}\tfrac {1}{\theta }\log \tfrac {n+1}{\varepsilon }=n\log \tfrac {n+1}{\varepsilon } \qquad \left( \text{because }\theta =\tfrac {1}{n}\right) . \ \square \end{align*}
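To make the bound concrete, it can be evaluated directly. A small illustrative computation (compare with the \(\theta =1/n\) column of Table 1 below, which stays well under these worst-case values):

```python
import numpy as np

def iteration_bound(n, eps=1e-4):
    # Theorem 10 with theta = 1/n: at most n*log((n+1)/eps) iterations.
    return int(np.ceil(n * np.log((n + 1) / eps)))

print(iteration_bound(10))    # 117   -- Example 4-1 (n = 10) used 77 iterations
print(iteration_bound(1000))  # 16120 -- Example 4-6 (n = 1000) used 8765
```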

4 Numerical implementation

In this section, we deal with the numerical implementation of the infeasible interior point algorithm applied to convex quadratic and linear problems. The feasible starting point is \( (x^{0},y^{0},s^{0}) =(e,0,e)\) with proximity \(\delta (x^{0},s^{0},\mu ^{0} )=0\); \(x^{*}\) denotes the optimal solution of \((QP)\), \((y^{*},s^{*})\) the optimal solution of \((QD)\), and \(f_{PQ}(opt)\) and \(f_{DQ}(opt)\) the optimal objective values of \((QP)\) and \((QD)\), respectively. 'Iter' refers to the number of iterations produced by Algorithm 2. Our accuracy parameter is \(\varepsilon =10^{-4}\). For the update parameter \( \theta \), we first used the theoretical value \(\theta =1/n\), followed by a practical strategy with different values.

Example 11

Let

\[ A= \left( \begin{array}{ccc} -1 & 1 & 0 \\ 1 & 1 & 0\end{array}\right) ,\quad Q=\left( \begin{array}{ccc} 2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 0\end{array}\right) ,\quad c=\left( \begin{array}{c} 0 \\ 0 \\ 0\end{array}\right) , \quad b=\left( \begin{array}{c} 1 \\ 2\end{array}\right) \]

\(x^{\ast }=\left( 0.5,1.5,0\right) ^{T}\), \(y^{\ast }=\left( 0,-0.9999\right) ^{T}\), \(s^{\ast }=\left( 0,0,0\right) ^{T}\), \(f_{PQ}(opt)=-4.5\), \(f_{DQ}(opt)=-4.4999\). □
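As a usage illustration, the data of Example 11 can be fed to the `iipm` sketch from section 2 (recall that `iipm` is our illustrative implementation, not the authors' code):

```python
import numpy as np

A = np.array([[-1.0, 1.0, 0.0],
              [ 1.0, 1.0, 0.0]])
Q = np.diag([2.0, 2.0, 0.0])
c = np.zeros(3)
b = np.array([1.0, 2.0])

x, y, s, iters = iipm(A, b, c, Q, theta=0.5)
print(x, iters)  # expected x ~ (0.5, 1.5, 0); Table 1 reports 14 iterations
```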

Example 12

Let

\[ A=\left( \begin{array}{cccc} 1 & 1 & 1 & 0 \\ 1 & 5 & 0 & 1\end{array}\right) ,\quad Q=\left( \begin{array}{cccc} 4 & -2 & 0 & 0 \\ -2 & 4 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0\end{array}\right) ,\quad c=\left( \begin{array}{c} -4 \\ -6 \\ 0 \\ 0\end{array} \right) , \]

\(b=\left( \begin{array}{c} 4 \\ 8\end{array}\right) \), \(x^{\ast }=\left( 0.9999,2.9999,0,0\right) ^{T}\), \(y^{\ast }=\left( -0.9999,-0.9999\right) ^{T}\),
\(s^{\ast }=\left( 0,0,0.9996,0.9998\right) ^{T}\), \(f_{PQ}(opt)=-22.9995\), \(f_{DQ}(opt)=-22.9988\). □

Example 13

Let

\[ A=\left( \begin{array}{cccccccccc} 1.5 & 1 & 1 & 0.5 & 0.5 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 2 & -0.5 & -0.5 & 1 & -1 \\ 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 \end{array} \right) , \hspace{0.2cm} b=\left( \begin{array}{c} 5.5 \\ 2 \\ 10 \\ 15 \end{array} \right), \]

\(c=0\in \mathbb {R}^{10}\), \(Q=2I\in \mathbb {R}^{10\times 10}\) (\(I\) the identity matrix),
\(x^{\ast }=( 0.1868,1.5938,1.2060,2.6132,2.2254,3.1898,3.3555,3.7433,3.0233,3.8540)^{T}\),
\(y^{\ast }=\left( -4.0776,-0.4430,6.4888,7.2646\right) ^{T}\), \(s^{\ast }=0\in \mathbb {R}^{10}\),
\(f_{PQ}(opt)=75.25\), \(f_{DQ}(opt)=75.29\). □

Linear programming is a special case of convex quadratic programming (take \(Q=0\)).

Example 14

Consider the linear problem where:

\begin{align*} A[i,j]=& \left\{ \begin{array}{l} 1,\text{ if }j=i+m\text{ or }i=j \\ 0,\text{ otherwise} \end{array} \right. \\ b[i]=& 2, \quad i=1,\ldots ,m, \\ c[i]=& -1, \quad i=1,\ldots ,n. \end{align*}
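A generator for this family of test problems is straightforward to sketch (the helper name `example14` is ours; since the problem is linear, \(Q=0\)):

```python
import numpy as np

def example14(m):
    """Example 14 with n = 2*m: A = [I | I], b = 2e, c = -e, Q = 0."""
    n = 2 * m
    A = np.hstack([np.eye(m), np.eye(m)])  # A[i,j] = 1 iff j = i or j = i + m
    return A, 2.0 * np.ones(m), -np.ones(n), np.zeros((n, n))

A, b, c, Q = example14(5)                  # case 4-1 below: m = 5, n = 10
x, y, s, iters = iipm(A, b, c, Q, theta=0.5)
```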

4-1) \(m=5\), \(n=10\): \(x_{i}^{\ast }=1\) for \(i=1,\ldots ,10\); \(y_{i}^{\ast }=0.9999\) for \(i=1,\ldots ,5\); \(s^{\ast }=0\); \(f_{PQ}(opt)=-10\), \(f_{DQ}(opt)=-9.9998\).

4-2) \(m=10\), \(n=20\): \(x_{i}^{\ast }=1\) for \(i=1,\ldots ,20\); \(y_{i}^{\ast }=0.9999\) for \(i=1,\ldots ,10\); \(s^{\ast }=0\); \(f_{PQ}(opt)=-20\), \(f_{DQ}(opt)=-19.9998\).

4-3) \(m=20\), \(n=40\): \(x_{i}^{\ast }=1\) for \(i=1,\ldots ,40\); \(y_{i}^{\ast }=0.9999\) for \(i=1,\ldots ,20\); \(s^{\ast }=0\); \(f_{PQ}(opt)=-40\), \(f_{DQ}(opt)=-39.9998\).

4-4) \(m=50\), \(n=100\): \(x_{i}^{\ast }=1\) for \(i=1,\ldots ,100\); \(y_{i}^{\ast }=0.9999\) for \(i=1,\ldots ,50\); \(s^{\ast }=0\); \(f_{PQ}(opt)=-100\), \(f_{DQ}(opt)=-99.9998\).

4-5) \(m=250\), \(n=500\): \(x_{i}^{\ast }=1\) for \(i=1,\ldots ,500\); \(y_{i}^{\ast }=0.9999\) for \(i=1,\ldots ,250\); \(s^{\ast }=0\); \(f_{PQ}(opt)=-500\), \(f_{DQ}(opt)=-499.9998\).

4-6) \(m=500\), \(n=1000\): \(x_{i}^{\ast }=1\) for \(i=1,\ldots ,1000\); \(y_{i}^{\ast }=0.9999\) for \(i=1,\ldots ,500\); \(s^{\ast }=0\); \(f_{PQ}(opt)=-1000\), \(f_{DQ}(opt)=-999.9998\). □

We summarize the number of iterations and the computation time in the following two tables:

\begin{tabular}{|c|c|c|c|c|}
\hline
example & size \((m,n)\) & \(\theta =1/n\) & \(\theta =0.5\) & \(\theta =0.9\) \\ \hline
1 & (2,3) & 22 & 14 & 6 \\
2 & (2,4) & 31 & 14 & 6 \\
3 & (4,10) & 70 & 16 & 7 \\
4-1 & (5,10) & 77 & 13 & 7 \\
4-2 & (10,20) & 105 & 14 & 10 \\
4-3 & (20,40) & 193 & 16 & 11 \\
4-4 & (50,100) & 281 & 17 & 11 \\
4-5 & (250,500) & 691 & 24 & 13 \\
4-6 & (500,1000) & 8765 & 30 & 13 \\ \hline
\end{tabular}

Table 1. Number of iterations.

\begin{tabular}{|c|c|c|c|}
\hline
example & \(\theta =1/n\) & \(\theta =0.5\) & \(\theta =0.9\) \\ \hline
1 & 0.000000 & 0.000000 & 0.000000 \\
2 & 0.000000 & 0.000000 & 0.000000 \\
3 & 0.049826 & 0.000000 & 0.000000 \\
4-1 & 0.000000 & 0.000000 & 0.000000 \\
4-2 & 0.091239 & 0.000000 & 0.000000 \\
4-3 & 0.89966 & 0.031780 & 0.028232 \\
4-4 & 4.814184 & 0.149128 & 0.065752 \\
4-5 & 1857.001255 & 16.50277 & 3.889985 \\
4-6 & 15176.701324 & 86.074864 & 27.313875 \\ \hline
\end{tabular}

Table 2. Computation time (s).

The numerical results show that the number of iterations of Algorithm 2 depends strongly on the value of the parameter \( \theta \). It is quite surprising that \( \theta =0.9\) gives the smallest iteration counts and computation times.

5 Conclusion

In this paper, we extended results from [8] and [9] to solve convex quadratic problems \((CQP)\). These methods are based on modified search directions such that only one full Newton step is needed in each main iteration. We can conclude that these methods offer a valid solution to the algorithm initialization problem, since obtaining a feasible, well-centred starting point for feasible IPMs is a challenging task; our numerical results are acceptable. Finally, we point out that a suitable choice of the update parameter \(\theta \) significantly reduces the number of iterations produced by this algorithm and enables it to reach its real numerical performance. Future research might extend the algorithm to other optimization problems.

Bibliography

[1] M. Achache, M. Goutal, A primal-dual interior point algorithm for convex quadratic programs, Studia Univ. Babes-Bolyai Informatica, LVII (2012) no. 1, pp. 48-58.

[2] Zs. Darvay, I.-M. Papp, P.-R. Takacs, An infeasible full-Newton step algorithm for linear optimization with one centering step in major iteration, Studia Univ. Babes-Bolyai Informatica, 59 (2014), pp. 28-45.

[3] G. Gu, M. Zangiabadi, C. Roos, Full Nesterov-Todd step infeasible interior-point methods for symmetric optimization, European J. Oper. Res., 214 (2011) no. 3, pp. 473-484.

[4] N.K. Karmarkar, A new polynomial-time algorithm for linear programming, Combinatorica, 4 (1984) no. 4, pp. 373-395.

[5] H. Mansouri, C. Roos, A simplified O(nL) infeasible interior-point algorithm for linear optimization using full Newton steps, Optim. Methods Softw., 22 (2007) no. 3, pp. 519-530.

[6] H. Mansouri, M. Zangiabadi, An adaptive infeasible interior-point algorithm with full-Newton step for linear optimization, Optimization, 62 (2013) no. 2, pp. 285-297.

[7] H. Mansouri, M. Zangiabadi, M. Arzani, A modified infeasible-interior-point algorithm for linear optimization problems, J. Optim. Theory Appl., 166 (2015) no. 2, pp. 605-618.

[8] H. Mansouri, M. Zangiabadi, M. Pirhaji, A full-Newton step O(n) infeasible interior-point algorithm for linear complementarity problems, Nonlinear Anal. Real World Appl., 12 (2011) no. 1, pp. 545-561.

[9] C. Roos, A full-Newton step O(n) infeasible interior-point algorithm for linear optimization, SIAM J. Optim., 16 (2006) no. 4, pp. 1110-1136.

[10] C. Roos, An improved and simplified full-Newton step O(n) infeasible interior-point method for linear optimization, SIAM J. Optim., 25 (2015) no. 1, pp. 102-114.

[11] Gy. Sonnevend, An analytic center for polyhedrons and new classes of global algorithms for linear (smooth, convex) programming, Lecture Notes in Control and Information Sciences, 84, Springer, Berlin, 1985, pp. 866-876.

[12] G.Q. Wang, Y.Q. Bai, A new full Nesterov-Todd step primal-dual path-following interior-point algorithm for symmetric optimization, J. Optim. Theory Appl., 154 (2012) no. 3, pp. 966-985.

[13] G.Q. Wang, Y.Q. Bai, X.Y. Gao, D.Z. Wang, Improved complexity analysis of full Nesterov-Todd step interior-point methods for semidefinite optimization, J. Optim. Theory Appl., 165 (2014) no. 1, pp. 242-262.

[14] G.Q. Wang, G. Lesaja, Full Nesterov-Todd step feasible interior-point method for the Cartesian \(P_{\ast }(\kappa )\)-SCLCP, Optim. Methods Softw., 28 (2013) no. 3, pp. 600-618.