
An infeasible interior point method
for convex quadratic problems

Hayet Roumili, Nawel Boudjellal

Received: January 25, 2018. Accepted: June 12, 2018. Published online: February 7, 2019.

Department of Mathematics, University of Setif, Algeria, e-mail: h_roumili@yahoo.fr.

Department of Mathematics, University of Setif, Algeria, e-mail: nawelboudjellal2016@gmail.com. The work of these authors has been supported by the LMNF Laboratory.

In this paper, we deal with the study and implementation of an infeasible interior point method for convex quadratic problems (CQP). The algorithm uses a full Newton step and a suitable proximity measure for approximately tracing the central path, and it guarantees that after one feasibility step the new iterate is feasible and sufficiently close to the central path. For its complexity analysis, we reconsider the analysis used by the authors for linear optimization (LO) and linear complementarity problems (LCP). We show that the algorithm has the best known iteration bound, namely $n\log\frac{n+1}{\varepsilon}$. Finally, to measure the numerical performance of this algorithm, it was tested on convex quadratic and linear problems.

MSC. 90C05. 90C51.

Keywords. Convex quadratic programs, Infeasible interior-point method, Newton step.

1 Introduction

Convex quadratic programs appear in many areas of application. For example, they arise in finance and as subproblems in sequential quadratic programming. Interior point methods (IPMs) are among the most effective methods for solving a wide range of optimization problems. Today, the most popular and robust of them are primal-dual path-following algorithms, owing to their numerical efficiency and theoretical polynomial complexity [ 2 ], [ 3 ], [ 7 ], [ 12 ]. Two types of IPMs exist: feasible IPMs and infeasible IPMs (IIPMs). Feasible IPMs start from a strictly feasible point of the problem at hand, while IIPMs start from an arbitrary positive point. Owing to the difficulty of finding a feasible starting point for various problems, the use of IIPMs is unavoidable. Roos [ 9 ] proposed the first full-Newton step IIPM for LO. This algorithm starts from strictly feasible iterates on the central path of intermediate problems produced by suitable perturbations of the LO problem. It then uses so-called feasibility steps that serve to generate strictly feasible iterates for the next perturbed problems. After accomplishing a few centring steps for the new perturbed problem, it obtains strictly feasible iterates close enough to the central path of the new perturbed problems. This algorithm has been extensively extended to other optimization problems, e.g., [ 3 ], [ 8 ], [ 12 ], [ 14 ]. Subsequently, some authors have tried to improve Roos's infeasible algorithm [ 5 ], [ 6 ], and some improvements have been made to reduce or remove the centring steps [ 2 ], [ 7 ], [ 10 ]. Recently, Mansouri et al. [ 7 ] proposed an IIPM for solving LO problems with a reformulation of the central path. Their algorithm need not perform centring steps to bring the iterates close to their $\mu$-centres: to reach this target, they select a (rather small) fixed default barrier update parameter that returns the iterates to the neighbourhood of the central path without centring steps. Such a small value, however, seems undesirable for practical purposes.

In this paper, we present an IIPM for convex quadratic programs (CQP). The algorithm uses a short-step and a suitable proximity measure for approximately tracing the central path. The rest of the paper is organized as follows. In section 2, we introduce the perturbed problems pertaining to the original primal-dual pair. In section 3, the complexity analysis of the algorithm is discussed. In section 4, we deal with the numerical implementation of the infeasible interior point algorithm applied to convex quadratic and linear problems. In section 5, a conclusion is stated.

2 The perturbed problems

For any $\mu$ with $0 < \mu \le 1$, we consider the perturbed problem $(QP)_\mu$ defined by

$$(QP)_\mu \qquad \begin{cases} \min\ \big(c - \mu(c - A^Ty^0 + Qx^0 - s^0)\big)^Tx + \frac{1}{2}x^TQx \\ Ax = b - \mu(b - Ax^0) \\ x \ge 0 \end{cases}$$

and its dual problem (QD)μ, which is given by

$$(QD)_\mu \qquad \begin{cases} \max\ \big(b - \mu(b - Ax^0)\big)^Ty - \frac{1}{2}x^TQx \\ A^Ty + s - Qx = c - \mu(c - A^Ty^0 + Qx^0 - s^0) \\ s \ge 0,\ y \in \mathbb{R}^m \end{cases}$$

The solution $(x(\mu), y(\mu), s(\mu))$ of $(QP)_\mu$ and $(QD)_\mu$ converges to the primal-dual optimal solution of (QP) and (QD) as $\mu \to 0$.

We start by arbitrarily choosing $x^0 > 0$ and $y^0$, $s^0 > 0$ such that $x^0s^0 = \mu e$. If $\mu = \frac{(x^0)^Ts^0}{n} = 1$, then $x^0 = e$ yields a strictly feasible solution of $(QP)_1$, and $y^0 = 0$, $s^0 = e$ yields a strictly feasible solution of $(QD)_1$. We conclude that $(QP)_1$ and $(QD)_1$ satisfy the interior point condition (IPC).
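In particular, writing $r_b^0 = b - Ax^0$ and $r_c^0 = c - A^Ty^0 + Qx^0 - s^0$ for the initial residuals (the notation used in what follows), the starting point $(x^0, y^0, s^0) = (e, 0, e)$ gives
$$r_b^0 = b - Ae, \qquad r_c^0 = c + Qe - e.$$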

Lemma 1

The original problems (QP) and (QD) are feasible if and only if, for each $\mu$ satisfying $0 < \mu \le 1$, the perturbed problems $(QP)_\mu$ and $(QD)_\mu$ satisfy the IPC.

Proof. Let $\bar{x}$ be a feasible solution of (QP) and $(\bar{y}, \bar{s})$ a feasible solution of (QD). Then $A\bar{x} = b$ and $A^T\bar{y} - Q\bar{x} + \bar{s} = c$, with $\bar{x} \ge 0$ and $\bar{s} \ge 0$. Now let $0 < \mu \le 1$ and consider
$$x = (1-\mu)\bar{x} + \mu x^0, \qquad y = (1-\mu)\bar{y} + \mu y^0, \qquad s = (1-\mu)\bar{s} + \mu s^0.$$

One has

$$Ax = A\big((1-\mu)\bar{x} + \mu x^0\big) = (1-\mu)A\bar{x} + \mu Ax^0 = (1-\mu)b + \mu Ax^0 = b - \mu(b - Ax^0) = b - \mu r_b^0,$$

showing that x is feasible for (QP)μ.

Similarly,

$$A^Ty - Qx + s = (1-\mu)\big(A^T\bar{y} - Q\bar{x} + \bar{s}\big) + \mu\big(A^Ty^0 - Qx^0 + s^0\big) = (1-\mu)c + \mu\big(A^Ty^0 - Qx^0 + s^0\big) = c - \mu\big(c - A^Ty^0 + Qx^0 - s^0\big) = c - \mu r_c^0,$$

showing that $(y, s)$ is feasible for $(QD)_\mu$. Because $0 < \mu \le 1$ and $x^0, s^0 > 0$, $x$ and $s$ are positive, thus proving that $(QP)_\mu$ and $(QD)_\mu$ satisfy the IPC. To prove the converse implication, suppose that $(QP)_\mu$ and $(QD)_\mu$ satisfy the IPC for each $\mu$ satisfying $0 < \mu \le 1$. Letting $\mu$ go to zero, it follows that (QP) and (QD) are feasible. □


Let (QP) and (QD) be feasible and $0 < \mu \le 1$. Then Lemma 1 implies that the problems $(QP)_\mu$ and $(QD)_\mu$ satisfy the IPC, hence their central paths exist. This means that the system

$$\begin{cases} b - Ax = \mu r_b^0 \\ c - A^Ty + Qx - s = \mu r_c^0 \\ xs = \mu e, \qquad x > 0,\ s > 0 \end{cases}$$
(1)

has a unique solution, for every μ>0.

In the sequel, this unique solution is denoted by $(x(\mu), y(\mu), s(\mu))$; these are the $\mu$-centres of the perturbed problems $(QP)_\mu$ and $(QD)_\mu$. Note that, because $x^0s^0 = e$, $x^0$ is the $\mu$-centre of the perturbed problem $(QP)_1$ and $(y^0, s^0)$ is the $\mu$-centre of $(QD)_1$.

To get iterates that are feasible for $(QP)_{\mu^+}$ and $(QD)_{\mu^+}$, we need search directions $\Delta x$, $\Delta y$ and $\Delta s$ such that

$$\begin{cases} A(x + \Delta x) = b - \mu^+ r_b^0 \\ A^T(y + \Delta y) - Q(x + \Delta x) + (s + \Delta s) = c - \mu^+ r_c^0 \end{cases}$$
(2)

where $\mu^+ = (1-\theta)\mu$ with $0 < \theta < 1$.

Therefore, the following system is used to define Δx,Δy and Δs:

$$\begin{cases} A\Delta x = \theta\mu r_b^0 \\ A^T\Delta y - Q\Delta x + \Delta s = \theta\mu r_c^0 \\ s\Delta x + x\Delta s = (1-\theta)\mu e - xs \end{cases}$$
(3)
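In matrix form, with $X = \operatorname{diag}(x)$ and $S = \operatorname{diag}(s)$ (notation introduced here for convenience), system (3) reads
$$\begin{pmatrix} A & 0 & 0 \\ -Q & A^T & I \\ S & 0 & X \end{pmatrix}\begin{pmatrix} \Delta x \\ \Delta y \\ \Delta s \end{pmatrix} = \begin{pmatrix} \theta\mu r_b^0 \\ \theta\mu r_c^0 \\ (1-\theta)\mu e - xs \end{pmatrix},$$
which is nonsingular whenever $x, s > 0$, $Q$ is positive semidefinite and $A$ has full row rank.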

Now, we introduce a norm-based proximity measure as:

$$\delta(x, s, \mu) = \frac{1}{2}\left\|\mu e - xs\right\|$$
(4)

to measure the closeness of the point $(x, y, s)$ to the central path. We suppose that an initial point $(x^0, y^0, s^0)$ satisfying $\delta(x^0, s^0, \mu) < 1$ for a certain $\mu$ is known.

Algorithm 2
Primal-dual Infeasible IPM algorithm

Input

  - An accuracy parameter ε>0.

  - a barrier update parameter $0 < \theta < 1$.

  - a strictly feasible point (x0,y0,s0)=(e,0,e) and μ0=1

  such that δ(x0,y0,s0)=0<1.

begin

x=x0,y=y0,s=s0,μ=μ0.

while $\|r_b\| + \|r_c\| + x^Ts > \varepsilon$ do

(where $r_b = b - Ax$ and $r_c = c - A^Ty + Qx - s$)

solve the system (3) to obtain $(\Delta x, \Delta y, \Delta s)$;

  update $x := x + \Delta x$, $y := y + \Delta y$, $s := s + \Delta s$;

  update $\mu := (1-\theta)\mu$.

end
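The paper gives no code, so the following is a minimal NumPy sketch of Algorithm 2 (the function name, the dense block solve and the iteration cap are our choices, not the authors'). Each pass solves system (3) for the full Newton step and then updates $\mu := (1-\theta)\mu$:

```python
# Minimal sketch of Algorithm 2 for min c^T x + (1/2) x^T Q x, Ax = b, x >= 0.
import numpy as np

def infeasible_ipm(A, b, c, Q, theta=0.5, eps=1e-4, max_iter=10000):
    m, n = A.shape
    x, y, s = np.ones(n), np.zeros(m), np.ones(n)      # (x0, y0, s0) = (e, 0, e)
    mu = 1.0                                           # mu0 = (x0)^T s0 / n = 1
    rb0 = b - A @ x                                    # initial primal residual
    rc0 = c - A.T @ y + Q @ x - s                      # initial dual residual
    for k in range(max_iter):
        rb = b - A @ x
        rc = c - A.T @ y + Q @ x - s
        if np.linalg.norm(rb) + np.linalg.norm(rc) + x @ s <= eps:
            break
        # Newton system (3):  A dx                 = theta*mu*rb0,
        #                    -Q dx + A^T dy + ds   = theta*mu*rc0,
        #                     S dx          + X ds = (1-theta)*mu*e - x*s.
        K = np.block([
            [A,          np.zeros((m, m)), np.zeros((m, n))],
            [-Q,         A.T,              np.eye(n)       ],
            [np.diag(s), np.zeros((n, m)), np.diag(x)      ],
        ])
        rhs = np.concatenate([theta * mu * rb0,
                              theta * mu * rc0,
                              (1.0 - theta) * mu * np.ones(n) - x * s])
        d = np.linalg.solve(K, rhs)
        dx, dy, ds = d[:n], d[n:n + m], d[n + m:]
        x, y, s = x + dx, y + dy, s + ds               # full Newton step
        mu *= (1.0 - theta)                            # barrier update
    return x, y, s, k
```

With $\theta = 1/n$ this is the theoretical variant analysed in section 3, which keeps the iterates strictly feasible for the perturbed problems; the values $\theta = 0.5$ and $\theta = 0.9$ correspond to the practical strategy tested in section 4.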

3 Complexity analysis

Now the following notations are useful for studying the complexity of the proposed algorithm.

$$\upsilon = \sqrt{\frac{xs}{\mu}}, \qquad d_x = \frac{\upsilon\,\Delta x}{x}, \qquad d_s = \frac{\upsilon\,\Delta s}{s}, \qquad d = \sqrt{\frac{x}{s}}$$
(5)

In addition, we obtain:

$$s\Delta x + x\Delta s = \mu\upsilon(d_x + d_s)$$
(6)

and

$$\Delta x\,\Delta s = \mu\,d_xd_s$$
(7)
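Both identities follow directly from the scaling (5): for instance,
$$\Delta x\,\Delta s = \frac{x\,d_x}{\upsilon}\cdot\frac{s\,d_s}{\upsilon} = \frac{xs}{\upsilon^2}\,d_xd_s = \mu\,d_xd_s,$$
since $xs = \mu\upsilon^2$.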

By using these notations, the linear system (3) and the proximity measure become:

$$\begin{cases} \bar{A}d_x = \mu^{-\frac{1}{2}}\theta\mu r_b^0 = \mu^{\frac{1}{2}}\theta r_b^0 \\ \bar{A}^Td_y - \bar{Q}d_x + d_s = \mu^{-\frac{1}{2}}\theta\mu Dr_c^0 = \mu^{\frac{1}{2}}\theta Dr_c^0 \\ d_x + d_s = (1-\theta)\upsilon^{-1} - \upsilon \end{cases}$$
(8)

where

$$D = \operatorname{diag}(d_i), \qquad \bar{A} = AD, \qquad \bar{Q} = DQD, \qquad d_y = \mu^{-\frac{1}{2}}\Delta y,$$

and

$$\delta(\upsilon) = \frac{1}{2}\left\|\upsilon^{-1} - \upsilon\right\|.$$

We have:
$$x^+s^+ = xs + (s\Delta x + x\Delta s) + \Delta x\Delta s = (1-\theta)\mu e + \Delta x\Delta s = (1-\theta)\mu e + \frac{xs}{\upsilon^2}\,d_xd_s = \mu\big((1-\theta)e + d_xd_s\big).$$

Lemma 3

The iterates (x+, s+) are strictly feasible if and only if:

$$(1-\theta)e + d_xd_s > 0.$$

Corollary 4

The iterates $(x^+, s^+)$ are strictly feasible if $\|d_xd_s\|_\infty < 1-\theta$.

Proof. By the definition of the $\infty$-norm, i.e., $\|d_xd_s\|_\infty = \max\{|d_{x_i}d_{s_i}| : i = 1, 2, \ldots, n\}$, the assumption gives $|d_{x_i}d_{s_i}| < 1-\theta$ for all $i$; equivalently,
$$-(1-\theta) < d_{x_i}d_{s_i} < 1-\theta,$$

which implies that:

$$d_xd_s + (1-\theta)e > 0.$$

Thus, by Lemma 3, $(x^+, s^+)$ are strictly feasible. □

Our aim is to find an upper bound for $\delta(\upsilon^+)$ such that the proximity of the new iterates $(x^+, s^+)$ is always less than 1. For each iteration we have:
$$\|d_xd_s\|_\infty \le \|d_xd_s\| \le \|d_x\|\,\|d_s\| \le \tfrac{1}{2}\big(\|d_x\|^2 + \|d_s\|^2\big) =: w(\upsilon).$$

Corollary 5

If $w(\upsilon) < 1-\theta$, then $(x^+, s^+)$ are strictly feasible.

Proof. We have
$$\|d_xd_s\|_\infty \le w(\upsilon) < 1-\theta,$$
so, by Corollary 4, $(x^+, s^+)$ are strictly feasible. □


The following lemma, proved by C. Roos in 2015 [ 10 ], leads us to an upper bound for $\delta(\upsilon^+)$.

Lemma 6

Let $a, b \in \mathbb{R}^n$, $r \in (0, 1]$ and $f(a, b) = \sum_{i=1}^n \zeta(a_ib_i)$. If
$$\|a\|^2 + \|b\|^2 \le 2r^2,$$
then
$$f(a, b) \le (n-1)\zeta(0) + \max\{\zeta(r^2), \zeta(-r^2)\},$$
where
$$\zeta(t) = \frac{1+t}{1-\theta} + \frac{1-\theta}{1+t} - 2 = \frac{(\theta+t)^2}{(1-\theta)(1+t)} \ge 0, \qquad t > -1,\ 0 < \theta < 1.$$
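The second expression for $\zeta$ follows by putting the first over a common denominator:
$$\frac{1+t}{1-\theta} + \frac{1-\theta}{1+t} - 2 = \frac{(1+t)^2 - 2(1-\theta)(1+t) + (1-\theta)^2}{(1-\theta)(1+t)} = \frac{\big((1+t) - (1-\theta)\big)^2}{(1-\theta)(1+t)} = \frac{(\theta+t)^2}{(1-\theta)(1+t)}.$$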

Lemma 7

If $w(\upsilon) < 1-\theta$, then
$$4\delta(\upsilon^+)^2 \le (n-1)\zeta(0) + \max\{\zeta(w(\upsilon)), \zeta(-w(\upsilon))\}.$$

Proof. We have
$$(\upsilon^+)^2 = \frac{x^+s^+}{\mu^+} = \frac{\mu\big((1-\theta)e + d_xd_s\big)}{(1-\theta)\mu} = e + \frac{d_xd_s}{1-\theta}.$$
Hence
$$4\delta(\upsilon^+)^2 = \big\|(\upsilon^+)^{-1} - \upsilon^+\big\|^2 = \sum_{i=1}^n\Big(\frac{1}{\upsilon_i^+} - \upsilon_i^+\Big)^2 = \sum_{i=1}^n\Big(\frac{1}{(\upsilon_i^+)^2} + (\upsilon_i^+)^2 - 2\Big) = \sum_{i=1}^n\Big[\frac{1-\theta}{(1-\theta) + d_{x_i}d_{s_i}} + \frac{(1-\theta) + d_{x_i}d_{s_i}}{1-\theta} - 2\Big] = \sum_{i=1}^n\zeta(d_{x_i}d_{s_i} - \theta) \le \sum_{i=1}^n\zeta(d_{x_i}d_{s_i}),$$
because $\zeta(t)$ is an increasing function.

Then, by Lemma 6, we have the result:
$$4\delta(\upsilon^+)^2 \le (n-1)\zeta(0) + \max\{\zeta(w(\upsilon)), \zeta(-w(\upsilon))\}. \qquad \square$$

Corollary 8

If $\theta = \frac{1}{n}$, then
$$4\delta(\upsilon^+)^2 < 4, \quad \text{i.e.,} \quad \delta(\upsilon^+) < 1.$$

Proof. By Lemma 6 and Lemma 7, noting that $\zeta(0) = \frac{\theta^2}{1-\theta}$, so that $(n-1)\zeta(0) = \frac{1}{n}$ when $\theta = \frac{1}{n}$. □

Lemma 9

The following equation and inequalities hold:
i/ $\Delta x^T\Delta s \le \mu\delta^2$;
ii/ $x^+s^+ = \mu(1-\theta)e + \Delta x\Delta s$;
iii/ $(x^+)^Ts^+ \le \mu(n + \delta^2)$.

Proof.
i/
$$\Delta x^T\Delta s = \big(x\upsilon^{-1}d_x\big)^T\big(s\upsilon^{-1}d_s\big) = \Big(\sqrt{\tfrac{\mu x}{s}}\,d_x\Big)^T\Big(\sqrt{\tfrac{\mu s}{x}}\,d_s\Big) = \mu\,d_x^Td_s \le \mu\delta^2.$$
ii/ Using the third equation of system (3),
$$x^+s^+ = (x + \Delta x)(s + \Delta s) = xs + (s\Delta x + x\Delta s) + \Delta x\Delta s = \mu(1-\theta)e + \Delta x\Delta s.$$
iii/
$$(x^+)^Ts^+ = e^T(x^+s^+) = e^T\big(\mu(1-\theta)e + \Delta x\Delta s\big) = \mu(1-\theta)n + \Delta x^T\Delta s \le \mu(1-\theta)n + \mu\delta^2 \le \mu(n + \delta^2). \qquad \square$$

Theorem 10

Let $\theta = \frac{1}{n}$, $x^0 = e$, $y^0 = 0$, $s^0 = e$ and $\mu^0 = \frac{1}{n}(x^0)^Ts^0 = 1$, with $\delta(\upsilon^0) = 0 < 1$. Then the full-Newton step IIPM algorithm requires at most $n\log\frac{n+1}{\varepsilon}$ iterations to reach an $\varepsilon$-solution of (QP) and (QD).

Proof. Let $x^k$ and $s^k$ be the $k$-th iterates of Algorithm 2. Then
$$(x^k)^Ts^k \le \mu^k(n + \delta^2) \le \mu^k(n + 1) = (1-\theta)^k\mu^0(n + 1) < \varepsilon.$$
Taking logarithms of both sides, we have:
$$\log\big((n+1)(1-\theta)^k\big) < \log\varepsilon \iff \log(n+1) + k\log(1-\theta) < \log\varepsilon \iff -k\log(1-\theta) > \log(n+1) - \log\varepsilon = \log\frac{n+1}{\varepsilon}.$$
Since $-\log(1-\theta) \ge \theta$, this certainly holds if
$$k\theta \ge \log\frac{n+1}{\varepsilon}, \quad \text{i.e.,} \quad k \ge \frac{1}{\theta}\log\frac{n+1}{\varepsilon} = n\log\frac{n+1}{\varepsilon} \qquad \Big(\text{because } \theta = \frac{1}{n}\Big). \qquad \square$$
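As a rough sanity check (ours, not part of the paper), the bound can be compared with the $\theta = 1/n$ column of Table 1 below; for instance, for $n = 10$ and $\varepsilon = 10^{-4}$ it gives $10\log(11/10^{-4}) \approx 116$, versus the 77 iterations observed for example 4-1:

```python
# Evaluate the theoretical bound n*log((n+1)/eps) (natural logarithm assumed)
# against the iteration counts reported in Table 1 for theta = 1/n.
import math

eps = 1e-4
for n, observed in [(10, 77), (20, 105), (100, 281), (1000, 8765)]:
    bound = n * math.log((n + 1) / eps)
    print(f"n = {n:4d}: bound = {bound:8.0f}, observed = {observed}")
```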

4 Numerical implementation

In this section, we deal with the numerical implementation of the infeasible interior point algorithm applied to convex quadratic and linear problems. The feasible starting point is $(x^0, y^0, s^0) = (e, 0, e)$, with proximity $\delta(x^0, s^0, \mu^0) = 0$. Below, $x^*$ denotes the optimal solution of (QP), $(y^*, z^*)$ the optimal solution of (QD) (with $z$ the dual slack variable), and $f_{PQ}(opt)$ and $f_{DQ}(opt)$ the optimal objective values of (QP) and (QD), respectively. 'Iter' refers to the number of iterations produced by Algorithm 2. Our accuracy parameter is $\varepsilon = 10^{-4}$. For the update parameter $\theta$, we first used the theoretical value $\theta = 1/n$, followed by a practical strategy with different values.

Example 11

Let

$$A = \begin{pmatrix} -1 & 1 & 0 \\ 1 & 1 & 0 \end{pmatrix}, \qquad Q = \begin{pmatrix} 2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 0 \end{pmatrix}, \qquad c = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}, \qquad b = \begin{pmatrix} 1 \\ 2 \end{pmatrix},$$

$x^* = (0.5, 1.5, 0)^T$, $y^* = (0, 0.9999)^T$, $z^* = (0, 0, 0)^T$, $f_{PQ}(opt) = 4.5$, $f_{DQ}(opt) = 4.4999$. □

Example 12

Let

$$A = \begin{pmatrix} 1 & 1 & 1 & 0 \\ 1 & 5 & 0 & 1 \end{pmatrix}, \qquad Q = \begin{pmatrix} 4 & 2 & 0 & 0 \\ 2 & 4 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}, \qquad c = \begin{pmatrix} 4 \\ 6 \\ 0 \\ 0 \end{pmatrix},$$

$b = (4, 8)^T$, $x^* = (0.9999, 2.9999, 0, 0)^T$, $y^* = (0.9999, 0.9999)^T$,
$z^* = (0, 0, 0.9996, 0.9998)^T$, $f_{PQ}(opt) = 22.9995$, $f_{DQ}(opt) = 22.9988$. □

Example 13

Let

$$A = \begin{pmatrix} 1.5 & 1 & 1 & 0.5 & 0.5 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 2 & 0.5 & 0.5 & 1 & 1 \\ 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 \end{pmatrix}, \qquad b = \begin{pmatrix} 5.5 \\ 2 \\ 10 \\ 15 \end{pmatrix},$$

$c = 0 \in \mathbb{R}^{10}$, $Q = 2I \in \mathbb{R}^{10 \times 10}$ ($I$ the identity matrix),
$x^* = (0.1868, 1.5938, 1.2060, 2.6132, 2.2254, 3.1898, 3.3555, 3.7433, 3.0233, 3.8540)^T$,
$y^* = (4.0776, 0.4430, 6.4888, 7.2646)^T$, $z^* = (0, 0, 0, 0, 0, 0, 0, 0, 0, 0)^T$,
$f_{PQ}(opt) = 75.25$, $f_{DQ}(opt) = 75.29$. □

Linear programming is a special case of convex quadratic programming.

Example 14

Consider the linear problem where:

$$A[i,j] = \begin{cases} 1, & \text{if } j = i \text{ or } j = i + m, \\ 0, & \text{otherwise}, \end{cases} \qquad b[i] = 2,\ i = 1, \ldots, m, \qquad c[j] = 1,\ j = 1, \ldots, n,$$
with $n = 2m$.
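Equivalently, $A$ consists of two $m \times m$ identity blocks side by side, $A = [\,I_m \;\; I_m\,]$. A small helper (our notation, not the paper's) that builds this test data:

```python
import numpy as np

def example_14_data(m):
    """Example 14: A = [I_m | I_m], b = 2e, c = e (Q = 0, i.e., an LP)."""
    A = np.hstack([np.eye(m), np.eye(m)])   # A[i, j] = 1 iff j = i or j = i + m
    b = 2.0 * np.ones(m)
    c = np.ones(2 * m)
    return A, b, c
```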

4-1) $m = 5$, $n = 10$: $x_i^* = 1$, $i = 1{:}10$; $y_i^* = 0.9999$, $i = 1{:}5$; $z_i^* = 0$, $i = 1{:}10$; $f_{PQ}(opt) = 10$, $f_{DQ}(opt) = 9.9998$.

4-2) $m = 10$, $n = 20$: $x_i^* = 1$, $i = 1{:}20$; $y_i^* = 0.9999$, $i = 1{:}10$; $z_i^* = 0$, $i = 1{:}20$; $f_{PQ}(opt) = 20$, $f_{DQ}(opt) = 19.9998$.

4-3) $m = 20$, $n = 40$: $x_i^* = 1$, $i = 1{:}40$; $y_i^* = 0.9999$, $i = 1{:}20$; $z_i^* = 0$, $i = 1{:}40$; $f_{PQ}(opt) = 40$, $f_{DQ}(opt) = 39.9998$.

4-4) $m = 50$, $n = 100$: $x_i^* = 1$, $i = 1{:}100$; $y_i^* = 0.9999$, $i = 1{:}50$; $z_i^* = 0$, $i = 1{:}100$; $f_{PQ}(opt) = 100$, $f_{DQ}(opt) = 99.9998$.

4-5) $m = 250$, $n = 500$: $x_i^* = 1$, $i = 1{:}500$; $y_i^* = 0.9999$, $i = 1{:}250$; $z_i^* = 0$, $i = 1{:}500$; $f_{PQ}(opt) = 500$, $f_{DQ}(opt) = 499.9998$.

4-6) $m = 500$, $n = 1000$: $x_i^* = 1$, $i = 1{:}1000$; $y_i^* = 0.9999$, $i = 1{:}500$; $z_i^* = 0$, $i = 1{:}1000$; $f_{PQ}(opt) = 1000$, $f_{DQ}(opt) = 999.9998$. □

We summarize the number of iterations and the computation time in the following two tables:

example   size (m, n)   θ = 1/n   θ = 0.5   θ = 0.9
1         (2, 3)             22        14         6
2         (2, 4)             31        14         6
3         (4, 10)            70        16         7
4-1       (5, 10)            77        13         7
4-2       (10, 20)          105        14        10
4-3       (20, 40)          193        16        11
4-4       (50, 100)         281        17        11
4-5       (250, 500)        691        24        13
4-6       (500, 1000)      8765        30        13

Table 1: Number of iterations

example   θ = 1/n        θ = 0.5     θ = 0.9
1             0.000000    0.000000    0.000000
2             0.000000    0.000000    0.000000
3             0.049826    0.000000    0.000000
4-1           0.000000    0.000000    0.000000
4-2           0.091239    0.000000    0.000000
4-3           0.89966     0.031780    0.028232
4-4           4.814184    0.149128    0.065752
4-5        1857.001255   16.50277     3.889985
4-6       15176.701324   86.074864   27.313875

Table 2: Computation time (s)

The numerical results show that the number of iterations of Algorithm 2 depends strongly on the value of the parameter θ. It is quite surprising that θ = 0.9 gives the smallest number of iterations and the shortest computation time.

5 Conclusion

In this paper, we extended results from [ 8 ] and [ 9 ] to convex quadratic problems (CQP). These methods are based on modified search directions such that only one full Newton step is needed in each main iteration. We can conclude that these methods constitute a valid remedy for the initialization problem of interior point algorithms: our numerical results are acceptable, even though obtaining a feasible, well-centred starting point is in general a challenging task. Finally, we point out that the choice of the update parameter θ significantly reduces the number of iterations produced by this algorithm and enables it to reach its real numerical performance. Future research might extend the algorithm to other optimization problems.

Bibliography

[1] M. Achache, M. Goutal, A primal-dual interior point algorithm for convex quadratic programs, Studia Univ. Babes-Bolyai Informatica, LVII (2012) no. 1, pp. 48–58.

[2] Zs. Darvay, I.-M. Papp, P.-R. Takacs, An infeasible full-Newton step algorithm for linear optimization with one centering step in major iteration, Studia Univ. Babes-Bolyai Informatica, 59 (2014), pp. 28–45.

[3] G. Gu, M. Zangiabadi, C. Roos, Full Nesterov-Todd step infeasible interior-point methods for symmetric optimization, European J. Oper. Res., 214 (2011) no. 3, pp. 473–484.

[4] N.K. Karmarkar, A new polynomial-time algorithm for linear programming, Combinatorica, 4 (1984) no. 4, pp. 373–395.

[5] H. Mansouri, C. Roos, A simplified O(nL) infeasible interior-point algorithm for linear optimization using full Newton steps, Optim. Methods Softw., 22 (2007) no. 3, pp. 519–530.

[6] H. Mansouri, M. Zangiabadi, An adaptive infeasible interior-point algorithm with full-Newton step for linear optimization, Optimization, 62 (2013) no. 2, pp. 285–297.

[7] H. Mansouri, M. Zangiabadi, M. Arzani, A modified infeasible-interior-point algorithm for linear optimization problems, J. Optim. Theory Appl., 166 (2015) no. 2, pp. 605–618.

[8] H. Mansouri, M. Zangiabadi, M. Pirhaji, A full-Newton step O(n) infeasible interior-point algorithm for linear complementarity problems, Nonlinear Anal. Real World Appl., 12 (2011) no. 1, pp. 545–561.

[9] C. Roos, A full-Newton step O(n) infeasible interior-point algorithm for linear optimization, SIAM J. Optim., 16 (2006) no. 4, pp. 1110–1136.

[10] C. Roos, An improved and simplified full-Newton step O(n) infeasible interior-point method for linear optimization, SIAM J. Optim., 25 (2015) no. 1, pp. 102–114.

[11] Gy. Sonnevend, An analytic center for polyhedrons and new classes of global algorithms for linear (smooth, convex) programming, Lecture Notes in Control and Information Sciences, 84, Springer, Berlin (1985), pp. 866–876.

[12] G.Q. Wang, Y.Q. Bai, A new full Nesterov-Todd step primal-dual path-following interior-point algorithm for symmetric optimization, J. Optim. Theory Appl., 154 (2012) no. 3, pp. 966–985.

[13] G.Q. Wang, Y.Q. Bai, X.Y. Gao, D.Z. Wang, Improved complexity analysis of full Nesterov-Todd step interior-point methods for semidefinite optimization, J. Optim. Theory Appl., 165 (2014) no. 1, pp. 242–262.

[14] G.Q. Wang, G. Lesaja, Full Nesterov-Todd step feasible interior-point method for the Cartesian P*(κ)-SCLCP, Optim. Methods Softw., 28 (2013) no. 3, pp. 600–618.