AN INFEASIBLE INTERIOR POINT METHOD FOR CONVEX QUADRATIC PROBLEMS

Abstract. In this paper, we study and implement an infeasible interior point method for convex quadratic problems (CQP). The algorithm uses a Newton step and a suitable proximity measure to approximately trace the central path, and it guarantees that after one feasibility step the new iterate is feasible and sufficiently close to the central path. For its complexity analysis, we revisit the analysis used by the authors for linear optimization (LO) and linear complementarity problems (LCP). We show that the algorithm achieves the best known iteration bound, namely n log((n+1)/ε). Finally, to measure the numerical performance of this algorithm, we tested it on convex quadratic and linear problems. MSC 2010.


INTRODUCTION
Convex quadratic programs appear in many areas of application; for example, they arise in finance and as subproblems in sequential quadratic programming. Interior point methods (IPMs) are among the most effective methods for solving a wide range of optimization problems. Today, the most popular and robust of them are primal-dual path-following algorithms, owing to their numerical efficiency and theoretical polynomial complexity [2], [3], [7]-[12]. Two types of IPMs exist: feasible IPMs and infeasible IPMs (IIPMs). Feasible IPMs start from a strictly feasible point of the problem at hand, while IIPMs start from an arbitrary positive point. Owing to the difficulty of finding a feasible starting point for various problems, the use of IIPMs is unavoidable. Roos [9] proposed the first full-Newton step IIPM for LO. This algorithm starts from strictly feasible iterates on the central path of intermediate problems produced by suitable perturbations of the LO problem. It then uses so-called feasibility steps to generate strictly feasible iterates for the next perturbed problem. After performing a few centring steps for the new perturbed problem, it obtains strictly feasible iterates close enough to the central path of that problem. This algorithm has been extensively extended to other optimization problems, e.g., [3], [8], [12]-[14]. Subsequently, some authors have tried to improve Roos's infeasible algorithm [5], [6], and some improvements have been made to reduce or remove the centring steps [2], [7], [10]. Recently, Mansouri et al. [7] proposed an IIPM for solving LO problems with a reformulation of the central path. Their algorithm need not perform a centring step to keep the iterates close to their µ-centres. To reach this target, they select a (rather small) fixed default barrier update parameter that returns the iterates to the neighbourhood of the central path without a centring step. This value, however, seems undesirable for practical purposes.
In this paper, we present an IIPM for convex quadratic programs (CQP). The algorithm uses a short step and a suitable proximity measure for approximately tracing the central path. The rest of the paper is organized as follows. In Section 2, we introduce the perturbed problems pertaining to the original primal-dual pair. In Section 3, the complexity analysis of the algorithm is discussed. In Section 4, we deal with the numerical implementation of the infeasible interior point algorithm applied to convex quadratic and linear problems. In Section 5, a conclusion is stated.
We start by arbitrarily choosing x⁰ > 0 and y⁰, s⁰ > 0 such that x⁰s⁰ = µe. If µ = (x⁰)ᵀs⁰/n = 1, then x⁰ = e yields a strictly feasible solution of (QP)₁ and y⁰ = 0, s⁰ = e yields a strictly feasible solution of (QD)₁. Writing r_b⁰ = b − Ax⁰ and r_c⁰ = c − Aᵀy⁰ + Qx⁰ − s⁰ for the initial residuals, we conclude that (QP)_µ and (QD)_µ satisfy the interior point condition (IPC).

Proof. Let x̄ be a feasible solution of (QP) and (ȳ, s̄) a feasible solution of (QD). Then Ax̄ = b and Aᵀȳ − Qx̄ + s̄ = c, with x̄ ≥ 0 and s̄ ≥ 0. Now let 0 < µ ≤ 1 and consider

x = (1 − µ)x̄ + µx⁰,  y = (1 − µ)ȳ + µy⁰,  s = (1 − µ)s̄ + µs⁰.

One has

Ax = (1 − µ)Ax̄ + µAx⁰ = b − µ(b − Ax⁰) = b − µr_b⁰,

so x is feasible for (QP)_µ. Similarly,

Aᵀy − Qx + s = (1 − µ)(Aᵀȳ − Qx̄ + s̄) + µ(Aᵀy⁰ − Qx⁰ + s⁰) = c − µr_c⁰,

showing that (y, s) is feasible for (QD)_µ. Because µ > 0, x and s are positive, thus proving that (QP)_µ and (QD)_µ satisfy the IPC. To prove the converse implication, suppose that (QP)_µ and (QD)_µ satisfy the IPC for each µ with 0 < µ ≤ 1. Letting µ go to zero, it follows that (QP) and (QD) are feasible.
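The convex-combination argument above can be checked numerically. The sketch below builds a small random instance (the data A, b, c, Q and the feasible points x̄, ȳ, s̄ are illustrative, not from the paper), assumes the standard initial residuals r_b⁰ = b − Ax⁰ and r_c⁰ = c − Aᵀy⁰ + Qx⁰ − s⁰, and verifies that the convex combinations satisfy the perturbed constraints:

```python
import numpy as np

# Numerical check of the convex-combination argument of Lemma 1 on a
# small made-up CQP instance (all data here is illustrative).
rng = np.random.default_rng(0)
n, m = 4, 2
A = rng.standard_normal((m, n))
x_bar = np.array([1.0, 2.0, 1.0, 0.5])     # feasible point of (QP)
b = A @ x_bar                              # so A x_bar = b by construction
M = rng.standard_normal((n, n))
Q = M.T @ M                                # symmetric positive semidefinite
y_bar = rng.standard_normal(m)
s_bar = np.array([0.5, 1.0, 0.3, 0.2])     # nonnegative dual slack
c = A.T @ y_bar - Q @ x_bar + s_bar        # so (y_bar, s_bar) is feasible for (QD)

x0, y0, s0 = np.ones(n), np.zeros(m), np.ones(n)
rb0 = b - A @ x0                           # initial primal residual
rc0 = c - A.T @ y0 + Q @ x0 - s0           # initial dual residual

mu = 0.3
x = (1 - mu) * x_bar + mu * x0
y = (1 - mu) * y_bar + mu * y0
s = (1 - mu) * s_bar + mu * s0

# x satisfies the perturbed primal constraints, (y, s) the perturbed dual ones:
print(np.allclose(A @ x, b - mu * rb0))                   # True
print(np.allclose(A.T @ y - Q @ x + s, c - mu * rc0))     # True
print(bool(np.all(x > 0) and np.all(s > 0)))              # True
```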
Let (QP) and (QD) be feasible and 0 < µ ≤ 1. Then Lemma 1 implies that the problems (QP)_µ and (QD)_µ satisfy the IPC, hence their central paths exist. This means that the system

Ax = b − µr_b⁰,  x ≥ 0,
Aᵀy − Qx + s = c − µr_c⁰,  s ≥ 0,
xs = µe

has a unique solution for every µ > 0.
To get iterates that are feasible for (QP)_{µ⁺} and (QD)_{µ⁺}, we need search directions ∆x, ∆y and ∆s. First, we introduce a norm-based proximity measure

(4)  δ(x, s; µ) = ½‖µe − xs‖

to measure the closeness of the point (x, y, s) to the central path; we suppose that an initial point (x⁰, y⁰, s⁰) satisfying δ(x⁰, s⁰; µ) < 1 for a certain µ is known. The following system is then used to define ∆x, ∆y and ∆s:

(5)  A∆x = θµr_b⁰,
     Aᵀ∆y − Q∆x + ∆s = θµr_c⁰,
     s∆x + x∆s = (1 − θ)µe − xs.
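As an illustration, the feasibility-step system and the proximity measure (4) can be sketched as follows. The dense-matrix assembly and the function names are ours, and the right-hand side follows the standard perturbed-KKT form assumed in this note; a practical code would exploit the sparsity structure instead:

```python
import numpy as np

# Solve the feasibility-step system (5) for (dx, dy, ds) by assembling
# the full (2n+m) x (2n+m) linear system (dense, for exposition only).
def feasibility_step(A, Q, x, y, s, mu, theta, rb0, rc0):
    m, n = A.shape
    K = np.zeros((2 * n + m, 2 * n + m))
    K[:m, :n] = A                        # A dx                 = theta*mu*rb0
    K[m:m + n, :n] = -Q                  # -Q dx + A^T dy + ds  = theta*mu*rc0
    K[m:m + n, n:n + m] = A.T
    K[m:m + n, n + m:] = np.eye(n)
    K[m + n:, :n] = np.diag(s)           # s dx + x ds = (1-theta)*mu*e - x*s
    K[m + n:, n + m:] = np.diag(x)
    rhs = np.concatenate([theta * mu * rb0,
                          theta * mu * rc0,
                          (1 - theta) * mu * np.ones(n) - x * s])
    sol = np.linalg.solve(K, rhs)
    return sol[:n], sol[n:n + m], sol[n + m:]

def proximity(x, s, mu):
    # delta(x, s; mu) = (1/2) || mu*e - x*s ||, as in (4)
    return 0.5 * np.linalg.norm(mu * np.ones(len(x)) - x * s)
```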
In addition, using the scaled vector v := √(xs/µ) and the scaled directions d_x := v∆x/x and d_s := v∆s/s, we obtain

(7)  ∆x∆s = µd_x d_s.

By using these notations, the linear system in (5) and the proximity measure can be rewritten in scaled form, and we have

x⁺s⁺ = xs + (s∆x + x∆s) + ∆x∆s = (1 − θ)µe + ∆x∆s.

Lemma 3. The iterates (x⁺, s⁺) are strictly feasible if and only if (1 − θ)e + d_x d_s > 0.

Corollary 4. If ‖d_x d_s‖_∞ < 1 − θ, then (x⁺, s⁺) are strictly feasible.

Proof. By the definition of the ∞-norm, ‖d_x d_s‖_∞ = max{|(d_x)_i (d_s)_i| : i = 1, 2, . . . , n}, and from the assumption we have |(d_x)_i (d_s)_i| < 1 − θ for every i. Equivalently, (1 − θ) + (d_x)_i (d_s)_i > 0 for every i. Thus, by Lemma 3, (x⁺, s⁺) are strictly feasible.
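The identity x⁺s⁺ = (1 − θ)µe + ∆x∆s and relation (7) are easy to verify numerically. The sketch below picks illustrative values (not from the paper), chooses ∆x so that the third equation of the Newton system holds, and checks both identities and the ∞-norm sufficient condition:

```python
import numpy as np

# Check x+ s+ = (1-theta)*mu*e + dx*ds, relation (7), and the condition
# ||dxh*dsh||_inf < 1-theta, with v = sqrt(x*s/mu), dxh = v*dx/x, dsh = v*ds/s.
rng = np.random.default_rng(1)
n, theta, mu = 5, 0.2, 1.0
x = rng.uniform(0.5, 1.5, n)
s = rng.uniform(0.5, 1.5, n)
ds = rng.uniform(-0.1, 0.1, n)                 # pick ds freely, then solve for dx:
dx = ((1 - theta) * mu - x * s - x * ds) / s   # s*dx + x*ds = (1-theta)*mu*e - x*s

xp, sp = x + dx, s + ds
print(np.allclose(xp * sp, (1 - theta) * mu + dx * ds))   # True: the identity

v = np.sqrt(x * s / mu)
dxh, dsh = v * dx / x, v * ds / s
print(np.allclose(dx * ds, mu * dxh * dsh))               # True: relation (7)
print(bool(np.max(np.abs(dxh * dsh)) < 1 - theta))        # True for this data
print(bool(np.all(xp * sp > 0)))                          # True: products stay positive
```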
Our aim is to find an upper bound for δ(υ⁺) such that the proximity of the new iterates (x⁺, s⁺) always remains less than 1.

Corollary 5. If w(υ) < 1 − θ, then (x⁺, s⁺) are strictly feasible.
The following lemma, proved by C. Roos in 2015 [10], leads us to an upper bound for δ(υ⁺).
Proof. Since the bound involved is an increasing function of its argument, the result follows from Lemma 6.

Proof. The claim follows by combining Lemma 6 and Lemma 7.

Theorem. Let x⁰ = e, y⁰ = 0, s⁰ = e and µ⁰ = (1/n)(x⁰)ᵀs⁰ = 1, so that δ(υ⁰) = 0 < 1. Then the full-Newton step IIPM algorithm requires at most n log(n/ε) iterations to reach an ε-solution of (QP) and (QD).

Proof. Let xᵏ and sᵏ be the k-th iterates of Algorithm 2. Then (xᵏ)ᵀsᵏ ≤ nµ_k = n(1 − θ)ᵏµ⁰, so (xᵏ)ᵀsᵏ ≤ ε certainly holds if n(1 − θ)ᵏ ≤ ε. Taking logarithms of both sides, and using −log(1 − θ) ≥ θ, this holds if kθ ≥ log(n/ε); with θ of order 1/n this gives the stated iteration bound.
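The iteration bound can be illustrated by simulating the geometric decrease of µ. The snippet below assumes the default update θ = 1/n (an assumption of this sketch, consistent with a bound of order n log(n/ε)) and checks that the simulated iteration count stays below n log(n/ε):

```python
import math

# Count how many updates mu <- (1-theta)*mu are needed until n*mu <= eps,
# with mu_0 = 1 and theta defaulting to 1/n (illustrative choice).
def iterations_needed(n, eps, theta=None):
    theta = theta if theta is not None else 1.0 / n
    k, mu = 0, 1.0
    while n * mu > eps:
        mu *= 1 - theta
        k += 1
    return k

n, eps = 50, 1e-4
print(iterations_needed(n, eps) <= math.ceil(n * math.log(n / eps)))  # True
```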

NUMERICAL IMPLEMENTATION
In this section, we deal with the numerical implementation of the infeasible interior point algorithm applied to convex quadratic and linear problems. The starting point is (x⁰, y⁰, s⁰) = (e, 0, e), with proximity δ(x⁰, s⁰, µ⁰) = 0; x* denotes the optimal solution of (QP), (y*, s*) the optimal solution of (QD), and f_QP(opt) and f_QD(opt) the optimal objective values of (QP) and (QD), respectively. 'Iter' refers to the number of iterations produced by Algorithm 2. Our accuracy parameter is ε = 10⁻⁴. For the update parameter θ, we first used the theoretical strategy, followed by a practical strategy with different values. Linear programming is a special case of convex quadratic programming, so the algorithm applies to the linear test problems without modification.
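A compact sketch of the whole algorithm is given below. This is not the authors' code: the problem data, the value θ = 0.1, and the dense linear solve are all illustrative choices of ours, and the stopping test uses only the duality measure xᵀs (the infeasibility shrinks by the same factor 1 − θ at every step, so it vanishes along with µ):

```python
import numpy as np

# Sketch of the full-Newton-step IIPM loop on a tiny convex QP
#   min c^T x + 0.5 x^T Q x  s.t.  A x = b, x >= 0,
# started from (x0, y0, s0) = (e, 0, e) with mu_+ = (1-theta)*mu.
def iipm(A, b, c, Q, theta=0.1, eps=1e-4, max_iter=500):
    m, n = A.shape
    x, y, s = np.ones(n), np.zeros(m), np.ones(n)
    rb0 = b - A @ x                       # initial primal residual
    rc0 = c - A.T @ y + Q @ x - s         # initial dual residual
    mu = x @ s / n                        # = 1 at the start
    for it in range(1, max_iter + 1):
        if x @ s <= eps:
            break
        K = np.block([
            [A,          np.zeros((m, m)), np.zeros((m, n))],
            [-Q,         A.T,              np.eye(n)],
            [np.diag(s), np.zeros((n, m)), np.diag(x)],
        ])
        rhs = np.concatenate([theta * mu * rb0,
                              theta * mu * rc0,
                              (1 - theta) * mu * np.ones(n) - x * s])
        dx, dy, ds = np.split(np.linalg.solve(K, rhs), [n, n + m])
        x, y, s = x + dx, y + dy, s + ds  # one full Newton (feasibility) step
        mu *= 1 - theta                   # barrier update
    return x, y, s, it

# Illustrative instance: min x1 + 2*x2 + 3*x3 + 0.5*||x||^2, x1+x2+x3 = 2, x >= 0.
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([2.0])
Q = np.eye(3)
c = np.array([1.0, 2.0, 3.0])
x, y, s, it = iipm(A, b, c, Q)
print(np.round(x, 4), it)
```

For this instance the KKT conditions give the exact optimum x* = (1.5, 0.5, 0), which the iterates approach as µ shrinks.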

Example 11. Let
Example 14. Consider the linear problem where: We summarize the number of iterations and the computation time in the two tables below. The numerical results show that the number of iterations of Algorithm 2 depends on the value of the parameter θ. It is quite surprising that θ = 0.9 gives the fewest iterations and the lowest computation time.

CONCLUSION
In this paper, we extended results from [8] and [9] to solve convex quadratic problems (CQP). These methods are based on modified search directions such that only one full Newton step is needed in each main iteration. We can conclude that these methods constitute a valid answer to the algorithm initialization problem: our numerical results are acceptable, whereas obtaining a feasible, centred starting point for feasible algorithms is a challenging task. Finally, we point out that tuning the update parameter θ significantly reduces the number of iterations produced by the algorithm and enables it to reach its real numerical performance. Future research might extend the algorithm to other optimization problems.