COMPLEXITY ANALYSIS OF PRIMAL-DUAL ALGORITHMS FOR THE SEMIDEFINITE LINEAR COMPLEMENTARITY PROBLEM

Abstract. In this paper a primal-dual path-following interior-point algorithm for the monotone semidefinite linear complementarity problem is presented. The algorithm is based on Nesterov-Todd search directions and on a suitable proximity measure for tracing the central path approximately. We provide a unified analysis for both long- and small-update primal-dual algorithms. Finally, the iteration bounds for these algorithms are obtained.


INTRODUCTION
Let S^n be the space of all real symmetric matrices of order n, S^n_+ the cone of positive semidefinite matrices in S^n, and S^n_{++} the cone of symmetric positive definite matrices. The expression X ⪰ 0 (X ≻ 0) means that X ∈ S^n_+ (X ∈ S^n_{++}). Let L : S^n → S^n be a given linear transformation and Q ∈ S^n. The semidefinite linear complementarity problem (SDLCP) is to find a pair of matrices (X, Y) ∈ A such that

(1)  X ⪰ 0, Y ⪰ 0, and XY = 0,

where A = {(X, Y) ∈ S^n × S^n : Y − L(X) = Q} is an affine subset of S^n × S^n. This problem has been the object of many studies in recent years. Its growing importance can be measured by its applications in control theory and in various areas of optimization. It can be viewed as a generalization of the standard linear complementarity problem (LCP); it also includes the geometric monotone semidefinite linear complementarity problem introduced by Kojima et al. [9], which contains the pair of primal-dual semidefinite programming problems (SDP). For more details on the SDLCP we refer the reader to [6-8, 11] and the Ph.D. thesis of Song [20]. Moreover, it turns out that primal-dual path-following interior-point algorithms can efficiently solve many classes of problems, such as linear, semidefinite, and convex quadratic programming, as well as conic and complementarity problems. These algorithms combine polynomial complexity with numerical efficiency (see [1-4, 9, 10, 12-19, 21, 22]). It has been shown that most primal-dual interior-point methods (IPMs) and their analysis extend naturally from linear programming to SDP, and hence to the more general context of the SDLCP.
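As a small numerical illustration (our own toy example, not from the paper), the monotonicity assumed below for L can be checked for a control-flavored Lyapunov-type map L(X) = AX + XA^T with A + A^T positive semidefinite, since then ⟨L(X), X⟩ = Tr((A + A^T)X^2) ≥ 0 for every symmetric X:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
# Toy choice: A = I + skew-symmetric part, so A + A^T = 2I is positive definite
# and L(X) = A X + X A^T is a monotone linear transformation on S^n.
S = rng.standard_normal((n, n))
A = np.eye(n) + 0.5 * (S - S.T)

def L(X):
    return A @ X + X @ A.T

# Monotonicity check: <L(X), X> = Tr(L(X) X) >= 0 for random symmetric X
for _ in range(1000):
    B = rng.standard_normal((n, n))
    X = 0.5 * (B + B.T)                  # random symmetric (possibly indefinite) X
    assert np.trace(L(X) @ X) >= -1e-10  # holds since Tr((A + A^T) X^2) >= 0
```

Note that X need not be positive semidefinite here; monotonicity is required on all of S^n.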
The goal of this paper is to analyze the polynomial complexity of a primal-dual path-following interior-point algorithm for solving the SDLCP. Here we reconsider the analysis used by Peng et al. (Ref. [17]) for SDP and adapt it to the SDLCP case. At each interior-point iteration the algorithm uses a full Nesterov-Todd (NT) step and a suitable proximity measure for tracing the central path approximately. We also provide a unified analysis for both long- and small-update primal-dual algorithms. Finally, the total iteration bounds for these algorithms are obtained. These polynomial complexities are analogous to those of such methods for linear, quadratic, semidefinite, conic, and complementarity problems.
The rest of the paper is organized as follows. In Section 2, the central path with its properties and the Nesterov-Todd search directions are discussed. In Section 3, the algorithm and its complexity analysis are stated. In Section 4, conclusions and directions for future research are given.
Throughout the paper we use the following notation. R^{n×n} is the space of all real n × n matrices and ‖·‖ denotes the Frobenius norm of a matrix. For a given matrix A ∈ R^{n×n}, det A denotes its determinant; if in addition A is nonsingular, then A^{-1} denotes its inverse, and A^T is the transpose of A. For X ∈ S^n_+, X^{1/2} denotes its square root matrix. The trace of a matrix A is the sum of its diagonal entries and is denoted by Tr(A). Recall that for any two matrices A = (a_{ij}) and B = (b_{ij}) in R^{n×n}, their (trace) inner product is defined by ⟨A, B⟩ := Tr(A^T B) = Σ_{i,j} a_{ij} b_{ij}. The identity matrix of order n is denoted by I. Finally, g(t) = O(f(t)) if and only if g(t) ≤ k f(t) for some positive constant k, where f(t) and g(t) are two real-valued functions and t > 0.
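For instance, the identities ⟨A, B⟩ = Tr(A^T B) = Σ_{i,j} a_{ij} b_{ij} and ‖A‖ = ⟨A, A⟩^{1/2} can be checked numerically (a small sketch of our own):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

# Trace inner product equals the entrywise sum of products,
# and the Frobenius norm is induced by this inner product.
inner = np.trace(A.T @ B)
assert np.isclose(inner, np.sum(A * B))
assert np.isclose(np.linalg.norm(A), np.sqrt(np.trace(A.T @ A)))
```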

CENTRAL PATH AND NESTEROV-TODD SEARCH DIRECTIONS FOR SDLCP
In this section we define the central path for the SDLCP, state its properties, and derive the Nesterov-Todd search directions.
The feasible set and the strictly feasible set of (1) are the subsets of S^n × S^n given by

F = {(X, Y) ∈ S^n_+ × S^n_+ : Y − L(X) = Q} and F^0 = {(X, Y) ∈ S^n_{++} × S^n_{++} : Y − L(X) = Q}.

In the sequel we make the following hypotheses.

Hypothesis 1. The linear transformation L is monotone [6, 20], i.e., ⟨L(X), X⟩ ≥ 0 for all X ∈ S^n.

Hypothesis 2. F^0 ≠ ∅.

Under these hypotheses it is shown that the set of solutions of (1) is nonempty and compact [6, 20].
In addition, (1) is equivalent to the following optimization problem:

(2)  (OP)  min { ⟨X, Y⟩ : (X, Y) ∈ F }.

Hence solving (1) is equivalent to finding a minimizer of (2) with objective value zero. Now, to introduce an interior-point method for solving (2), we use the logarithmic barrier approach. So with (2) we associate the following minimization problem:

(3)  (OP)_µ  min { ⟨X, Y⟩ − µ log det(XY) : (X, Y) ∈ F^0 },

where µ > 0 is the barrier parameter and log det(XY) is the logarithmic barrier function associated with (2), [4].
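As a side note (our own sketch; the function name `barrier_objective` is ours), the barrier objective of (3) can be evaluated numerically using the identity ⟨X, Y⟩ = Tr(XY):

```python
import numpy as np

def barrier_objective(X, Y, mu):
    """Logarithmic barrier objective Tr(XY) - mu * log det(XY), for X, Y > 0."""
    sign, logdet = np.linalg.slogdet(X @ Y)
    assert sign > 0, "XY must have positive determinant"
    return np.trace(X @ Y) - mu * logdet

# In the 1x1 case the objective is t - mu*log(t) with t = x*y; its minimizer
# is t = mu, which is exactly the centering condition XY = mu*I.
X = np.array([[2.0]]); Y = np.array([[0.5]])
print(barrier_objective(X, Y, 1.0))   # t = 1 = mu: value 1 - log 1 = 1
```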
Under our hypotheses we deduce the following useful properties of (3).
• For each µ > 0, problem (3) has a unique minimizer (X(µ), Y(µ)) ∈ F^0, and as µ goes to zero every accumulation point of (X(µ), Y(µ)) is a solution of (1), [9].
• The solution (X(µ), Y(µ)) is characterized by the first-order necessary and sufficient optimality conditions (the Karush-Kuhn-Tucker conditions) of (OP)_µ, given by the following nonlinear system of equations:

(4)  Y − L(X) = Q,  XY = µI,  X ≻ 0, Y ≻ 0.

• Hence solving (OP)_µ is equivalent to solving (4).
• The set of solutions {(X(µ), Y(µ)) : µ > 0} of the system (4) defines the central path, which is a smooth curve.
For solving (4) we use primal-dual path-following interior-point algorithms. The basic idea behind these algorithms is to follow the central path approximately and to approach the solution of (4), as µ goes to zero, by using Newton steps. Suppose now that we have (X, Y) ∈ F^0. Applying Newton's method to the system (4), we obtain a class of search directions given by the following linear system of equations:

(5)  ΔY − L(ΔX) = 0,  ΔX Y + X ΔY = µI − XY.

Note that the system (5) has a unique solution (ΔX, ΔY), but unfortunately this solution is not symmetric in general. To remedy this defect, many symmetrization schemes for the second equation in (5) have been proposed in the literature. The suggestion most often used by researchers in this domain is to introduce an invertible scaling matrix P and to consider the linear transformation H_P given by

H_P(M) := (1/2) [ P M P^{-1} + (P M P^{-1})^T ],  M ∈ R^{n×n}.

Then the second equation in the system (5) becomes

(6)  H_P(ΔX Y + X ΔY) = µI − H_P(XY).

We now mention some popular directions used in IPMs. For P = I we get the Alizadeh-Haeberly-Overton (AHO) direction, and for P = Y^{1/2} or P = X^{-1/2} we obtain the so-called HKM direction.
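As an illustration (a minimal sketch of the standard definition, not code from the paper), the symmetrization operator H_P and its basic properties can be checked numerically:

```python
import numpy as np

def H(P, M):
    """Symmetrization H_P(M) = (P M P^{-1} + (P M P^{-1})^T) / 2."""
    PMP = P @ M @ np.linalg.inv(P)
    return 0.5 * (PMP + PMP.T)

rng = np.random.default_rng(2)
M = rng.standard_normal((3, 3))
P = np.eye(3) + 0.1 * rng.standard_normal((3, 3))   # any invertible scaling

assert np.allclose(H(P, M), H(P, M).T)              # H_P(M) is always symmetric
# For P = I (the AHO choice), H_I(M) is simply the symmetric part of M.
assert np.allclose(H(np.eye(3), M), 0.5 * (M + M.T))
```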
Here we use the Nesterov-Todd (NT) direction, where P is given by

(7)  P = D^{-1/2},  with  D = X^{1/2} (X^{1/2} Y X^{1/2})^{-1/2} X^{1/2}.

The role of the symmetric positive definite matrix D is to rescale the two matrices X and Y to the same symmetric positive definite matrix V, given by

V := (1/√µ) D^{-1/2} X D^{-1/2} = (1/√µ) D^{1/2} Y D^{1/2}.

The scaled Newton directions are then

(8)  D_X := (1/√µ) D^{-1/2} ΔX D^{-1/2},  D_Y := (1/√µ) D^{1/2} ΔY D^{1/2}.

Hence, using (7) and (8), the system (6) can be written as

(9)  D_Y − L̄(D_X) = 0,  D_X + D_Y = V^{-1} − V,

where L̄ is given by

(10)  L̄(M) := D^{1/2} L(D^{1/2} M D^{1/2}) D^{1/2},  M ∈ S^n.

The linear transformation L̄ is also monotone on S^n. Under our hypotheses, the new linear system (9) has a unique symmetric solution (D_X, D_Y). Furthermore, these directions satisfy

(11)  Tr(D_X D_Y) ≥ 0.

This inequality shows that the Newton directions in the primal-dual space are not orthogonal, in contrast to the SDP case. This makes the complexity analysis slightly different from that of SDP. As mentioned before, the proximity measure used here is defined as

(12)  δ(XY, µ) := (1/2) ‖V − V^{-1}‖.

One can easily verify that δ(XY, µ) = 0 if and only if XY = µI, that is, if and only if (X, Y) lies on the central path.

Remark 1. The unique solution of the Sylvester equation stated in (9) is given by (13), and using (12) we deduce (14).
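The NT scaling can likewise be verified numerically; `psd_sqrt` and `nt_scaling` are our own helper names for the standard formulas in (7):

```python
import numpy as np

def psd_sqrt(M):
    """Square root of a symmetric positive definite matrix via eigendecomposition."""
    w, Q = np.linalg.eigh(M)
    return Q @ np.diag(np.sqrt(w)) @ Q.T

def nt_scaling(X, Y):
    """NT scaling D = X^{1/2} (X^{1/2} Y X^{1/2})^{-1/2} X^{1/2}; satisfies D Y D = X."""
    Xh = psd_sqrt(X)
    mid = psd_sqrt(Xh @ Y @ Xh)
    return Xh @ np.linalg.inv(mid) @ Xh

rng = np.random.default_rng(3)
B = rng.standard_normal((3, 3))
X = B @ B.T + np.eye(3)                    # random X > 0
C = rng.standard_normal((3, 3))
Y = C @ C.T + np.eye(3)                    # random Y > 0
mu = np.trace(X @ Y) / 3

D = nt_scaling(X, Y)
assert np.allclose(D @ Y @ D, X)           # defining property of the NT scaling
Dh = psd_sqrt(D)
Dhi = np.linalg.inv(Dh)
V1 = Dhi @ X @ Dhi / np.sqrt(mu)           # V computed from the primal side
V2 = Dh @ Y @ Dh / np.sqrt(mu)             # V computed from the dual side
assert np.allclose(V1, V2)                 # X and Y rescale to the same matrix V
```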

THE ALGORITHM AND ITS COMPLEXITY ANALYSIS
In this section we present the algorithm, study its complexity, and compute the total number of iterations it produces. We start by stating some technical results needed for the analysis of the algorithm. Let A be a given matrix in R^{n×n}. We decompose A into its symmetric part Ā = (1/2)(A + A^T) and its skew-symmetric part Ã = (1/2)(A − A^T), so that A = Ā + Ã.

Lemma 2. [14, Lemma 2.1] If A is a matrix whose symmetric part Ā is positive definite, then det Ā ≤ |det A|, and equality holds if and only if A = A^T.
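Before the analysis, it may help to see the generic path-following loop on a deliberately simplified instance (entirely our own sketch, not the algorithm statement of the paper): taking L to be the identity map, which is monotone, and Q diagonal, the SDLCP decouples into scalar problems y_i = x_i + q_i, x_i y_i = 0, and the outer barrier update µ ← (1 − θ)µ with inner damped Newton steps monitored by the proximity (12) reads:

```python
import numpy as np

def solve_diagonal_sdlcp(q, mu=1.0, theta=0.5, tau=1.0, eps=1e-8):
    """Toy path-following for the diagonal SDLCP with L = identity: y = x + q."""
    n = q.size
    x = np.ones(n)        # strictly feasible start: x = 1, y = 1 + q
    y = x + q             # requires 1 + q > 0 componentwise
    assert np.all(y > 0)
    while n * mu > eps:
        mu *= (1.0 - theta)                       # outer step: barrier update
        while True:                               # inner centering steps
            v = np.sqrt(x * y / mu)               # diagonal of the scaled point V
            delta = 0.5 * np.linalg.norm(v - 1.0 / v)   # proximity (12)
            if delta <= tau:
                break
            dx = (mu - x * y) / (x + y)           # Newton step; dy = dx since L = I
            alpha = 1.0                           # damping to keep strict feasibility
            while np.any(x + alpha * dx <= 0) or np.any(y + alpha * dx <= 0):
                alpha *= 0.5
            x = x + alpha * dx
            y = y + alpha * dx
    return x, y

q = np.array([1.0, -0.5, 0.0])
x, y = solve_diagonal_sdlcp(q)
# On exit, (x, y) is an approximately complementary pair: x * y is near zero
# and y = x + q, x >= 0, y >= 0 are maintained throughout.
```

The full matrix algorithm replaces the scalar Newton step by the solution of the scaled system (9); the outer/inner structure and the roles of θ and τ are the same.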
3.2. Complexity analysis. Define the symmetric matrix D̄ as in (15). Since D̄ is symmetric, its eigenvalues λ_i(D̄) are real for all i ∈ I = {1, 2, . . ., n}, and due to (11) there exist two nonnegative numbers σ and σ_− (Ref. [18]) satisfying the bounds stated below.

Proof. For the left-hand side of the first statement, the claim is clear from (11) and (15). For the right-hand side, since D̄ is symmetric it is diagonalizable, so there exists an orthogonal matrix Q_D such that Q_D^T D̄ Q_D is diagonal; combining this with the preceding relations leads to the stated bound. The last statement follows from (11) and (15); then, using (10) and (14), we deduce the result. This completes the proof.
Next we want to estimate the quantity δ_+^2 − δ^2, i.e., the effect of a damped Newton step on the proximity, where δ_+ denotes the proximity after a step of size α. First we give a result for δ_+.
Lemma 5. Suppose that the step of size α is strictly feasible. Then the following bound holds, with D̄ defined in (15).
Proof. We first deal with the quantity Tr(X_+ Y_+). Substituting D_X, D_Y, D_V by their values and using (11), we deduce the first bound. For the quantity Tr((X_+)^{-1} (Y_+)^{-1}), a direct calculation using Lemma 3.1 gives the second bound. Now, using the facts that σ ≤ 2δ^2 and σ_− ≤ σ, we deduce the stated inequalities. This completes the proof.
Theorem 8. If α = 1, then from the above inequality in Theorem 4.2 we get the stated estimate; if the step size α is chosen below the threshold involving σ, then it is strictly feasible, and by Theorem 3.2 the claimed decrease follows. This completes the proof of the theorem.

Now we are ready to compute the iteration bounds of the algorithm.

Proof. By (17) in Lemma 3.5, after the update µ^+ := (1 − θ)µ of the barrier parameter, every damped Newton step decreases δ^2 by at least 5/24. Hence after at most ⌈(24/5)(2τ + θ√n)^2⌉ iterations the proximity will have passed the threshold value τ. This completes the proof of the lemma.
Consequently we have the following theorem.
Theorem 11. If τ ≥ 1, then the total number of iterations produced by the primal-dual algorithm is not more than

⌈(24/5)(2τ + θ√n)^2 (1/θ) log(nµ^0/ε)⌉.

Meanwhile, if θ = n^{-1/2} and τ = 1, we obtain the best well-known iteration bound for small-update primal-dual algorithms, namely O(√n log(nµ^0/ε)). This completes the proof of the theorem.

CONCLUSION AND FUTURE RESEARCH
In this paper we have extended the complexity analysis of a primal-dual path-following algorithm designed for SDP to the monotone SDLCP. We have succeeded in giving a unified analysis of its polynomial complexity. For the long-update algorithm we obtain an O(n log(nµ^0/ε)) iteration bound; this complexity is similar to that of comparable primal-dual IPMs. For the small-update algorithm the iteration bound is O(√n log(nµ^0/ε)), which is the best known iteration bound. Finally, an important topic for further research is the numerical implementation of this algorithm and its extension to other optimization problems.