<!DOCTYPE html>
<html lang="en">
<head>
<script>
  MathJax = { 
    tex: {
		    inlineMath: [['\\(','\\)']]
	} }
</script>
<script type="text/javascript" src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-chtml.js">
</script>
<meta name="generator" content="plasTeX" />
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1" />
<title>Complexity analysis of primal-dual algorithms for the semidefinite linear complementarity problem</title>
<link rel="stylesheet" href="/var/www/clients/client1/web1/web/files/jnaat-files/journals/1/articles/styles/theme-white.css" />
</head>

<body>

<div class="wrapper">

<div class="content">
<div class="content-wrapper">


<div class="main-text">

<div class="titlepage">
<h1>Complexity analysis of primal-dual algorithms for the semidefinite linear complementarity problem</h1>
<p class="authors">
<span class="author">Mohamed Achache\(^{\ast }\), Naima Boudiaf\(^{\S }\)</span>
</p>
<p class="date">December 25, 2010</p>
</div>
<p>\(^{\ast }\)Department of Mathematics, Faculty of Science, University Ferhat Abbas of Setif, 19000, Algeria, Laboratory of Fundamental and Numerical Mathematics,<br />e-mail: <span class="ttfamily">achache_m@yahoo.fr.</span> </p>
<p>\(^{\S }\)Department of Mathematics, Faculty of Science, University El Hadj Lakhdar, Batna 05000, Algeria, e-mail: <span class="ttfamily">benhassinen13@yahoo.fr.</span> </p>

<div class="abstract"><p> In this paper a primal-dual path-following interior-point algorithm for the monotone semidefinite linear complementarity problem is presented. The algorithm is based on Nesterov-Todd search directions and on a suitable proximity measure for tracing the central path approximately. We provide a unified analysis for both long and small-update primal-dual algorithms. Finally, the iteration bounds for these algorithms are obtained. </p>
<p><b class="bf">MSC.</b> 90C30, 90C33 </p>
<p><b class="bf">Keywords.</b> Semidefinite linear complementarity problems, interior point methods, long and small-update primal-dual algorithms, polynomial complexity. </p>
</div>
<h1 id="a0000000002">1 Introduction</h1>
<p>Let \(\mathbb {S}^{n}\) be the space of all real symmetric matrices of order \(n\), let \(\mathbb {S}_{+}^{n}\) be the cone of positive semidefinite matrices in \(\mathbb {S}^{n}\), and let \(\mathbb {S}_{++}^{n}\) be the cone of all symmetric positive definite matrices. The expression \(X\succeq 0\) (\(X\succ 0\)) means that \(X\in \mathbb {S}_{+}^{n}\) (\(X\in \mathbb {S}_{++}^{n}\)). Let \(L:\mathbb {S}^{n}\longrightarrow \mathbb {S}^{n}\) be a given linear transformation and \(Q\in \mathbb {S}^{n}\). The semidefinite linear complementarity problem (SDLCP) is to find a pair of matrices \((X,Y)\in \mathcal{A}\) such that: </p>
<div class="equation" id="a0000000003">
<p>
  <div class="equation_content">
    \begin{equation}  X\succeq 0,\text{ }Y\succeq 0,\text{ and }~ XY=0, \end{equation}
  </div>
  <span class="equation_label">1</span>
</p>
</div>
<p> where </p>
<div class="displaymath" id="a0000000004">
  \[  \mathcal{A}=\left\{  (X,Y)\in \mathbb {S}^{n}\times \mathbb {S}^{n}:Y-L(X)=Q\right\}   \]
</div>
<p> is an affine subset of \(\mathbb {S}^{n}.\) </p>
<p>This problem has been the object of much research in recent years. Its growing importance can be measured by its many applications in control theory and in various areas of optimization. It can be viewed as a generalization of the standard linear complementarity problem (LCP), and it also includes the geometric monotone semidefinite linear complementarity problem introduced by Kojima et al. <span class="cite">
	[
	<a href="#KojShiHar97" >9</a>
	]
</span>, which contains the pair of primal-dual semidefinite programming problems (SDP). For more details on (SDLCP) we refer the reader to the references [6-8,&#160;11] and the PhD thesis of Song [20]. Moreover, it turns out that primal-dual path-following interior point algorithms can efficiently solve many problems, such as linear, semidefinite and convex quadratic programming, conic and complementarity problems. These algorithms have polynomial complexity and numerical efficiency (see [1-4,&#160;9,&#160;10,&#160;12-19,&#160;21,&#160;22]). It has been shown that most primal-dual (IPMs) algorithms and their analysis extend naturally from linear programming to (SDP), and hence to the more general context of (SDLCP). </p>
<p>The goal of this paper is to analyze the polynomial complexity of a primal-dual path-following interior-point algorithm for solving (SDLCP). Here, we reconsider the analysis used by Peng <i class="itshape">et al.</i> (Ref. [17]) for (SDP) and adapt it to the (SDLCP) case. At each interior point iteration the algorithm uses a full Nesterov-Todd (NT) step and a suitable proximity measure for tracing the central path approximately. We also provide a unified analysis for both long and small-update primal-dual algorithms. Finally, the total iteration bounds for these algorithms are obtained. These polynomial complexities are analogous to those of such methods for linear, quadratic, semidefinite, conic and complementarity problems. </p>
<p>The rest of the paper is organized as follows. In section 2, the central path with its properties and the Nesterov-Todd search directions are discussed. In section 3, the algorithm and its complexity analysis are stated. In section 4, a conclusion and directions for future research are given. </p>
<p>Throughout the paper we use the following notation. \(\mathbb {R}^{n\times n}\) is the space of all real \(n\times n\) matrices and \(\left\Vert \cdot \right\Vert \) denotes the Frobenius norm of matrices. For a given matrix \(A\in \mathbb {R}^{n\times n}\), \(\det A\) denotes its determinant; if in addition \(A\) is nonsingular then \(A^{-1}\) denotes its inverse, whereas \(A^{T}\) is the transpose of \(A\). For \(X\in \mathbb {S}_{+}^{n}\), \(X^{1/2}\) represents its square root matrix. The trace of a matrix \(A\) is the sum of its diagonal entries and is denoted by \(\operatorname {Tr}(A)\). Recall that for any two matrices \(A=(a_{ij})\) and \(B=(b_{ij})\) in \(\mathbb {R}^{n\times n}\), their inner product (trace) is defined by \(\langle A,B\rangle :=\operatorname {Tr}(A^{T}B)={\textstyle \sum _{i,j}} a_{ij}b_{ij}\). The identity matrix of order \(n\) is denoted by \(I\). Finally, \(g(t)=O(f(t))\) if and only if \(g(t)\leq kf(t)\) for some positive constant \(k\), where \(f(t)\) and \(g(t)\) are two real valued functions and \(t{\gt}0\). </p>
<h1 id="a0000000005">2 Central path and Nesterov-Todd search directions for SDLCP</h1>
<p>In this section we define the central path for SDLCP and its properties and we determine Nesterov-Todd search directions. </p>
<p>The feasible set and the strictly feasible set of (1) are the subsets of \(\mathbb {S}^{n}\times \mathbb {S}^{n}\) given by: </p>
<div class="displaymath" id="a0000000006">
  \begin{align*}  \mathcal{F} &  =\left\{  \left( X,Y\right) \in \mathcal{A}:X\succeq 0\text{, }Y\succeq 0\right\}  & \\ \mathcal{F}^{0} &  =\left\{  \left( X,Y\right) \in \mathcal{F}:X\succ 0\text{, }Y\succ 0\right\}  . & \end{align*}
</div>
<p>In the sequel we make the following hypotheses. </p>
<p><b class="bfseries">Hypothesis 1</b>. The linear transformation \(L\) is monotone [6,&#160;20], i.e. </p>
<div class="displaymath" id="a0000000007">
  \[  \left\langle L(X),X\right\rangle \, \geq 0,\text{ for all }X\in \mathbb {S}^{n}; \]
</div>
<p><b class="bfseries">Hypothesis 2.</b> \(\mathcal{F}^{0}\neq \emptyset \).<br />Under our hypotheses it is shown that the set </p>
<div class="displaymath" id="a0000000008">
  \[  S_{cp}=\left\{  \left( X,Y\right) \in \mathcal{F}:XY=0\right\}   \]
</div>
<p> of solutions of (1) is compact and nonempty [6,&#160;20]. </p>
<p>In addition (1) is equivalent to the following optimization problem: </p>
<div class="equation" id="a0000000009">
<p>
  <div class="equation_content">
    \begin{equation}  \text{(OP)}\quad \min _{X,Y}\left[ \operatorname {Tr}(XY):(X,Y)\in \mathcal{A},\ X\succeq 0,\, Y\succeq 0\right] . \end{equation}
  </div>
  <span class="equation_label">2</span>
</p>
</div>
<p> Hence solving (1) is equivalent to finding a minimizer of (2) whose objective value is zero. </p>
<p>To introduce an interior point method for solving (2) we use the logarithmic barrier approach. Thus we associate with (2) the following minimization problem: </p>
<div class="equation" id="a0000000010">
<p>
  <div class="equation_content">
    \begin{equation}  (OP)_{\mu }\quad \min _{X,Y}\left[ \operatorname {Tr}(XY)-\mu \log \det (XY):(X,Y)\in \mathcal{A},\ X\succ 0,\, Y\succ 0\right] , \end{equation}
  </div>
  <span class="equation_label">3</span>
</p>
</div>
<p> where \(\mu {\gt}0\, \)is the barrier parameter and \(\log \det (XY)\, \, \)is the logarithmic barrier function associated with (2), [4]. </p>
<p>Under our hypothesis we deduce the following useful properties for (3). </p>
<ul class="itemize">
  <li><p>(OP)\(_{\mu }\) is a convex optimization problem. </p>
</li>
  <li><p>(OP)\(_{\mu }\) has a unique solution (\(X(\mu ),Y(\mu ))\) for all \(\mu {\gt}0\). </p>
</li>
  <li><p>If \(\mathcal{A}\neq \emptyset \), then \(\lim _{\mu \rightarrow 0}(X(\mu ),\, Y(\mu ))=(X^{\ast },Y^{\ast })\) is a solution of (1), [9]. </p>
</li>
  <li><p>The solution \((X(\mu ),Y(\mu ))\) is characterized by the first-order necessary and sufficient optimality conditions (the Karush-Kuhn-Tucker conditions) of (OP)\(_{\mu }\), given by the following nonlinear system of equations: </p>
<div class="equation" id="a0000000011">
<p>
  <div class="equation_content">
    \begin{equation}  \left\{  \begin{array}[l]{l}Y-L(X)=Q,\\ XY=\mu I,\, \, X\succ 0,Y\succ 0. \end{array} \right. \end{equation}
  </div>
  <span class="equation_label">4</span>
</p>
</div>
</li>
  <li><p>Hence solving (OP)\(_{\mu }\) is equivalent to solving (4). </p>
</li>
  <li><p>The set </p>
<div class="displaymath" id="a0000000012">
  \[  \left\{  (X(\mu ),Y(\mu ))\, :\mu {\gt}0\right\}   \]
</div>
<p> of solutions of the system (4) defines the <i class="itshape">central-path</i> which is a smooth curve. </p>
</li>
</ul>
<p>For solving (4) we use primal-dual path-following (IPMs) algorithms. The basic idea behind these algorithms is to follow the central path approximately and to approach the solution of (4), as \(\mu \) goes to zero, by using Newton steps. Suppose now that we have \((X,Y)\in \mathcal{F}^{0}\). Applying Newton's method to the system (4) we obtain a class of search directions given by the following linear system of equations: </p>
<div class="equation" id="a0000000013">
<p>
  <div class="equation_content">
    \begin{equation}  \left\{  \begin{array}[l]{l}L(\Delta X)=\Delta Y,\\ X\Delta Y+\Delta X\, Y=\mu I-XY. \end{array} \right. \end{equation}
  </div>
  <span class="equation_label">5</span>
</p>
</div>
<p> Note that the system (5) has a unique solution \((\Delta X,\Delta Y)\), but unfortunately \(\Delta X\) is not symmetric in general. To remedy this defect, many symmetrization schemes for the second equation in (5) have been proposed in the literature. Most of them introduce an invertible scaling matrix \(P\) and consider the linear transformation \(H_{P}\) given by: </p>
<div class="displaymath" id="a0000000014">
  \[  H_{P}(M)=\tfrac {1}{2}(PMP^{-1}+P^{-T}M^{T}P^{T})\, \, \text{for a given }M~ \text{in }\mathbb {R}^{n\times n}.  \]
</div>
<p> Applying \(H_{P}\) to the second equation, the system (5) becomes: </p>
<div class="equation" id="a0000000015">
<p>
  <div class="equation_content">
    \begin{equation}  \left\{  \begin{array}[l]{l}L(\Delta X)=\Delta Y,\\ P(X\Delta Y+\Delta XY)P^{-1}+P^{-T}(Y\Delta X+\Delta YX)P^{T}=\\ \quad \quad =2\mu I-PXYP^{-1}-P^{-T}YXP^{T}. \end{array} \right. \end{equation}
  </div>
  <span class="equation_label">6</span>
</p>
</div>
<p>Now we recall some popular directions used in IPMs. For \(P=I\) we get the Alizadeh-Haeberly-Overton (A.H.O) direction, while \(P=Y^{1/2}\) or \(P=X^{1/2}\) defines the so-called (H.K.M) direction. </p>
<p>Here we use the Nesterov-Todd (NT) direction, where \(P\) is given by \(P=D^{-1/2}\) with \(D=X^{1/2}(X^{1/2}YX^{1/2})^{-1/2}X^{1/2}=Y^{-1/2}(Y^{1/2}XY^{1/2})^{1/2}Y^{-1/2}.\) </p>
<p>The role of the symmetric positive definite matrix \(D\) is to rescale the two matrices \(X\) and \(Y\) to the same symmetric positive definite matrix \(V\) given by: </p>
<div class="equation" id="a0000000016">
<p>
  <div class="equation_content">
    \begin{equation}  V=D^{-\tfrac {1}{2}}XD^{-\tfrac {1}{2}}=D^{\tfrac {1}{2}}YD^{\tfrac {1}{2}}. \end{equation}
  </div>
  <span class="equation_label">7</span>
</p>
</div>
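<p>The scaling matrix \(D\) and the identity (7) can be checked numerically. The following is a minimal sketch of our own (assuming Python with numpy; the function names are ours, not part of the paper):</p>

```python
import numpy as np

def sqrtm_spd(A):
    # symmetric square root of a symmetric positive definite matrix,
    # via its eigendecomposition
    w, U = np.linalg.eigh(A)
    return U @ np.diag(np.sqrt(w)) @ U.T

def nt_scaling(X, Y):
    # D = X^{1/2} (X^{1/2} Y X^{1/2})^{-1/2} X^{1/2}
    Xh = sqrtm_spd(X)
    M = sqrtm_spd(Xh @ Y @ Xh)
    return Xh @ np.linalg.solve(M, Xh)
```

<p>By construction \(DYD=X\), so \(D^{-1/2}XD^{-1/2}=D^{1/2}YD^{1/2}\), which is exactly the matrix \(V\) of (7).</p>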
<p>Then the scaled Newton directions are: </p>
<div class="equation" id="a0000000017">
<p>
  <div class="equation_content">
    \begin{equation}  D_{X}=D^{-\tfrac {1}{2}}\Delta XD^{-\tfrac {1}{2}}\, ,~ \, D_{Y}=D^{\tfrac {1}{2}}\Delta YD^{\tfrac {1}{2}}. \end{equation}
  </div>
  <span class="equation_label">8</span>
</p>
</div>
<p>Hence using (7) and (8) the system (6) can be written as: </p>
<div class="equation" id="a0000000018">
<p>
  <div class="equation_content">
    \begin{equation}  \left\{  \begin{array}[l]{l}\bar{L}(D_{X})=D_{Y},\\ VD_{V}+D_{V}V=2\mu I-2V^{2}, \end{array} \right. \end{equation}
  </div>
  <span class="equation_label">9</span>
</p>
</div>
<p> where </p>
<div class="equation" id="a0000000019">
<p>
  <div class="equation_content">
    \begin{equation}  D_{V}=D_{X}+D_{Y}\end{equation}
  </div>
  <span class="equation_label">10</span>
</p>
</div>
<p> and \(\bar{L}\, \)is given by: </p>
<div class="displaymath" id="a0000000020">
  \[  \bar{L}(D_{X})=D^{\tfrac {1}{2}}L(D^{\tfrac {1}{2}}D_{X}D^{\tfrac {1}{2}})D^{\tfrac {1}{2}}.  \]
</div>
<p>The linear transformation \(\bar{L}\) is also monotone on \(\mathbb {S}^{n}\). Under our hypotheses, the new linear system (9) has a unique symmetric solution \((D_{X},D_{Y})\). Furthermore, these directions satisfy: </p>
<div class="equation" id="a0000000021">
<p>
  <div class="equation_content">
    \begin{equation}  \operatorname {Tr}(D_{Y}D_{X})=\operatorname {Tr}(D_{X}D_{Y})\geq 0\text{.}\end{equation}
  </div>
  <span class="equation_label">11</span>
</p>
</div>
<p>This inequality shows that the Newton directions in the primal-dual space are not orthogonal, in contrast to the SDP case. This makes the complexity analysis slightly different from that of SDP. As mentioned before, the proximity measure used here is defined as: </p>
<div class="equation" id="a0000000022">
<p>
  <div class="equation_content">
    \begin{equation}  \delta (XY,\mu )=\tfrac {1}{2}\left\Vert \tfrac {1}{\sqrt{\mu }}V-\sqrt{\mu }V^{-1}\right\Vert . \end{equation}
  </div>
  <span class="equation_label">12</span>
</p>
</div>
<p> One can easily verify that: </p>
<div class="displaymath" id="a0000000023">
  \[  \delta ^{2}(XY,\mu )=\tfrac {1}{4}\left(\tfrac {1}{\mu }\operatorname {Tr}(XY)-2n+\mu \operatorname {Tr}((XY)^{-1})\right).  \]
</div>
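<p>The identity above gives a direct way to evaluate the proximity without forming \(V\). A minimal numerical sketch of our own (assuming numpy):</p>

```python
import numpy as np

def proximity(X, Y, mu):
    # delta(XY, mu) via the trace identity
    # delta^2 = (1/4)(Tr(XY)/mu - 2n + mu*Tr((XY)^{-1}))
    n = X.shape[0]
    XY = X @ Y
    val = np.trace(XY)/mu - 2*n + mu*np.trace(np.linalg.inv(XY))
    return 0.5*np.sqrt(max(val, 0.0))   # guard against roundoff below zero
```

<p>On the central path (\(XY=\mu I\)) the proximity vanishes, and it grows as \(XY\) moves away from \(\mu I\).</p>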
<p><div class="remark_thmwrapper " id="a0000000024">
  <div class="remark_thmheading">
    <span class="remark_thmcaption">
    Remark
    </span>
    <span class="remark_thmlabel">1</span>
  </div>
  <div class="remark_thmcontent">
  <p>The unique solution of the Sylvester equation: </p>
<div class="displaymath" id="a0000000025">
  \[  VD_{V}+D_{V}V=2\mu I-2V^{2},  \]
</div>
<p> stated in (9) is </p>
<div class="equation" id="a0000000026">
<p>
  <div class="equation_content">
    \begin{equation}  D_{V}=\mu V^{-1}-V, \end{equation}
  </div>
  <span class="equation_label">13</span>
</p>
</div>
<p> and using (12), we deduce </p>
<div class="equation" id="a0000000027">
<p>
  <div class="equation_content">
    \begin{equation}  \left\Vert D_{V}\right\Vert ^{2}=4\mu \delta ^{2}\text{.} \end{equation}
  </div>
  <span class="equation_label">14</span>
</p>
</div>

  </div>
</div> </p>
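<p>Both claims of the remark are easy to verify numerically. A small sketch of our own (assuming numpy; the matrix \(V\) below is an arbitrary symmetric positive definite example):</p>

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
V = A @ A.T + 5*np.eye(5)          # a symmetric positive definite V
mu = 2.0
Vinv = np.linalg.inv(V)

D_V = mu*Vinv - V                  # the claimed solution (13)
lhs = V @ D_V + D_V @ V            # left side of the Sylvester equation
rhs = 2*mu*np.eye(5) - 2*V @ V     # right side, as in (9)

# proximity (12): delta = (1/2) || V/sqrt(mu) - sqrt(mu) V^{-1} ||_F
delta = 0.5*np.linalg.norm(V/np.sqrt(mu) - np.sqrt(mu)*Vinv)
```

<p>Algebraically, \(D_{V}=\mu V^{-1}-V=-\sqrt{\mu }\big(\tfrac {1}{\sqrt{\mu }}V-\sqrt{\mu }V^{-1}\big)\), so \(\left\Vert D_{V}\right\Vert =2\sqrt{\mu }\,\delta \), which is exactly (14).</p>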
<h1 id="a0000000028">3 The algorithm and its complexity analysis</h1>
<p>In this section we present the algorithm, study its complexity, and compute the total number of iterations it produces. We start by stating some technical results needed for the analysis of the algorithm. Let \(A\) be a given matrix in \(\mathbb {R}^{n\times n}\). We decompose \(A\) into its symmetric part \(\overline{A}\) and its skew-symmetric part \(\widetilde{A}\). Thus we have </p>
<div class="displaymath" id="a0000000029">
  \[  A=\overline{A}+\widetilde{A},\, \, \text{ }\overline{A}=\tfrac {A+A^{T}}{2},\text{ }\, \, \widetilde{A}=\tfrac {A-A^{T}}{2}.  \]
</div>
<p><div class="lemma_thmwrapper " id="a0000000030">
  <div class="lemma_thmheading">
    <span class="lemma_thmcaption">
    Lemma
    </span>
    <span class="lemma_thmlabel">2</span>
  </div>
  <div class="lemma_thmcontent">
  <p><span class="cite">
	[
	<a href="#PenRooTer02" >14</a>
	, 
	Lemma 2.1
	]
</span> If \(A\) is a positive definite matrix, then </p>
<div class="displaymath" id="a0000000031">
  \[  \operatorname {Tr}A^{-1}\leq \operatorname {Tr}(\overline{A}^{-1})  \]
</div>
<p> with equality if and only if \(A=A^{T}\). </p>

  </div>
</div> </p>
<p><div class="lemma_thmwrapper " id="a0000000032">
  <div class="lemma_thmheading">
    <span class="lemma_thmcaption">
    Lemma
    </span>
    <span class="lemma_thmlabel">3</span>
  </div>
  <div class="lemma_thmcontent">
  <p><span class="cite">
	[
	<a href="#PenRooTer02" >14</a>
	, 
	Lemma 2.2
	]
</span> Suppose that \(A_{1}\) and \(A_{2}\) are both symmetric positive definite and \(\alpha ,\beta \geq 0\) with \(\alpha +\beta =1\). Then </p>
<div class="displaymath" id="a0000000033">
  \[  \operatorname {Tr}(\alpha A_{1}+\beta A_{2})^{-1}\leq \alpha \operatorname {Tr}(A_{1})^{-1}+\beta \operatorname {Tr}(A_{2})^{-1} \]
</div>
<p> with equality only if \(\alpha =1\) or \(\beta =1\). </p>

  </div>
</div> </p>
<p>Now the generic form of this algorithm is stated as follows. </p>
<h2 id="a0000000034">3.1 The Algorithm</h2>
<p> (Generic primal-dual algorithm for (SDLCP)) </p>
<p> <br /></p>
<p><i class="itshape">Input:</i> </p>
<p>an accuracy parameter \(\epsilon ;\) </p>
<p>an update parameter \(\theta ,0\) \({\lt}\theta {\lt}1;\) </p>
<p>a proximity parameter \(\tau ;\) </p>
<p>a feasible step size \(\alpha ;\) </p>
<p>a strictly feasible point \((X^{0},Y^{0})\, \)and \(\mu ^{0}= {\tfrac 1n\operatorname {Tr}(X^{0}Y^{0})}\) s.t. \(\delta (X^{0}Y^{0},\mu ^{0})\leq \tau ;\) </p>
<p><i class="itshape">begin</i> </p>
<p>set \(X:=X^{0};\; Y:=Y^{0};\; \mu :=\mu ^{0};\) </p>
<p><i class="itshape">while</i> \(n\mu \geq \epsilon \, \, \)<i class="itshape">do</i> </p>
<p>&#8195;&#8195;<i class="itshape">begin </i> </p>
<p>&#8195;&#8195;&#8195;\(\mu :=(1-\theta )\mu ;\) </p>
<p>&#8195;&#8195;&#8195;<i class="itshape">while</i> \(\delta (XY,\mu )\geq \tau ~ \) <i class="itshape">do</i> </p>
<p>&#8195;&#8195;&#8195;&#8195;<i class="itshape">begin</i> </p>
<p>&#8195;&#8195;&#8195;&#8195;&#8195;compute \((\Delta X,\Delta Y)\, \; \)via (9); </p>
<p>&#8195;&#8195;&#8195;&#8195;&#8195;set \(\; X:=X+\alpha \Delta X;\; Y:=Y+\alpha \Delta Y;\) </p>
<p>&#8195;&#8195;&#8195;&#8195;<i class="itshape">end</i> </p>
<p><i class="itshape">end</i> </p>
<p><i class="itshape">end.</i> </p>
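<p>The generic scheme above can be sketched end to end. The following is our own illustrative implementation (assuming Python with numpy), not part of the paper: the transformation \(L\) is passed as the \(n^{2}\times n^{2}\) matrix acting on the row-major vectorization of \(X\), and the system (9) is solved by noting that \(D_{X}+D_{Y}=D_{V}\) and \(\bar{L}(D_{X})=D_{Y}\) give \((I+\bar{L})D_{X}=D_{V}\):</p>

```python
import numpy as np

def sqrtm_spd(A):
    # symmetric square root of a symmetric positive definite matrix
    w, U = np.linalg.eigh(A)
    return U @ np.diag(np.sqrt(w)) @ U.T

def proximity(X, Y, mu):
    # delta^2 = (1/4)(Tr(XY)/mu - 2n + mu*Tr((XY)^{-1})), cf. (12)
    n = X.shape[0]
    XY = X @ Y
    val = np.trace(XY)/mu - 2*n + mu*np.trace(np.linalg.inv(XY))
    return 0.5*np.sqrt(max(val, 0.0))

def nt_direction(X, Y, L_mat, mu):
    # Solve (9) for the NT direction; L_mat is the n^2 x n^2 matrix of L
    # acting on the row-major vectorization of X.
    n = X.shape[0]
    Xh = sqrtm_spd(X)
    M = sqrtm_spd(Xh @ Y @ Xh)
    D = Xh @ np.linalg.solve(M, Xh)          # NT scaling matrix
    Dh = sqrtm_spd(D)
    Dih = np.linalg.inv(Dh)
    V = Dih @ X @ Dih                        # scaled matrix, cf. (7)
    D_V = mu*np.linalg.inv(V) - V            # cf. (10) and (13)
    # vec(Lbar(Z)) = (Dh kron Dh) L_mat (Dh kron Dh) vec(Z), so
    # (I + Lbar) D_X = D_V determines D_X, then D_Y = D_V - D_X.
    K = np.kron(Dh, Dh)
    Lbar = K @ L_mat @ K
    D_X = np.linalg.solve(np.eye(n*n) + Lbar, D_V.flatten()).reshape(n, n)
    D_Y = D_V - D_X
    return Dh @ D_X @ Dh, Dih @ D_Y @ Dih    # undo the scaling (8)

def sdlcp_path_following(X, Y, L_mat, eps=1e-8, theta=0.3, tau=0.5, alpha=1.0):
    # Generic primal-dual algorithm of section 3.1 with full NT steps.
    n = X.shape[0]
    mu = np.trace(X @ Y)/n
    while n*mu >= eps:
        mu *= 1 - theta
        while proximity(X, Y, mu) >= tau:
            dX, dY = nt_direction(X, Y, L_mat, mu)
            X, Y = X + alpha*dX, Y + alpha*dY
    return X, Y
```

<p>As a simple instance, \(L(X)=X\) is clearly monotone; with \(Q=0\) the point \(X^{0}=Y^{0}=I\) is strictly feasible, and the iterates drive \(\operatorname {Tr}(XY)\) below any prescribed tolerance.</p>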
<h2 id="a0000000035">3.2 Complexity analysis</h2>
<p>Define the symmetric matrix </p>
<div class="equation" id="a0000000036">
<p>
  <div class="equation_content">
    \begin{equation}  \widetilde{D}=\tfrac {1}{2\mu }(D_{X}D_{Y}+D_{Y}D_{X})\text{.}\end{equation}
  </div>
  <span class="equation_label">15</span>
</p>
</div>
<p> Since \(\widetilde{D}\) is symmetric, its eigenvalues \(\lambda _{i}(\widetilde{D})\) are real for all \(i\in I=\left\{  1,2,\ldots ,n\right\}  \), and due to (11) we can define the two nonnegative numbers (Ref. [18]): </p>
<div class="displaymath" id="a0000000037">
  \[  \sigma _{+}=\sum _{i\in I_{+}}\lambda _{i}(\widetilde{D})\, ,\text{~ ~ }\sigma _{-}=-\sum _{i\in I_{_{-}}}\lambda _{i}(\widetilde{D})  \]
</div>
<p> where </p>
<div class="displaymath" id="a0000000038">
  \[  I_{+}=\left\{  i\in I:\lambda _{i}(\widetilde{D})\geq 0\right\}   \]
</div>
<p> and </p>
<div class="displaymath" id="a0000000039">
  \[  I_{-}=I-I_{+}=\left\{  i\in I:\lambda _{i}(\widetilde{D}){\lt}0\right\}   \]
</div>
<p> are index sets. </p>
<p><div class="lemma_thmwrapper " id="a0000000040">
  <div class="lemma_thmheading">
    <span class="lemma_thmcaption">
    Lemma
    </span>
    <span class="lemma_thmlabel">4</span>
  </div>
  <div class="lemma_thmcontent">
  <p>One has </p>
<div class="displaymath" id="a0000000041">
  \[  \sigma _{-}\leq \sigma _{+}\leq 2\delta ^{2} \]
</div>
<p> and </p>
<div class="displaymath" id="a0000000042">
  \[  0\leq \operatorname {Tr}(\widetilde{D})\leq 2\delta ^{2}.  \]
</div>

  </div>
</div> </p>
<p><div class="proof_wrapper" id="a0000000043">
  <div class="proof_heading">
    <span class="proof_caption">
    Proof
    </span>
    <span class="expand-proof">▼</span>
  </div>
  <div class="proof_content">
  
  </div>
  </div>
</div>For the left-hand inequality of the first statement, it is clear from (11) and (15) that: </p>
<div class="displaymath" id="a0000000044">
  \[  \operatorname {Tr}(\widetilde{D})=\sum _{i\in I}\lambda _{i}(\widetilde{D})=\sigma _{+}-\sigma _{-}\geq 0.  \]
</div>
<p> For the right-hand inequality, since \(\widetilde{D}\) is symmetric it is diagonalizable, and there exists an orthogonal matrix \(Q_{\widetilde{D}}\) such that: </p>
<div class="displaymath" id="a0000000045">
  \[  Q_{\widetilde{D}}^{T}\widetilde{D}Q_{\widetilde{D}}=\text{diag}(\lambda _{i}(\widetilde{D}),\text{ }i\in I_{+},\text{ }\lambda _{i}(\widetilde{D}),\text{ }i\in I_{-})  \]
</div>
<p> and let </p>
<div class="displaymath" id="a0000000046">
  \[  \overline{D}=\tfrac {1}{2\mu }Q_{\widetilde{D}}^{T}(D_{X}^{2}+D_{Y}^{2})Q_{\widetilde{D}}.  \]
</div>
<p> Since the matrix </p>
<div class="displaymath" id="a0000000047">
  \[  \tfrac {1}{2\mu }Q_{\widetilde{D}}^{T}(D_{X}+D_{Y})^{2}Q_{\widetilde{D}}=\overline{D}+\text{diag}(\lambda _{i}(\widetilde{D}),\, \text{ }i\in I_{+},\text{ }\lambda _{i}(\widetilde{D}),\text{ }i\in I_{-})  \]
</div>
<p> is positive semi-definite it holds that: </p>
<div class="displaymath" id="a0000000048">
  \[  \overline{D}_{ii}+\lambda _{i}(\widetilde{D})\geq 0\text{\thinspace \thinspace for all }\, i\in I,\text{ }\overline{D}_{ii}\geq -\lambda _{i}(\widetilde{D}){\gt}0\, \, \text{for all }\, i\in I_{-},\text{ } \]
</div>
<p> and </p>
<div class="displaymath" id="a0000000049">
  \[  \sum \limits _{i\in I_{-}}\overline{D}_{ii}\geq \sum \limits _{i\in I_{-}}-\lambda _{i}(\widetilde{D})=\sigma _{-}.  \]
</div>
<p> On the other hand the matrix </p>
<div class="displaymath" id="a0000000050">
  \[  \tfrac {1}{2\mu }Q_{\widetilde{D}}^{T}(D_{X}-D_{Y})^{2}Q_{\widetilde{D}}=\overline{D}-\text{diag}(\lambda _{i}(\widetilde{D}),\text{ }i\in I_{+}\text{, }\lambda _{i}(\widetilde{D}),\, \text{ }i\in I_{-})  \]
</div>
<p> is also positive semi-definite whence </p>
<div class="displaymath" id="a0000000051">
  \[  \overline{D}_{ii}-\lambda _{i}(\widetilde{D})\geq 0,\, \text{i.e.}\, \text{ }\overline{D}_{ii}\geq \lambda _{i}(\widetilde{D}){\gt}0\, \, \, \text{for all}\, ~ i\in I_{+},\,   \]
</div>
<p> and </p>
<div class="displaymath" id="a0000000052">
  \[  \sum \limits _{i\in I_{+}}\text{{}}\overline{D}_{ii}\geq \sum \limits _{i\in I_{+}}\lambda _{i}(\widetilde{D})=\sigma _{+},  \]
</div>
<p> and together leads to </p>
<div class="displaymath" id="a0000000053">
  \begin{align*}  \sigma _{+}+\sigma _{-} &  \leq \operatorname {Tr}(\overline{D})=\tfrac {1}{2\mu }\operatorname {Tr}(D_{X}^{2}+D_{Y}^{2})\\ &  \leq \tfrac {1}{2\mu }\operatorname {Tr}(D_{X}+D_{Y})^{2}\\ &  =\tfrac {1}{2\mu }\left\Vert D_{V}\right\Vert ^{2}=2\delta ^{2}. \end{align*}
</div>
<p> This gives </p>
<div class="displaymath" id="a0000000054">
  \[  \text{ }\sigma _{+}\leq 2\delta ^{2},\text{ }\, \sigma _{-}\leq 2\delta ^{2}.  \]
</div>
<p> For the last result it follows from (11) and (15) that: </p>
<div class="displaymath" id="a0000000055">
  \begin{align*}  2\mu \operatorname {Tr}(\widetilde{D}) &  =\operatorname {Tr}(D_{X}D_{Y}+D_{Y}D_{X})\\ &  \leq \left\Vert D_{X}+D_{Y}\right\Vert ^{2}. \end{align*}
</div>
<p> Then using (10) and (14) we deduce: </p>
<div class="displaymath" id="a0000000056">
  \begin{align*}  0\leq \operatorname {Tr}(\widetilde{D}) &  \leq \tfrac {1}{2\mu }\left\Vert D_{X}+D_{Y}\right\Vert ^{2}\\ &  =2\tfrac {1}{4\mu }\left\Vert D_{V}\right\Vert ^{2}\\ &  =2\delta ^{2}. \end{align*}
</div>
<p> This completes the proof. </p>
<p>Next we want to estimate the quantity \(\delta _{+}^{2}-\delta ^{2}\), i.e., the effect of a damped Newton iteration on the proximity. Here \(\delta _{+}\) denotes the proximity after a step of size \(\alpha \). Let </p>
<div class="displaymath" id="a0000000057">
  \[  \delta _{+}=\delta (X^{+}Y^{+},\mu )\text{, }\, X^{+}=X+\alpha \Delta X\text{\thinspace and }Y^{+}=Y+\alpha \Delta Y  \]
</div>
<p> We first give a result for \(\delta _{+}\). </p>
<p><div class="lemma_thmwrapper " id="a0000000058">
  <div class="lemma_thmheading">
    <span class="lemma_thmcaption">
    Lemma
    </span>
    <span class="lemma_thmlabel">5</span>
  </div>
  <div class="lemma_thmcontent">
  <p>Suppose that the step \(\alpha \) is strictly feasible. Then </p>
<div class="displaymath" id="a0000000059">
  \begin{align*}  \delta _{+}^{2} \leq \tfrac {1-\alpha }{4\mu }\operatorname {Tr}(V^{2})+\tfrac {\alpha n}{4}\! -\! \tfrac {n}{2}+\tfrac {\alpha ^{2}}{4}\sum _{i=1}^{n}\lambda _{i}(\widetilde{D})\! +\! \tfrac {\mu (1-\alpha )}{4}\operatorname {Tr}(V^{-2})\! +\! \tfrac {\alpha }{4}\sum _{i=1}^{n}\tfrac {1}{1+\alpha \lambda _{i}(\widetilde{D})}\end{align*}
</div>
<p> where \(\widetilde{D}\) is defined in (15). </p>

  </div>
</div> </p>
<p><div class="proof_wrapper" id="a0000000060">
  <div class="proof_heading">
    <span class="proof_caption">
    Proof
    </span>
    <span class="expand-proof">▼</span>
  </div>
  <div class="proof_content">
  
  </div>
</div>We deal first with the quantity \(\operatorname {Tr}(X^{+}Y^{+})\). We have: </p>
<div class="displaymath" id="a0000000061">
  \begin{align*}  \operatorname {Tr}(X^{+}Y^{+}) &  =\operatorname {Tr}((V+\alpha D_{X})(V+\alpha D_{Y}))\\ &  =\operatorname {Tr}(V^{2}+\alpha (D_{X}V+VD_{Y})+\alpha ^{2}D_{X}D_{Y})\\ &  =\operatorname {Tr}(V^{2}+\alpha VD_{V}+\alpha ^{2}D_{X}D_{Y}), \end{align*}
</div>
<p> since \(\operatorname {Tr}(VD_{X})=\operatorname {Tr}(D_{X}V)\) and \(\operatorname {Tr}(VD_{Y})=\operatorname {Tr}(D_{Y}V).\) </p>
<p>Now, substituting \(D_{X}\), \(D_{Y}\), \(D_{V}\) by their expressions and using (11), we deduce: </p>
<div class="displaymath" id="a0000000062">
  \begin{align*}  \operatorname {Tr}(X^{+}Y^{+}) &  =\operatorname {Tr}(V^{2}+\alpha (\mu I-V^{2}))+\alpha ^{2}\mu \operatorname {Tr}(\widetilde{D})\\ &  =(1-\alpha )\operatorname {Tr}(V^{2})+\alpha n\mu +\alpha ^{2}\mu \sum \limits _{i=1}^{n}\lambda _{i}(\widetilde{D}). \end{align*}
</div>
<p> For the quantity \(\operatorname {Tr}((X^{+})^{-1}(Y^{+})^{-1})\), a straightforward computation using Lemma 2 gives: </p>
<div class="displaymath" id="a0000000063">
  \begin{align*} &  \operatorname {Tr}((X^{+})^{-1}(Y^{+})^{-1})=\\ &  =\operatorname {Tr}((V+\alpha D_{X})^{-1}(V+\alpha D_{Y})^{-1})\\ &  \leq 2\operatorname {Tr}\left[ (V+\alpha D_{X})(V+\alpha D_{Y})+(V+\alpha D_{Y})(V+\alpha D_{X})\right] ^{-1}\\ &  =2\operatorname {Tr}\left[ (2V^{2}+\alpha VD_{V}+\alpha D_{V}V+\alpha ^{2}D_{Y}D_{X}+\alpha ^{2}D_{X}D_{Y})^{-1}\right] \\ &  =2\operatorname {Tr}\left[ (2V^{2}+\alpha VD_{V}+\alpha D_{V}V+2\alpha ^{2}\mu \widetilde{D})^{-1}\right] \\ &  =2\operatorname {Tr}([(1-\alpha )2V^{2}+2\alpha \mu (I+\alpha \widetilde{D})]^{-1}). \end{align*}
</div>
<p>Now, for the last term in the right-hand side of the above inequality, if \(0{\lt}\alpha {\lt}1\) is sufficiently small so that the matrix \(I+\alpha \widetilde{D}\) is positive definite, then by Lemma 3 we get: </p>
<div class="displaymath" id="a0000000064">
  \begin{align}  \operatorname {Tr}([(1-\alpha )2V^{2}+2\alpha \mu (I+\alpha \widetilde{D})]^{-1}) &  \leq \tfrac {1-\alpha }{2}\operatorname {Tr}(V^{-2})+\tfrac {\alpha }{2\mu }\operatorname {Tr}(I+\alpha \widetilde{D})^{-1}\nonumber \\ &  \leq \tfrac {1-\alpha }{2}\operatorname {Tr}(V^{-2})+\tfrac {\alpha }{2\mu }\sum _{i=1}^{n}\tfrac {1}{1+\alpha \lambda _{i}(\widetilde{D})}. \end{align}
</div>
<p> This completes the proof of the lemma. </p>
<p><div class="theorem_thmwrapper " id="a0000000065">
  <div class="theorem_thmheading">
    <span class="theorem_thmcaption">
    Theorem
    </span>
    <span class="theorem_thmlabel">6</span>
  </div>
  <div class="theorem_thmcontent">
  <p>If \(0{\lt}\alpha {\lt}\tfrac {1}{\sigma _{+}}\), then the step size is strictly feasible and the matrix </p>
<div class="displaymath" id="a0000000066">
  \[  (V+\alpha D_{X})(V+\alpha D_{Y})  \]
</div>
<p> is positive definite. This result is an immediate consequence of <span class="rm">(16)</span>. </p>

  </div>
</div> </p>
<p>Now, by Lemma 4, the eigenvalues \(\lambda _{i}(\widetilde{D})\) of \(\widetilde{D}\) satisfy \(\left\vert \lambda _{i}(\widetilde{D})\right\vert \leq \sigma \leq 2\delta ^{2}\) for all \(i\in I\), where \(\sigma =\sigma _{+}\) (since \(\operatorname {Tr}(\widetilde{D})\geq 0\) implies \(\sigma _{+}\geq \sigma _{-}\)). Then the following result holds. </p>
<p><div class="theorem_thmwrapper " id="a0000000067">
  <div class="theorem_thmheading">
    <span class="theorem_thmcaption">
    Theorem
    </span>
    <span class="theorem_thmlabel">7</span>
  </div>
  <div class="theorem_thmcontent">
  <p>Let \(\delta =\delta (XY,\mu )\) and \(\sigma =\sigma _{+}\). For all \(0{\lt}\alpha {\lt}\tfrac {1}{\sigma }\), one has </p>
<div class="displaymath" id="a0000000068">
  \[  \delta _{+}^{2}\leq (1-\alpha )\delta ^{2}+\tfrac {2\alpha ^{3}\delta ^{4}}{1-4\alpha ^{2}\delta ^{4}}.  \]
</div>

  </div>
</div> </p>
<p><div class="proof_wrapper" id="a0000000069">
  <div class="proof_heading">
    <span class="proof_caption">
    Proof
    </span>
    <span class="expand-proof">▼</span>
  </div>
  <div class="proof_content">
  
  </div>
</div>By Lemma 5 it follows that: </p>
<div class="displaymath" id="a0000000070">
  \begin{align*}  \delta _{+}^{2} &  \leq \tfrac {1-\alpha }{4\mu }\operatorname {Tr}(V^{2})+\tfrac {\alpha n}{4}-\tfrac {n}{2}+\tfrac {\alpha ^{2}}{4}\sum _{i=1}^{n}\lambda _{i}(\widetilde{D})\! +\! \tfrac {\mu (1-\alpha )}{4}\operatorname {Tr}(V^{-2})\! +\! \tfrac {\alpha }{4}\sum _{i=1}^{n}\tfrac {1}{1+\alpha \lambda _{i}(\widetilde{D})}\\ &  =\tfrac {1-\alpha }{4}\operatorname {Tr}(\tfrac {1}{\mu }V^{2}+\mu V^{-2}-2I)-\tfrac {\alpha n}{4}+\tfrac {\alpha ^{2}}{4}\sum _{i=1}^{n}\lambda _{i}(\widetilde{D})+\tfrac {\alpha }{4}\sum _{i=1}^{n}\tfrac {1}{1+\alpha \lambda _{i}(\widetilde{D})}\\ &  =\tfrac {1-\alpha }{4}\operatorname {Tr}(\tfrac {1}{\mu }V^{2}+\mu V^{-2}-2I)+\tfrac {\alpha ^{2}}{4}\sum _{i=1}^{n}\lambda _{i}(\widetilde{D})+\tfrac {\alpha }{4}\sum _{i=1}^{n}(\tfrac {1}{1+\alpha \lambda _{i}(\widetilde{D})}-1)\\ &  =(1-\alpha )\delta ^{2}+\tfrac {\alpha ^{2}}{4}\sum _{i=1}^{n}\lambda _{i}(\widetilde{D})-\tfrac {\alpha ^{2}}{4}\sum _{i=1}^{n}\tfrac {\lambda _{i}(\widetilde{D})}{1+\alpha \lambda _{i}(\widetilde{D})}, \end{align*}
</div>
<p> and </p>
<div class="displaymath" id="a0000000071">
  \begin{align*}  \delta _{+}^{2} &  \leq (1-\alpha )\delta ^{2}+\tfrac {\alpha ^{2}}{4}\sum _{i=1}^{n}\lambda _{i}(\widetilde{D})-\tfrac {\alpha ^{2}}{4}\sum _{i\in I_{+}}^{n}\tfrac {\lambda _{i}(\widetilde{D})}{1+\alpha \lambda _{i}(\widetilde{D})}+\tfrac {\alpha ^{2}}{4}\sum \limits _{i\in I_{-}}\tfrac {-\lambda _{i}(\widetilde{D})}{1+\alpha \lambda _{i}(\widetilde{D})}\\ \delta _{+}^{2} &  \leq (1-\alpha )\delta ^{2}+\tfrac {\alpha ^{2}}{4}\sigma _{+}-\tfrac {\alpha ^{2}}{4}\tfrac {\sigma _{+}}{(1+\alpha \sigma _{+})}+\tfrac {\alpha ^{3}}{4}\sum \limits _{i\in I_{-}}\tfrac {(\lambda _{i}(\widetilde{D}))^{2}}{(1+\alpha \lambda _{i}(\widetilde{D}))}\\ &  \leq (1-\alpha )\delta ^{2}+\tfrac {\alpha ^{3}\sigma _{+}^{2}}{4(1+\alpha \sigma _{+})}+\tfrac {\alpha ^{3}\sigma _{-}^{2}}{4(1-\alpha \sigma _{-})}. \end{align*}
</div>
<p> Now, using the fact that \(\sigma \leq 2\delta ^{2}\) and \(\sigma _{-}\leq \sigma \), we deduce </p>
<div class="displaymath" id="a0000000072">
  \[  \tfrac {\alpha ^{3}\sigma _{+}^{2}}{1+\alpha \sigma _{+}}=\tfrac {\alpha ^{3}\sigma ^{2}}{1+\alpha \sigma }\leq \tfrac {4\alpha ^{3}\delta ^{4}}{1+2\alpha \delta ^{2}} \]
</div>
<p> and </p>
<div class="displaymath" id="a0000000073">
  \[  \tfrac {\alpha ^{3}\sigma _{-}^{2}}{1-\alpha \sigma _{-}}\leq \tfrac {\alpha ^{3}\sigma ^{2}}{1-\alpha \sigma }\leq \tfrac {4\alpha ^{3}\delta ^{4}}{1-2\alpha \delta ^{2}}.  \]
</div>
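<p> Combining the last two bounds (each carrying the factor \(\tfrac {1}{4}\) from the inequality above) gives </p>
<div class="displaymath">
  \[  \tfrac {\alpha ^{3}\delta ^{4}}{1+2\alpha \delta ^{2}}+\tfrac {\alpha ^{3}\delta ^{4}}{1-2\alpha \delta ^{2}}=\tfrac {2\alpha ^{3}\delta ^{4}}{1-4\alpha ^{2}\delta ^{4}}.  \]
</div>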
<p> This completes the proof of the theorem. </p>
<p><div class="theorem_thmwrapper " id="a0000000074">
  <div class="theorem_thmheading">
    <span class="theorem_thmcaption">
    Theorem
    </span>
    <span class="theorem_thmlabel">8</span>
  </div>
  <div class="theorem_thmcontent">
  <p>If \(\alpha =1\), then the inequality of Theorem 7 gives \(\delta _{+}^{2}\leq \tfrac {2\delta ^{4}}{1-4\delta ^{4}}\). Moreover, if \(\delta \geq 1\) and the damped step size \(\alpha =\tfrac {1}{4\delta ^{2}}\) is taken, then </p>
<div class="displaymath" id="a0000000075">
  \[  \delta _{+}^{2}-\delta ^{2}\leq -\tfrac {5}{24}.  \]
</div>

  </div>
</div> </p>
<p><div class="proof_wrapper" id="a0000000076">
  <div class="proof_heading">
    <span class="proof_caption">
    Proof
    </span>
    <span class="expand-proof">▼</span>
  </div>
  <div class="proof_content">
  
  </div>
</div>Since </p>
<div class="displaymath" id="a0000000077">
  \[  \alpha =\tfrac {1}{4\delta ^{2}}\leq \tfrac {1}{\sigma },  \]
</div>
<p> then the step size is strictly feasible and, by Theorem 7, it follows (using \(\delta \geq 1\) in the last step) that </p>
<div class="displaymath" id="a0000000078">
  \begin{align*}  \delta _{+}^{2}-\delta ^{2} &  \leq -\alpha \delta ^{2}+\tfrac {2\alpha ^{3}\delta ^{4}}{1-4\alpha ^{2}\delta ^{4}}\\ &  \leq -\tfrac {1}{4}+\tfrac {1}{24\delta ^{2}}\leq -\tfrac {5}{24}. \end{align*}
</div>
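<p> In detail, substituting \(\alpha =\tfrac {1}{4\delta ^{2}}\) gives \(\alpha \delta ^{2}=\tfrac {1}{4}\), \(4\alpha ^{2}\delta ^{4}=\tfrac {1}{4}\) and \(2\alpha ^{3}\delta ^{4}=\tfrac {1}{32\delta ^{2}}\), so that </p>
<div class="displaymath">
  \[  -\alpha \delta ^{2}+\tfrac {2\alpha ^{3}\delta ^{4}}{1-4\alpha ^{2}\delta ^{4}}=-\tfrac {1}{4}+\tfrac {1}{32\delta ^{2}}\cdot \tfrac {4}{3}=-\tfrac {1}{4}+\tfrac {1}{24\delta ^{2}}\leq -\tfrac {5}{24},  \]
</div>
<p> the last inequality using \(\delta \geq 1\). </p>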
<p> This completes the proof of the theorem. </p>
<p>Now we are ready to compute the iteration bounds of the algorithm. </p>
<h2 id="a0000000079">3.3 Iteration bounds</h2>
<p><div class="lemma_thmwrapper " id="a0000000080">
  <div class="lemma_thmheading">
    <span class="lemma_thmcaption">
    Lemma
    </span>
    <span class="lemma_thmlabel">9</span>
  </div>
  <div class="lemma_thmcontent">
  <p><span class="cite">
	[
	<a href="#RooTerVia97" >19</a>
	, 
	Lemma IV.36
	]
</span> Let \((X,Y)\) be a strictly feasible point and \(\mu {\gt}0\). If \(\mu _{+}=(1-\theta )\mu \) then </p>
<div class="equation" id="a0000000081">
<p>
  <div class="equation_content">
    \begin{equation}  \delta (XY,\mu _{+})^{2}\leq \tfrac {(2\delta +\theta \sqrt{n})^{2}}{4(1-\theta )}.\end{equation}
  </div>
  <span class="equation_label">17</span>
</p>
</div>

  </div>
</div> </p>
<p><div class="lemma_thmwrapper " id="a0000000082">
  <div class="lemma_thmheading">
    <span class="lemma_thmcaption">
    Lemma
    </span>
    <span class="lemma_thmlabel">10</span>
  </div>
  <div class="lemma_thmcontent">
  <p>If \(\delta (XY,\mu )\leq \tau \) and \(\tau \geq 1\), then after an update of the barrier parameter no more than </p>
<div class="displaymath" id="a0000000083">
  \[  \left[ \tfrac {6\theta }{5(1-\theta )}\left( n\theta +4\tau \sqrt{n}+4\tau ^{2}\right) \right]  \]
</div>
<p> iterations are needed to recenter. </p>

  </div>
</div> </p>
<p><div class="proof_wrapper" id="a0000000084">
  <div class="proof_heading">
    <span class="proof_caption">
    Proof
    </span>
    <span class="expand-proof">▼</span>
  </div>
  <div class="proof_content">
  
  </div>
</div>By (17) in Lemma 9, after the update </p>
<div class="displaymath" id="a0000000085">
  \[  \delta ^{2}(XY,\mu _{+})\leq \tfrac {(2\delta +\theta \sqrt{n})^{2}}{4(1-\theta )}.  \]
</div>
<p> Then, by Theorem 8, every damped Newton step decreases \(\delta ^{2}\) by at least \(\tfrac {5}{24}\). Hence after at most </p>
<div class="displaymath" id="a0000000086">
  \[  \left[ \tfrac {24}{5}\big(\tfrac {(2\tau +\theta \sqrt{n})^{2}}{4(1-\theta )}-\tau ^{2}\big)\right] ,  \]
</div>
<p> iterations the proximity measure again satisfies \(\delta \leq \tau \). This completes the proof of the lemma. </p>
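<p> For completeness, the bracketed count above coincides with the bound stated in the lemma, since </p>
<div class="displaymath">
  \begin{align*} \tfrac {24}{5}\Big(\tfrac {(2\tau +\theta \sqrt{n})^{2}}{4(1-\theta )}-\tau ^{2}\Big) & =\tfrac {24}{5}\cdot \tfrac {4\tau ^{2}\theta +4\tau \theta \sqrt{n}+\theta ^{2}n}{4(1-\theta )}\\ & =\tfrac {6\theta }{5(1-\theta )}\left( n\theta +4\tau \sqrt{n}+4\tau ^{2}\right) . \end{align*}
</div>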
<p>Consequently we have the following theorem. </p>
<p><div class="theorem_thmwrapper " id="a0000000087">
  <div class="theorem_thmheading">
    <span class="theorem_thmcaption">
    Theorem
    </span>
    <span class="theorem_thmlabel">11</span>
  </div>
  <div class="theorem_thmcontent">
  <p>If \(\tau \geq 1\), then the total number of iterations produced by the primal-dual algorithm is not more than </p>
<div class="displaymath" id="a0000000088">
  \[  \left[ \tfrac {6\theta }{5(1-\theta )}(n\theta +4\tau \sqrt{n}+4\tau ^{2})\right] \left[ \tfrac {1}{\theta }\log \tfrac {n\mu ^{0}}{\epsilon }\right] .  \]
</div>

  </div>
</div> </p>
<p><div class="proof_wrapper" id="a0000000089">
  <div class="proof_heading">
    <span class="proof_caption">
    Proof
    </span>
    <span class="expand-proof">▼</span>
  </div>
  <div class="proof_content">
  
  </div>
</div>The number of barrier parameter updates performed by the algorithm (see <span class="cite">
	[
	<a href="#RooTerVia97" >19</a>
	, 
	Lemma II.18, page 116
	]
</span>) is given by </p>
<div class="displaymath" id="a0000000090">
  \[  \tfrac {1}{\theta }\log \tfrac {n\mu ^{0}}{\epsilon }.  \]
</div>
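<p> This count can be recovered as follows (assuming, as is standard, the stopping criterion \(n\mu \leq \epsilon \)): since \(\mu _{k}=(1-\theta )^{k}\mu ^{0}\) and \(-\log (1-\theta )\geq \theta \), the criterion \(n(1-\theta )^{k}\mu ^{0}\leq \epsilon \) holds as soon as </p>
<div class="displaymath">
  \[  k\geq \tfrac {1}{\theta }\log \tfrac {n\mu ^{0}}{\epsilon }.  \]
</div>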
<p> Multiplication of this number by the bound in Lemma 10 yields the theorem. Omitting the round-off brackets in Theorem 11 (this does not change the order of magnitude of the iteration bound), the number of iterations of our algorithm does not exceed </p>
<div class="displaymath" id="a0000000091">
  \[  O\big(\tfrac {6}{5(1-\theta )}(n\theta +4\tau \sqrt{n}+4\tau ^{2})\log \tfrac {n\mu ^{0}}{\epsilon }\big). \]
</div>
<p> Thus if \(\theta =\tfrac {1}{2}\) and \(\tau =1\) the bound becomes: </p>
<div class="displaymath" id="a0000000092">
  \[  O\big(\tfrac {12}{5}(\tfrac {n}{2}+4\sqrt{n}+4)\log \tfrac {n\mu ^{0}}{\epsilon }\big)=O\big(n\log \tfrac {n\mu ^{0}}{\epsilon }\big)  \]
</div>
<p> which is the complexity bound of such long-update primal-dual algorithms. Meanwhile, if \(\theta =n^{-\tfrac {1}{2}}\) and \(\tau =1\), we obtain the best known iteration bound for small-update primal-dual algorithms, namely: </p>
<div class="displaymath" id="a0000000093">
  \[  O(\sqrt{n}\log \tfrac {n\mu ^{0}}{\epsilon }).  \]
</div>
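<p> The substitution is immediate: with \(\theta =n^{-\tfrac {1}{2}}\) and \(\tau =1\) one gets \(n\theta +4\tau \sqrt{n}+4\tau ^{2}=5\sqrt{n}+4\) and, for \(n\geq 4\), \(\tfrac {6}{5(1-\theta )}\leq \tfrac {12}{5}\), so </p>
<div class="displaymath">
  \[  \tfrac {6}{5(1-\theta )}(n\theta +4\tau \sqrt{n}+4\tau ^{2})\log \tfrac {n\mu ^{0}}{\epsilon }=O\big(\sqrt{n}\log \tfrac {n\mu ^{0}}{\epsilon }\big).  \]
</div>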
<p> This completes the proof of the theorem. </p>
<h1 id="a0000000094">4 Conclusion and future research</h1>
<p>In this paper we have extended the complexity analysis of a primal-dual path-following algorithm designed for (SDP) to the monotone (SDLCP), and we have given a unified analysis of its polynomial complexity. For the long-update algorithm we obtained an \(O\big(n\log \tfrac {n\mu ^{0}}{\epsilon }\big)\) iteration bound, similar to that of related primal-dual (IPMs). For the small-update algorithm the iteration bound is \(O\big(\sqrt{n}\log \tfrac {n\mu ^{0}}{\epsilon }\big)\), which is the best known. Finally, an important topic for further research is the numerical implementation of this algorithm and its extension to other optimization problems. </p>
<div class="bibliography">
<h1>Bibliography</h1>
<dl class="bibliography">
  <dt><a name="Ach06">1</a></dt>
  <dd><p><span class="scshape">M. Achache</span>, <i class="itshape">A new primal-dual path-following method for convex quadratic programming</i>, Comput. Appl. Math., <b class="bfseries">25</b>, No. 1, pp.&#160; 97–110, 2006. </p>
</dd>
  <dt><a name="Ach10">2</a></dt>
  <dd><p><span class="scshape">M. Achache</span>, <i class="itshape">Complexity analysis and numerical implementation of a short-step primal-dual algorithm for linear complementarity problems</i>, Applied Mathematics and Computation, <b class="bfseries">216</b>, pp.&#160; 1889–1895, 2010. </p>
</dd>
  <dt><a name="AliHaeOve98">3</a></dt>
  <dd><p><span class="scshape">F. Alizadeh, J.A. Haeberly</span> and <span class="scshape">M. Overton</span>, <i class="itshape">Primal-dual interior point methods for semidefinite programming. Convergence rates, stability and numerical results</i>, SIAM J. Optimization, <b class="bfseries">8</b>, pp.&#160;746–768, 1998. </p>
</dd>
  <dt><a name="BorLew00">4</a></dt>
  <dd><p><span class="scshape">J.M. Borwein</span> and <span class="scshape">A.S. Lewis</span>, <i class="itshape">Convex Analysis and Nonlinear Optimization: Theory and Examples</i>, Springer-Verlag, New York, Berlin, Heidelberg, 2000. </p>
</dd>
  <dt><a name="Elg05">5</a></dt>
  <dd><p><span class="scshape">M. El Ghami</span>, <i class="itshape">New primal-dual interior point methods based on kernel functions</i>, PhD thesis, Delft University, The Netherlands, 2005. </p>
</dd>
  <dt><a name="GowSON00">6</a></dt>
  <dd><p><span class="scshape">M.S. Gowda</span> and <span class="scshape">Y. Song</span>, <i class="itshape">On semidefinite linear complementarity problems</i>, Mathematical Programming, Series A, <b class="bfseries">88</b>, pp.&#160;575–587, 2000. </p>
</dd>
  <dt><a name="GowSONRav03">7</a></dt>
  <dd><p><span class="scshape">M.S. Gowda, Y. Song,</span> and <span class="scshape">G. Ravindran</span>, <i class="itshape">Some interconnections between strong monotonicity, GUS and P properties in semidefinite linear complementarity problems</i>, Linear Algebra and its Applications, <b class="bfseries">370</b>, pp.&#160;355–386, 2003. </p>
</dd>
  <dt><a name="GowSON03">8</a></dt>
  <dd><p><span class="scshape">M.S. Gowda</span> and <span class="scshape">Y. Song</span>, <i class="itshape">Some new results for the semidefinite linear complementarity problem</i>, SIAM Journal on Matrix Analysis and Applications, <b class="bfseries">24</b>, no.1, pp.&#160;25–39, 2003. </p>
</dd>
  <dt><a name="KojShiHar97">9</a></dt>
  <dd><p><span class="scshape">M. Kojima, S. Shindoh</span> and <span class="scshape">S. Hara</span>, <i class="itshape">Interior point methods for monotone semidefinite linear complementarity problem in symmetric matrices</i>, SIAM J. Optimization, <b class="bfseries">7</b>, pp.&#160; 86–125, 1997. </p>
</dd>
  <dt><a name="LiuSun07">10</a></dt>
  <dd><p><span class="scshape">Z. Liu</span> and <span class="scshape">W. Sun</span>, <i class="itshape">An infeasible interior-point algorithm with full-Newton step for linear optimization</i>, Numer. Algor., <b class="bfseries">46</b>, pp.&#160; 173–188, 2007. </p>
</dd>
  <dt><a name="MalMoh06">11</a></dt>
  <dd><p><span class="scshape">M. Malik</span> and <span class="scshape">S.R. Mohan</span>, <i class="itshape">Some geometrical aspects of semidefinite linear complementarity problems</i>, Linear and Multilinear Algebra, <b class="bfseries">54</b>, No. 1, pp.&#160;55–70, 2006. </p>
</dd>
  <dt><a name="NesTod97">12</a></dt>
  <dd><p><span class="scshape">Y. Nesterov</span> and <span class="scshape">M. Todd</span>, <i class="itshape">Self-scaled barriers and interior point methods for convex programming</i>, Mathematics of Operations Research, <b class="bfseries">22</b>, No. 1, pp.&#160; 1–42, 1997. </p>
</dd>
  <dt><a name="PenRooTer01">13</a></dt>
  <dd><p><span class="scshape">J. Peng, C. Roos</span> and <span class="scshape">T. Terlaky</span>, <i class="itshape">A new and efficient large-update interior point method for linear optimization</i>, Vychisl. Tekhnol., <b class="bfseries">6</b>, pp.&#160; 61–80, 2001. </p>
</dd>
  <dt><a name="PenRooTer02">14</a></dt>
  <dd><p><span class="scshape">J. Peng, C. Roos</span> and <span class="scshape">T. Terlaky</span>, <i class="itshape">Self-regularity: A new paradigm for primal-dual interior point algorithms</i>. Princeton University Press, Princeton, NJ, 2002. </p>
</dd>
  <dt><a name="PenRooTer002">15</a></dt>
  <dd><p><span class="scshape">J. Peng, C. Roos</span> and <span class="scshape">T. Terlaky</span>, <i class="itshape">A new class of polynomial primal-dual methods for linear and semidefinite optimization</i>, European J. Oper. Res., <b class="bfseries">143</b>, pp.&#160;234–256, 2002. </p>
</dd>
  <dt><a name="PenRooTer2002">16</a></dt>
  <dd><p><span class="scshape">J. Peng, C. Roos</span> and <span class="scshape">T. Terlaky</span>, <i class="itshape">Self-regular functions and new search directions for linear and semidefinite optimization</i>, Mathematical Programming, <b class="bfseries">93</b>, pp.&#160; 129–171, 2002. </p>
</dd>
  <dt><a name="PenRooTer001">17</a></dt>
  <dd><p><span class="scshape">J. Peng, C. Roos</span> and <span class="scshape">T. Terlaky</span>, <i class="itshape">New complexity analysis of the primal-dual method for semidefinite optimization based on the Nesterov-Todd direction</i>. Journal of Optimization Theory and Application, <b class="bfseries">109</b>, No. 2, pp.&#160;327–343, 2001. </p>
</dd>
  <dt><a name="PenRooTer98">18</a></dt>
  <dd><p><span class="scshape">J. Peng, C. Roos</span> and <span class="scshape">T. Terlaky</span>, <i class="itshape">New complexity analysis of primal-dual Newton methods for P\(_{\ast }\)(k) linear complementarity problems</i>, Manuscript, 1998. </p>
</dd>
  <dt><a name="RooTerVia97">19</a></dt>
  <dd><p><span class="scshape">C. Roos, T. Terlaky</span> and <span class="scshape">J.Ph. Vial</span>, <i class="itshape">Theory and algorithms for linear optimization. An interior point approach</i>, John Wiley and Sons, Chichester, UK, 1997. </p>
</dd>
  <dt><a name="Son00">20</a></dt>
  <dd><p><span class="scshape">Y. Song</span>, <i class="itshape">The P and globally uniquely solvable properties in semidefinite linear complementarity problem</i>, PhD thesis, University of Maryland, USA, 2000. </p>
</dd>
  <dt><a name="StuZha95">21</a></dt>
  <dd><p><span class="scshape">J.F. Sturm</span> and <span class="scshape">S. Zhang</span>, <i class="itshape">Symmetric primal-dual path-following algorithms for semidefinite programming</i>, Technical report 9554/A, Tinbergen Institute, Erasmus University, Rotterdam, The Netherlands, 1995. </p>
</dd>
  <dt><a name="Wri97">22</a></dt>
  <dd><p><span class="scshape">S.J. Wright</span>, <i class="itshape">Primal-dual interior point methods</i>, SIAM, Philadelphia, USA, 1997. </p>
</dd>
</dl>


</div>
</div> <!--main-text -->
</div> <!-- content-wrapper -->
</div> <!-- content -->
</div> <!-- wrapper -->

<nav class="prev_up_next">
</nav>

<script type="text/javascript" src="/var/www/clients/client1/web1/web/files/jnaat-files/journals/1/articles/js/jquery.min.js"></script>
<script type="text/javascript" src="/var/www/clients/client1/web1/web/files/jnaat-files/journals/1/articles/js/plastex.js"></script>
<script type="text/javascript" src="/var/www/clients/client1/web1/web/files/jnaat-files/journals/1/articles/js/svgxuse.js"></script>
</body>
</html>