A numerical study of an infeasible interior-point algorithm for convex quadratic semi-definite optimization
Abstract.
The focus of this research is to apply primal-dual interior-point path-following methods, specifically those derived from Newton's method, to convex quadratic semidefinite optimization (CQSDO) problems. In this paper, we present a numerical study of an infeasible primal-dual interior-point method for tackling this class of optimization problems. Unlike feasible interior-point algorithms, the proposed algorithm can be started with any initial symmetric positive definite matrix and does not require a strictly feasible initial point. Under certain conditions, the Newton system is well defined and its Jacobian is nonsingular at the solution. At each iteration of the algorithm, a Newton direction and a step-size are computed. Here, the search direction is based on the Alizadeh-Haeberly-Overton (AHO) symmetrization, while for the step-size along this direction two alternatives are suggested. Preliminary numerical results demonstrate the efficiency of our algorithm.
Key words and phrases:
Convex quadratic semidefinite optimization, Interior-point methods, Primal-dual algorithm, Newton method.
2005 Mathematics Subject Classification: 90C05, 90C51.
The work of this author has been supported by La Direction Générale de la Recherche Scientifique et du Développement Technologique (DGRSDT-MESRS), under the PRFU project number C00L03UN190120190004.
1. Introduction
Let $\mathbb{S}^n$ and $\mathbb{S}^n_+$ denote the cones of real symmetric and symmetric positive semidefinite $n \times n$ matrices, respectively. The convex quadratic semidefinite optimization (CQSDO) problem is defined as follows
$$(P) \qquad \min_{X}\ \langle C, X\rangle + \tfrac{1}{2}\langle X, \mathcal{Q}(X)\rangle \quad \text{s.t.} \quad \langle A_i, X\rangle = b_i,\ i = 1,\dots,m, \quad X \succeq 0,$$
and its dual
$$(D) \qquad \max_{y,\,X,\,S}\ b^T y - \tfrac{1}{2}\langle X, \mathcal{Q}(X)\rangle \quad \text{s.t.} \quad \sum_{i=1}^m y_i A_i + S - \mathcal{Q}(X) = C, \quad S \succeq 0,$$
where $C, A_i \in \mathbb{S}^n$ for $i = 1,\dots,m$, $b, y \in \mathbb{R}^m$, $\mathcal{Q}$ is a given linear transformation on $\mathbb{S}^n$ and the inequality $X \succeq 0$ means that $X \in \mathbb{S}^n_+$. Recall that the notation $\langle U, V\rangle = \mathrm{Tr}(UV)$ denotes the trace inner product.
The CQSDO problem has wide applications, such as the computation of the nearest correlation matrix problem (NCMP); it also arises in nearest Euclidean distance matrix problems and other semidefinite least squares problems (SDLS) (see [17]). Many problems in metric embeddings, covariance estimation, eigenvalue and max-cut problems, and molecular conformation can also be restated as CQSDO. It also includes semidefinite optimization (SDO) as a special case when $\mathcal{Q} = 0$.
Over the last decades, many feasible primal-dual interior-point algorithms (FIPAs) for solving CQSDO have been proposed (see, e.g., [20, 22, 6, 9, 15]). These algorithms are Newton-type methods which enjoy desirable properties such as local quadratic convergence and polynomial complexity. Further, they are highly efficient in practice. It is well known that FIPAs are iterative methods that start with a strictly feasible point and follow the central path while maintaining feasibility and centrality during the solution process. However, computational experience with these algorithms shows that obtaining a strictly feasible centered point is a hard task and sometimes even impossible. Therefore, many remedies have been suggested to overcome this drawback. Among them are infeasible interior-point algorithms (IFIPAs), which only require the starting point to be an arbitrary symmetric positive definite matrix; feasibility is reached as optimality is approached [24, 27].
The main aim of this work is the numerical implementation of an IFIPA for solving CQSDO problems. Here, the determination of the Newton search directions is based on the symmetrization of Alizadeh-Haeberly-Overton (AHO) [8]. Further, efficient step-sizes are suggested to keep the iterates positive definite during Newton's process. Our test examples of CQSDO are reformulated from some well-known optimization problems such as SDO, SDLS, NCMP, the computation of the smallest eigenvalue of a symmetric positive definite matrix and the max-cut problem.
Throughout the paper, the following notations are used. $\mathbb{R}^{m \times n}$ denotes the set of all real $m \times n$ matrices. The trace inner product and the Frobenius norm in $\mathbb{S}^n$ are denoted, respectively, by $\langle U, V\rangle = \mathrm{Tr}(UV)$ and $\|U\|_F = \sqrt{\langle U, U\rangle}$. For a matrix $A \in \mathbb{S}^n$, $\lambda_i(A)$ denote its eigenvalues, with $\lambda_{\min}(A)$ ($\lambda_{\max}(A)$) as the smallest (largest) one, respectively, and $\det(A)$ stands for its determinant. The square root matrix of any $A \in \mathbb{S}^n_+$ is denoted by $A^{1/2}$, the identity matrix is denoted by $I$, and $e$ is the vector of all ones.
The paper is organized as follows. In Section 2, preliminaries, the concept of the central path and the AHO symmetrization search directions for CQSDO are discussed; further, the numerical computation of the search directions and of the step-size is stated. In Section 3, we describe the generic primal-dual IFIPA. In Section 4, some numerical results are reported. In Section 5, a conclusion and future remarks end the paper.
2. A primal-dual IFIPA for CQSDO
2.1. Preliminaries
In this subsection, we first provide some useful mathematical tools which are crucial for the development of our proposed algorithm.
Lemma 1 (Lemma 2.1 in [22]).
Let $X, S \in \mathbb{S}^n_+$. Then the following statements
1) $\langle X, S\rangle = \mathrm{Tr}(XS) = 0$;
2) $XS = 0$;
3) $XS + SX = 0$
are equivalent.
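For completeness, one standard way to see this equivalence is the following: for $X, S \in \mathbb{S}^n_+$,
$$\mathrm{Tr}(XS) = 0 \;\Longrightarrow\; \mathrm{Tr}\big(X^{1/2} S X^{1/2}\big) = 0 \;\Longrightarrow\; X^{1/2} S X^{1/2} = \big(S^{1/2}X^{1/2}\big)^T\big(S^{1/2}X^{1/2}\big) = 0 \;\Longrightarrow\; S^{1/2}X^{1/2} = 0 \;\Longrightarrow\; XS = SX = 0,$$
since a positive semidefinite matrix with zero trace vanishes. Conversely, $XS = 0$ clearly gives $\mathrm{Tr}(XS) = 0$ and, by transposition, $SX = 0$, hence $XS + SX = 0$; and $XS + SX = 0$ implies $2\,\mathrm{Tr}(XS) = \mathrm{Tr}(XS + SX) = 0$, which brings us back to the first case.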
Let $A$ and $B$ be two $n \times n$ matrices; their Kronecker product is denoted by $A \otimes B$. For a matrix $U \in \mathbb{R}^{n \times n}$, $\mathrm{vec}(U)$ is the vector in $\mathbb{R}^{n^2}$ given by stacking the columns of $U$ one underneath the other.
We make use of the functions $\mathrm{svec}$ and $\mathrm{smat}$ formally defined as follows.
Definition 2.
For $U \in \mathbb{S}^n$, $\mathrm{svec}(U)$ is given by
$$\mathrm{svec}(U) = \big(u_{11}, \sqrt{2}\,u_{21}, \dots, \sqrt{2}\,u_{n1}, u_{22}, \sqrt{2}\,u_{32}, \dots, \sqrt{2}\,u_{n2}, \dots, u_{nn}\big)^T \in \mathbb{R}^{n(n+1)/2}.$$
Further, the operator $\mathrm{smat}$ is the inverse operator of $\mathrm{svec}$. That is, $\mathrm{smat}(\mathrm{svec}(U)) = U$ for all $U \in \mathbb{S}^n$.
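As an illustration, the following NumPy sketch gives one possible implementation of the $\mathrm{svec}$ and $\mathrm{smat}$ operators and of the symmetric Kronecker product used below; the function names and the internal ordering are our own choices, consistent with the convention of Definition 2 for symmetric matrices.

```python
import numpy as np

def svec(U):
    """Scaled vectorization of a symmetric matrix: off-diagonal entries are
    multiplied by sqrt(2) so that svec(U) @ svec(V) == trace(U @ V) for symmetric U, V."""
    n = U.shape[0]
    r, c = np.triu_indices(n)
    scale = np.where(r == c, 1.0, np.sqrt(2.0))
    return scale * U[r, c]

def smat(u):
    """Inverse of svec: rebuild the symmetric matrix from its svec vector."""
    n = int(round((np.sqrt(8 * u.size + 1) - 1) / 2))
    r, c = np.triu_indices(n)
    scale = np.where(r == c, 1.0, 1.0 / np.sqrt(2.0))
    U = np.zeros((n, n))
    U[r, c] = scale * u
    return U + np.triu(U, 1).T

def skron(A, B):
    """Symmetric Kronecker product A (x)_s B, defined through its action
    (A (x)_s B) svec(U) = 0.5 * svec(A U B^T + B U A^T) for symmetric U."""
    n = A.shape[0]
    d = n * (n + 1) // 2
    K = np.empty((d, d))
    for j in range(d):
        e = np.zeros(d)
        e[j] = 1.0
        U = smat(e)
        K[:, j] = 0.5 * svec(A @ U @ B.T + B @ U @ A.T)
    return K
```

In particular, $(S \otimes_s I)\,\mathrm{svec}(\Delta X) = \tfrac{1}{2}\mathrm{svec}(S\Delta X + \Delta X S)$, which is exactly the form needed for the linearized complementarity equation below.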
In all the paper, the feasible sets and the strictly feasible sets of $(P)$ and $(D)$ are the subsets of $\mathbb{S}^n$ and $\mathbb{S}^n \times \mathbb{R}^m \times \mathbb{S}^n$, respectively, given by
$$\mathcal{F}_P = \{X \in \mathbb{S}^n_+ : \langle A_i, X\rangle = b_i,\ i = 1,\dots,m\}, \qquad \mathcal{F}_P^0 = \{X \in \mathcal{F}_P : X \succ 0\},$$
$$\mathcal{F}_D = \Big\{(X, y, S) : \sum_{i=1}^m y_i A_i + S - \mathcal{Q}(X) = C,\ S \succeq 0\Big\}, \qquad \mathcal{F}_D^0 = \{(X, y, S) \in \mathcal{F}_D : S \succ 0\}.$$
The optimal sets of $(P)$ and $(D)$ are the sets
$$\mathcal{P}^* = \{X \in \mathcal{F}_P : X \text{ is an optimal solution of } (P)\}$$
and
$$\mathcal{D}^* = \{(X, y, S) \in \mathcal{F}_D : (X, y, S) \text{ is an optimal solution of } (D)\}.$$
It is assumed that the two problems satisfy the following conditions:
- $\mathcal{F}_P^0 \times \mathcal{F}_D^0$ is nonempty.
- The matrices $A_i$, $i = 1,\dots,m$, are linearly independent.
- The linear transformation $\mathcal{Q}$ is monotone and self-adjoint, i.e., $\langle X, \mathcal{Q}(X)\rangle \geq 0$ and $\langle X, \mathcal{Q}(Y)\rangle = \langle \mathcal{Q}(X), Y\rangle$ for all $X, Y \in \mathbb{S}^n$.
The first assumption implies that:
- strong duality holds, i.e., the optimal values of $(P)$ and $(D)$ coincide;
- $\mathcal{P}^*$ and $\mathcal{D}^*$ are two nonempty bounded convex sets.
Meanwhile, the second assumption only ensures that the dual variables $y$ and $S$ are in one-to-one correspondence. By the third assumption, $(P)$ and $(D)$ are two convex optimization problems; therefore the set of primal and dual optimal solutions consists of all solutions of the following Karush-Kuhn-Tucker optimality system:
$$\left\{\begin{array}{l} \langle A_i, X\rangle = b_i,\quad i = 1,\dots,m,\quad X \succeq 0,\\[0.5ex] \sum_{i=1}^m y_i A_i + S - \mathcal{Q}(X) = C,\quad S \succeq 0,\\[0.5ex] \langle X, S\rangle = 0. \end{array}\right. \tag{1}$$
Therefore, by Lemma 1, finding an optimal solution of $(P)$ and $(D)$ is equivalent to solving the system
$$\left\{\begin{array}{l} \langle A_i, X\rangle = b_i,\quad i = 1,\dots,m,\quad X \succeq 0,\\[0.5ex] \sum_{i=1}^m y_i A_i + S - \mathcal{Q}(X) = C,\quad S \succeq 0,\\[0.5ex] XS = 0. \end{array}\right. \tag{2}$$
2.2. The central-path of CQSDO
The basic idea behind primal-dual IPAs is to replace the third equation in (2) by the parameterized equation
$$XS = \mu I,$$
with $\mu > 0$. Thus we consider the system
$$\left\{\begin{array}{l} \langle A_i, X\rangle = b_i,\quad i = 1,\dots,m,\quad X \succ 0,\\[0.5ex] \sum_{i=1}^m y_i A_i + S - \mathcal{Q}(X) = C,\quad S \succ 0,\\[0.5ex] XS = \mu I, \end{array}\right. \tag{3}$$
where $\mu$ is the centrality parameter. Under our assumptions, system (3) has a unique solution $(X(\mu), y(\mu), S(\mu))$ for each $\mu > 0$. The set
$$\{(X(\mu), y(\mu), S(\mu)) : \mu > 0\}$$
of $\mu$-centers is called the central path of $(P)$ and $(D)$. If $\mu$ tends to zero, then the limit of the central path exists and, since the limit point satisfies the complementarity condition, it yields an approximate optimal solution of $(P)$ and $(D)$ (e.g., [8, 10]).
2.3. The Newton search directions for CQSDO
Next, we want to define search directions that move toward the central path. We apply Newton's method to (3), for a given $\mu > 0$ and a (possibly infeasible) point $(X, y, S)$ with $X \succ 0$ and $S \succ 0$. Under the AHO symmetrization, i.e., replacing the third equation of (3) by $\tfrac{1}{2}(XS + SX) = \mu I$, the Newton direction $(\Delta X, \Delta y, \Delta S)$ at this point is the unique symmetric solution of the linear system:
$$\left\{\begin{array}{l} \langle A_i, \Delta X\rangle = (R_p)_i,\quad i = 1,\dots,m,\\[0.5ex] \sum_{i=1}^m \Delta y_i A_i + \Delta S - \mathcal{Q}(\Delta X) = R_d,\\[0.5ex] \tfrac{1}{2}\big(\Delta X\, S + S\, \Delta X + X \Delta S + \Delta S\, X\big) = R_c, \end{array}\right. \tag{4}$$
with
$$R_p = b - \big(\langle A_1, X\rangle, \dots, \langle A_m, X\rangle\big)^T,\qquad R_d = C + \mathcal{Q}(X) - \sum_{i=1}^m y_i A_i - S,\qquad R_c = \mu I - \tfrac{1}{2}(XS + SX).$$
The search direction obtained via system (4) is known as the AHO direction. It is worth mentioning that, besides the AHO direction, there are other important search directions such as those of Kojima et al. [16], Helmberg et al. [12], Monteiro, and Nesterov and Todd (NT) [19].
We will refer to the assignment
$$(X, y, S) := (X + \alpha \Delta X,\ y + \alpha \Delta y,\ S + \alpha \Delta S)$$
as a damped Newton step, where $\alpha \in (0, 1]$ is the step-size along this direction.
2.4. Computation of AHO directions
We have seen that the AHO direction is given by system (4). To compute this direction, we apply the $\mathrm{svec}$ operator to the three equations of (4). The first equation becomes
$$\mathcal{A}\,\mathrm{svec}(\Delta X) = R_p, \quad \text{such that} \quad \mathcal{A} = \big[\mathrm{svec}(A_1), \dots, \mathrm{svec}(A_m)\big]^T.$$
The second equation becomes
$$-Q\,\mathrm{svec}(\Delta X) + \mathcal{A}^T \Delta y + \mathrm{svec}(\Delta S) = \mathrm{svec}(R_d), \quad \text{such that} \quad Q \text{ is the matrix representation of } \mathcal{Q} \text{ in the } \mathrm{svec} \text{ basis}.$$
The third equation becomes
$$(S \otimes_s I)\,\mathrm{svec}(\Delta X) + (X \otimes_s I)\,\mathrm{svec}(\Delta S) = \mathrm{svec}(R_c),$$
where $\otimes_s$ denotes the symmetric Kronecker product.
Finally, we obtain a linear system in the unknowns $(\mathrm{svec}(\Delta X), \Delta y, \mathrm{svec}(\Delta S))$, which can be written in the following matrix form:
$$\begin{pmatrix} \mathcal{A} & 0 & 0\\ -Q & \mathcal{A}^T & I\\ S \otimes_s I & 0 & X \otimes_s I \end{pmatrix} \begin{pmatrix} \mathrm{svec}(\Delta X)\\ \Delta y\\ \mathrm{svec}(\Delta S) \end{pmatrix} = \begin{pmatrix} R_p\\ \mathrm{svec}(R_d)\\ \mathrm{svec}(R_c) \end{pmatrix}. \tag{5}$$
We note that system (5) is a linear system whose block matrix is a square matrix of order $m + n(n+1)$; by using any linear system solver we get $(\mathrm{svec}(\Delta X), \Delta y, \mathrm{svec}(\Delta S))$. Then, we apply the reciprocal operator $\mathrm{smat}$ to this result in order to recover $(\Delta X, \Delta S)$.
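As a minimal sketch (not the authors' implementation), the routine below assembles and solves system (5) with NumPy, reusing the svec, smat and skron helpers sketched after Definition 2; representing $\mathcal{Q}$ by a callable Q_op is our own convention.

```python
import numpy as np
# assumes svec, smat and skron from the sketch after Definition 2 are in scope

def aho_direction(X, y, S, C, A_list, b, Q_op, mu):
    """Solve system (5) for the AHO direction (dX, dy, dS)."""
    n = X.shape[0]
    m = len(A_list)
    d = n * (n + 1) // 2
    I = np.eye(n)

    Amat = np.vstack([svec(Ai) for Ai in A_list])            # m x d constraint matrix
    # matrix of Q in the svec basis: column j = svec(Q(smat(e_j)))
    Qmat = np.column_stack([svec(Q_op(smat(np.eye(d)[:, j]))) for j in range(d)])

    Rp = b - Amat @ svec(X)                                  # primal residual
    Rd = C + Q_op(X) - sum(yi * Ai for yi, Ai in zip(y, A_list)) - S
    Rc = mu * I - 0.5 * (X @ S + S @ X)                      # centrality residual

    K = np.block([[Amat,        np.zeros((m, m)), np.zeros((m, d))],
                  [-Qmat,       Amat.T,           np.eye(d)],
                  [skron(S, I), np.zeros((d, m)), skron(X, I)]])   # order m + n(n+1)
    rhs = np.concatenate([Rp, svec(Rd), svec(Rc)])
    sol = np.linalg.solve(K, rhs)
    return smat(sol[:d]), sol[d:d + m], smat(sol[d + m:])
```

For larger instances, one would of course exploit the block structure of (5), e.g., by eliminating $\mathrm{svec}(\Delta S)$, rather than solving the full system directly.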
2.5. Computation of a step-size
For computing the step-size $\alpha$ at each iteration such that
$$X + \alpha \Delta X \succ 0 \quad \text{and} \quad S + \alpha \Delta S \succ 0,$$
we need to determine the maximum allowed step-sizes along $\Delta X$ and $\Delta S$. Let $\hat{\alpha}$ and $\hat{\beta}$ be the maximum possible step-sizes in the directions $\Delta X$ and $\Delta S$, respectively. The natural step length for a Newton direction is $1$, but a step length of $1$ may cause us to leave the positive definite region to which we are constrained. The maximum step length $\hat{\alpha}$ in the direction $\Delta X$ can easily be seen to be the smallest positive number such that
$$\det\big(X + \hat{\alpha}\, \Delta X\big) = 0,$$
or $\hat{\alpha} = +\infty$ if no positive solution exists. Similarly, the maximum step length $\hat{\beta}$ in the direction $\Delta S$ is defined as the smallest positive number such that
$$\det\big(S + \hat{\beta}\, \Delta S\big) = 0,$$
or $\hat{\beta} = +\infty$ if no positive solution exists. Our maximum possible step length is then the minimum of these two numbers:
$$\bar{\alpha} = \min(\hat{\alpha}, \hat{\beta}).$$
We can compute $\hat{\alpha}$ and $\hat{\beta}$ as the minimum positive eigenvalues of the respective generalized eigenvalue problems
$$X v = \hat{\alpha}\,(-\Delta X)\, v \quad \text{and} \quad S v = \hat{\beta}\,(-\Delta S)\, v.$$
Since algorithms for solving generalized eigenvalue problems prefer to have the matrix on the right-hand side be positive definite [17], it is better computationally to find the maximum eigenvalues $\lambda_{\max}^X$ and $\lambda_{\max}^S$ of the respective problems
$$-\Delta X\, v = \lambda\, X v \quad \text{and} \quad -\Delta S\, v = \lambda\, S v.$$
Thus
$$\hat{\alpha} = \begin{cases} 1/\lambda_{\max}^X, & \text{if } \lambda_{\max}^X > 0,\\ +\infty, & \text{otherwise}, \end{cases}$$
and
$$\hat{\beta} = \begin{cases} 1/\lambda_{\max}^S, & \text{if } \lambda_{\max}^S > 0,\\ +\infty, & \text{otherwise}. \end{cases}$$
Once these two allowed maximum step-sizes are determined, the step-size is taken as a fixed fraction $\rho \in (0, 1)$ of their minimum, capped at $1$:
$$\alpha = \min\big(1,\ \rho \min(\hat{\alpha}, \hat{\beta})\big).$$
For specifying the barrier parameter $\mu$, it is easily seen from the last equation in (3) that $\mathrm{Tr}(XS) = n\mu$, so that $\mu = \langle X, S\rangle / n$.
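A possible SciPy realization of this rule is sketched below; scipy.linalg.eigh handles the generalized symmetric eigenvalue problem with a positive definite matrix on the right-hand side, as described above, and the damping factor rho is an assumed safeguard parameter, not necessarily the value used in our experiments.

```python
import numpy as np
from scipy.linalg import eigh

def max_step_size(M, dM):
    """Largest t such that M + t*dM stays positive definite:
    the reciprocal of the largest eigenvalue of -dM v = lambda M v."""
    lam_max = eigh(-dM, M, eigvals_only=True).max()
    return 1.0 / lam_max if lam_max > 0 else np.inf

def step_size(X, dX, S, dS, rho=0.95):
    """Alternative 1: a fraction rho of the smaller maximum step, capped at 1."""
    return min(1.0, rho * min(max_step_size(X, dX), max_step_size(S, dS)))
```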
3. An infeasible path-following algorithm for CQSDO
We present an infeasible path-following interior-point algorithm for computing an optimal solution of CQSDO that uses the primal-dual interior-point framework proposed by many authors. The algorithm starts with matrices that are not necessarily feasible but only symmetric positive definite, and at each iteration these matrices are updated until the optimality conditions are satisfied within a desired tolerance. We stop the algorithm when the primal and dual residuals and the duality gap $\langle X, S\rangle$ are small enough.
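The generic loop can then be sketched as follows; aho_direction and step_size refer to the illustrative routines given in Subsections 2.4 and 2.5, and the centering factor sigma, the tolerance and the residual-based stopping test are our own assumptions in the spirit of the description above.

```python
import numpy as np
# aho_direction and step_size are the illustrative helpers from Subsections 2.4 and 2.5

def ifipa_cqsdo(C, A_list, b, Q_op, X0, y0, S0, eps=1e-6, sigma=0.25, max_iter=200):
    """Generic infeasible primal-dual path-following loop for CQSDO (a sketch)."""
    n = C.shape[0]
    X, y, S = X0.copy(), np.asarray(y0, dtype=float).copy(), S0.copy()
    A_of = lambda Z: np.array([np.trace(Ai @ Z) for Ai in A_list])   # A(Z)

    for k in range(max_iter):
        Rp = b - A_of(X)                                             # primal infeasibility
        Rd = C + Q_op(X) - sum(yi * Ai for yi, Ai in zip(y, A_list)) - S
        gap = np.trace(X @ S)                                        # duality measure <X, S>
        if max(np.linalg.norm(Rp), np.linalg.norm(Rd), gap) <= eps:
            break                                                    # residuals and gap small enough

        mu = sigma * gap / n                                         # target point on the central path
        dX, dy, dS = aho_direction(X, y, S, C, A_list, b, Q_op, mu)
        alpha = step_size(X, dX, S, dS)                              # keeps X and S positive definite
        X, y, S = X + alpha * dX, y + alpha * dy, S + alpha * dS

    return X, y, S, k
```

Starting from, e.g., $X^0 = S^0 = I$ and $y^0 = 0$, the loop only requires positive definiteness of the initial matrices, not feasibility, which is precisely the point of the infeasible approach.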
4. Numerical results
In this section, we report on an implementation of Algorithm 1 in the MATLAB environment, run on a personal computer. In the implementation, $\epsilon$ denotes our tolerance and $(X^0, y^0, S^0)$ stands for the initial guess of the algorithm. In the tables of numerical results below, the number of iterations and the elapsed time of the algorithm are denoted by "Iter" and "CPU", respectively. Our test examples are inspired by different well-known optimization problems. For each example, $(X^*, y^*, S^*)$ denotes an approximate primal-dual optimal solution of the CQSDO problem.
Example 1.
Consider the CQSDO problem whose primal-dual pair $(P)$ and $(D)$ have the following data:
The initial point is defined as follows
The obtained primal-dual optimal solution is
Example 2 ([4]).
The semidefinite least squares problem (SDLS) is a convex optimization problem which is defined as follows:
$$\min_{X}\ \tfrac{1}{2}\|X - D\|_F^2 \quad \text{s.t.} \quad \langle A_i, X\rangle = b_i,\ i = 1,\dots,m, \quad X \succeq 0,$$
where $D \in \mathbb{S}^n$ and $b \in \mathbb{R}^m$. Because
$$\tfrac{1}{2}\|X - D\|_F^2 = \tfrac{1}{2}\langle X, X\rangle - \langle D, X\rangle + \tfrac{1}{2}\|D\|_F^2,$$
the SDLS can be restated as a CQSDO problem with
$$\mathcal{Q} = \mathcal{I} \ (\text{the identity transformation}) \quad \text{and} \quad C = -D,$$
up to the constant term $\tfrac{1}{2}\|D\|_F^2$.
For the size , the data of the SDLS is stated as follows:
The initial point is defined as follows
The approximated primal-dual optimal solution is
Example 3.
This example is reformulated from the following nearest correlation matrix problem (NCMP):
$$\min_{X}\ \tfrac{1}{2}\|X - D\|_F^2 \quad \text{s.t.} \quad X_{ii} = 1,\ i = 1,\dots,n, \quad X \succeq 0,$$
where $D \in \mathbb{S}^n$ is given. Here, the corresponding CQSDO data are
$$\mathcal{Q} = \mathcal{I},\qquad C = -D,\qquad A_i = e_i e_i^T,\ i = 1,\dots,n,\qquad b = e.$$
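As an illustration of this mapping (the SDLS of Example 2 is handled analogously), here is a hypothetical small instance in NumPy; the matrix D below is a random symmetric matrix and is not the data used in this example.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4                                               # hypothetical size, not the one of this example
D = rng.standard_normal((n, n))
D = 0.5 * (D + D.T)                                 # a random symmetric "data" matrix

# CQSDO data for  min 1/2 ||X - D||_F^2  s.t.  X_ii = 1, X >= 0:
C = -D                                              # linear term (the constant 1/2 ||D||_F^2 is dropped)
Q_op = lambda Z: Z                                  # Q is the identity transformation
A_list = [np.outer(np.eye(n)[i], np.eye(n)[i]) for i in range(n)]   # A_i = e_i e_i^T
b = np.ones(n)                                      # diag(X) = e
```

These data can then be passed directly to a driver of the form sketched in Section 3.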
For the size , , the data of NCMP is given by
,
The initial point is defined as follows:
An approximated optimal solution is:
Example 4.
This example of CQSDO is reformulated from the computation of the smallest eigenvalue of a symmetric positive definite matrix $D$, denoted by (EVP):
$$\mathrm{(EVP)} \qquad \lambda_{\min}(D) = \min\big\{x^T D x : \|x\| = 1\big\}.$$
The EVP becomes a CQSDO (in fact an SDO) as follows:
$$\min_{X}\ \langle D, X\rangle \quad \text{s.t.} \quad \langle I, X\rangle = 1, \quad X \succeq 0,$$
since
$$\lambda_{\min}(D) = \min\big\{\langle D, X\rangle : \mathrm{Tr}(X) = 1,\ X \succeq 0\big\},$$
and the corresponding CQSDO data are
$$\mathcal{Q} = 0,\qquad C = D,\qquad A_1 = I,\qquad b = 1,$$
with $m = 1$. For this example, we consider the following EVP where its data is given by
The initial point is defined as follows: and
An approximated optimal primal-dual solution is:
Finally, we obtain the approximate smallest eigenvalue of $D$ as the optimal objective value. With the help of MATLAB, the exact spectrum of $D$, denoted by $\mathrm{Sp}(D)$, is given by
So it is clear that our algorithm gives the exact smallest eigenvalue of $D$.
Example 5 ([12]).
This example is reformulated from the following max-cut problem (MCP), given here through its standard semidefinite relaxation [12]:
$$\max_{X}\ \tfrac{1}{4}\langle L, X\rangle \quad \text{s.t.} \quad X_{ii} = 1,\ i = 1,\dots,n, \quad X \succeq 0,$$
where its reformulation as a CQSDO (with $\mathcal{Q} = 0$) is as follows:
$$\min_{X}\ \big\langle -\tfrac{1}{4}L, X\big\rangle \quad \text{s.t.} \quad \langle A_i, X\rangle = 1,\ A_i = e_i e_i^T,\ i = 1,\dots,n, \quad X \succeq 0,$$
with $L = \mathrm{Diag}(We) - W$, where $W$ is the weight matrix of the graph; the matrix $L$ is called the Laplacian matrix associated with the graph. For the size $n = 6$, we consider the following MCP where
The initial point is defined as follows
and
An approximated optimal primal-dual solution is:
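To illustrate how such an instance is set up, the following hypothetical sketch builds the Laplacian of a small weighted graph and the corresponding CQSDO (here SDO, $\mathcal{Q} = 0$) data; the weight matrix W is illustrative only and is not the data of this example.

```python
import numpy as np

W = np.array([[0.0, 1.0, 1.0],     # hypothetical symmetric weight matrix of a 3-node graph
              [1.0, 0.0, 2.0],
              [1.0, 2.0, 0.0]])
n = W.shape[0]
L = np.diag(W @ np.ones(n)) - W    # Laplacian matrix of the graph

# SDO data (Q = 0) for  max 1/4 <L, X>  s.t.  X_ii = 1, X >= 0, written as a minimization:
C = -0.25 * L
Q_op = lambda Z: np.zeros_like(Z)
A_list = [np.outer(np.eye(n)[i], np.eye(n)[i]) for i in range(n)]
b = np.ones(n)
```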
In Table 1, the number of iterations and the elapsed time for each problem are summarized.
Table 1. Number of iterations and CPU time (in seconds) obtained by Algorithm 1.

| Problem | Size | Iter | CPU |
|---|---|---|---|
| Example 1 | (4, 3) | 11 | 0.0528 |
| Example 2 | (7, 5) | 15 | 0.3887 |
| Example 3 | (8, 4) | 12 | 0.5901 |
| Example 4 | (9, 1) | 13 | 0.0506 |
| Example 5 | (6, 6) | 40 | 0.0279 |
4.1. Comparison study
In this subsection, in order to compare the obtained numerical results, we consider, besides the step-size rule described in Subsection 2.5 (denoted Alternative 1), a second alternative.
4.1.1. Alternative 2
The latter was suggested by Touil et al. [23] for solving SDO problems using a feasible starting point. Here we adapt it to our IFIPA.
This alternative is described as follows. Let $X = LL^T$ be the Cholesky factorization of $X$ and let $\lambda_1, \dots, \lambda_n$ be the eigenvalues of $L^{-1} \Delta X\, L^{-T}$. The step-size with respect to the primal variable is then taken as
$$\hat{\alpha} = \begin{cases} \min\Big(1,\ \dfrac{-(1 - \epsilon)}{\lambda_{\min}}\Big), & \text{if } \lambda_{\min} < 0,\\[1.5ex] 1, & \text{otherwise}, \end{cases}$$
where $\epsilon$ is a small positive real.
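A possible NumPy realization of this Cholesky-based rule, under the reconstruction given above, is the following; the safeguard value eps_s and the cap at 1 are our reading of the description, not necessarily the exact choices of [23].

```python
import numpy as np

def max_step_cholesky(M, dM, eps_s=1e-3):
    """Alternative 2: a safe step keeping M + t*dM positive definite, computed
    from the eigenvalues of L^{-1} dM L^{-T}, where M = L L^T (Cholesky)."""
    L = np.linalg.cholesky(M)
    T = np.linalg.solve(L, np.linalg.solve(L, dM).T).T   # T = L^{-1} dM L^{-T}
    lam_min = np.linalg.eigvalsh(0.5 * (T + T.T)).min()  # symmetrize against round-off
    return 1.0 if lam_min >= 0 else min(1.0, -(1.0 - eps_s) / lam_min)
```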
We mention that the same formula is used with respect to the dual variable $S$. The numerical results obtained with the two alternatives are presented in Table 2. For consistency, all experiments were conducted under the same parameter settings to ensure a fair comparison.
Table 2. Comparison between Alternative 1 and Alternative 2 (iterations and CPU time in seconds).

| Problem | Size | | Alter 1 | Alter 2 |
|---|---|---|---|---|
| Example 1 | (4, 3) | Iter | 11 | 23 |
| | | CPU | 0.0528 | 0.0877 |
| Example 2 | (7, 5) | Iter | 15 | 24 |
| | | CPU | 0.3887 | 0.6201 |
| Example 3 | (8, 4) | Iter | 12 | 21 |
| | | CPU | 0.5901 | 1.1124 |
| Example 4 | (9, 1) | Iter | 13 | 32 |
| | | CPU | 0.0506 | 0.1536 |
| Example 5 | (6, 6) | Iter | 40 | 67 |
| | | CPU | 0.0279 | 0.0651 |
4.2. Analysis
The results indicate that Alternative 1 achieves faster convergence on all tested problem types, both in terms of iterations and of CPU time. Alternative 2 remains competitive, particularly on problems with simpler structure or fewer constraints, but it requires more iterations and longer CPU times on every tested example. Overall, these findings show that Alternative 1 (Algorithm 1) offers significant advantages for CQSDO problems, balancing computational efficiency with robustness. This comparison confirms that the algorithm is effective and suggests that it could be useful for a wide range of tasks in semidefinite programming.
5. Conclusion and future work
In this paper, we implemented an infeasible primal-dual IPA for solving CQSDO, based on the AHO search direction and on efficient step-sizes, for computing an approximate optimal solution. The advantage of this algorithm is that it can be started with any arbitrary symmetric positive definite initial matrix. The numerical results obtained on a set of problems taken from different benchmark examples, such as NCMP, SDLS, the max-cut problem and eigenvalue problems, are very encouraging. Finally, using other symmetrization schemes for computing the search directions remains a promising topic for future research.
References
- [1] M. Achache, A new primal-dual path-following method for convex quadratic programming, Comput. Appl. Math., 25 (2006), pp. 97–110. https://doi.org/10.1590/S0101-82052006000100005
- [2] M. Achache, A weighted full-Newton step primal-dual interior point algorithm for convex quadratic optimization, Stat. Optim. Inf. Comput, 2 (2014), pp. 21–32. https://doi.org/10.19139/soic.v2i1.21
- [3] M. Achache, A weighted path-following method for the linear complementarity problem, Studia Univ. Babeş-Bolyai Inf., 49 (2004) no. 1, pp. 61–73.
- [4] M. Achache, Ch. Daili, An interior point algorithm for semidefinite least squares problems, Applications of Mathematics, 3 (2022), pp. 371–391. https://doi.org/10.21136/AM.2021.0217-20
- [5] M. Achache, Complexity analysis and numerical implementation of a short-step primal-dual algorithm for linear complementarity problems, Appl. Math. Comput., 216 (2010), pp. 1889–1895. https://doi.org/10.1016/j.amc.2010.03.015
- [6] M. Achache, L. Guerra, A full Nesterov Todd-step feasible primal dual interior point algorithm for convex quadratic semi-definite optimization, Appl. Math. Comput., 231 (2014), pp. 581–590. https://doi.org/10.1016/j.amc.2013.12.070
- [7] M. Achache, N. Tabchouche, A full-Newton step feasible interior-point algorithm for monotone horizontal linear complementarity problems, Optim. Lett., 13 (2019), pp. 1039–1057. https://doi.org/10.1007/s11590-018-1328-9
- [8] F. Alizadeh, J.P.A. Haeberly, M.L. Overton, Primal-dual interior-point methods for semidefinite programming. Convergence rates, stability and numerical results, SIAM J. Optim., 8 (1998) 3, pp. 746–768. https://doi.org/10.1137/S1052623496304700
- [9] Y.Q. Bai, F.Y. Wang, X.W. Luo, A polynomial time interior point algorithm for convex quadratic semidefinite optimization, RAIRO, Oper. Res., 44 (2010), pp. 251–265. https://doi.org/10.1051/ro/2010016
- [10] Zs. Darvay, New interior-point algorithms for linear optimization, Advanced Modeling and Optimization, 5 (2003) no. 1, pp. 51–92.
- [11] E. De Klerk, Interior point methods for semidefinite programming. Master of Science in the Faculty of Engineering University of Pretoria, (1997). https://pure.uvt.nl/ws/portalfiles/portal/844453/thesis.pdf
- [12] C. Helmberg, F. Rendl, R.J. Vanderbei, H. Wolkowicz, An interior-point method for semidefinite programming, SIAM J. Optim., 6 (1996) 2, pp. 342–361. https://doi.org/10.1137/0806020
- [13] M. Seetharama Gowda, Yoon Song, On semidefinite linear complementarity problems, Math. Progr., 88 (2000), pp. 575–587. https://doi.org/10.1007/PL00011387
- [14] W. Grimes, Path-following interior-point algorithm for monotone linear complementarity problems, Asian-Eur. J. Math., 15 (2021) 9, https://doi.org/10.1142/S1793557122501704
- [15] L. Guerra, A class of new search directions for full-NT step feasible interior point method in semidefinite optimization, RAIRO Oper. Res., 56 (2022), pp. 3955–3971. https://doi.org/10.1051/ro/2022192
- [16] M. Kojima, M. Shida, S. Shindoh, Search directions in the SDP and monotone SDLCP: Generalization and inexact computation, Math. Progr., 85 (1999), pp. 51–80.
- [17] B. Krislock, Numerical solution of semidefinite constrained least squares problems, University Regina, (2000). http://hdl.handle.net/2429/14126
- [18] N. Moussaoui, M. Achache, A weighted-path following interior-point algorithm for convex quadratic optimization based on modified search directions, Stat. Optim. Inf. Comput, 10 (2022) 3, pp. 873–889. https://doi.org/10.19139/soic-2310-5070-1385
- [19] Y.E. Nesterov, M.J. Todd, Primal-dual interior-point methods for self-scaled cones, SIAM J. Optim., 8 (1998), pp. 324–364. https://doi.org/10.1137/S1052623495290209
- [20] J.W. Nie, Y.X. Yuan, A potential reduction algorithm for an extended SDP problem, Sci. China A, Math., 43 (2000) 1, pp. 35–46.
- [21] X. Quian, Comparison between an infeasible interior point algorithm and a homogeneous self dual algorithm for semidefinite programming, New Mexico Institute of Mining and Technology. Socorro, New Mexico (2006).
- [22] K.C. Toh, M.J. Todd, R.H. Tütüncu, SDPT3-a MATLAB package for semidefinite programming, Optim. Methods Softw., 11 (1999), pp. 545–581. https://doi.org/10.1080/10556789908805762
- [23] I. Touil, D. Benterki, A. Yassine, A feasible primal–dual interior point method for linear semidefinite programming, J. Comput. Appl. Math., 312 (2017), pp. 216–230. https://doi.org/10.1016/j.cam.2016.05.008
- [24] S. Wright, Primal-Dual Interior-Point Methods, SIAM, Philadelphia, PA, 1997. https://epubs.siam.org/doi/pdf/10.1137/1.9781611971453.bm
- [25] Y. Ye, Interior point algorithms: Theory and Analysis, 44, John Wiley & Sons, 2011.
- [26] F. Zhang, Matrix Theory: Basic Results and Techniques, Springer. ISBN 0-387-98696-0.
- [27] Y. Zhang, On the convergence of a class of infeasible interior-point methods for horizontal linear complementarity problem, SIAM J. Optim., 4 (1994) 1, pp. 208–227. https://doi.org/10.1137/0804012