Combined approach to solve the linear complementarity problem

Z. Kebbiche\(^1\) H. Roumili\(^2\)

June 27, 2016.

\(^1\)Department of Mathematics, University of Setif, Algeria, e-mail: kebbichez@yahoo.fr.

\(^2\)Department of Mathematics, University of Setif, Algeria, e-mail: H_Roumili@yahoo.fr. The work of this author has been supported by LMNF Laboratory.

In this paper, we present a new approach to solving the linear complementarity problem (LCP). We combine the ideas of Lemke’s method and its variants, taking advantage of the benefits of each in order to improve the convergence of these algorithms.

Numerical simulations and comparative results of the new approach are provided.

Since convex quadratic programs and linear programs can be written as an (LCP), they too can be solved by our new approach.

MSC. 90C33

Keywords. Linear complementarity problem, Lemke’s algorithm, quadratic convex program, linear program, notion of pivot.

1 Introduction

The study of complementarity problems began with the work of R.W. Cottle in the 1960s [3]. Problems of this type arise frequently in engineering applications, game theory, and economics. The (LCP) can be seen as a particular case of variational inequalities. Moreover, the KKT conditions for linear and quadratic programming problems can be written as a linear complementarity problem, hence the algorithm presented in this paper can be used to solve both linear and quadratic programming problems.

An important and long-established tool for solving the (LCP) is Lemke’s method (1968), based on the principle of the simplex method introduced by Dantzig in 1951. It chooses the entering variable by a complementary pivot rule: the entering variable in a step is always the complement of the dropping variable in the previous step. An artificial variable \(z_{0}\) associated with the column vector \(-e\) (where \(e\) is the column vector of all 1’s in \(\mathbb {R} ^{n}\)) is introduced into the (LCP), and the idea is to drive the variable \(z_{0}\) to \(0\) by a series of complementary pivots, thus obtaining a solution of the (LCP). This method converges in a finite number of iterations when the problem admits a solution.

In this paper, we present two variants of Lemke’s method:

Variant 1: the idea is to replace the column vector associated with \(z_{0}\) by any strictly positive vector \(d.\)

Variant 2: the idea is to replace the column vector associated with \(z_{0}\) by the column vector \(M_{.\; t}\) of the matrix \(M\) such that \(M_{.\; t}{\gt}0.\)

The main difficulty of variant 1 lies in the determination of a strictly positive vector \(d\); this is a very restrictive condition. The question is how to choose the components of \(d\).

Concerning variant 2, the condition assumed on the matrix \(M\) restricts the classes of (LCP) that can be treated with this variant (we cannot apply it to linear or quadratic problems, or to an (LCP) with an arbitrary matrix, ...) because there may be no strictly positive column.

Our goal in this paper is to propose a new approach that combines the ideas of these variants with Lemke’s method, taking advantage of the benefits of each in order to improve the convergence of these algorithms.

The paper is organized as follows:

Firstly, we present Lemke’s method. In section 3, we present the variants of Lemke’s method. Numerical simulations and comparative experimental results between the two variants and Lemke’s algorithm are given in section 4. The new approach is described in section 5 with several examples of applications. Section 6 presents some conclusions.

2 Lemke’s method

We consider the linear complementarity problem

\begin{align} (LCP)\qquad \text{Find }& \left( w,z\right) \in \mathbb {R} ^{n}\times \mathbb {R} ^{n}\text{ such that} \nonumber \\ w& =Mz+q, \\ \left( w,z\right) & \geq 0\\ w^{t}z& =0.\end{align}

where \(M\) is a given square matrix of order \(n\) and \(q\) is a column vector in \(\mathbb {R} ^{n}.\)

2.1 Preliminaries

Let \(w_{i}\) (resp. \(z_{i}\)) denote the \(i\)-th component of the vector \(w\) (resp. \(z\)). The component \(w_{i}\) (resp. \(z_{i}\)) is called a basic variable if \(w_{i}\geq 0\) (resp. \(z_{i}\geq 0\)). If \(w_{i}\) (resp. \(z_{i}\)) is a nonbasic variable, then \(w_{i}=0\) (resp. \(z_{i}=0\)).

Definition 1

The couple \((w,z)\) is called a complementarity basic feasible solution (CBFS) if it satisfies the following conditions:

  • \((w,z)\) is a feasible solution to (1) and (2),

  • one and only one component of \((w_{i},z_{i})\) is a basic variable for \(1\leq i\leq n.\)

Remark 2

1) If \(q\geq 0\), then we immediately have a complementarity basic feasible solution (CBFS) of the (LCP) by letting \(\left( w,z\right) =\left( q,0\right) .\)

2) If \(q\ngeq 0\), an artificial variable \(z_{0}\) associated with the column vector \(-e\) (where \(e\) is the column vector of all 1’s in \(\mathbb {R} ^{n}\)) is introduced into the (LCP). □

The idea is to drive the variable \(z_{0}\) to \(0\) by a series of complementary pivots, by solving the following problem

\begin{align} (LCP)_{z_{0}}\ \text{Find }& \left( w,z\right) \in \mathbb {R} ^{n}\times \mathbb {R} ^{n}\text{ such that} \nonumber \\ w-Mz-ez_{0}& =q, \\ \left( w,z\right) & \geq 0\text{ ,}\ z_{0}\geq 0, \\ w_{i}z_{i}& =0\text{ , for }i=1,...,n.\end{align}

We thus obtain a solution of the (LCP). The starting value of \(z_{0}\) is defined by:

\[ z_{0}=\max \left\{ -q_{i}:i=1,...,n\right\} . \]

Then \(\left( w,z\right) =\left( q+ez_{0},0\right) \) is a starting solution to \(\left( \text{LCP}\right) _{z_{0}}\).
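As a minimal numerical sketch (Python with NumPy; the particular \(q\) below is only an illustration), the artificial variable and the starting almost-complementary point can be computed as:

```python
import numpy as np

q = np.array([-1.0, -1.0, -1.0])   # example right-hand side with q not >= 0
e = np.ones_like(q)

z0 = np.max(-q)                    # z0 = max{ -q_i : i = 1,...,n }
w = q + e * z0                     # starting point (w, z) = (q + e*z0, 0)
z = np.zeros_like(q)

# w is nonnegative by construction, so the starting point is feasible
assert np.all(w >= 0) and z0 >= 0
```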

Definition 3

\((w,z,z_{0})\) is called an almost-complementarity basic feasible solution of \(\left( \text{LCP}\right) _{z_{0}}\) if it satisfies the following conditions:

  • \((w,z,z_{0})\) is a feasible solution to (4) and (5),

  • neither \(w_{s}\) nor \(z_{s}\) is basic for some \(s\in \left\{ 1,...,n\right\} ,\)

  • \(z_{0}\) is basic, and \(\ \)exactly one variable from each complementary pair \((w_{i},z_{i})\) is basic for \(i=1,...,n\) and \(i\neq s\).

2.2 Description of Lemke’s algorithm

This algorithm was introduced by Lemke (1968) and is known as the complementary pivot algorithm because it chooses the entering variable by a complementary pivot rule: the entering variable in a step is always the complement of the dropping variable in the previous step. The dropping variable has to be determined according to the minimum ratio test to guarantee that the new basis obtained after the pivot step is also feasible.

Algorithm 1
  • Data. \(q\in \mathbb {R} ^{n}\), \(M\in \mathbb {R} ^{n\times n}.\)

  • Initialization.

    • If \(q\geq 0\) stop. \(\left( w,z\right) =\left( q,0\right) \) is a solution of the \(\left( \text{LCP}\right) \) .

    • Else determine \(s\in \left\{ 1,...,n\right\} \) satisfying \(q_{s}=\min \left\{ q_{i}:1\leq i\leq n\right\} ,\) \(q_{s}{\lt}0\). We perform a pivot step with the column vector of \(z_{0}\) as the pivot column and the \(s\)-th row as the pivot row. \(w_{s}\) drops from the basis vector and \(z_{0}\) is the entering variable; put \(y_{s}=z_{s}.\)

  • Iteration.

  • Step 1: Let \(h_{s}\) be the column vector corresponding to the variable \(y_{s\text{ }}\)then

    • If \(h_{s}\leq 0\) go to step \(4.\)

    • Else determine the set \(I=\left\{ i:h_{s}\left[ i\right] {\gt}0\right\} ,\) then choose \(r\) such that \(\frac{\overline{q}_{r}}{h_{s}\left[ r\right] }=\underset {i\in I}{\min }\left\{ \frac{\overline{q}_{i}}{h_{s}\left[ i\right] }\right\} \) (the vector \(\overline{q}\) denotes the right-hand-side column after pivoting).

    • If the basic variable in the row \(r\) is \(z_{0}\) go to step \(3.\)

    • Else go to step 2.

  • Step 2: The basic variable in the row \(r\) is either \(w_{k}\) or \(z_{k}\) for \(k\neq s,\) so we perform a pivot step with the row \(r\) as the pivot row and the column of \(y_{s}\) as the pivot column. If \(w_{k}\left( \text{respectively }z_{k}\right) \) drops from the basis, put \(y_{s}=z_{k}\left( \text{resp. }y_{s}=w_{k}\right) \) and return to step \(1.\)

  • Step 3: We perform a pivot step with the row of \(z_{0}\) as the pivot row and the column of \(y_{s}\) as the pivot column, so \(z_{0}\) leaves the basis and we obtain a solution of the (LCP).

  • Step 4: Since \(h_{s}\leq 0\) we stop with a ray of termination (RT):

    \(R=\left\{ \left( w,z,z_{0}\right) +\lambda h:\lambda \geq 0\right\} ,\) where \(\left( w,z,z_{0}\right) \) is an almost-complementarity basic feasible solution of \(\left( \text{LCP}\right) _{z_{0}}\) and the components of \(h\) are defined as

    \(\begin{cases} 1,& \text{for the row corresponding to \ }y_{s} \\ -h_{s},& \text{for rows corresponding to basic variables } \\ 0,& \text{elsewhere}\end{cases}\)

  • End.
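The pivot steps above can be sketched as a dense-tableau implementation in Python with NumPy. This is an illustrative sketch, not the authors’ code: it uses a plain minimum ratio test with a small numerical tolerance and omits the lexicographic anti-cycling safeguards, so it is only reliable on nondegenerate problems.

```python
import numpy as np

def lemke(M, q, max_iter=100):
    """Complementary pivot sketch for w = Mz + q, (w, z) >= 0, w^t z = 0.

    Tableau variable indexing: w_i -> i, z_i -> n+i, z_0 -> 2n.
    Returns (w, z), or None on ray termination.
    """
    M, q = np.asarray(M, float), np.asarray(q, float)
    n = len(q)
    if np.all(q >= 0):                       # Remark 2, case 1: (q, 0) solves it
        return q.copy(), np.zeros(n)

    # tableau [ I | -M | -e | q ] for the system  w - Mz - e z_0 = q
    T = np.hstack([np.eye(n), -M, -np.ones((n, 1)), q.reshape(-1, 1)])
    basis = list(range(n))                   # initially all w_i are basic

    def pivot(r, c):
        T[r] /= T[r, c]
        for i in range(n):
            if i != r:
                T[i] -= T[i, c] * T[r]

    s = int(np.argmin(q))                    # row of the most negative q_i
    pivot(s, 2 * n)                          # z_0 enters, w_s drops
    basis[s] = 2 * n
    entering = n + s                         # complement of w_s is z_s

    for _ in range(max_iter):
        h = T[:, entering]
        if np.all(h <= 1e-12):
            return None                      # ray termination (step 4)
        rows = [i for i in range(n) if h[i] > 1e-12]
        r = min(rows, key=lambda i: T[i, -1] / h[i])   # minimum ratio test
        leaving = basis[r]
        pivot(r, entering)
        basis[r] = entering
        if leaving == 2 * n:                 # z_0 left the basis: solution found
            w, z = np.zeros(n), np.zeros(n)
            for i, b in enumerate(basis):
                (w if b < n else z)[b % n] = T[i, -1]
            return w, z
        entering = leaving + n if leaving < n else leaving - n   # complement
    raise RuntimeError("iteration limit reached (possible cycling)")

# Example 6: the sketch stops with w = (0, 1/3, 1/7), z = (1/21, 0, 0)
w, z = lemke([[21, 0, 0], [28, 14, 0], [24, 24, 12]], [-1, -1, -1])
```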

2.3 Convergence

This algorithm is guaranteed to work under certain nondegeneracy assumptions and when \(M\) satisfies certain properties.

Theorem 4

We suppose that \(M\in CPP\) (\(M\) is copositive plus) and that each feasible almost-complementarity solution of \(\left( \text{LCP}\right) _{z_{0}}\) is nondegenerate; then the algorithm stops after a finite number of iterations.

If the system defined by (1) and (2) is consistent, then the algorithm stops with a complementarity basic feasible solution of the \(\left( \text{LCP}\right) \); otherwise, the \(\left( \text{LCP}\right) \) admits no solution.

Example 5

Cycling may occur under degeneracy. Here we give an example of cycling constructed by M. Kostreva [4]. Let

\begin{equation*} M=\left( \begin{array}{ccc} 1 & 2 & 0 \\ 0 & 1 & 2 \\ 2 & 0 & 1\end{array}\right) \text{ and }q=\left( \begin{array}{c} -1 \\ -1 \\ -1\end{array}\right) . \end{equation*}

Lemke’s method cycles after \(7\) iterations.

\(M\) is a \(P\)-matrix, so the (LCP) has a unique solution, and the complementary pivot method always terminates in a finite number of pivot steps with that solution, provided it is carried out in such a way that cycling does not occur under degeneracy. Using the lexico-minimum ratio rule for choosing the pivot row in each step, cycling cannot occur, and the method gives the solution \((w,z)^{t}=(0,0,0,\frac{2}{3},\frac{2}{3},\frac{2}{3})^{t}\) after \(4\) iterations. □

Example 6

Consider the (LCP) problem, where

\begin{equation*} M=\left( \begin{array}{ccc} 21 & 0 & 0 \\ 28 & 14 & 0 \\ 24 & 24 & 12\end{array}\right) \text{ and }q=\left( \begin{array}{c} -1 \\ -1 \\ -1\end{array}\right) . \end{equation*}

The algorithm stops after \(2\) iterations with the solution \((w,z)^{t}=(0,0.33,0.14,0.05,0,0)^{t}\). □

3 Variants of Lemke’s method

3.1 Variant 1

This variant is described in [7].

Principle

In Lemke’s algorithm, we have the following table

\begin{equation*} \begin{tabular}{|l|l|l|l|} \hline $w$ & $z$ & $z_{0}$ & second member \\ \hline $I$ & $-M$ & $-e$ & \ \ \ \ \ \ \ \ \ \ $q\ \, $ \\ \end{tabular}\end{equation*}

The idea of this variant is to replace the column corresponding to \(z_{0}\) by a vector \(d\) such that \(d{\gt}0,\) giving the table

\begin{equation*} \begin{tabular}{|l|l|l|l|} \hline $w$ & $z$ & $z_{0}$ & second member \\ \hline $I$ & $-M$ & $-d$ & \ \ \ \ \ \ \ \ \ \ $q\ \, $ \\ \end{tabular}, \end{equation*}

and the system

\begin{align} \text{Find }(w,z)\in \mathbb {R} ^{n}\times \mathbb {R} ^{n}& \text{ such that:} \nonumber \\ w-Mz-dz_{0}& =q, \\ (w,z)& \geq 0,\text{ }z_{0}\geq 0,\\ w_{i}\text{ }z_{i}& =0\text{, }\text{for }i=1:n.\end{align}

1) If \(q\geq 0,\) \(\left( w,z\right) =\left( q,0\right) \) is a solution of the (LCP).

2) If \(q\ngeq 0,\) identify the row \(s\) satisfying \(\frac{q_{s}}{d_{s}}=\min \left\{ \frac{q_{i}}{d_{i}}:1\leq i\leq n\right\} .\) We perform a pivot step with the column vector of \(z_{0}\) as the pivot column and the \(s\)-th row as the pivot row. \(w_{s}\) drops from the basis vector and \(z_{0}\) is the entering variable; put \(y_{s}=z_{s}.\) Then we follow the same steps as in Lemke’s algorithm.
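As a small self-contained check (Python/NumPy, on the data of Example 7 with one of its illustrative vectors \(d\)), the initial pivot of variant 1 indeed makes the right-hand side nonnegative:

```python
import numpy as np

M = np.array([[1., 2., 0.], [0., 1., 2.], [2., 0., 1.]])
q = np.array([-1., -1., -1.])
d = np.array([7., 3., 5.])          # any strictly positive vector (Example 7)

s = int(np.argmin(q / d))           # row attaining min { q_i / d_i }
q_bar = q - (q[s] / d[s]) * d       # right-hand side after the pivot step
assert np.all(q_bar >= 0)           # the new basis is feasible
```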

We will now illustrate this variant using the numerical examples of the previous section.

Example 7

Consider the (LCP) problem with

\begin{equation*} M=\left( \begin{array}{ccc} 1 & 2 & 0 \\ 0 & 1 & 2 \\ 2 & 0 & 1\end{array}\right) \text{ and }q=\left( \begin{array}{c} -1 \\ -1 \\ -1\end{array}\right) . \end{equation*}

We choose \(d=\left( 7, 3, 5 \right)^{t} \); applying variant 1, we obtain the solution \((w,z)^{t}=(0,0,0,\frac{2}{3},\frac{2}{3},\frac{2}{3})^{t}\) after \(4\) iterations. With \(d=\left( 15, 7, 9 \right)^{t} \), the algorithm stops after \(6\) iterations with the same solution. □

Example 8

Consider the \((LCP)\) problem, where

\begin{equation*} M=\left( \begin{array}{ccc} 21 & 0 & 0 \\ 28 & 14 & 0 \\ 24 & 24 & 12\end{array}\right) \text{ and }q=\left( \begin{array}{c} -1 \\ -1 \\ -1\end{array}\right) . \end{equation*}

Let \(d=\left( 12, 14, 21\right)^{t} \); the algorithm stops after \(2\) iterations with the solution \((w,z)^{t}=(0,0.33,0.14,0.05,0,0)^{t}\).

For \(d=\left( 2, 3, 1 \right)^{t} \), the algorithm stops after \(8\) iterations with the same solution. □

Remark 9

1) The choice of the vector \(d\) affects the number of iterations and the execution time, so the questions are: first, how to choose the components of the vector \(d\), and second, what is the ideal choice?

2) Note that among the advantages of this variant is that it does not suffer from the cycling problem (Example 7). □

3.2 Variant 2

This variant is also described in [7].

Principle

Suppose the matrix \(M\) satisfies the condition that there exists a column of \(M\) in which all the entries are strictly positive (\(M_{.\text{ }t}{\gt}0\)). Then a variant of Lemke’s algorithm which uses no artificial variable at all can be applied to the (LCP). So \(M_{.t}\) replaces the vector \(d\), and the variable \(z_{t}\) plays the same role as the artificial variable \(z_{0}\).

We assume that \(q\ngeq 0.\) Let \(t\) be such that \(M_{.\text{ }t}{\gt}0\), and let \(s\) be the row given by the minimum ratio test below. When a pivot step is carried out with the column of \(z_{t}\) as the pivot column and row \(s\) as the pivot row, the right-hand-side vector becomes nonnegative after this pivot step. Hence \((w_{1},...,w_{s-1},z_{t},w_{s+1},...,w_{n})\) is a feasible basic vector; if \(s=t,\) it is a complementary feasible basic vector and the corresponding solution solves the (LCP). If \(s\neq t\), the feasible basic vector \((w_{1},...,w_{s-1},z_{t},w_{s+1},...,w_{n})\) satisfies the following properties:

Definition 10

Any feasible basic vector satisfying the following conditions is known as an almost complementary feasible basic vector:

  • It contains exactly one basic variable from the complementary pair \((w_{i},z_{i})\) for \(n-2\) values of \(i\) (namely \(i\neq s,t\) here).

  • It contains both the variables from a fixed complementary pair (namely \((w_{t},z_{t})\) here), as basic variables.

  • There exists exactly one complementary pair both the variables in which are not contained in this basic vector (namely \((w_{s},z_{s})\) here).

Algorithm 2
  • Data. \(q\in \mathbb {R} ^{n}\), \(M\in \mathbb {R} ^{n\times n}.\)

  • Initialization.

    • If \(q\geq 0\) stop. \(\left( w,z\right) =\left( q,0\right) \) is a solution of the \(\left( \text{LCP}\right) \) .

    • Else 1) Choose \(t\) such that \(M_{.\; t}{\gt}0.\)

      2) Determine \(s\in \left\{ 1,...,n\right\} \) satisfying \(\frac{q_{s}}{m_{st}}=\min \left\{ \frac{q_{i}}{m_{it}}:i=1,...,n\right\} \). We perform a pivot step with the column vector of \(z_{t}\) as the pivot column and the \(s\)-th row as the pivot row. \(w_{s}\) drops from the basis vector and \(z_{t}\) is the entering variable; put \(y_{s}=z_{s}.\)

  • Iteration.

  • Step 1: Let \(h_{s}\) be the column vector corresponding to the variable \(y_{s\text{ }}\)then

    • If \(h_{s}\leq 0\) go to step \(4.\)

    • Else determine the set \(I=\left\{ i:h_{s}\left[ i\right] {\gt}0\right\} ,\) then choose \(r\) such that \(\frac{\overline{q}_{r}}{h_{s}\left[ r\right] }=\underset {i\in I}{\min }\left\{ \frac{\overline{q}_{i}}{h_{s}\left[ i\right] }\right\} \) (the vector \(\overline{q}\) denotes the right-hand-side column after pivoting).

    • If the basic variable in the row \(r\) is \(z_{t}\) go to step \(3.\)

    • Else go to step \(2.\)

  • Step 2: The basic variable in the row \(r\) is either \(w_{k}\) or \(z_{k}\) for \(k\neq t,\) so we perform a pivot step with the row \(r\) as the pivot row and the column of \(y_{s}\) as the pivot column. If \(w_{k}\left( \text{respectively }z_{k}\right) \) drops from the basis, put \(y_{s}=z_{k}\left( \text{resp. }y_{s}=w_{k}\right) \) and return to step \(1.\)

  • Step 3: We perform a pivot step with the row of \(w_{t}\) or \(z_{t}\) as the pivot row and the column of \(y_{s}\) as the pivot column, so \(w_{t}\) or \(z_{t}\) leaves the basis and we obtain a solution of the (LCP).

  • Step 4: Since \(h_{s}\leq 0,\) we stop with a ray of termination:

    \(\begin{array}{c} R=\left\{ (w_{1},w_{2},...,w_{n},z_{1},z_{2},...z_{n})+\lambda h:\lambda \geq 0\right\} . \end{array}\)

    the components of \(h\) are defined as

    \( \begin{cases} 1,& \text{for the row corresponding to \ }y_{s} \\ -h_{s},& \text{for rows corresponding to basic variables } \\ 0,& \text{elsewhere}\end{cases}\)
  • End.
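A minimal sketch (Python/NumPy) of the initialization of Algorithm 2: find a strictly positive column, pivot on it, and check that the right-hand side becomes nonnegative. On the data of Example 12 the pivot row happens to equal \(t\), so the solution is reached immediately:

```python
import numpy as np

M = np.array([[21., 0., 0.], [28., 14., 0.], [24., 24., 12.]])
q = np.array([-1., -1., -1.])

pos = [t for t in range(M.shape[1]) if np.all(M[:, t] > 0)]
t = pos[0]                               # here only the first column qualifies
s = int(np.argmin(q / M[:, t]))          # row attaining min { q_i / m_it }
q_bar = q - (q[s] / M[s, t]) * M[:, t]   # right-hand side after the pivot
assert np.all(q_bar >= 0)
# s == t here, so (z_1, w_2, w_3) is already a complementary feasible basis
```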

We will now illustrate this variant using the same examples for variant 1.

Example 11

Let

\begin{equation*} M=\left( \begin{array}{ccc} 1 & 2 & 0 \\ 0 & 1 & 2 \\ 2 & 0 & 1\end{array}\right) \text{ and }q=\left( \begin{array}{c} -1 \\ -1 \\ -1\end{array}\right) . \end{equation*}

There is no strictly positive column in \(M,\) so we cannot apply this variant. □

Example 12

Let

\begin{equation*} M=\left( \begin{array}{ccc} 21 & 0 & 0 \\ 28 & 14 & 0 \\ 24 & 24 & 12\end{array}\right) \text{ and }q=\left( \begin{array}{c} -1 \\ -1 \\ -1\end{array}\right) . \end{equation*}

\(M_{.1}{\gt}0\), so we can apply variant 2. The algorithm stops after \(1\) iteration with the solution \((w,z)^{t}=(0,0.33,0.14,0.05,0,0)^{t}\). □

4 Comparative numerical results

In this section, we present the comparative numerical results between the two variants and Lemke’s algorithm.

Example 13

Consider a (LCP) problem with

\begin{equation*} M=\left( \begin{array}{ccc} 2 & 1 & -1 \\ 2 & 1 & -1 \\ 1 & 1 & 0\end{array}\right) \text{ and }q=\left( \begin{array}{c} 3 \\ 1 \\ -1\end{array}\right) . \end{equation*}

Example 14

Let

\begin{equation*} M=\left( \begin{array}{cc} -\frac{1}{2} & 1 \\ 1 & \, -\frac{1}{2}\end{array}\right) \text{ and }q=\left( \begin{array}{c} 1 \\ -1\end{array}\right) . \end{equation*}

Example 15

Let

\begin{equation*} M=\left( \begin{array}{ccc} 21 & 0 & 0 \\ 28 & 14 & 0 \\ 24 & 24 & 12\end{array}\right) \text{ and }q=\left( \begin{array}{c} -1 \\ -1 \\ -1\end{array}\right) . \end{equation*}

Example 16

Consider a (LCP) problem with

\begin{equation*} M=\left( \begin{array}{ccc} 1 & 2 & 0 \\ 0 & 1 & 2 \\ 2 & 0 & 1\end{array}\right) \text{ and }q=\left( \begin{array}{c} -1 \\ -1 \\ -1\end{array}\right) . \end{equation*}

Example 17

Let

\begin{equation*} M=\left( \begin{array}{cccc} 1 & 1 & 3 & 4 \\ 5 & 3 & 1 & 1 \\ 2 & 1 & 2 & 2 \\ 1 & 4 & 1 & 1\end{array}\right) \text{ and }q=\left( \begin{array}{c} -1 \\ 2 \\ 1 \\ 3\end{array}\right) . \end{equation*}

Example 18

We consider a convex quadratic problem (CQP)

\begin{equation*} (CQP)\left\{ \begin{array}{c} \min x_{1}^{2}+2x_{2}^{2}-2x_{1}x_{2}-x_{1}-6x_{2} \\ \text{\ }x_{1}+2x_{2}\leq 4 \\ \text{\ }-x_{1}-2x_{2}\leq 4 \\ x_{1},x_{2}\geq 0.\end{array}\right. \end{equation*}

Example 19

We consider the following linear problem (LP)

\begin{equation*} (LP)\left\{ \begin{array}{c} \min -x_{1}-x_{2}-x_{3} \\ -x_{1}+2x_{2}\leq 4 \\ x_{1}-3x_{2}-x_{3}\leq -3 \\ x_{1}+x_{2}\leq 9 \\ -x_{1}-\frac{1}{2}x_{2}+x_{3}\leq -2 \\ -2x_{2}\leq -5 \\ x_{1},x_{2},x_{3}\geq 0.\end{array}\right. \end{equation*}

Example 20

We have a (LCP) problem with

\begin{equation*} M[i,j] = \begin{cases} 5,& \text{if \ \ }i{\lt}j \\ 1,& \text{ if \ }i=j \\ 0,& \text{ if\ }i{\gt}j\end{cases}\text{ \ and \ }q[i]=-1, \end{equation*}

for \(i =1,...,15\) and \(j=1,...,15.\) â–¡

Example 21

We have

\begin{equation*} M=\left( \begin{array}{cccccccc} 1 & 2 & 2 & 2 & . & . & . & 2 \\ 2 & 5 & 6 & 6 & . & . & . & 6 \\ 2 & 6 & 9 & 10 & . & . & . & 10 \\ 2 & 6 & 10 & 13 & 14 & . & . & . \\ . & . & . & 14 & 17 & 18 & . & . \\ . & . & . & . & 18 & . & . & . \\ . & . & . & . & . & . & . & . \\ 2 & 6 & 10 & . & . & . & . & 4n-3\end{array}\right) \text{ \ and \ }q=\left( \begin{array}{c} -1 \\ -1 \\ . \\ . \\ . \\ . \\ . \\ -1\end{array}\right) \end{equation*}

In the following tables, \(n\) represents the problem dimension, \(k\) represents the number of iterations and \(t\) represents the calculation time in seconds for the three algorithms.

   

\begin{equation*} \begin{tabular}{|l|c|c|c|c|c|c|c|} \hline
 &  & \multicolumn{2}{c|}{Lemke's algor.} & \multicolumn{2}{c|}{v. 1} & \multicolumn{2}{c|}{v. 2} \\ \hline
Examples & $n$ & $k$ & $t$ & $k$ & $t$ & $k$ & $t$ \\ \hline
Ex. 13 & 3 & 3 & 0.49 & 3 & 0.5 & 2 & 0.05 \\ \hline
Ex. 14 & 2 & RT & 0.5 & RT & 0.5 & $\times$ & $\times$ \\ \hline
Ex. 15 & 3 & 2 & 0.59 & 2 & 0.49 & 1 & 0.00 \\ \hline
Ex. 16 & 3 & 4 & 0.55 & 6 & 0.5 & $\times$ & $\times$ \\ \hline
Ex. 17 & 4 & 2 & 0.55 & 2 & 0.49 & 1 & 0.00 \\ \hline
Ex. 18 & 4 & 4 & 0.55 & 4 & 0.49 & $\times$ & $\times$ \\ \hline
Ex. 19 & 8 & 11 & 0.66 & 9 & 0.49 & $\times$ & $\times$ \\ \hline
Ex. 20 & 15 & 30 & 2.11 & 56 & 2.91 & 2 & 1.51 \\ \hline
Ex. 21 & 20 & 2 & 0.49 & 2 & 0.49 & 1 & 0.00 \\ \hline
\end{tabular} \end{equation*}

Table 1 Comparative numerical results of Lemke’s method, variant 1 and variant 2.

According to the numerical results in Table 1, the following remarks can be made:

  • The difficulty of variant 1 lies in the determination of an ideal vector \(d\), since the choice of this vector affects the number of iterations and the execution time (see section 3).

  • We cannot apply variant 2 to:

- Any linear programming problem (Example 19), since there is no strictly positive column. The matrix \(M\) in this case is of the form \(M=\left( \begin{array}{cc} 0 & A^{t} \\ -A & 0\end{array}\right) \) (see section 5).

- Some convex quadratic problems (Example 18). The matrix \(M\) in this case is of the form \(M=\left( \begin{array}{cc} Q & A^{t} \\ -A & 0\end{array}\right) \) (see section 5).

- Any (LCP) whose matrix \(M\) has no strictly positive column (Example 14 and Example 16).

This limitation does not exist for Lemke’s method and variant 1.

  • The number of iterations and the calculation time recorded for variant 2 are better than those given by variant 1 and Lemke’s method.

  • We must therefore think about a new approach that treats these classes of (LCP) problems.

5 Combined approach

Our goal is to propose a new approach that combines the ideas of these variants with Lemke’s method, taking advantage of the benefits of each in order to improve the convergence of these algorithms. We now give the principle of our approach, which combines the ideas of the three methods.

Principle

We assume that \(q\nsucceq 0.\)

  • If \(M\) satisfies the condition: \(\exists t\) such that \(M_{.\text{ }t}{\gt}0,\) then we apply variant 2.

  • Else, we define a vector \(d{\gt}0\) from any column \(s\) of \(M\) as follows: we keep the positive components of \(M_{.s}\), we put \(1\) for the null components, and \(-M[i,s]\) for the negative ones.

Hence, our algorithm is defined as:

Algorithm 3
  • Data. \(q\in \mathbb {R} ^{n}\), \(M\in \mathbb {R} ^{n\times n}.\)

  • Initialization.

    • If \(q\geq 0\) stop. \(\left( w,z\right) =\left( q,0\right) \) is a solution of the \(\left( LCP\right) \) .

    • Else - If \(\exists t\in \left\{ 1,...,n\right\} \) such that \(M_{.t}{\gt}0\), apply variant 2.

      - Else choose a column vector \(M_{.s}\ngtr 0\) and compute \(d\in \mathbb {R} _{+}^{n}\) as follows:

      \begin{equation*} d[i]= \begin{cases} M[i,s], & {\rm if}\ M[i,s]{\gt}0, \\ 1, & {\rm if}\ M[i,s]=0, \\ -M[i,s], & {\rm if}\ M[i,s]{\lt}0,\end{cases}\end{equation*}

      then apply variant 1.

  • End.
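The construction of \(d\) in the last branch can be sketched as follows (Python/NumPy; the function name is ours, and column indices are 0-based in the code, so \(s=3\) in the text corresponds to index 2):

```python
import numpy as np

def build_d(M, s):
    """d[i] = M[i,s] if positive, 1 if zero, -M[i,s] if negative."""
    col = np.asarray(M, dtype=float)[:, s]
    return np.where(col > 0, col, np.where(col == 0, 1.0, -col))

# Kostreva's matrix (Example 22): column s = 3 (index 2) gives d = (1, 2, 1)^t
M = [[1, 2, 0], [0, 1, 2], [2, 0, 1]]
d = build_d(M, 2)
assert np.all(d > 0)    # d is strictly positive, as variant 1 requires
```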

5.1 Numerical tests

Case of the linear complementarity problem

We consider the (LCP) problem

\begin{equation*} (LCP)\left\{ \begin{array}{c} \text{Find }(w,z)\in \mathbb {R} ^{n}\times \mathbb {R} ^{n}\text{ such that:} \\ w=Mz+q \\ (w,z)\geq 0 \\ w^{t}z=0.\end{array}\right. \end{equation*}

where \(M\) is a given square matrix of order \(n\) and \(q\) is a column vector in \(\mathbb {R} ^{n}.\)

Example 22

We take the example of Kostreva with \(M=\left( \begin{array}{ccc} 1 & 2 & 0 \\ 0 & 1 & 2 \\ 2 & 0 & 1\end{array}\right) ;\) then \(d=\left( 1,2,1\right)^{t} \) for \(s=3.\) □

Example 23

Let

\begin{equation*} M=\left( \begin{array}{cccc} 2 & 2 & 1 & 2 \\ 3 & 3 & 2 & 3 \\ -2 & 1 & 5 & -2 \\ 1 & -2 & -1 & 2\end{array}\right) \text{ and }q=\left( \begin{array}{c} -4 \\ -6 \\ 4 \\ 4\end{array}\right) . \end{equation*}

Example 24

Let

\begin{equation*} M=\left( \begin{array}{cccccc} 2 & 2 & -1 & 3 & -3 & 2 \\ 3 & -3 & 2 & -2 & 5 & 2 \\ -2 & -1 & 5 & -2 & -2 & -1 \\ 1 & -2 & -1 & 2 & 3 & -1 \\ 2 & -1 & 2 & -3 & 1 & 0 \\ 0 & 1 & 2 & 5 & -1 & 0\end{array}\right) \text{ and }q=\left( \begin{array}{c} -1 \\ -1 \\ -1 \\ -1 \\ -1 \\ -1\end{array}\right) . \end{equation*}

\begin{equation*} \begin{tabular}{|l|c|c|c|} \hline
Examples & $n$ & $k$ & $t$ \\ \hline
Example 22 & 3 & 4 & 0.49 \\ \hline
Example 23 & 4 & 2 & 0.5 \\ \hline
Example 24 & 6 & 7 & 0.6 \\ \hline
\end{tabular} \end{equation*}

Table 2 Numerical results of the combined approach in the case of a linear complementary problem.

In these examples variant 2 is not applicable, so we apply the new approach with the well-defined vector \(d\).

Case of linear programming

We consider the following (LP) problem

\begin{equation*} (LP)\left\{ \begin{array}{c} \min c^{t}x \\ Ax\leq b \\ x\geq 0,\end{array}\right. \end{equation*}

where \(A\) is a given \((m,n)\)-matrix with \(\operatorname {rank}(A)=m,\) \(b\in \mathbb {R} ^{m}\) and \(c\in \mathbb {R} ^{n}\).

The KKT conditions of the (LP) problem can be written as an (LCP) where:

\begin{equation*} M=\left( \begin{array}{cc} 0 & A^{t} \\ -A & 0\end{array}\right) \text{, }q=\left( \begin{array}{c} c \\ b\end{array}\right) \text{ and }z^{\ast }=\left( \begin{array}{c} x^{\ast } \\ y^{\ast }\end{array}\right) \text{.} \end{equation*}

\((x^{\ast },y^{\ast })\) is the primal-dual solution of (LP) and its dual (D).
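This reformulation can be sketched as follows (Python/NumPy; the function name is ours):

```python
import numpy as np

def lp_to_lcp(A, b, c):
    """Assemble M = [[0, A^t], [-A, 0]] and q = (c, b) from the LP data."""
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    M = np.block([[np.zeros((n, n)), A.T],
                  [-A, np.zeros((m, m))]])
    q = np.concatenate([np.asarray(c, float), np.asarray(b, float)])
    return M, q
```

For Example 25 (\(m=5\) constraints, \(n=3\) variables) this yields an (LCP) of size \(8\), the dimension reported for that example in Table 3.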

Example 25

Consider the (LP) problem

\begin{equation*} (LP)\left\{ \begin{array}{c} \min -x_{1}-x_{2}-x_{3} \\ -x_{1}+2x_{2}\leq 4 \\ x_{1}-3x_{2}-x_{3}\leq -3 \\ x_{1}+x_{2}\leq 9 \\ -x_{1}-\frac{1}{2}x_{2}+x_{3}\leq -2 \\ -2x_{2}\leq -5 \\ x_{1},x_{2},x_{3}\geq 0.\end{array}\right. \end{equation*}

Example 26

We have the following (LP) problem

\begin{equation*} (LP)\left\{ \begin{array}{c} \min -4x_{1}-5x_{2}-x_{3}-3x_{4}+5x_{5}-8x_{6} \\ x_{1}-4x_{3}+3x_{4}+x_{5}+3x_{6}\leq 1 \\ 5x_{1}+3x_{2}+x_{3}-x_{5}+3x_{6}\leq 4 \\ 4x_{1}+5x_{2}-3x_{3}+3x_{4}-4x_{5}+x_{6}\leq 4 \\ -x_{2}+2x_{4}+x_{5}-5x_{6}\leq 5 \\ -2x_{1}+x_{2}+x_{3}+x_{4}+2x_{5}+2x_{6}\leq 7 \\ 2x_{1}-3x_{2}+2x_{3}-x_{4}+4x_{5}+5x_{6}\leq 5 \\ x_{i}\geq 0,i=1,2,...,6,\end{array}\right. \end{equation*}

Example 27

We consider a (LP) problem where

\begin{eqnarray*} A & =& \left( \begin{array}{cccccccccc} 1 & 2 & 3 & 4 & 5 & 5 & 4 & 3 & 2 & 1 \\ 6 & 7 & 8 & 9 & 10 & 5 & 2 & 8 & 3 & 1 \\ 11 & 12 & 13 & 14 & 15 & 6 & 7 & 80 & 90 & 10 \\ 1 & 10 & 20 & 30 & 40 & 50 & 60 & 80 & 90 & 10 \\ 3 & 9 & 27 & 60 & 45 & 60 & 75 & 8 & 9 & 46\end{array}\right) \text{, }b=\left( \begin{array}{c} 10000 \\ 10000 \\ 10000 \\ 10000 \\ 10000\end{array}\right) \\ \text{and \ }c & =& (-1,-1,...,-1)^{t}. \end{eqnarray*}

\begin{equation*} \begin{tabular}{|l|c|c|c|} \hline
Examples & $n$ & $k$ & $t$ \\ \hline
Example 25 & 8 & 9 & 0.55 \\ \hline
Example 26 & 12 & 13 & 0.82 \\ \hline
Example 27 & 15 & 5 & 0.66 \\ \hline
\end{tabular} \end{equation*}

Table 3 Numerical results of the combined approach in the case
of linear programming problems.

For all the linear programming problems presented here we cannot apply variant 2, since there is no strictly positive column, but the new approach gives the results above.

5.2 Case of convex quadratic programming

We consider the (CQP) problem

\begin{equation*} \text{(CQP)}\left\{ \begin{array}{c} \min \frac{1}{2}x^{t}Qx+c^{t}x \\ Ax\leq b \\ x\geq 0.\end{array}\right. \end{equation*}

where \(Q\) is a symmetric positive semidefinite matrix of order \(n,\) \(A\) is a given \((m,n)\)-matrix with \(\operatorname {rank}(A)=m,\) \(b\in \mathbb {R} ^{m}\) and \(c\in \mathbb {R} ^{n}\).

The KKT conditions of the (CQP) problem can be written as an (LCP) where:

\begin{equation*} z^{\ast }=\left( \begin{array}{c} x^{\ast } \\ y^{\ast }\end{array}\right) \text{, }q=\left( \begin{array}{c} c \\ b\end{array}\right) \text{ and }M=\left( \begin{array}{cc} Q & A^{t} \\ -A & 0\end{array}\right) \text{.} \end{equation*}

\((x^{\ast },y^{\ast })\) is the primal-dual solution of (CQP) and its dual (D).
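As in the linear case, the assembly can be sketched as follows (Python/NumPy; the function name is ours):

```python
import numpy as np

def cqp_to_lcp(Q, A, b, c):
    """Assemble M = [[Q, A^t], [-A, 0]] and q = (c, b) from the CQP data."""
    Q, A = np.asarray(Q, float), np.asarray(A, float)
    m, n = A.shape
    M = np.block([[Q, A.T],
                  [-A, np.zeros((m, m))]])
    q = np.concatenate([np.asarray(c, float), np.asarray(b, float)])
    return M, q
```

For Example 29 (\(Q\) of order 4, \(A\) with 3 rows) the resulting (LCP) has size \(7\), the dimension reported for that example in Table 4.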

Example 28

We consider the following (CQP) problem

\begin{equation*} \begin{array}{c} \min 2x_{1}^{2}+4x_{2}^{2}+6x_{3}^{2}-2x_{1}x_{2}-6x_{1}x_{3}+8x_{2}x_{3}+5x_{1}+6x_{2}-12x_{3} \\ \text{\ }-x_{1}-2x_{2}-x_{3}\leq -6 \\ \text{\ }x_{1}+x_{2}+x_{3}\leq 16 \\ -x_{1}+2x_{2}\leq 4 \\ x_{1},x_{2},x_{3}\geq 0.\end{array}\end{equation*}

Example 29

We have a (CQP) problem where

\begin{align*} Q& =\left( \begin{array}{cccc} 1 & 0 & -5 & 0 \\ 0 & 5 & 0 & 0 \\ -5 & 0 & 1 & 5 \\ 0 & 0 & 5 & 5\end{array}\right) \text{,\ }A=\left( \begin{array}{cccc} 1 & 2 & 1 & 1 \\ 3 & 1 & 2 & -1 \\ 0 & -1 & -4 & 0\end{array}\right) \text{,\ } \\ b& =\left( \begin{array}{c} 5 \\ -4 \\ \frac{3}{2}\end{array}\right) \text{ and\ }c=\left( \begin{array}{c} 1 \\ 3 \\ -1 \\ 1\end{array}\right) . \end{align*}

Example 30

Consider the (CQP) problem where

\begin{equation*} Q=\left( \begin{array}{cccccccccc} 30 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 1 & 21 & 0 & 1 & -1 & 1 & 0 & 1 & 0.5 & 1 \\ 1 & 0 & 15 & -0.5 & -2 & 1 & 0 & 1 & 1 & 1 \\ 1 & 1 & -0.5 & 30 & 3 & -1 & 1 & -1 & 0.5 & 1 \\ 1 & -1 & -2 & 3 & 27 & 1 & 0.5 & 1 & 1 & 1 \\ 1 & 1 & 1 & -1 & 1 & 16 & -0.5 & 0.5 & 0 & 1 \\ 1 & 0 & 0 & 1 & 0.5 & -0.5 & 8 & 1 & 1 & 1 \\ 1 & 1 & 1 & -1 & 1 & 0.5 & 1 & 24 & 1 & 1 \\ 1 & 0.5 & 1 & 0.5 & 1 & 0 & 1 & 1 & 39 & 1 \\ 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 11\end{array}\right) , \end{equation*}
\begin{equation*} A=\left( \begin{array}{cccccccccc} 1 & -1 & 1.9 & 1.25 & 1.2 & 0.4 & -0.7 & 1.06 & 1.5 & 1.05 \\ 1.3 & 1.2 & 0.15 & 2.15 & 1.25 & 1.5 & 0.4 & 1.52 & 1.3 & 1 \\ 1.5 & -1.1 & 3.5 & 1.25 & 1.8 & 2 & 1.95 & 1.2 & 1 & -1\end{array}\right) \text{,} \end{equation*}

\(b=\left( 11.651, 16.672, 21.295\right)^{t} \), and \(c=(-0.5,-1,0,0,-0.5,0,0,-1,-0.5,-1)^{t}\). □

\begin{equation*} \begin{tabular}{|l|c|c|c|} \hline
Examples & $n$ & $k$ & $t$ \\ \hline
Example 28 & 6 & 4 & 0.55 \\ \hline
Example 29 & 7 & 3 & 0.49 \\ \hline
Example 30 & 13 & 4 & 0.97 \\ \hline
\end{tabular} \end{equation*}

Table 4 Numerical results of the combined approach in the case of quadratic programming problems.

In Table 4, we have applied the new approach to quadratic programming problems with an arbitrary matrix \(M.\)

6 Conclusion

In this paper, we have proposed a new efficient approach to solve linear complementarity problems. We have combined the ideas of Lemke’s method and its variants to introduce a new, well-defined vector, so the resulting algorithm is able to solve any type of problem (linear problems, convex quadratic problems, problems whose matrix has no strictly positive column, ...), and the numerical results confirm the efficiency of our algorithm.

Acknowledgements

The authors wish to thank the anonymous referees for their comments and suggestions which have considerably improved the overall presentation of the paper.

Bibliography

1

M.S. Bazaraa, H.D. Sherali and C.M. Shetty, Nonlinear Programming: Theory and Algorithms, Second edition, Wiley, 1993.

2

I. Ben Gharbia, Résolution de problèmes de complémentarité: Applications à un écoulement diphasique dans un milieu poreux, Thèse de Doctorat, Université Paris-Dauphine , 2012.

3

R.W. Cottle, J.S. Pang and R.E. Stone, The Linear Complementarity Problem, Academic Press, Inc., New York, 1992.

4

D. Dubey, S.K. Neogy, On hidden Z-matrices and the linear complementarity problem, Linear Algebra and its Applications, 496 (2016), pp. 81–100.

5

M. García-Esnaola, J.M. Peña, Error bounds for the linear complementarity problem with a \(\Sigma \)-SDD matrix, Linear Algebra and its Applications, 438 (2013) no. 3, pp. 1339–1346.

6

C.E. Lemke, On complementary pivot theory, in: Mathematics of the Decision Sciences, American Mathematical Society, pp. 95–114, 1968.

7

K.G. Murty, Linear Complementarity, Linear and Nonlinear Programming, Heldermann Verlag, 1988.