On General Fixed Point Method
Based On Matrix Splitting
For Solving Linear Complementarity Problem

Bharat Kumar\(^1\), Deepmala\(^2\), A.K. Das\(^3\)

Received: November 11, 2022; accepted: December 18, 2022; published online: December 31, 2022.

In this article, we introduce a modified fixed point method for solving large and sparse linear complementarity problems (LCPs). We formulate a fixed point equation for the LCP and prove its equivalence. We also provide a convergence condition when the system matrix is a \(P\)-matrix, and two sufficient convergence conditions when the system matrix is an \(H_+\)-matrix. To show the efficiency of the proposed method, we present two numerical examples for different parameters.

MSC. 65F10, 65F50.

Keywords. Linear complementarity problem, \(H_{+}\)-matrix, \(P\)-matrix, Matrix splitting, Convergence.

\(^1\)The first author is thankful to the University Grants Commission (UGC), Government of India under the SRF fellowship Program No. 1068/(CSIR-UGC NET DEC.2017).

\(^1\)Mathematics Discipline, PDPM-Indian Institute of Information Technology, Design and Manufacturing, Jabalpur, M.P., India, e-mail: bharatnishad.kanpu@gmail.com

\(^2\)Mathematics Discipline, PDPM-Indian Institute of Information Technology, Design and Manufacturing, Jabalpur, M.P., India, e-mail: dmrai23@gmail.com

\(^3\)Indian Statistical Institute, 203 B.T. Road, Kolkata-700108, India, e-mail: akdas@isical.ac.in

1 Introduction

Given \(A_1\in {\mathbb R}^{n\times n}\) and a vector \( q \in \, {\mathbb R}^{n},\) the linear complementarity problem denoted as LCP\((q,A_{1})\) is to find the solution \(z\; \in {\mathbb R}^{n}\; \) to the following system

\begin{eqnarray} \label{eq1} z\geq 0, ~ ~ ~ ~ A_{1}z +q \geq 0,~ ~ ~ ~ z^T(A_{1}z +q)=0. \end{eqnarray}
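To make the three conditions concrete, the following NumPy sketch (the helper name `lcp_residual` is ours, not from the paper) evaluates the min-based residual \(\| \min (z, A_{1}z+q)\| _{2}\), which vanishes exactly at a solution of the LCP and is the quantity used as a stopping criterion later in the paper:

```python
import numpy as np

def lcp_residual(z, A1, q):
    """Residual ||min(z, A1 z + q)||_2; zero exactly at a solution of LCP(q, A1)."""
    w = A1 @ z + q
    return np.linalg.norm(np.minimum(z, w))

# Tiny 2x2 example: A1 = I, q = (-1, 1)^T has the solution z = (1, 0)^T.
A1 = np.eye(2)
q = np.array([-1.0, 1.0])
print(lcp_residual(np.array([1.0, 0.0]), A1, q))  # 0.0
```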

The LCP has many applications, including operations research, control theory, mathematical economics, optimization theory, stochastic optimal control, the American option pricing problem, elasticity theory, the free boundary problem, and the computation of Nash equilibrium points of bimatrix games, and it has been studied extensively in the mathematical programming literature. For details see [ 14 ] , [ 10 ] , [ 21 ] ; for recent work on this problem see [ 8 ] , [ 9 ] .

The methods available for solving the LCP may be classified into two groups, namely pivoting methods [ 2 ] , [ 5 ] , [ 23 ] and iterative methods [ 22 ] , [ 18 ] . The basic idea of a pivoting method is to obtain a basic feasible complementary vector through a series of pivot steps, whereas an iterative method generates a sequence of iterates that converges to a solution [ 4 ] , [ 7 ] . Lemke and Howson [ 20 ] introduced the complementary pivot method. Following this method, Lemke introduced a technique known as Lemke’s algorithm, which is well known for finding the solution of the LCP.

In order to develop effective iteration methods, we commonly use matrix splittings to find a numerical solution of the large and sparse LCP\((q, A_{1})\), such as the projected methods [ 16 ] , [ 19 ] , [ 25 ] , the modulus algorithm [ 11 ] and the modulus based matrix splitting iterative methods [ 26 ] , [ 27 ] .

A general fixed point method (GFP) was proposed by Fang [ 17 ] for the case \(\Omega = \omega D_{1}^{-1}\) with \(\omega > 0\), where \(D_{1}\) is the diagonal part of \(A_{1}\). The GFP approach takes fewer iterations than the modulus-based successive over-relaxation method (MSOR) [ 11 ] . In this article, we present a modified form of GFP [ 17 ] that incorporates projected-type iteration techniques by introducing two positive diagonal parameter matrices \(\Omega _1\), \(\Omega _2\) and a strictly lower triangular matrix \(\phi \). We also show that the fixed point equation and the LCP are equivalent, and we discuss convergence conditions along with several convergence domains for our method.

The paper is organised as follows: we review some notation, definitions and lemmas in section 2 in order to establish our key findings. The iterative fixed point approach for solving LCP\((q,A_{1})\), with a convergence analysis, is proposed in section 3. We present two numerical examples in section 4 to demonstrate the efficiency of the proposed method. Section 5 ends the paper with some conclusions.

2 Preliminaries

Some notation, preliminary definitions, and required lemmas are reviewed.

For two square matrices \(A_{1}=(\bar{a}_{ij}) \in {{\mathbb R}}^{n\times n}\) and \(B_1=(\bar{b}_{ij}) \in {{\mathbb R}}^{n\times n}\), we write \(A_{1} \geq (>)\; B_{1}\) if \(\bar{a}_{ij}\geq (>)\; \bar{b}_{ij}\) for all \( i,j \in \{ 1,2,\ldots ,n\} \).

Definition 1 [ 17 ]

Let \( A_{1}=(\bar{a}_{ij})\in { {\mathbb R}}^{n\times n}\) be a square matrix. Then \(\lvert A_{1}\rvert =(\bar{b}_{ij})\) is defined by \( \bar{b}_{ij} = \lvert \bar{a}_{ij}\rvert \) \(\forall ~ i,j\), and \(A_{1} \geq 0\) means that \( \bar{a}_{ij} \geq 0\) \(\forall ~ i,j \).

Definition 2 [ 17 ]

Let \(A_{1}, B_{1} \in {{\mathbb R}}^{n \times n}\) be two square matrices. Then \(|A_{1}+B_{1}|\leq |A_{1}|+|B_{1}|\) and \(|A_{1}B_{1}|\leq |A_{1}|\cdot |B_{1}|\). Moreover, when \(a_{1}, b_{1} \in {{\mathbb R}}^{n}\) then \(|a_{1}+b_{1}|\leq |a_{1}|+|b_{1}|\) and \(||a_{1}|-|b_{1}||\leq |a_{1}-b_{1}|\).

Definition 3 [ 4 ]

Let \(A_{1}\in {{\mathbb R}}^{n\times n}\) be a square matrix. \(A_{1}\) is said to be a \(P\)-matrix if all its principal minors are positive, i.e., \(\det ({A_1}_{\alpha _1 \alpha _1}) > 0\) for all \(\alpha _1 \subseteq \{ 1,2,\ldots , n\} \).

Definition 4 [ 17 ]

Suppose \( A_{1}\in {{\mathbb R}}^{n\times n}\) is a square matrix. Its comparison matrix \(\langle A_1 \rangle = (\langle \bar{a}_{ij}\rangle )\) is defined by \(\langle \bar{a}_{ij}\rangle =|\bar{a}_{ij}| \) if \(i=j\) and \(\langle \bar{a}_{ij}\rangle =-|\bar{a}_{ij}| \) if \(i \neq j\).

Definition 5 [ 6 ]

Suppose \( A_{1} \in {{\mathbb R}}^{n\times n}\) is a square matrix. It is said to be a \(Z\)-matrix if all of its off-diagonal elements are less than or equal to zero; an \(M\)-matrix if it is a \(Z\)-matrix with \(A_1^{-1}\geq 0\); an \(H\)-matrix if \(\langle A_1 \rangle \) is an \(M\)-matrix; an \(H_+\)-matrix if \( A_{1}\) is an \(H\)-matrix with \(\bar{a}_{ii} > 0 ~ \forall ~ i \in \{ 1,2,\ldots ,n\} \).

Definition 6 [ 6 ]

The splitting \(A_{1} = M_{1}-N_{1} \) is called an \(M\)-splitting if \(M_{1}\) is a nonsingular \(M\)-matrix and \(N_{1} \geq 0\); an \(H\)-splitting if \(\langle M_{1} \rangle -|N_1| \) is an \(M\)-matrix; an \(H\)-compatible splitting if \(\langle A_{1} \rangle = \langle M_1 \rangle - \lvert N_1\rvert \).
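The matrix classes above can be checked numerically on small examples. The sketch below (assuming NumPy; the helper names are illustrative, not from the paper) builds the comparison matrix of definition 4 and tests the \(M\)-matrix property of definition 5 via the sign of the inverse:

```python
import numpy as np

def comparison_matrix(A):
    """<A>: |a_ii| on the diagonal, -|a_ij| off the diagonal (definition 4)."""
    C = -np.abs(A)
    np.fill_diagonal(C, np.abs(np.diag(A)))
    return C

def is_M_matrix(A, tol=1e-12):
    """Z-matrix whose inverse is nonnegative (definition 5); assumes A is nonsingular."""
    off_diag = A - np.diag(np.diag(A))
    return bool(np.all(off_diag <= tol) and np.all(np.linalg.inv(A) >= -tol))

# A is an H-matrix iff <A> is an M-matrix (definition 5).
A = np.array([[4.0, -1.0], [-2.0, 3.0]])
print(is_M_matrix(comparison_matrix(A)))  # True
```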

Lemma 7 [ 1 ]

Let \(a_{1}, b_1 \in {\mathbb R}^{n} \). Then \(a_{1}\geq 0\), \(b_{1}\geq 0\), \(a_1^{T}b_{1}=0\) if and only if \(a_{1}+b_{1}=|a_{1}-b_{1}|\).

Lemma 8 [ 6 ]

Suppose \(A_{1}, B_{1} \in {{\mathbb R}}^{n\times n}\). If \(A_{1}\) is an \(M\)-matrix, \(B_{1}\) is a \(Z\)-matrix and \(A_{1} \leq B_{1}\), then \(B_{1}\) is an \(M\)-matrix. If \(A_{1}\) is an \(H\)-matrix, then \(|A_{1}^{-1}|\leq \langle A_{1}\rangle ^{-1}\). If \(0 \leq A_{1} \leq B_{1}\), then \(\rho (A_{1}) \leq \rho (B_{1})\).

Lemma 9 [ 17 ]

Let \(A_{1}\in {{\mathbb R}}^{n\times n}\) be an \(M\)-matrix and \(A_{1}=M_{1}-N_{1}\) be an \(M\)-splitting. Let \(\rho \) denote the spectral radius; then \(\rho (M_{1}^{-1}N_{1}) < 1\).

Lemma 10 [ 13 ]

If a splitting of an \(H\)-matrix is \(H\)-compatible, then it is an \(H\)-splitting, but the converse is not true.

Lemma 11 [ 6 ]

Suppose \(A_{1} \geq 0 \). If there exist a vector \(v > 0\) in \({{\mathbb R}}^{n}\) and a scalar \(\alpha _{1} > 0\) such that \(A_{1}v \leq \alpha _{1} v\), then \(\rho (A_{1}) \leq \alpha _{1} \). Moreover, if \( A_{1}v < \alpha _{1} v\), then \(\rho (A_{1}) < \alpha _{1}\).

3 Main results

For a given vector \(s\in {\mathbb R}^{n }\), we define \(s_{+}=\max \{ 0,s\} \) and \(s_{-}=\max \{ 0,-s\} \) componentwise, so that \(s=s_{+}-s_{-}\). We consider the splitting \(A_{1}=(D_1 +\phi )-(L_1+U_1+\phi )\), where \(D_{1}\) is the diagonal part of \(A_{1}\), \(-L_1\) and \(-U_{1}\) are its strictly lower and strictly upper triangular parts (so that \(A_{1}=D_{1}-L_{1}-U_{1}\)), \(\phi \) is a strictly lower triangular parameter matrix, \(U^T_1\) denotes the transpose of \(U_{1}\), and \(\alpha \) is a positive real number. In the following theorem we convert the LCP\((q, A_{1})\) into a fixed point equation.

Theorem 12

Let \(A_{1}\in {\mathbb R}^{n\times n} \) with the splitting \(A_{1}=(D_1 +\phi )-(L_1+U_1+\phi )\). If \(z=\Omega _{1}s_+\) and \(w =\Omega _2s_{-}\), then LCP\((q,A_{1})\) is equivalent to the fixed point equation

\begin{eqnarray} \label{eq2} s=(I_{1}-\Omega ^{-1}_{2}(D_{1}+\phi -U_{1})\Omega _{1})s_{+}+\Omega ^{-1}_{2} (L_1 + \phi )\Omega _{1}s_{+}-\Omega ^{-1}_{2}q. \end{eqnarray}

Proof.
Let \(z=\Omega _{1}s_{+}\) and \(w=\Omega _{2}s_{-}\), and \(s=s_{+}-s_{-}\). From LCP\((q,A_{1})\)

\begin{align*} \Omega _{2}s_{-} & =A_{1}\Omega _{1}s_{+} + q, \\ s& =s_{+}-\Omega ^{-1}_{2}(A_{1}\Omega _{1}s_{+}+ q), \\ s& =(I_{1}-\Omega ^{-1}_{2}A_{1}\Omega _{1})s_{+}-\Omega ^{-1}_{2}q, \\ s& =(I_{1}-\Omega ^{-1}_{2}(D_{1}+\phi -U_{1})\Omega _{1})s_{+}+\Omega ^{-1}_{2} (L_1 + \phi )\Omega _{1}s_{+}-\Omega ^{-1}_{2}q. \end{align*}

□

Based on (2), we propose the following iteration method, referred to as the modified general fixed point method (MGFP), for solving LCP\((q, A_{1})\):

\begin{equation} \label{eq3} s^{(k+1)}=(I_{1}-\Omega ^{-1}_{2}(D_{1}+\phi -U_{1})\Omega _{1})s^{(k)}_{+}+\Omega ^{-1}_{2} (L_1 + \phi )\Omega _{1}s^{(k+1)}_{+}-\Omega ^{-1}_{2}q. \end{equation}

Let \(\operatorname {Res}\) be the Euclidean norm of the residual vector, defined [ 17 ] as follows:

\[ \operatorname {Res}(z^{(k)})=\| \min (z^{(k)}, A_{1}z^{(k)}+q) \| _{2}. \]

Consider a nonnegative initial vector \(z^{(0)}\in {\mathbb R}^n\). The iteration continues until the sequence \(\{ z^{(k)}\} _{k=0}^{+\infty } \subset {\mathbb R}^n\) converges; the process stops when \(\operatorname {Res}(z^{(k)}) < 10^{-5} \). For computing \(z^{(k+1)}\in {\mathbb R}^{n}\) we use the following algorithm.

Algorithm 1 (Modified General Fixed Point Method)

  • Given any initial vector \(s^{(0)} \in {\mathbb R}^{n}\) and \(\epsilon > 0 \), set \( k=0 \).

  • for \(k=0,1,2,\ldots \) do

  • \(s^{(k)}_{+}=\max \{ 0,s^{(k)}\} \)

  • compute \(\operatorname {Res}=\| \min (s^{(k)}_+, A_1s^{(k)}_++q)\| _{2}\)

  • if \(\operatorname {Res}{\lt} \epsilon \) then

  • \(s=s^{(k)}\)

  • break

  • else

  • compute \(s^{(k+1)}\) componentwise by the following scheme:

    \begin{equation*} s_1^{(k+1)}=((I_{1}-\Omega ^{-1}_{2}(D_{1}+\phi -U_{1})\Omega _{1})s^{(k)}_{+}-\Omega ^{-1}_{2}q)_1, \end{equation*}
  • for \(i= 2,3. \ldots , n\) do
    \(s_i^{(k+1)}=((I_{1}-\Omega ^{-1}_{2}(D_{1}+\phi -U_{1})\Omega _{1})s^{(k)}_{+}-\Omega ^{-1}_{2}q)_i\)
        \(+(\Omega ^{-1}_{2} (L_1 + \phi )\Omega _{1}s_{+}^{{(k+1)}})_{i}\)
    and set \(z^{(k+1)}=\Omega _{1}s_{+}^{(k+1)}\).

  • end for

  • end if

  • end for.
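The steps above can be sketched in NumPy as follows (a minimal, unoptimized implementation under our reading of the scheme; all names are illustrative). Since \(\Omega ^{-1}_{2}(L_{1}+\phi )\Omega _{1}\) is strictly lower triangular when \(\Omega _1, \Omega _2\) are diagonal, the new iterate can be computed componentwise by forward substitution:

```python
import numpy as np

def mgfp(A1, q, Omega1, Omega2, phi, tol=1e-5, max_iter=500):
    """Modified general fixed point method (MGFP) for LCP(q, A1); a sketch."""
    n = len(q)
    D1 = np.diag(np.diag(A1))
    L1 = -np.tril(A1, -1)                 # A1 = D1 - L1 - U1
    U1 = -np.triu(A1, 1)
    O2inv = np.linalg.inv(Omega2)
    T = np.eye(n) - O2inv @ (D1 + phi - U1) @ Omega1   # multiplies s_+^{(k)}
    S = O2inv @ (L1 + phi) @ Omega1                    # strictly lower triangular
    c = O2inv @ q
    s = np.zeros(n)
    for k in range(max_iter):
        z = Omega1 @ np.maximum(s, 0.0)
        if np.linalg.norm(np.minimum(z, A1 @ z + q)) < tol:
            return z, k
        rhs = T @ np.maximum(s, 0.0) - c
        s_new = np.empty(n)
        for i in range(n):   # forward substitution through the strictly lower part
            s_new[i] = rhs[i] + S[i, :i] @ np.maximum(s_new[:i], 0.0)
        s = s_new
    return Omega1 @ np.maximum(s, 0.0), max_iter
```

For instance, with \(A_1=\operatorname{tridiag}(-1,4,-1)\) of order 2, \(q=(1,-1)^T\), \(\Omega _1=I\), \(\Omega _2=D_1\) and \(\phi =\alpha (L_1+U_1^T)\), the residual drops below the tolerance in a handful of steps.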

Moreover, the MGFP provides a general framework for solving LCP\((q, A_{1})\): particular choices of the parameter matrices \(\Omega _{1}, \Omega _2\) and of \(\phi \) recover known iterative methods. In particular,

  • When \(\Omega _{1}=I\), \(\Omega _2=\Omega ^{-1}\) and \(\phi =0 \), from (3) we have

    \begin{eqnarray} \label{eq4} s^{(k+1)}=(I_{1}-\Omega (D_{1}-U_{1}))s^{(k)}_{+}+\Omega L_1 s^{(k+1)}_{+}-\Omega q, \end{eqnarray}

    which is the GFP [ 17 ] .

Theorem 13

Suppose \(A_{1}=(D_1 +\phi )-(L_1+U_1+\phi ) \in {\mathbb R}^{n \times n}\) and \(q\in {\mathbb R}^n\). Then \(s^{*}\) is a solution of (2) if and only if \(z^{*}=\Omega _{1}s^{*}_{+}\) is a solution of LCP\((q,A_{1})\).

Proof.
Let \(s^{*} \) be a solution of (2). Then

\begin{align*} s^*& =(I_{1}-\Omega ^{-1}_{2}(D_{1}+\phi -U_{1})\Omega _{1})s^*_{+}+\Omega ^{-1}_{2} (L_1 + \phi )\Omega _{1}s^*_{+}-\Omega ^{-1}_{2}q, \\ s^{*}& =(I_{1}-\Omega ^{-1}_{2}A_{1}\Omega _{1})s^{*}_{+}-\Omega ^{-1}_{2}q, \\ \Omega _{2}s^{*}_{-} & =A_{1}\Omega _{1}s^{*}_{+} + q. \end{align*}

Since \(\Omega _{2}s^{*}_{-} \geq 0\),

\[ A_{1}\Omega _{1}s^{*}_{+} + q \geq 0. \]

Moreover,

\[ (\Omega _{1}s^{*}_{+})^T(A_{1}\Omega _{1}s^{*}_{+}+ q) = (\Omega _{1}s^{*}_{+})^T(\Omega _{2}s^{*}_{-}) = 0, \]

and \(\Omega _{1}s^{*}_{+}\geq 0\). Therefore \(z^{*}=\Omega _{1}s^{*}_{+}\) is a solution of LCP\((q,A_{1})\).

Let \(z^{*}=\Omega _{1}s^{*}_{+}\), \(w^{*}=\Omega _{2}s^{*}_{-}\) and \(s^{*}=s^{*}_{+}-s^{*}_{-}\). From LCP\((q,A_{1})\)

\begin{align*} \Omega _{2}s^{*}_{-} & =A_{1}\Omega _{1}s^{*}_{+} + q, \\ s^{*}& =s^{*}_{+}-\Omega ^{-1}_{2}(A_{1}\Omega _{1}s^{*}_{+}+ q), \\ s^{*}& =(I_{1}-\Omega ^{-1}_{2}A_{1}\Omega _{1})s^{*}_{+}-\Omega ^{-1}_{2}q, \\ s^*& =(I_{1}-\Omega ^{-1}_{2}(D_{1}+\phi -U_{1})\Omega _{1})s^*_{+}+\Omega ^{-1}_{2} (L_1 + \phi )\Omega _{1}s^*_{+}-\Omega ^{-1}_{2}q. \end{align*}

Thus, \(s^{*}\) is a solution of (2).

□

In the following theorem, we show that the solution of (2) is unique when the system matrix \(A_1\) of LCP\((q,A_1)\) is a \(P\)-matrix.

Theorem 14

Let \(A_1\) be a \(P\)-matrix with \(A_{1}=(D_1 +\phi )-(L_1+U_1+\phi ) \in {\mathbb R}^{n \times n}\) and \(q\in {\mathbb R}^n\). Then for any positive diagonal matrices \(\Omega _1\) and \(\Omega _2\), (2) has a unique solution.

Proof.
Since \(A_{1}\) is a \(P\)-matrix, LCP\((q, A_{1})\) has a unique solution for any \(q \in {\mathbb R}^{n}\). Let \( y^{*}\) and \( u^{*}\) be two solutions of (2). Then

\begin{align*} y^*& =(I_{1}-\Omega ^{-1}_{2}(D_{1}+\phi -U_{1})\Omega _{1})y^*_{+}+\Omega ^{-1}_{2} (L_1 + \phi )\Omega _{1}y^*_{+}-\Omega ^{-1}_{2}q, \\ u^*& =(I_{1}-\Omega ^{-1}_{2}(D_{1}+\phi -U_{1})\Omega _{1})u^*_{+}+\Omega ^{-1}_{2} (L_1 + \phi )\Omega _{1}u^*_{+}-\Omega ^{-1}_{2}q. \end{align*}

By theorem 13, both \(\Omega _{1}y^{*}_{+}\) and \(\Omega _{1}u^{*}_{+}\) solve LCP\((q,A_{1})\); by uniqueness of that solution, \(\Omega _{1}y^{*}_{+} =\Omega _{1}u^{*}_{+}\), which implies \(y^{*}_{+} =u^{*}_{+}\). Substituting this into the two equations above yields

\[ y^{*}=u^{*}. \]

In the following, we prove the convergence conditions when \(A_{1}\) is a \(P\)-matrix.

Theorem 15

Let \(A_{1}\) be a \(P\)-matrix with \(A_{1}=(D_1 +\phi )-(L_1+U_1+\phi ) \in {\mathbb R}^{n \times n}\) and \(q\in {\mathbb R}^n\). Assume

\[ \rho \Big(\Big(I-|\Omega ^{-1}_{2} (L_1 + \phi )\Omega _{1}|\Big)^{-1}\Big|I_{1}-\Omega ^{-1}_{2}(D_{1}+\phi -U_{1})\Omega _{1}\Big|\Big){\lt}1 \]

and let \(s^{*}\) be the solution of (2). Then the sequence \(\{ z^{(k)}\} ^{+\infty }_{k=1}\) generated by the MGFP iteration (3) converges to \(z^{*}=\Omega _{1}s^{*}_{+}\) for any initial vector \(s^{(0)}\in {\mathbb R}^{n}\).

Proof.
Suppose \(A_{1}\) is a \(P\)-matrix; then \(s^{*}\) is the unique solution of (2). Thus

\[ s^*=(I_{1}-\Omega ^{-1}_{2}(D_{1}+\phi -U_{1})\Omega _{1})s^*_{+}+\Omega ^{-1}_{2} (L_1 + \phi )\Omega _{1}s^*_{+}-\Omega ^{-1}_{2}q. \]

Subtracting this from (3) gives

\begin{equation*} \begin{split} s^{(k+1)}-s^{*}& =(I_{1}-\Omega ^{-1}_{2}(D_{1}+\phi -U_{1})\Omega _{1})(s_{+}^{(k)}-s^*_{+}) \\ & \quad +\Omega ^{-1}_{2} (L_1+ \phi )\Omega _{1}(s_{+}^{(k+1)}-s^*_{+}). \end{split}\end{equation*}

It follows that

\begin{equation*} \begin{split} & |s^{(k+1)}-s^{*}|= \\ & =|(I_{1}-\Omega ^{-1}_{2}(D_{1}+\phi -U_{1})\Omega _{1})(s_+^{(k)}-s^*_{+})+\Omega ^{-1}_{2} (L_1 +\phi )\Omega _{1}(s_+^{(k+1)}-s^*_{+})| \\ & \leq |(I_{1}-\Omega ^{-1}_{2}(D_{1}+\phi -U_{1})\Omega _{1})(s_{+}^{(k)}-s^*_{+})|+|\Omega ^{-1}_{2} (L_1 + \phi )\Omega _{1}(s_+^{(k+1)}-s^*_{+})| \\ & \leq |(I_{1}-\Omega ^{-1}_{2}(D_{1}+\phi -U_{1})\Omega _{1})||s^{(k)}-s^*|+|\Omega ^{-1}_{2} (L_1 + \phi )\Omega _{1}||s^{(k+1)}-s^*| \end{split}\end{equation*}
\[ |s^{(k+1)}-s^{*}|-|\Omega ^{-1}_{2} (L_1 + \phi )\Omega _{1}||s^{(k+1)}-s^*| \leq |(I_{1}-\Omega ^{-1}_{2}(D_{1}+\phi -U_{1})\Omega _{1})||s^{(k)}-s^*| \]
\[ (I-|\Omega ^{-1}_{2} (L_1 + \phi )\Omega _{1}|)|s^{(k+1)}-s^*| \leq |(I_{1}-\Omega ^{-1}_{2}(D_{1}+\phi -U_{1})\Omega _{1})||s^{(k)}-s^*| \]

and

\[ |s^{(k+1)}-s^*| \leq \big(I-|\Omega ^{-1}_{2} (L_1 + \phi )\Omega _{1}|\big)^{-1}\big|(I_{1}-\Omega ^{-1}_{2}(D_{1}+\phi -U_{1})\Omega _{1})\big|\cdot |s^{(k)}-s^*|. \]

Therefore, if \(\rho \Big(\big(I-|\Omega ^{-1}_{2} (L_1 + \phi )\Omega _{1}|\big)^{-1}\big|I_{1}-\Omega ^{-1}_{2}(D_{1}+\phi -U_{1})\Omega _{1}\big|\Big){\lt}1, \) then the sequence \(\{ z^{(k)}\} ^{+\infty }_{k=1}\) converges to \(z^{*}\) for any initial vector \(s^{(0)}\in {\mathbb R}^{n}\).

□
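The spectral radius condition of theorem 15 is easy to evaluate numerically for a given splitting. The following sketch (assuming NumPy; the function name is ours) assembles the two factors and returns the radius:

```python
import numpy as np

def mgfp_radius(A1, Omega1, Omega2, phi):
    """rho((I - |O2^-1 (L1+phi) O1|)^-1 |I - O2^-1 (D1+phi-U1) O1|), cf. theorem 15."""
    n = A1.shape[0]
    D1 = np.diag(np.diag(A1))
    L1 = -np.tril(A1, -1)                 # A1 = D1 - L1 - U1
    U1 = -np.triu(A1, 1)
    O2inv = np.linalg.inv(Omega2)
    M = np.eye(n) - np.abs(O2inv @ (L1 + phi) @ Omega1)
    N = np.abs(np.eye(n) - O2inv @ (D1 + phi - U1) @ Omega1)
    return float(max(abs(np.linalg.eigvals(np.linalg.inv(M) @ N))))

# Diagonally dominant example: the radius is well below one, so MGFP converges.
A1 = np.array([[4.0, -1.0], [-1.0, 4.0]])
rho = mgfp_radius(A1, np.eye(2), np.diag(np.diag(A1)), np.zeros((2, 2)))
print(rho < 1.0)  # True
```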

Now, when \(A_1\) is an \(H_+\)-matrix, we analyze convergence domains of the parameter matrices \(\Omega _1\) and \(\Omega _2\) for the MGFP.

Theorem 16

Let \(A_1\) be an \(H_{+}\)-matrix with \(A_{1}=(D_1 +\phi )-(L_1+U_1+\phi ) \in {\mathbb R}^{n \times n}\), and let either one of the following hold:

\((1)\) \( \Omega ^{-1}_{2}\Omega _{1} > (D_{1}+\phi )^{-1} \) and \(2\Omega ^{-1}_{2}\Omega _{1}-(D_{1}+\phi )-|B+\phi |\) is an \(M\)-matrix, where \(B=L_{1}+U_1\);

\((2)\) \( 0 < \Omega ^{-1}_{2}\Omega _{1} \leq (D_{1}+\phi )^{-1}.\)

Then the sequence \(\{ z^{(k)}\} ^{+\infty }_{k=1}\) generated by the MGFP iteration (3) converges to \(z^{*}\) for any initial vector \(s^{(0)}\in {\mathbb R}^{n}\).

Proof.
Since \(A_{1}\) is an \( H _+\)-matrix, LCP\((q,A_{1})\) has a unique solution [ 11 ] . Now we consider the splitting

\begin{equation*} \begin{split} & (I_{1}-|\Omega ^{-1}_{2}(L_1+\phi )\Omega _{1}|)-|I_{1}-\Omega ^{-1}_{2}(D_1+\phi -U_{1})\Omega _{1}|= \\ & =(I_{1}-|I_{1}-\Omega ^{-1}_{2}(D_{1}+\phi )\Omega _{1}|)-\Omega ^{-1}_{2}|B+\phi |\Omega _{1}. \end{split}\end{equation*}

(1) If \( \Omega ^{-1}_{2}\Omega _{1} > (D_{1}+\phi )^{-1}\), then

\begin{equation*} \begin{split} & (I_{1}-|I_{1}-\Omega ^{-1}_{2}(D_{1}+\phi )\Omega _{1}|)-\Omega ^{-1}_{2}|B+\phi |\Omega _{1}= \\ & =2I_{1}-\Omega ^{-1}_{2}(D_1+\phi )\Omega _{1}-\Omega ^{-1}_{2}|B+\phi |\Omega _{1}\\ & =\Omega ^{-1}_{2}(2\Omega ^{-1}_{2}\Omega _{1}-(D_{1}+\phi )-|B+\phi |)\Omega _{1}. \end{split}\end{equation*}

Since \(2\Omega ^{-1}_{2}\Omega _{1}-(D_{1}+\phi )-|B+\phi |\) is an \(M\)-matrix, the splitting \((I_{1}-|\Omega ^{-1}_{2}(L_{1}+\phi )\Omega _{1}|)-|I_{1}-\Omega ^{-1}_{2}(D_{1}+\phi -U_{1})\Omega _{1}|\) is an \(M\)-splitting of the \(M\)-matrix \(\Omega ^{-1}_{2}(2\Omega ^{-1}_{2}\Omega _{1}-(D_{1}+\phi )-|B+\phi |)\Omega _{1}\); hence, by lemma 9, \(\rho ((I-|\Omega ^{-1}_{2} (L_1 + \phi )\Omega _{1}|)^{-1}|I_{1}-\Omega ^{-1}_{2}(D_{1}+\phi -U_{1})\Omega _{1}|) < 1 \).

(2) If \(0 < \Omega ^{-1}_{2}\Omega _{1} \leq (D_{1}+\phi )^{-1}\), then

\begin{align*} (I_{1}-|I_{1}-\Omega ^{-1}_{2}(D_{1}+\phi )\Omega _{1}|)-\Omega ^{-1}_{2}|B+\phi |\Omega _{1}& =\Omega ^{-1}_{2}(D_{1}+\phi -|B+\phi |)\Omega _{1}\\ & =\Omega ^{-1}_{2}\langle A_{1} \rangle \Omega _{1}. \end{align*}

Therefore, \((I_{1}-|\Omega ^{-1}_{2}(L_{1}+\phi )\Omega _{1}|)-|I_{1}-\Omega ^{-1}_{2}(D_{1}+\phi -U_{1})\Omega _{1}|\) is an \(M\)-splitting of the \(M\)-matrix \(\Omega ^{-1}_{2}\langle A_{1} \rangle \Omega _{1}\) [ 13 ] . Hence, by lemma 9, \(\rho ((I-|\Omega ^{-1}_{2} (L_1 + \phi )\Omega _{1}|)^{-1}|I_{1}-\Omega ^{-1}_{2}(D_{1}+\phi -U_{1})\Omega _{1}|) < 1 \).

□

4 Numerical examples

In this section, two numerical examples are provided to show the effectiveness of the proposed method. We use the following notation: IT = number of iteration steps; CPU = CPU time in seconds. The system matrix \(A_{1} \) is generated by

\[ A_{1}(p_{1}, p_{2}, p_{3})= Q+p_{1}I_{1}+p_{2}G+p_{3}H, \]

where \(p_{1}, p_{2}\) and \(p_{3}\) are given constants, \(I_{1}\) is the identity matrix of order \(n\),

\[ G = \operatorname {tridiag}(0, 0, 1)= \begin{bmatrix} 0 & 1 & 0 & \ldots & 0 \\ 0 & 0 & 1 & \ldots & 0 \\ \vdots & & \ddots & \ddots & \vdots \\ 0 & \ldots & 0 & 0 & 1 \\ 0 & \ldots & 0 & 0 & 0 \\ \end{bmatrix} \in {\mathbb R}^{n \times n} \]

and

\[ H=\operatorname {diag}(1, 2, 1, 2, \ldots )= \begin{bmatrix} 1 & 0 & 0 & \ldots & 0 \\ 0 & 2 & 0 & \ldots & 0 \\ 0 & 0 & 1 & \ldots & 0 \\ \vdots & & & \ddots & \vdots \\ 0 & \ldots & 0 & 0 & \ddots \\ \end{bmatrix} \in {\mathbb R}^{n \times n}. \]

Let \(s^{(0)}=(0,0,\ldots ,0)^T\in {\mathbb R}^{n}\) be the initial vector. We consider the LCP\((q, A_1)\), which always has a unique solution, where \( q= (1, -1, 1, -1, \ldots )^T \in {\mathbb R}^{n}\). We set \(\Omega _{1}= I_{1}\) and \(\Omega _{2}= \omega ^{-1} D_{1}\) in the MGFP. The proposed method is compared with the GFP [ 17 ] , which is effective in solving LCP\((q,A_{1})\).

Matlab version R2021a was used for all calculations. Table 1 and table 2 report the numerical results of GFP [ 17 ] and MGFP for example 17 and example 18, respectively.

Example 17

The system matrix \(A_{1}\in {{\mathbb R}}^{n\times n}\) is generated by \(A_{1}(p_{1}, p_{2}, p_{3})= Q+p_{1}I_{1}+p_{2}G+p_{3}H,\) where \(p_{1}, p_{2}\) and \(p_{3}\) are given constants and

\[ Q=\operatorname {tridiag} (-I_{2},L_{1}, -I_{2})= \begin{bmatrix} L_{1} & -I_{2} & 0 & \ldots & 0 \\ -I_{2} & L_{1} & -I_{2} & \ldots & 0 \\ \vdots & -I_{2} & L_{1} & -I_{2} & \vdots \\ 0 & \ldots & -I_{2} & \ddots & -I_{2} \\ 0 & \ldots & 0 & -I_{2} & L_{1} \\ \end{bmatrix} \in {{\mathbb R}}^{n\times n}, \]
\[ L_{1}=\operatorname {tridiag} (-1,4, -1)= \begin{bmatrix} 4 & -1 & \ldots & \ldots & 0 \\ -1 & 4 & -1 & \ldots & 0 \\ \vdots & -1 & 4 & -1 & \vdots \\ 0 & \ldots & -1 & \ddots & -1 \\ 0 & \ldots & \ldots & -1 & 4 \\ \end{bmatrix} \in {{\mathbb R}}^{m\times m}, \]

where \(I_{2}\) is the identity matrix of order \(m\) and \(n=m^{2}\), with \(m\) a positive integer.
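The test matrix of this example can be assembled with Kronecker products. The sketch below (assuming NumPy; the function name is ours) mirrors the definitions of \(Q\), \(L_{1}\), \(G\) and \(H\) above:

```python
import numpy as np

def build_A1(m, p1, p2, p3):
    """A1(p1, p2, p3) = Q + p1*I + p2*G + p3*H with n = m^2, as in example 17."""
    n = m * m
    # L1 = tridiag(-1, 4, -1) of order m; Q = tridiag(-I2, L1, -I2) in block form.
    L1 = 4.0 * np.eye(m) - np.diag(np.ones(m - 1), -1) - np.diag(np.ones(m - 1), 1)
    E = np.diag(np.ones(m - 1), 1)            # block superdiagonal pattern
    Q = np.kron(np.eye(m), L1) - np.kron(E + E.T, np.eye(m))
    G = np.diag(np.ones(n - 1), 1)            # ones on the superdiagonal
    H = np.diag(np.tile([1.0, 2.0], n)[:n])   # diag(1, 2, 1, 2, ...)
    return Q + p1 * np.eye(n) + p2 * G + p3 * H

A1 = build_A1(10, 1, 1, -1)
print(A1.shape)  # (100, 100)
```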

\(\mathbf{A_1(1, 1, -1)}\)

  n       100       400       900       1600      2500
  GFP (\(\omega =1\))
  IT      18        21        22        23        23
  CPU     0.0058    0.1267    0.8356    12.8507   76.8588
  Res     7.8e-06   7.5e-06   7.3e-06   5.3e-06   7.0e-06
  MGFP (\(\omega =1\), \(\alpha =0.1\))
  IT      15        18        19        20        20
  CPU     0.0050    0.1028    0.6105    11.2363   67.0586
  Res     6.4e-06   7.8e-06   6.8e-06   4.4e-06   5.9e-06

\(\mathbf{A_1(0, 1, 0)}\)

  n       100       400       900       1600      2500
  GFP (\(\omega =1\))
  IT      13        14        15        15        15
  CPU     0.0045    0.0582    0.3728    5.4135    31.8341
  Res     5.3e-06   7.6e-06   9.3e-06   2.3e-06   2.6e-06
  MGFP (\(\omega =1\), \(\alpha =0.1\))
  IT      12        13        13        13        13
  CPU     0.0041    0.0817    0.4127    7.2064    42.3931
  Res     3.1e-06   3.2e-06   5.3e-06   7.4e-06   9.5e-06

\(\mathbf{A_1(1, 1, 1)}\)

  n       100       400       900       1600      2500
  GFP (\(\omega =1\))
  IT      9         9         10        10        10
  CPU     0.0046    0.0580    0.447     5.2868    31.3160
  Res     2.1e-06   6.7e-06   1.8e-06   2.5e-06   3.2e-06
  MGFP (\(\omega =1\), \(\alpha =0.02\))
  IT      9         9         9         10        10
  CPU     0.0050    0.0530    0.3177    5.3091    31.5628
  Res     1.6e-06   5.0e-06   8.3e-06   1.8e-06   2.3e-06

\(\mathbf{A_1(1, 0, 1)}\)

  n       100       400       900       1600      2500
  GFP (\(\omega =1.1\))
  IT      9         9         9         10        10
  CPU     0.0045    0.0582    0.3728    5.4135    31.8341
  Res     5.3e-06   7.6e-06   9.3e-06   2.3e-06   2.6e-06
  MGFP (\(\omega =1.1\), \(\alpha =0.05\))
  IT      9         9         9         9         10
  CPU     0.0040    0.0507    0.3671    5.4161    31.9112
  Res     4.7e-06   6.6e-06   8.1e-06   9.3e-06   2.1e-06

Table 1 Results for GFP [ 17 ] and MGFP with \(\phi =\alpha (L_{1}+U^T_{1})\)

Example 18

The system matrix \(A_{1}\in {{\mathbb R}}^{n\times n}\) is generated by

\[ A_{1}(p_{1}, p_{2}, p_{3})= Q+p_{1}I_{1}+p_{2}G+p_{3}H, \]

where \(p_{1}, p_{2}\) and \(p_{3}\) are given constants, \(I_{1}\) is the identity matrix of order \(n\) and

\begin{align*} Q=& \operatorname {tridiag} (-1.5I_2,L_1, -0.5I_{2}) \\ = & \begin{bmatrix} L_{1} & -0.5I_{2} & 0 & \ldots & 0 \\ -1.5I_{2} & L_{1} & -0.5I_{2} & \ldots & 0 \\ \vdots & -1.5I_{2} & L_{1} & -0.5I_{2} & \vdots \\ 0 & \ldots & -1.5I_{2} & \ddots & -0.5I_{2} \\ 0 & \ldots & 0 & -1.5I_{2} & L_{1} \\ \end{bmatrix} \in {{\mathbb R}}^{n\times n}, \end{align*}
\[ L_{1}=\operatorname {tridiag} (-1.5,4, -0.5)= \begin{bmatrix} 4 & -0.5 & \ldots & \ldots & 0 \\ -1.5 & 4 & -0.5 & \ldots & 0 \\ \vdots & -1.5 & 4 & -0.5 & \vdots \\ 0 & \ldots & -1.5 & \ddots & -0.5 \\ 0 & \ldots & \ldots & -1.5 & 4 \\ \end{bmatrix} \in {{\mathbb R}}^{m\times m}, \]

where \(I_{2}\) is the identity matrix of order \(m\).

\(\mathbf{A_1(1, 1, -1)}\)

  n       100       400       900       1600      2500
  GFP (\(\omega =1\))
  IT      13        14        15        15        15
  CPU     0.0061    0.0807    0.5871    8.3912    48.8538
  Res     5.0e-06   6.2e-06   3.5e-06   5.0e-06   6.5e-06
  MGFP (\(\omega =1\), \(\alpha =0.1\))
  IT      11        12        13        15        15
  CPU     0.0051    0.0607    0.5392    8.44191   49.4862
  Res     3.0e-06   4.8e-06   7.9e-06   3.1e-06   8.7e-06

\(\mathbf{A_1(0, 1, 0)}\)

  n       100       400       900       1600      2500
  GFP (\(\omega =1\))
  IT      10        10        10        11        11
  CPU     0.0041    0.0548    0.3621    5.999     35.2935
  Res     1.9e-06   5.8e-06   9.4e-06   2.6e-06   3.3e-06
  MGFP (\(\omega =1\), \(\alpha =0.1\))
  IT      8         9         9         9         9
  CPU     0.0045    0.0507    0.3412    4.5795    28.4975
  Res     7.8e-06   2.05e-06  2.7e-06   3.4e-06   4.0e-06

\(\mathbf{A_1(1, 1, 1)}\)

  n       100       400       900       1600      2500
  GFP (\(\omega =1\))
  IT      7         7         8         8         8
  CPU     0.0041    0.0407    0.3266    4.1892    24.6431
  Res     2.7e-06   6.8e-06   9.7e-06   1.3e-06   1.7e-06
  MGFP (\(\omega =1\), \(\alpha =0.1\))
  IT      6         7         7         7         7
  CPU     0.0043    0.0474    0.2796    3.4536    22.6106
  Res     9.7e-06   1.3e-06   1.6e-06   2.0e-06   2.2e-06

\(\mathbf{A_1(1, 0, 1)}\)

  n       100       400       900       1600      2500
  GFP (\(\omega =1.1\))
  IT      8         8         8         8         8
  CPU     0.0041    0.0407    0.2699    4.4741    24.6650
  Res     1.9e-06   2.7e-06   3.2e-06   3.8e-06   4.2e-06
  MGFP (\(\omega =1.1\), \(\alpha =0.1\))
  IT      8         8         8         8         8
  CPU     0.0047    0.05303   0.2554    4.0929    24.3851
  Res     1.7e-06   2.4e-06   2.9e-06   3.4e-06   3.8e-06

Table 2 Results for GFP [ 17 ] and MGFP with \(\phi =\alpha (L_{1}+U^T_{1})\)

From example 17 and example 18, we see that the proposed MGFP requires fewer iteration steps than the GFP [ 17 ] .

5 Conclusion

In this article, we introduced a modified general fixed point method based on a new matrix splitting for solving the LCP\((q, A _1)\) with parameter matrices \(\Omega _1\) and \(\Omega _2\). We showed how the iterative form is linked to the new matrix splitting and the parameter matrices \(\Omega _1\) and \(\Omega _2\); this iterative form preserves the large and sparse structure of \(A_1\) during the iteration process. Moreover, we established the convergence condition when \(A_1\) is a \(P\)-matrix and presented sufficient convergence domains for \(\Omega _1\) and \(\Omega _2\) when the system matrix \(A_{1}\) is an \(H_{+}\)-matrix. Finally, two examples were discussed to show the efficiency of the proposed method.

Acknowledgements

The authors are grateful to the editor associated with this paper for the excellent cooperation. Also, the authors thank the anonymous referees who spent some precious time reviewing this paper. The authors would like to acknowledge their contribution, due to which there is a significant improvement in the paper.


Bibliography

[1] R. Ali, I. Khan, A. Ali, A. Mohamed, Two new generalized iteration methods for solving absolute value equations using \(M\)-matrix, AIMS Math., 7 (2022) no. 5, pp. 8176–8187. https://doi.org/10.3934/math.2022455

[2] A.K. Das, Properties of some matrix classes based on principal pivot transform, Ann. Oper. Res., 243 (2016), pp. 375–382. https://doi.org/10.1007/s10479-014-1622-6

[3] A.K. Das, R. Jana, Deepmala, Finiteness of criss cross method in complementarity problem, in: International Conference on Mathematics and Computing, pp. 170–180, Springer, 2017. https://doi.org/10.1007/978-981-10-4642-1_15

[4] A.K. Das, R. Jana, Deepmala, On generalized positive subdefinite matrices and interior point algorithm, in: Frontiers in Optimization: Theory and Applications, pp. 3–16, Springer, 2016. https://doi.org/10.1007/978-981-10-7814-9_1

[5] A. Dutta, R. Jana, A.K. Das, On column competent matrices and linear complementarity problem, in: Proceedings of the Seventh International Conference on Mathematics and Computing, pp. 615–625, Springer, 2022. https://doi.org/10.1007/978-981-16-6890-6_46

[6] A. Frommer, D.B. Szyld, H-splittings and two-stage iterative methods, Numer. Math., 63 (1992), pp. 345–356. https://doi.org/10.1007/BF01385865

[7] R. Jana, A.K. Das, A. Dutta, On hidden \(z\)-matrix and interior point algorithm, OPSEARCH, 56 (2019). https://doi.org/10.1007/s12597-019-00412-0

[8] S.K. Neogy, A.K. Das, R. Bapat, Optimization models with economic and game theoretic applications, Ann. Oper. Res., 243 (2016), pp. 1–3. https://doi.org/10.1007/s10479-016-2250-0

[9] S.K. Neogy, A.K. Das, R. Bapat, Modeling, computation and optimization, 11 (2021).

[10] S.K. Neogy, A.K. Das, S. Sinha, A. Gupta, On a mixture class of stochastic game with ordered field property, in: Mathematical programming and game theory for decision making, pp. 451–477, World Scientific, 2008. https://doi.org/10.1142/9789812813220_0025

[11] Z.Z. Bai, Modulus-based matrix splitting iteration methods for linear complementarity problems, Numer. Linear Algebra Appl., 17 (2010) no. 6, pp. 917–933. https://doi.org/10.1002/nla.680

[12] Z.Z. Bai, D. Evans, Matrix multisplitting methods with applications to linear complementarity problems: parallel synchronous and chaotic methods, Reseaux et Systemes Repartis, Calculateurs Paralleles, 13 (2001), pp. 125–154.

[13] A. Berman, R.J. Plemmons, Nonnegative Matrices in the Mathematical Sciences, SIAM, Philadelphia, 1994.

[14] R.W. Cottle, J.S. Pang, R.E. Stone, The Linear Complementarity Problem, Academic Press, London, 1992.

[15] C.W. Cryer, The method of Christopherson for solving free boundary problems for infinite journal bearing by means of finite differences, Math. Comput., 25 (1971) no. 115, pp. 435–443. https://doi.org/10.2307/2005205

[16] M. Dehghan, M. Hajarian, Convergence of SSOR methods for linear complementarity problems, Oper. Res. Lett., 37 (2009) no. 3, pp. 219–223. https://doi.org/10.1016/j.orl.2009.01.013

[17] X.M. Fang, General fixed point method for solving the linear complementarity problem, AIMS Math., 6 (2021) no. 11, pp. 11904–11920. https://doi.org/10.3934/math.2021691

[18] A. Hadjidimos, L.L. Zhang, Comparison of three classes of algorithms for the solution of the linear complementarity problem with an \(H_{+}\)-matrix, J. Comp. Appl. Math., 336 (2018), pp. 175–191. https://doi.org/10.1016/j.cam.2017.12.028

[19] X.L. Han, D.J. Yuan, S. Jiang, Two SAOR iterative formats for solving linear complementarity problems, International Journal of Information Technology and Computer Science, 3 (2011) no. 3, pp. 38–45. http://doi.org/10.5815/ijitcs.2011.02.06

[20] C.E. Lemke, J.T. Howson, Jr., Equilibrium points of bimatrix games, J. Soc. Ind. Appl. Math., 12 (1964) no. 2, pp. 413–423. https://doi.org/10.1137/0112033

[21] K.G. Murthy, F.T. Yu, Linear Complementarity, Linear and Nonlinear Programming, Heldermann, Berlin, Sigma Series in Applied Mathematics, 3, 1988.

[22] H.S. Najafi, S.A. Edalatpanah, Modification of iterative methods for solving linear complementarity problems, Eng. Comput., 30 (2013) no. 7, pp. 910–923. https://doi.org/10.1108/EC-10-2011-0131

[23] S.K. Neogy, A.K. Das, A. Gupta, Generalized principal pivot transforms, complementarity theory and their applications in stochastic games, Optim. Lett., 6 (2012) no. 2, pp. 339–356. https://doi.org/10.1007/s11590-010-0261-3

[24] A.A. Raimondi, J. Boyd, A solution for the finite journal bearing and its application to analysis and design III, ASLE Transactions, 1 (1958) no. 1, pp. 194–209. https://doi.org/10.1080/05698195808972330

[25] M.H. Xu, G.F. Laun, A rapid algorithm for a class of linear complementarity problems, Appl. Math. Comput., 188 (2007) no. 2, pp. 1647–1655. https://doi.org/10.1016/j.amc.2006.11.184

[26] L.L. Zhang, Z.R. Ren, Improved convergence theorems of modulus-based matrix splitting iteration methods for linear complementarity problems, Appl. Math. Lett., 26 (2013) no. 6, pp. 638–642. https://doi.org/10.1016/j.aml.2013.01.001

[27] H. Zheng, W. Li, S. Vong, A relaxation modulus-based matrix splitting iteration method for solving linear complementarity problems, Numer. Algor., 74 (2017) no. 1, pp. 137–152. https://doi.org/10.1007/s11075-016-0142-7