New Accelerated Modulus-Based Iteration Method
for Solving Large and Sparse
Linear Complementarity Problem

Bharat Kumar∗,1, Deepmala2 and Arup K. Das3
(Date: October 25, 2023; accepted: May 03, 2024; published online: July 11, 2024.)
Abstract.

In this article, we propose a family of new accelerated modulus-based iteration methods for the large and sparse linear complementarity problem. We establish sufficient convergence conditions when the system matrix is a P-matrix or an H+-matrix. In addition, we present numerical examples with different parameters to illustrate the efficacy of the proposed methods, which reduce the number of iterations and the CPU time and thereby improve convergence performance.

Key words and phrases:
linear complementarity problem, P-matrix, H+-matrix, matrix splitting, convergence.
2005 Mathematics Subject Classification:
65F10, 90C33, 65F50.
∗The first author is thankful to the University Grants Commission (UGC), Government of India under the SRF fellowship Program No. 1068/(CSIR-UGC NET DEC.2017).
1Mathematics Discipline, PDPM-Indian Institute of Information Technology, Design and Manufacturing, Jabalpur, M.P., India, e-mail: bharatnishad.kanpu@gmail.com
2Mathematics Discipline, PDPM-Indian Institute of Information Technology, Design and Manufacturing, Jabalpur, M.P., India, e-mail: dmrai23@gmail.com
3Indian Statistical Institute, 203 B.T. Road, Kolkata-700108, India, e-mail: akdas@isical.ac.in

1. Introduction

Large and sparse matrices have many rows and columns but few non-zero elements; that is, the majority of their entries are zero. Sparse matrices are commonly used to represent complex systems or large datasets in fields such as computer science, mathematics, physics and engineering. Because of this sparsity, storing each element individually is impractical, and specialized data structures and algorithms must be used to store and manipulate the matrix efficiently.
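As a concrete illustration of such a specialized data structure, the following minimal sketch stores a matrix in compressed sparse row (CSR) form and multiplies it by a vector using only the stored nonzeros; the names `data`, `indices` and `indptr` follow common CSR convention and are not from this paper.

```python
import numpy as np

# Build a toy CSR representation of a tridiagonal matrix: keep only the
# nonzero values (data), their column indices (indices), and the offset
# of each row's first nonzero (indptr).
dense = np.array([[4.0, -1.0, 0.0],
                  [-1.0, 4.0, -1.0],
                  [0.0, -1.0, 4.0]])
data, indices, indptr = [], [], [0]
for row in dense:
    nz = np.nonzero(row)[0]
    indices.extend(int(j) for j in nz)
    data.extend(float(v) for v in row[nz])
    indptr.append(len(data))

def csr_matvec(data, indices, indptr, x):
    """y = A @ x computed from the CSR arrays alone."""
    y = np.zeros(len(indptr) - 1)
    for i in range(len(y)):
        lo, hi = indptr[i], indptr[i + 1]
        y[i] = sum(v * x[j] for v, j in zip(data[lo:hi], indices[lo:hi]))
    return y

x = np.array([1.0, 2.0, 3.0])
assert np.allclose(csr_matvec(data, indices, indptr, x), dense @ x)
```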

Let us assume that the matrix 𝒜 ∈ ℝ^{n×n} is large and sparse and that it is associated with the vector σ ∈ ℝ^n. The objective of the linear complementarity problem, referred to as LCP(σ,𝒜), is to determine a solution λ ∈ ℝ^n of the following system:

(1) λ ≥ 0,  ω = 𝒜λ + σ ≥ 0,  λ^T(𝒜λ + σ) = 0.
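The three conditions in (1) can be checked numerically through the natural residual min(λ, 𝒜λ+σ), which vanishes exactly at a solution; the tiny instance below is purely illustrative.

```python
import numpy as np

def lcp_residual(A, sigma, lam):
    """Norm of min(lam, A@lam + sigma): zero iff lam solves LCP(sigma, A)."""
    omega = A @ lam + sigma
    return np.linalg.norm(np.minimum(lam, omega))

# 2x2 illustration: with A = I and sigma = (-1, 1), lam = (1, 0) satisfies
# lam >= 0, omega = A@lam + sigma = (0, 1) >= 0 and lam^T omega = 0.
A = np.eye(2)
sigma = np.array([-1.0, 1.0])
lam = np.array([1.0, 0.0])
assert lcp_residual(A, sigma, lam) == 0.0
assert lcp_residual(A, sigma, np.zeros(2)) > 0.0   # lam = 0 is not a solution
```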

Applications of the linear complementarity problem that have received significant study in the literature on mathematical programming include the free boundary problem, the Nash equilibrium point of the bimatrix game, operations research, control theory, mathematical economics, optimization theory, stochastic optimal control, and the American option pricing problem.

The methods available for solving linear complementarity problems fall into two groups, namely pivotal methods [26], [18] and iterative methods [12], [4], [7], [8]. An iterative method produces a sequence of iterates that converges to a solution, whereas a pivotal method performs a series of pivot steps leading to a basic feasible complementary vector.

Reformulating the LCP(σ,𝒜) into an equation whose solution coincides with that of the LCP(σ,𝒜) is one of the most well-known and attractive ways of constructing fast and inexpensive iteration methods. Consequently, several useful equivalent forms of the LCP(σ,𝒜) have arisen. Mangasarian [22] presented three methods: projected Jacobi over-relaxation, projected SOR and projected symmetric SOR. For additional details regarding iteration methods based on Mangasarian's notion, see also [6], [35] and [10]. Bai [34] offered the following equivalent form:

(2) (Θ1 + ℳ)ϰ = 𝒩ϰ + (Θ1 − 𝒜)|ϰ| − rσ,

where r > 0 and Θ1 ∈ ℝ^{n×n} is a positive diagonal matrix, and developed a class of modulus-based matrix splitting iteration methods. Equation (2) covers the published works in [2], [28], [20], [13] and [5]. This kind of modulus-based matrix splitting iteration method has proved efficient for solving the LCP(σ,𝒜). For other formulations of Equation (2), see [21], [16], [29], [31] and [17]. Furthermore, this concept has been successfully applied to other complementarity problems, including the horizontal linear complementarity problem [11], the implicit complementarity problem [19], the nonlinear complementarity problem [9], [33], [14] and the quasi-complementarity problem [27].

Bai [34] solved linear complementarity problems using modulus-based matrix splitting methods. However, these methods may require a large number of iterations to reach the desired approximate solution of the numerical instances. In this paper, we introduce a class of new accelerated modulus-based iteration methods for solving the large and sparse LCP(σ,𝒜). These methods are based on the work of Shilang [30] and Bai [34]. We demonstrate that the linear complementarity problem and the fixed point equation are equivalent, in the sense that both have the same solution. In addition, we present several convergence conditions for the proposed method.

The article is structured as follows: In Section 2, we provide the necessary definitions, notations and well-known lemmas, which will be used in the subsequent sections of this work. A new accelerated modulus-based iteration method, based on the new equivalent fixed point form of the LCP(σ,𝒜), is presented in Section 3. In Section 4, we establish certain convergence domains for the proposed approach. Section 5 presents a numerical comparison between the suggested methods and the modulus-based matrix splitting methods of Bai [34]. Section 6 provides the conclusion.

2. Preliminaries

In this section, we introduce some basic notations, definitions and lemmas, most of which may be found in [32], [3], [25], that will be used throughout the article to examine the convergence analysis of the proposed methods.

The following is a list of related notations that are used for a given large and sparse matrix 𝒜:

  • Let 𝒜 = (aij) ∈ ℝ^{n×n} and ℬ = (bij) ∈ ℝ^{n×n}. We use 𝒜 > (≥) ℬ to denote aij > (≥) bij, ∀ 1 ≤ i,j ≤ n;

  • (⋆)^T denotes the transpose of the given matrix or vector;

  • We use 𝒜 = 0 ∈ ℝ^{n×n} to denote aij = 0, ∀ i,j;

  • |𝒜| = (|aij|), ∀ i,j;

  • 𝒜^{−1} represents the inverse of the matrix 𝒜;

  • Θ1 is a real positive diagonal matrix of order n;

  • ∥⋆∥2 is the Euclidean norm of a vector, i.e., for x = (xi) ∈ ℝ^n, ‖x‖2 = (∑_{i=1}^n xi^2)^{1/2};

  • for x,y ∈ ℝ^n, min(x,y) is the vector whose i-th component is min(xi, yi);

  • 𝒜 = 𝒟 − ℒ − 𝒰, where 𝒟 = diag(𝒜) and ℒ, 𝒰 are the strictly lower and strictly upper triangular parts of 𝒜, respectively.

Let ๐’œ=(aiโขj)โˆˆโ„nร—n and โ„ฌ=(biโขj)โˆˆโ„nร—n be square matrices. The comparison matrix of ๐’œ is defined as โŸจaiโขjโŸฉ=|aiโขj| if i=j and โŸจaiโขjโŸฉ=โˆ’|aiโขj| if iโ‰ j; a Z-matrix if all of its non-diagonal elements are less than equal to zero; an M-matrix if ๐’œโˆ’1โ‰ฅ0 as well as Z-matrix; an H-matrix, if โŸจ๐’œโŸฉ is an M-matrix and an H+-matrix if ๐’œ is an H-matrix as well as aiโขi>0โขโˆ€iโˆˆ{1,2,โ€ฆ,n}; a P-matrix if all its principle minors are positive such that dโขeโขtโข(๐’œฮฑ1โขฮฑ1)>0 โˆ€ ฮฑ1โŠ†{1,2,โ€ฆ,n}. The splitting ๐’œ=โ„ณโˆ’๐’ฉ is called an M-splitting if โ„ณ is a nonsingular M-matrix and ๐’ฉโ‰ฅ0; an H-splitting if โŸจโ„ณโŸฉโˆ’|๐’ฉ| is an M-matrix; an H-compatible splitting if โŸจ๐’œโŸฉ=โŸจโ„ณโŸฉโˆ’|๐’ฉ|.

Lemma 1 ([23]).

Let x, y ∈ ℝ^n. Then x ≥ 0, y ≥ 0 and x^T y = 0 if and only if x + y = |x − y|.
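A quick numerical spot-check of Lemma 1, on an arbitrary complementary pair (illustrative only):

```python
import numpy as np

# x, y >= 0 with complementary supports, so x^T y = 0;
# Lemma 1 then asserts x + y = |x - y|.
rng = np.random.default_rng(0)
x = np.concatenate([rng.random(5), np.zeros(5)])
y = np.concatenate([np.zeros(5), rng.random(5)])
assert x @ y == 0.0
assert np.allclose(x + y, np.abs(x - y))
```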

Lemma 2 ([3]).

Let ๐’œ,โ„ฌโˆˆโ„nร—n. If ๐’œ and โ„ฌ are M and Z-matrices, respectively, with ๐’œโ‰คโ„ฌ then โ„ฌ is an M-matrix. If ๐’œ is an H-matrix then |๐’œโˆ’1|โ‰คโŸจ๐’œโŸฉโˆ’1.

Lemma 3 ([32]).

Let ๐’œโˆˆโ„nร—n be an M-matrix and ๐’œ=โ„ณโˆ’๐’ฉ be an M-splitting, then ย ฯโข(โ„ณโˆ’1โข๐’ฉ) < 1.

Lemma 4 ([3]).

Suppose ๐’œโ‰ฅ0โˆˆโ„nร—n, if there exist v>0โˆˆโ„n and a scalar ฮฑ1>0 such that ๐’œโขvโ‰คฮฑ1โขv, then ฯโข(๐’œ)โ‰คฮฑ1. Moreover, if ๐’œโขv<v, then ฯโข(๐’œ)<1.

3. Main results

Suppose ϰ ∈ ℝ^n and 𝒜 = (ℳ + I − ℒ) − (𝒩 + I − ℒ), where I is the identity matrix of order n and ℒ is the strictly lower triangular part of 𝒜. In the following result, we convert the LCP(σ,𝒜) into a fixed point formulation.

Theorem 5.

Suppose ๐’œโˆˆโ„nร—n with the splitting ๐’œ=(โ„ณ+Iโˆ’โ„’)โˆ’(๐’ฉ+Iโˆ’โ„’) and ฯƒโˆˆโ„n. Let ฮป=ฯ„โข(|ฯฐ|+ฯฐ), ฯ‰=ฮ˜1โข(|ฯฐ|โˆ’ฯฐ), where ฯ„โ‰ฅ0, rโ‰ฅ0 and ฮ˜1 is a positive diagonal matrix and the matrix (โ„ณ+ฮ˜1+Iโˆ’โ„’) be a nonsingular. Then the equivalent formulation of the LCP(ฯƒ,๐’œ) in form of fixed point equation is

(3) ฯฐ=(โ„ณ+ฮ˜1+Iโˆ’โ„’)โˆ’1โข[(๐’ฉ+Iโˆ’โ„’)โขฯฐ+(ฮ˜1โˆ’๐’œ)โข|ฯฐ|โˆ’rโขฯƒ]
Proof.

We have λ = τ(|ϰ| + ϰ) and ω = Θ1(|ϰ| − ϰ); substituting into Equation (1), we obtain

Θ1(|ϰ| − ϰ) = 𝒜τ(|ϰ| + ϰ) + σ.

Since 𝒜 = (ℳ + I − ℒ) − (𝒩 + I − ℒ), this implies that

((ℳ + I − ℒ)τ + Θ1)ϰ = (𝒩 + I − ℒ)τϰ + (Θ1 − 𝒜τ)|ϰ| − σ.

Letting τ = 1/r, multiplying both sides by r and renaming the arbitrary positive diagonal matrix rΘ1 as Θ1, the above equation can be rewritten as

ϰ = (ℳ + I − ℒ + Θ1)^{−1}[(𝒩 + I − ℒ)ϰ + (Θ1 − 𝒜)|ϰ| − rσ].

Hence Equation (3) is the equivalent fixed point form of the LCP(σ,𝒜). ■

In the following, based on Equation (3), we propose an iteration method, referred to as a "new accelerated modulus-based iteration method", to solve the LCP(σ,𝒜). The Euclidean norm of the error vector, called the "residual", is defined as follows:

Res(λ(η)) = ‖min(λ(η), 𝒜λ(η) + σ)‖2.

Assume that λ(0) ∈ ℝ^n is a nonnegative initial vector. The iteration process stops when the sequence {λ(η)}η=0+∞ ⊂ ℝ^n converges or Res(λ(η)) < 10^{−5}. To calculate λ(η+1) ∈ ℝ^n, we apply the following algorithm.

Algorithm 1 (New Accelerated Modulus-Based Iteration Method)

Step 1. Given an initial vector ϰ(0) ∈ ℝ^n and ϵ > 0, set η = 0;
Step 2. Generate the sequence λ(η) using the following scheme:

(4) ϰ(η+1) = (ℳ + Θ1 + I − ℒ)^{−1}[(𝒩 + I − ℒ)ϰ(η) + (Θ1 − 𝒜)|ϰ(η)| − rσ]

and set λ(η+1) = (1/r)(|ϰ(η+1)| + ϰ(η+1)), where λ(η) is the η-th approximate solution of the LCP(σ,𝒜) and ϰ(η) is the η-th approximate solution of Equation (4);

Step 3. Stop if Res(λ(η)) < ϵ; otherwise, set η = η + 1 and return to Step 2.
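A minimal dense-matrix sketch of Algorithm 1, instantiated with the Gauss-Seidel splitting ℳ = 𝒟 − ℒ, 𝒩 = 𝒰 and r = 2 (the NAMGS case); the choice Θ1 = 𝒟 and the toy 2×2 instance are assumptions for illustration, and a practical implementation would use sparse storage.

```python
import numpy as np

def namgs(A, sigma, r=2.0, eps=1e-5, max_iter=500):
    """Algorithm 1 sketch with M = D - L, N = U (NAMGS) and Theta_1 = D."""
    n = A.shape[0]
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)                  # A = D - L - U
    U = -np.triu(A, 1)
    Theta, I = D, np.eye(n)
    lhs = D - L + Theta + I - L          # M + Theta_1 + I - L
    rhs = U + I - L                      # N + I - L
    x = np.zeros(n)                      # starting vector kappa^(0)
    lam, res = x, np.inf
    for _ in range(max_iter):
        # Scheme (4), followed by the recovery step lam = (|x| + x) / r.
        x = np.linalg.solve(lhs, rhs @ x + (Theta - A) @ np.abs(x) - r * sigma)
        lam = (np.abs(x) + x) / r
        res = np.linalg.norm(np.minimum(lam, A @ lam + sigma))
        if res < eps:                    # Step 3: stop on small residual
            break
    return lam, res

# Toy instance with known solution lam* = (1, 2): take sigma = -A @ lam*.
A = np.array([[4.0, -1.0], [-1.0, 4.0]])
lam_star = np.array([1.0, 2.0])
lam, res = namgs(A, -A @ lam_star)
assert res < 1e-5 and np.allclose(lam, lam_star, atol=1e-4)
```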

Furthermore, the proposed new accelerated modulus-based iteration method offers a generic framework for solving the LCP(σ,𝒜). Using matrix splitting, we obtain a new family of accelerated modulus-based relaxation methods. Specifically, the system matrix 𝒜 is split as 𝒜 = (ℳ + I − ℒ) − (𝒩 + I − ℒ). Then:

  (a) when ℳ = 𝒜, 𝒩 = 0, Θ1 = I and r = 1, Equation (4) gives the new accelerated modulus iteration method:

    ϰ(η+1) = (𝒜 + 2I − ℒ)^{−1}[(I − ℒ)ϰ(η) + (I − 𝒜)|ϰ(η)| − σ];

  (b) when ℳ = 𝒜, 𝒩 = 0, Θ1 = α1 I and r = 1, Equation (4) gives the new accelerated modified modulus-based iteration method:

    ϰ(η+1) = (𝒜 + (α1 + 1)I − ℒ)^{−1}[(I − ℒ)ϰ(η) + (α1 I − 𝒜)|ϰ(η)| − σ];

  (c) when ℳ = 𝒟, 𝒩 = ℒ + 𝒰 and r = 2, Equation (4) gives the new accelerated modulus-based Jacobi iteration method:

    ϰ(η+1) = (𝒟 + Θ1 + I − ℒ)^{−1}[(𝒰 + I)ϰ(η) + (Θ1 − 𝒜)|ϰ(η)| − 2σ];

  (d) when ℳ = 𝒟 − ℒ, 𝒩 = 𝒰 and r = 2, Equation (4) gives the new accelerated modulus-based Gauss-Seidel iteration (NAMGS) method:

    ϰ(η+1) = (𝒟 − 2ℒ + Θ1 + I)^{−1}[(𝒰 + I − ℒ)ϰ(η) + (Θ1 − 𝒜)|ϰ(η)| − 2σ];

  (e) when ℳ = (1/α1)𝒟 − ℒ and 𝒩 = (1/α1 − 1)𝒟 + 𝒰 with r = 2, Equation (4) gives the new accelerated modulus-based successive over-relaxation iteration (NAMSOR) method:

    ϰ(η+1) = (𝒟 − 2α1ℒ + α1Θ1 + α1 I)^{−1}[((1 − α1)𝒟 + α1𝒰 + α1 I − α1ℒ)ϰ(η) + (α1Θ1 − α1𝒜)|ϰ(η)| − 2α1σ];

  (f) when ℳ = (1/α1)(𝒟 − β1ℒ) and 𝒩 = (1/α1)[(1 − α1)𝒟 + (α1 − β1)ℒ + α1𝒰] with r = 2, Equation (4) gives the new accelerated modulus-based accelerated over-relaxation iteration (NAMAOR) method:

    ϰ(η+1) = (𝒟 − (β1 + α1)ℒ + α1Θ1 + α1 I)^{−1}[((1 − α1)𝒟 − β1ℒ + α1𝒰 + α1 I)ϰ(η) + (α1Θ1 − α1𝒜)|ϰ(η)| − 2α1σ].

The NAMAOR method clearly reduces to the following methods:

  (a) the new accelerated modulus-based successive over-relaxation (NAMSOR) method when β1 = α1;

  (b) the new accelerated modulus-based Gauss-Seidel (NAMGS) method when α1 = β1 = 1;

  (c) the new accelerated modulus-based Jacobi method when α1 = 1 and β1 = 0.
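The reductions above are easy to verify numerically; the sketch below forms the NAMAOR splitting matrices and checks that 𝒜 = ℳ − 𝒩 and that α1 = β1 = 1 recovers the Gauss-Seidel choice ℳ = 𝒟 − ℒ (the function name is ours).

```python
import numpy as np

def namaor_splitting(A, alpha1, beta1):
    """M = (1/alpha1)(D - beta1*L),
    N = (1/alpha1)[(1 - alpha1)D + (alpha1 - beta1)L + alpha1*U],
    with A = D - L - U, so that A = M - N for any alpha1 > 0."""
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)
    U = -np.triu(A, 1)
    M = (D - beta1 * L) / alpha1
    N = ((1 - alpha1) * D + (alpha1 - beta1) * L + alpha1 * U) / alpha1
    return M, N

A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
M, N = namaor_splitting(A, alpha1=1.0, beta1=1.0)      # the NAMGS case
assert np.allclose(M - N, A)
assert np.allclose(M, np.tril(A))                      # M = D - L
M_j, N_j = namaor_splitting(A, alpha1=1.0, beta1=0.0)  # the Jacobi case
assert np.allclose(M_j, np.diag(np.diag(A)))           # M = D
```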

4. Convergence analysis

In the following result, we prove convergence conditions when the system matrix 𝒜 is a P-matrix. When 𝒜 is a P-matrix, Equation (1) has a unique solution for every σ ∈ ℝ^n [15].

Theorem 6.

Let ๐’œ=(โ„ณ+Iโˆ’โ„’)โˆ’(๐’ฉ+Iโˆ’โ„’)โˆˆโ„nร—n be a P-matrix and ฯฐโˆ— be the solution of Equation (3). Suppose ฯโข(|(โ„ณ+Iโˆ’โ„’+ฮ˜1)โˆ’1|โข(|๐’ฉ+Iโˆ’โ„’|+|ฮ˜1โˆ’๐’œ|))<1. Then the sequence {ฯฐ(ฮท)}ฮท=1+โˆž generated by Algorithmย 1 converges to the solution ฯฐโˆ— for any starting vector ฯฐ(0)โˆˆโ„n.

Proof.

Let ϰ∗ be the solution of Equation (3); then the error satisfies

ϰ(η+1) − ϰ∗ = (ℳ + I − ℒ + Θ1)^{−1}[(𝒩 + I − ℒ)(ϰ(η) − ϰ∗) + (Θ1 − 𝒜)(|ϰ(η)| − |ϰ∗|)].

Taking absolute values on both sides,

|ϰ(η+1) − ϰ∗| = |(ℳ + I − ℒ + Θ1)^{−1}[(𝒩 + I − ℒ)(ϰ(η) − ϰ∗) + (Θ1 − 𝒜)(|ϰ(η)| − |ϰ∗|)]|.

Since ||ϰ(η)| − |ϰ∗|| ≤ |ϰ(η) − ϰ∗|, it follows that

|ϰ(η+1) − ϰ∗| ≤ |(ℳ + I − ℒ + Θ1)^{−1}|(|𝒩 + I − ℒ| + |Θ1 − 𝒜|)|ϰ(η) − ϰ∗|.

Writing T = |(ℳ + I − ℒ + Θ1)^{−1}|(|𝒩 + I − ℒ| + |Θ1 − 𝒜|), we obtain |ϰ(η+1) − ϰ∗| ≤ T|ϰ(η) − ϰ∗| ≤ T^{η+1}|ϰ(0) − ϰ∗|. Since ρ(T) < 1 by assumption, T^η → 0 as η → +∞, and therefore the sequence {ϰ(η)}η=1+∞ converges to ϰ∗ for any starting vector ϰ(0) ∈ ℝ^n.

Since λ(η) = (1/r)(|ϰ(η)| + ϰ(η)), when the sequence {ϰ(η)}η=1+∞ generated by Algorithm 1 converges to the solution ϰ∗, the sequence {λ(η)}η=1+∞ also converges. ■
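The sufficient condition ρ(T) < 1 of this theorem can be checked numerically for a concrete splitting; the sketch below does so for the NAMGS choice ℳ = 𝒟 − ℒ with Θ1 = 𝒟 on a small H+-matrix (an assumption for illustration).

```python
import numpy as np

def rho_T(A, M, Theta):
    """Spectral radius of
    T = |(M + I - L + Theta_1)^(-1)| (|N + I - L| + |Theta_1 - A|),
    with N = M - A and L the strictly lower part of A (A = D - L - U)."""
    n = A.shape[0]
    I = np.eye(n)
    L = -np.tril(A, -1)
    N = M - A
    T = np.abs(np.linalg.inv(M + I - L + Theta)) @ (
        np.abs(N + I - L) + np.abs(Theta - A))
    return float(max(abs(np.linalg.eigvals(T))))

A = np.array([[4.0, -1.0], [-1.0, 4.0]])     # H+-matrix, hence a P-matrix
D = np.diag(np.diag(A))
L = -np.tril(A, -1)
assert rho_T(A, M=D - L, Theta=D) < 1.0      # convergence condition holds
```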

When the system matrix 𝒜 is an H+-matrix, the following result gives a convergence domain of Θ1 for the new accelerated modulus-based iteration method.

Theorem 7.

Let ๐’œ be an H+-matrix and ๐’œ=โ„ณโˆ’๐’ฉ=(โ„ณ+Iโˆ’โ„’)โˆ’(๐’ฉ+Iโˆ’โ„’) be an H-compatible splitting of the matrix ๐’œ, such that โŸจ๐’œโŸฉ=โŸจโ„ณ+Iโˆ’โ„’โŸฉโˆ’|N+Iโˆ’โ„’| and either one of the following conditions holds:
(1) ฮ˜1โ‰ฅ๐’Ÿ;
(2) ฮ˜1<๐’Ÿ and 2โขฮ˜1โˆ’๐’Ÿโˆ’|B|, is an M- matrix, B=โ„’+๐’ฐ. Then the sequence {ฯฐ(ฮท)}ฮท=1+โˆž generated by Algorithmย 1 converges to the solution ฯฐโˆ— for any starting vector ฯฐ(0)โˆˆโ„n.

Proof.

Let ๐’œ=โ„ณโˆ’๐’ฉ=(โ„ณ+Iโˆ’โ„’)โˆ’(๐’ฉ+Iโˆ’โ„’) and it holds that
โŸจ๐’œโŸฉโ‰คโŸจโ„ณ+Iโˆ’โ„’โŸฉโ‰คdiagโก(โ„ณ+Iโˆ’โ„’), (โ„ณ+Iโˆ’โ„’) is an H+-matrix. and it holds that

|(ฮ˜1+โ„ณ+Iโˆ’โ„’)โˆ’1|โ‰ค(ฮ˜1+โŸจโ„ณโŸฉ+Iโˆ’โ„’)โˆ’1.

From Theorem 6, let T = |(ℳ + I − ℒ + Θ1)^{−1}|(|𝒩 + I − ℒ| + |Θ1 − 𝒜|); then

T = |(ℳ + Θ1 + I − ℒ)^{−1}|[|𝒩 + I − ℒ| + |Θ1 − 𝒜|]
  ≤ (⟨ℳ⟩ + Θ1 + I − ℒ)^{−1}[|𝒩 + I − ℒ| + |Θ1 − 𝒜|]
  ≤ (⟨ℳ⟩ + Θ1 + I − ℒ)^{−1}[|𝒩 + I − ℒ| + |Θ1 − 𝒟 + ℒ + 𝒰|]
  ≤ (⟨ℳ⟩ + Θ1 + I − ℒ)^{−1}[(⟨ℳ⟩ + Θ1 + I − ℒ) − (⟨ℳ⟩ + Θ1 + I − ℒ) + |𝒩 + I − ℒ| + |Θ1 − 𝒟| + |ℒ + 𝒰|].

Case 1. Θ1 ≥ 𝒟. Then

T ≤ I − (⟨ℳ⟩ + Θ1 + I − ℒ)^{−1}[(⟨ℳ⟩ + I − ℒ) − |𝒩 + I − ℒ| + 𝒟 − |ℒ + 𝒰|]
  ≤ I − 2(Θ1 + ⟨ℳ⟩ + I − ℒ)^{−1}⟨𝒜⟩.

Since โŸจ๐’œโŸฉ is an M-matrix, then there exists a positive vector v>0 such that

โŸจ๐’œโŸฉโขv>0.

Therefore

Tโขvโ‰ค(Iโˆ’2โข(ฮ˜1+โŸจโ„ณโŸฉ+Iโˆ’โ„’)โˆ’1โขโŸจ๐’œโŸฉ)โขv<v.

Now, ρ(T) < 1 follows from Lemma 4.

Case 2. Θ1 < 𝒟 and ⟨𝒜⟩ + 2Θ1 − 𝒟 − |B| is an M-matrix, where B = ℒ + 𝒰. Then

T ≤ (⟨ℳ⟩ + Θ1 + I − ℒ)^{−1}[(⟨ℳ⟩ + Θ1 + I − ℒ) − (⟨ℳ⟩ + Θ1 + I − ℒ) + |𝒩 + I − ℒ| + |Θ1 − 𝒟| + |ℒ + 𝒰|]
  ≤ I − (⟨ℳ⟩ + Θ1 + I − ℒ)^{−1}[(⟨ℳ⟩ + I − ℒ) − |𝒩 + I − ℒ| + 2Θ1 − 𝒟 − |ℒ + 𝒰|].

Since ⟨𝒜⟩ + 2Θ1 − 𝒟 − |ℒ + 𝒰| is an M-matrix, there exists a positive vector v > 0 such that

(⟨𝒜⟩ + 2Θ1 − 𝒟 − |ℒ + 𝒰|)v > 0.

Therefore

Tโขvโ‰คIโˆ’(ฮ˜1+โŸจโ„ณโŸฉ+Iโˆ’โ„’)โˆ’1โข[โŸจ๐’œโŸฉ+2โขฮ˜1โˆ’๐’Ÿโˆ’|โ„’+๐’ฐ|]โขv<v.

Again, ρ(T) < 1 follows from Lemma 4. Hence, by Theorem 6, the iteration sequence {ϰ(η)}η=1+∞ generated by Algorithm 1 converges to ϰ∗ for any starting vector ϰ(0). ■

5. Numerical examples

In this section, several numerical examples are provided to demonstrate the effectiveness of the suggested methods. IT stands for the number of iteration steps, while CPU represents the CPU time in seconds. Consider the LCP(σ,𝒜), which has a unique solution. Let 𝒜 = P1 + δ1 I and σ = −𝒜λ∗, where λ∗ = (1,2,…,1,2,…)^T ∈ ℝ^n is the unique solution of Equation (1). Let ϰ(0) = (1,0,…,1,0,…)^T ∈ ℝ^n be the initial vector. The proposed methods (NAMGS and NAMSOR) are compared to the modulus-based Gauss-Seidel (MGS) method and the modulus-based successive over-relaxation (MSOR) method [34], which are effective in solving the LCP(σ,𝒜). All computations are performed in Matlab version 2021a on an Acer desktop with an Intel Core i7-8700 processor (3.20 GHz) and 16.00 GB of RAM. Tables 1, 2 and 3 report the numerical results of the new accelerated modulus-based iteration methods and the modulus-based matrix splitting methods of [34].

Example 8.

𝒜 is the system matrix, formed by the expression 𝒜 = P1 + δ1 I, where δ1 is a positive real parameter, I1 denotes the identity matrix of order m, and

P1 = Tridiag(−I1, 𝒯, −I1) ∈ ℝ^{n×n},  𝒯 = tridiag(−1, 4, −1) ∈ ℝ^{m×m},

i.e., P1 is the block tridiagonal matrix with 𝒯 on the main block diagonal and −I1 on the adjacent block diagonals, and 𝒯 is the tridiagonal matrix with 4 on the diagonal and −1 on the sub- and super-diagonals.
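For experimentation, the matrix of this example can be assembled with Kronecker products; the dense sketch below is only for small m (the sizes in Table 1 would require sparse storage, e.g. the same construction with a sparse Kronecker product).

```python
import numpy as np

def tridiag(lo, d, up, m):
    """m x m tridiagonal matrix with constants lo, d, up."""
    return (np.diag([d] * m) + np.diag([lo] * (m - 1), -1)
            + np.diag([up] * (m - 1), 1))

def build_A(m, delta1=4.0):
    """A = P1 + delta1*I with P1 = Tridiag(-I1, T, -I1),
    T = tridiag(-1, 4, -1) and n = m*m (dense, for small m only)."""
    T = tridiag(-1.0, 4.0, -1.0, m)
    S = tridiag(-1.0, 0.0, -1.0, m)              # block-coupling pattern
    P1 = np.kron(np.eye(m), T) + np.kron(S, np.eye(m))
    return P1 + delta1 * np.eye(m * m)

A = build_A(4)                                   # n = 16 toy instance
assert A.shape == (16, 16)
assert np.allclose(A, A.T)                       # symmetric
assert np.all(np.diag(A) == 8.0)                 # 4 (from T) + delta1
```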

Method            n     10000      40000      160000     640000     1000000
MGS (α=1)         IT    42         43         44         45         45
                  CPU   0.026006   0.071892   0.36788    0.9905     1.59
                  Res   8.391e-06  8.630e-06  8.768e-06  8.855e-06  9.9128e-06
NAMGS (α1=1)      IT    18         18         19         19         19
                  CPU   0.013853   0.033635   0.12218    0.47709    0.70497
                  Res   5.098e-06  7.3423e-06 4.272e-06  6.069e-06  6.7921e-06
MSOR (α=0.85)     IT    19         19         20         21         21
                  CPU   0.013312   0.034213   0.12642    0.473      0.73996
                  Res   4.325e-06  8.943e-06  6.945e-06  5.351e-06  6.7001e-06
NAMSOR (α1=0.91)  IT    13         14         14         15         15
                  CPU   0.010234   0.027824   0.090817   0.34313    0.52105
                  Res   5.763e-06  2.763e-06  5.265e-06  2.561e-06  3.1766e-06
Table 1. Results for MGS and MSOR methods [34] and NAMGS and NAMSOR methods, δ1=4.
Example 9.

𝒜 is the system matrix, formed by the expression 𝒜 = P1 + δ1 I, where δ1 is a positive real parameter, I1 denotes the identity matrix of order m, and

P1 = Tridiag(−1.5 I1, 𝒯, −0.5 I1) ∈ ℝ^{n×n},  𝒯 = tridiag(−1.5, 4, −0.5) ∈ ℝ^{m×m},

i.e., P1 has 𝒯 on the main block diagonal, −1.5 I1 on the block sub-diagonal and −0.5 I1 on the block super-diagonal; 𝒯 has 4 on the diagonal, −1.5 on the sub-diagonal and −0.5 on the super-diagonal.

Method            n     10000      40000      160000     640000     1000000
MGS (α=1)         IT    27         28         28         29         29
                  CPU   0.015293   0.04906    0.26455    0.63857    0.9857
                  Res   7.385e-06  6.193e-06  8.809e-06  7.332e-06  8.2027e-06
NAMGS (α1=1)      IT    13         13         14         14         15
                  CPU   0.010722   0.026676   0.091557   0.32604    0.54924
                  Res   5.291e-06  9.578e-06  4.257e-06  8.154e-06  2.3597e-06
MSOR (α=0.88)     IT    15         16         16         16         17
                  CPU   0.012181   0.028711   0.10672    0.36204    0.58685
                  Res   6.344e-06  3.485e-06  5.645e-06  9.671e-06  3.7163e-06
NAMSOR (α1=0.88)  IT    9          10         10         10         10
                  CPU   0.0072254  0.019398   0.066067   0.23302    0.34536
                  Res   5.874e-06  1.640e-06  3.335e-06  6.727e-06  8.4226e-06
Table 2. Results for MGS and MSOR methods [34] and NAMGS and NAMSOR methods, δ1=4.
Example 10.

𝒜 is the system matrix, formed by the expression 𝒜 = P1 + δ1 I, where δ1 is a positive real parameter, I1 denotes the identity matrix of order m, and

P1 = [𝒯 −I1 −I1 0 … 0; 0 𝒯 −I1 −I1 … 0; ⋮ ⋱ ⋱ ⋱ ⋱ ⋮; 0 … 0 0 𝒯 −I1; 0 … 0 0 0 𝒯] ∈ ℝ^{n×n},  𝒯 = Tridiag(−1, 4, −1) ∈ ℝ^{m×m},

i.e., P1 has 𝒯 on the main block diagonal and −I1 on the first two block super-diagonals, and 𝒯 is the tridiagonal matrix with 4 on the diagonal and −1 on the sub- and super-diagonals.

Method            n     10000      40000      160000     640000     1000000
MGS (α=1)         IT    44         46         47         48         48
                  CPU   0.021707   0.075858   0.38078    1.0475     1.6284
                  Res   9.7023e-06 7.152e-06  7.306e-06  7.414e-06  8.3004e-06
NAMGS (α1=1)      IT    21         22         22         23         23
                  CPU   0.013821   0.038337   0.13761    0.50668    0.79966
                  Res   8.262e-06  5.740e-06  8.187e-06  5.625e-06  6.2949e-06
MSOR (α=0.87)     IT    20         21         21         22         22
                  CPU   0.014358   0.039287   0.12957    0.48699    0.7618
                  Res   5.683e-06  4.798e-06  9.742e-06  7.990e-06  9.9999e-06
NAMSOR (α1=0.94)  IT    17         17         18         19         19
                  CPU   0.01243    0.029982   0.11366    0.42354    0.66273
                  Res   4.841e-06  8.427e-06  5.388e-06  3.473e-06  4.2525e-06
Table 3. Results for MGS and MSOR methods [34] and NAMGS and NAMSOR methods, δ1=4.

From Tables 1, 2 and 3, we observe that the proposed NAMGS and NAMSOR methods require fewer iteration steps and less CPU time, and thus offer greater computational efficiency, than the MGS and MSOR methods of [34], respectively.

6. Conclusion

This article introduces a class of new accelerated modulus-based iteration methods for the solution of large and sparse LCP(σ,𝒜) problems using matrix splitting. This iteration form maintains the sparsity and size of the matrix 𝒜 during the iteration process. Additionally, we establish convergence conditions when the system matrix 𝒜 is a P-matrix or an H+-matrix. Finally, the efficacy of the proposed methods is demonstrated through various numerical examples.

Acknowledgements.

The first author would like to thank the University Grants Commission (UGC), Government of India, for funding the SRF fellowship program No. 1068/(CSIR-UGC NET DEC. 2017).

References