
New Accelerated Modulus-Based Iteration Method
for Solving Large and Sparse
Linear Complementarity Problem

Bharat Kumar∗,1, Deepmala2 and Arup K. Das3
(Received: October 25, 2023; accepted: May 03, 2024; published online: July 11, 2024.)
Abstract.

For the large and sparse linear complementarity problem, we provide a family of new accelerated modulus-based iteration methods in this article. We provide some sufficient criteria for the convergence analysis when the system matrix is a P-matrix or an H+-matrix. In addition, we provide some numerical examples with different parameters to illustrate the efficacy of our proposed methods. These methods reduce the number of iterations and the CPU time, which improves convergence performance.

Key words and phrases:
linear complementarity problem, P-matrix, H+-matrix, matrix splitting, convergence.
2005 Mathematics Subject Classification:
65F10, 90C33, 65F50.
The first author is thankful to the University Grants Commission (UGC), Government of India under the SRF fellowship Program No. 1068/(CSIR-UGC NET DEC.2017).
1Mathematics Discipline, PDPM-Indian Institute of Information Technology, Design and Manufacturing, Jabalpur, M.P., India, e-mail: bharatnishad.kanpu@gmail.com
2Mathematics Discipline, PDPM-Indian Institute of Information Technology, Design and Manufacturing, Jabalpur, M.P., India, e-mail: dmrai23@gmail.com
3Indian Statistical Institute, 203 B.T. Road, Kolkata-700108, India, e-mail: akdas@isical.ac.in

1. Introduction

Large and sparse matrices have many rows and columns but few non-zero elements; in other words, the majority of their entries are zero. Sparse matrices are commonly used to represent complex systems or large datasets in fields such as computer science, mathematics, physics and engineering. Because of this sparsity, it is not practical to store each element individually, and specialized data structures and algorithms must be used to store and manipulate the matrix efficiently.

Let us assume that the matrix 𝒜 ∈ ℝ^{n×n} is large and sparse and that it is associated with the vector σ ∈ ℝ^n. The objective of the linear complementarity problem, referred to as LCP(σ,𝒜), is to determine a solution λ ∈ ℝ^n to the following system:

(1) λ ≥ 0, ω = 𝒜λ + σ ≥ 0, λᵀ(𝒜λ + σ) = 0.
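The three conditions in (1) are easy to check numerically. The following small sketch (not part of the original paper; the function name and the 2×2 instance are our own illustration) verifies whether a candidate vector solves an LCP:

```python
import numpy as np

def is_lcp_solution(A, sigma, lam, tol=1e-10):
    """Check the three LCP(sigma, A) conditions from (1):
    lam >= 0, omega = A@lam + sigma >= 0, and complementarity lam^T omega = 0."""
    omega = A @ lam + sigma
    return (lam >= -tol).all() and (omega >= -tol).all() and abs(lam @ omega) <= tol

# Tiny 2x2 instance; A is a P-matrix, so the solution is unique.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
sigma = np.array([-1.0, 1.0])
lam = np.array([0.5, 0.0])   # candidate solution
# omega = A @ lam + sigma = (0, 1.5): lam_1*omega_1 = 0 and lam_2 = 0,
# so complementarity holds componentwise.
```

Here `is_lcp_solution(A, sigma, lam)` returns `True`, while a vector violating complementarity, such as `(1, 1)`, is rejected.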

Applications of the linear complementarity problem that have received significant study in the literature on mathematical programming include the free boundary problem, the Nash equilibrium point of the bimatrix game, operations research, control theory, mathematical economics, optimization theory, stochastic optimal control, and the American option pricing problem.

The methods available for solving linear complementarity problems fall into two groups, namely the pivotal methods [26], [18] and the iterative methods [12], [4], [7] and [8]. An iterative method produces a sequence of iterates that converges to a solution, whereas a pivotal method performs a series of pivot steps that lead to a basic feasible complementary vector.

Reformulating the LCP(σ,𝒜) as an equation whose solution coincides with that of the LCP(σ,𝒜) is one of the most well-known and attractive ways of constructing fast and inexpensive iteration methods. Consequently, several useful equivalent forms of the LCP(σ,𝒜) have arisen. Mangasarian [22] presented three methods: projected Jacobi over-relaxation, projected SOR and projected symmetric SOR. For additional details on iteration methods based on Mangasarian's idea, see also [6], [35] and [10]. Bai offered the following equivalent form in [34]:

(2) (Θ1 + ℳ)ϰ = 𝒩ϰ + (Θ1 − 𝒜)|ϰ| − rσ,

where r > 0, 𝒜 = ℳ − 𝒩 and Θ1 ∈ ℝ^{n×n} is a positive diagonal matrix, and developed a class of modulus-based matrix splitting iteration methods. Equation (2) covers the published works in [2], [28], [20], [13] and [5]. This kind of modulus-based matrix splitting iteration method has been considered efficient for solving the LCP(σ,𝒜). For other formulations of Equation (2), see [21], [16], [29], [31] and [17] for more details. Furthermore, this concept has been successfully applied to other complementarity problems, including the horizontal linear complementarity problem [11], the implicit complementarity problem [19], the nonlinear complementarity problem [9], [33], [14] and the quasi-complementarity problem [27].

Bai [34] solved linear complementarity problems using modulus-based matrix splitting methods. However, these methods may require a large number of iterations to reach an optimal approximate solution in numerical instances. In this paper, we introduce a class of new accelerated modulus-based iteration methods for solving the large and sparse LCP(σ,𝒜), based on the work of Shilang [30] and Bai [34]. We demonstrate that the linear complementarity problem and a fixed point equation are equivalent, i.e., both have the same solution. In addition, we present several convergence conditions for the proposed method.

The article is structured as follows: Section 2 provides the necessary definitions, notations and well-known lemmas used in the subsequent sections of this work. A new accelerated modulus-based iteration method, based on a new equivalent fixed point form of the LCP(σ,𝒜), is presented in Section 3. In Section 4, we establish convergence domains for the proposed approach. Section 5 presents numerical comparisons between the proposed methods and the modulus-based matrix splitting methods of Bai [34]. Section 6 provides the conclusion.

2. Preliminaries

In this section, we introduce some basic notations, definitions and lemmas, most of which may be found in [32], [3], [25], that will be used throughout the article to examine the convergence analysis of the proposed methods.

The following is a list of related notations that are used for a given large and sparse matrix 𝒜:

  • Let 𝒜 = (a_ij) ∈ ℝ^{n×n} and ℬ = (b_ij) ∈ ℝ^{n×n}. We write 𝒜 > ℬ (𝒜 ≥ ℬ) to denote a_ij > b_ij (a_ij ≥ b_ij) for 1 ≤ i, j ≤ n;

  • (·)ᵀ denotes the transpose of the given matrix or vector;

  • 𝒜 = 0 ∈ ℝ^{n×n} denotes a_ij = 0 for all i, j;

  • |𝒜| = (|a_ij|) for all i, j;

  • 𝒜⁻¹ represents the inverse of the matrix 𝒜;

  • Θ1 is a real positive diagonal matrix of order n;

  • ‖·‖2 is the Euclidean norm of a vector, i.e. for x = (x_i) ∈ ℝ^n, ‖x‖2 = (Σ_{i=1}^{n} x_i²)^{1/2};

  • For x, y ∈ ℝ^n, min(x, y) is the vector whose ith component is min(x_i, y_i);

  • 𝒜 = 𝒟 − ℒ − 𝒰, where 𝒟 = diag(𝒜) and ℒ, 𝒰 are the strictly lower and strictly upper triangular matrices of 𝒜, respectively.

Let 𝒜 = (a_ij) ∈ ℝ^{n×n} be a square matrix. The comparison matrix ⟨𝒜⟩ of 𝒜 is defined by ⟨a_ij⟩ = |a_ij| if i = j and ⟨a_ij⟩ = −|a_ij| if i ≠ j. The matrix 𝒜 is called a Z-matrix if all of its off-diagonal elements are less than or equal to zero; an M-matrix if 𝒜⁻¹ ≥ 0 and 𝒜 is a Z-matrix; an H-matrix if ⟨𝒜⟩ is an M-matrix; an H+-matrix if 𝒜 is an H-matrix and a_ii > 0 for all i ∈ {1,2,…,n}; and a P-matrix if all its principal minors are positive, i.e. det(𝒜_{α1α1}) > 0 for every nonempty index set α1 ⊆ {1,2,…,n}. The splitting 𝒜 = ℳ − 𝒩 is called an M-splitting if ℳ is a nonsingular M-matrix and 𝒩 ≥ 0; an H-splitting if ⟨ℳ⟩ − |𝒩| is an M-matrix; and an H-compatible splitting if ⟨𝒜⟩ = ⟨ℳ⟩ − |𝒩|.

Lemma 1 ([23]).

Let x, y ∈ ℝ^n. Then x ≥ 0, y ≥ 0 and xᵀy = 0 if and only if x + y = |x − y|.
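Lemma 1 can be illustrated numerically. The following sketch (our own illustration, with hypothetical example vectors) checks both directions on concrete data:

```python
import numpy as np

def complementary_pair(x, y, tol=1e-12):
    # x >= 0, y >= 0, x^T y = 0 (the conditions of Lemma 1)
    return (x >= 0).all() and (y >= 0).all() and abs(x @ y) <= tol

# Forward direction: a complementary pair satisfies x + y == |x - y|,
# because in each component one of x_i, y_i is zero.
x = np.array([0.0, 3.0, 0.0, 1.5])
y = np.array([2.0, 0.0, 7.0, 0.0])
assert complementary_pair(x, y)
assert np.allclose(x + y, np.abs(x - y))

# Converse: when x^T y != 0, the identity fails.
u = np.array([1.0, 0.0])
v = np.array([1.0, 2.0])   # u^T v = 1 != 0
assert not np.allclose(u + v, np.abs(u - v))
```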

Lemma 2 ([3]).

Let 𝒜, ℬ ∈ ℝ^{n×n}. If 𝒜 is an M-matrix, ℬ is a Z-matrix and 𝒜 ≤ ℬ, then ℬ is an M-matrix. If 𝒜 is an H-matrix, then |𝒜⁻¹| ≤ ⟨𝒜⟩⁻¹.

Lemma 3 ([32]).

Let 𝒜 ∈ ℝ^{n×n} be an M-matrix and let 𝒜 = ℳ − 𝒩 be an M-splitting. Then ρ(ℳ⁻¹𝒩) < 1.

Lemma 4 ([3]).

Suppose 𝒜 ≥ 0 ∈ ℝ^{n×n}. If there exist v > 0 ∈ ℝ^n and a scalar α1 > 0 such that 𝒜v ≤ α1v, then ρ(𝒜) ≤ α1. Moreover, if 𝒜v < v, then ρ(𝒜) < 1.

3. Main results

Suppose ϰ ∈ ℝ^n and consider the splitting 𝒜 = (ℳ − ℒ + I) − (𝒩 − ℒ + I), where 𝒜 = ℳ − 𝒩, I is the identity matrix of order n and ℒ is the strictly lower triangular matrix of 𝒜. In the following result, we convert the LCP(σ,𝒜) into a fixed point formulation.

Theorem 5.

Suppose 𝒜 ∈ ℝ^{n×n} with the splitting 𝒜 = (ℳ − ℒ + I) − (𝒩 − ℒ + I) and σ ∈ ℝ^n. Let λ = τ(|ϰ| + ϰ) and ω = Θ1(|ϰ| − ϰ), where τ > 0, r > 0 and Θ1 is a positive diagonal matrix, and let the matrix (ℳ − ℒ + Θ1 + I) be nonsingular. Then the equivalent formulation of the LCP(σ,𝒜) in the form of a fixed point equation is

(3) ϰ = (ℳ − ℒ + Θ1 + I)⁻¹[(𝒩 − ℒ + I)ϰ + (Θ1 − 𝒜)|ϰ| − rσ].
Proof.

We have λ = τ(|ϰ| + ϰ) and ω = Θ1(|ϰ| − ϰ). From Equation (1) we obtain

Θ1(|ϰ| − ϰ) = 𝒜τ(|ϰ| + ϰ) + σ.

Since 𝒜 = (ℳ − ℒ + I) − (𝒩 − ℒ + I), this implies that

((ℳ − ℒ + I)τ + Θ1)ϰ = (𝒩 − ℒ + I)τϰ + (Θ1 − 𝒜τ)|ϰ| − σ.

Letting τ = 1/r and multiplying through by r (absorbing the factor r into the positive diagonal matrix Θ1), the above equation can be rewritten as

ϰ = (ℳ − ℒ + Θ1 + I)⁻¹[(𝒩 − ℒ + I)ϰ + (Θ1 − 𝒜)|ϰ| − rσ].

Hence, this is the equivalent form of the LCP(σ,𝒜) as a fixed point equation. ∎

In the following, based on Equation (3), we propose an iteration method, referred to as the "new accelerated modulus-based iteration method", to solve the LCP(σ,𝒜). The residual is the Euclidean norm of the error vector, defined as

Res(λ(η)) = ‖min(λ(η), 𝒜λ(η) + σ)‖2.

Assume that λ(0) ∈ ℝ^n is a nonnegative initial vector. The iteration process stops when the sequence {λ(η)} ⊂ ℝ^n converges or Res(λ(η)) < 10⁻⁵. To calculate λ(η+1) ∈ ℝ^n, we apply the following algorithm.

Algorithm 1 (New Accelerated Modulus-Based Iteration Method)

Step 1. Given an initial vector ϰ(0) ∈ ℝ^n and ϵ > 0, set η = 0;
Step 2. Generate the sequence λ(η) using the following scheme:

(4) ϰ(η+1) = (ℳ − ℒ + Θ1 + I)⁻¹[(𝒩 − ℒ + I)ϰ(η) + (Θ1 − 𝒜)|ϰ(η)| − rσ]

and set λ(η+1) = (1/r)(|ϰ(η+1)| + ϰ(η+1)), where λ(η) is the ηth approximate solution of the LCP(σ,𝒜) and ϰ(η) is the ηth iterate of Equation (4);

Step 3. Stop if Res(λ(η)) < ϵ; otherwise, set η = η + 1 and return to Step 2.
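Algorithm 1 can be sketched in a few lines of Python (not part of the original paper). The splitting shifts "ℳ − ℒ + I" and "𝒩 − ℒ + I" follow our reconstruction of scheme (4), and the function name, dense storage and direct solve are illustrative only; a practical implementation would factor the fixed iteration matrix once and exploit sparsity:

```python
import numpy as np

def nam_iteration(A, sigma, M, N, Theta1, r=2.0, tol=1e-5, max_iter=500):
    """Sketch of Algorithm 1 (new accelerated modulus-based iteration).

    Iterates x_{k+1} = (M - L + Theta1 + I)^{-1} [ (N - L + I) x_k
    + (Theta1 - A) |x_k| - r*sigma ], with lambda_k = (|x_k| + x_k)/r,
    where A = M - N and L is the strictly lower triangular part of A."""
    n = A.shape[0]
    I = np.eye(n)
    L = -np.tril(A, -1)          # A = D - L - U, so L is minus the strict lower part
    lhs = M - L + Theta1 + I     # fixed iteration matrix
    x = np.ones(n)
    for k in range(max_iter):
        lam = (np.abs(x) + x) / r
        res = np.linalg.norm(np.minimum(lam, A @ lam + sigma))  # Res(lambda)
        if res < tol:
            return lam, k
        rhs = (N - L + I) @ x + (Theta1 - A) @ np.abs(x) - r * sigma
        x = np.linalg.solve(lhs, rhs)
    return (np.abs(x) + x) / r, max_iter
```

For example, the NAMGS choice ℳ = 𝒟 − ℒ, 𝒩 = 𝒰 corresponds to `M = np.tril(A)`, `N = np.tril(A) - A`, with `Theta1` a positive diagonal matrix such as `np.diag(np.diag(A))`.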

Furthermore, the proposed new accelerated modulus-based iteration method offers a generic framework for solving the LCP(σ,𝒜). Using matrix splittings, we obtain a new family of accelerated modulus-based relaxation methods. Specifically, the system matrix 𝒜 is expressed as 𝒜 = (ℳ − ℒ + I) − (𝒩 − ℒ + I). Then:

  (a) when ℳ = 𝒜, 𝒩 = 0, Θ1 = I and r = 1, Equation (4) gives the new accelerated modulus iteration method

    ϰ(η+1) = (𝒜 − ℒ + 2I)⁻¹[(I − ℒ)ϰ(η) + (I − 𝒜)|ϰ(η)| − σ];

  (b) when ℳ = 𝒜, 𝒩 = 0, Θ1 = α1I and r = 1, Equation (4) gives the new accelerated modified modulus-based iteration method

    ϰ(η+1) = (𝒜 − ℒ + (α1 + 1)I)⁻¹[(I − ℒ)ϰ(η) + (α1I − 𝒜)|ϰ(η)| − σ];

  (c) when ℳ = 𝒟, 𝒩 = ℒ + 𝒰 and r = 2, Equation (4) gives the new accelerated modulus-based Jacobi iteration method

    ϰ(η+1) = (𝒟 − ℒ + Θ1 + I)⁻¹[(𝒰 + I)ϰ(η) + (Θ1 − 𝒜)|ϰ(η)| − 2σ];

  (d) when ℳ = 𝒟 − ℒ, 𝒩 = 𝒰 and r = 2, Equation (4) gives the new accelerated modulus-based Gauss-Seidel iteration (NAMGS) method

    ϰ(η+1) = (𝒟 − 2ℒ + Θ1 + I)⁻¹[(𝒰 − ℒ + I)ϰ(η) + (Θ1 − 𝒜)|ϰ(η)| − 2σ];

  (e) when ℳ = (1/α1)𝒟 − ℒ, 𝒩 = (1/α1 − 1)𝒟 + 𝒰 and r = 2, Equation (4) gives the new accelerated modulus-based successive over-relaxation iteration (NAMSOR) method

    ϰ(η+1) = (𝒟 − 2α1ℒ + α1Θ1 + α1I)⁻¹[((1 − α1)𝒟 − α1ℒ + α1𝒰 + α1I)ϰ(η) + (α1Θ1 − α1𝒜)|ϰ(η)| − 2α1σ];

  (f) when ℳ = (1/α1)(𝒟 − β1ℒ), 𝒩 = (1/α1)[(1 − α1)𝒟 + (α1 − β1)ℒ + α1𝒰] and r = 2, Equation (4) gives the new accelerated modulus-based accelerated over-relaxation iteration (NAMAOR) method

    ϰ(η+1) = (𝒟 − (β1 + α1)ℒ + α1Θ1 + α1I)⁻¹[((1 − α1)𝒟 − β1ℒ + α1𝒰 + α1I)ϰ(η) + (α1Θ1 − α1𝒜)|ϰ(η)| − 2α1σ].
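Each choice of (ℳ, 𝒩) above must satisfy the splitting identity (ℳ − ℒ + I) − (𝒩 − ℒ + I) = ℳ − 𝒩 = 𝒜. The following sketch (our own check, with an arbitrary diagonally dominant test matrix; the dictionary of splittings reflects our reading of the symbols dropped from the extracted text) verifies this identity numerically for the four relaxation variants:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
# arbitrary test matrix with positive diagonal
A = 4*np.eye(n) + np.round(rng.uniform(-1, 1, (n, n)), 1) * (1 - np.eye(n))
D = np.diag(np.diag(A))
L = -np.tril(A, -1)        # A = D - L - U
U = -np.triu(A, 1)

alpha1, beta1 = 0.9, 0.7

# (M, N) pairs for the Jacobi, Gauss-Seidel, SOR and AOR variants (c)-(f)
splittings = {
    "NAM-Jacobi": (D, L + U),
    "NAMGS": (D - L, U),
    "NAMSOR": (D / alpha1 - L, (1/alpha1 - 1) * D + U),
    "NAMAOR": ((D - beta1 * L) / alpha1,
               ((1 - alpha1) * D + (alpha1 - beta1) * L + alpha1 * U) / alpha1),
}
I = np.eye(n)
for name, (M, N) in splittings.items():
    # (M - L + I) - (N - L + I) = M - N must reproduce A
    assert np.allclose((M - L + I) - (N - L + I), A), name
```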

The NAMAOR method clearly reduces to the following methods:

  (a) the new accelerated modulus-based successive over-relaxation (NAMSOR) method when (α1, β1) takes the values (α1, α1);

  (b) the new accelerated modulus-based Gauss-Seidel (NAMGS) method when α1 = β1 = 1;

  (c) the new accelerated modulus-based Jacobi method when α1 = 1 and β1 = 0.

4. Convergence analysis

In the following result, we prove convergence conditions when the system matrix 𝒜 is a P-matrix. When 𝒜 is a P-matrix, Equation (1) has a unique solution for every σ ∈ ℝ^n [15].

Theorem 6.

Let 𝒜 = (ℳ − ℒ + I) − (𝒩 − ℒ + I) ∈ ℝ^{n×n} be a P-matrix and let ϰ* be the solution of Equation (3). Suppose ρ(|(ℳ − ℒ + Θ1 + I)⁻¹|(|𝒩 − ℒ + I| + |Θ1 − 𝒜|)) < 1. Then the sequence {ϰ(η)} generated by Algorithm 1 converges to the solution ϰ* for any starting vector ϰ(0) ∈ ℝ^n.

Proof.

Let ϰ* be the solution of Equation (3). Then the error satisfies

ϰ(η+1) − ϰ* = (ℳ − ℒ + Θ1 + I)⁻¹[(𝒩 − ℒ + I)(ϰ(η) − ϰ*) + (Θ1 − 𝒜)(|ϰ(η)| − |ϰ*|)].

Taking absolute values on both sides,

|ϰ(η+1) − ϰ*| = |(ℳ − ℒ + Θ1 + I)⁻¹[(𝒩 − ℒ + I)(ϰ(η) − ϰ*) + (Θ1 − 𝒜)(|ϰ(η)| − |ϰ*|)]|.

Since ||ϰ(η)| − |ϰ*|| ≤ |ϰ(η) − ϰ*|, we obtain

|ϰ(η+1) − ϰ*| ≤ |(ℳ − ℒ + Θ1 + I)⁻¹|(|𝒩 − ℒ + I| + |Θ1 − 𝒜|)|ϰ(η) − ϰ*|.

Since the spectral radius of the matrix |(ℳ − ℒ + Θ1 + I)⁻¹|(|𝒩 − ℒ + I| + |Θ1 − 𝒜|) is less than one, the error |ϰ(η) − ϰ*| tends to zero. Therefore the sequence {ϰ(η)} converges to ϰ* for any starting vector ϰ(0) ∈ ℝ^n.

Since λ(η) = (1/r)(|ϰ(η)| + ϰ(η)), when the sequence {ϰ(η)} generated by Algorithm 1 converges to the solution ϰ*, the sequence {λ(η)} also converges. ∎

When the system matrix 𝒜 is an H+-matrix, the following result establishes a convergence domain of Θ1 for the new accelerated modulus-based iteration method.

Theorem 7.

Let 𝒜 be an H+-matrix and let 𝒜 = ℳ − 𝒩 = (ℳ − ℒ + I) − (𝒩 − ℒ + I) be an H-compatible splitting of the matrix 𝒜, such that ⟨𝒜⟩ = ⟨ℳ − ℒ + I⟩ − |𝒩 − ℒ + I|, and let either one of the following conditions hold:
(1) Θ1 ≥ 𝒟;
(2) Θ1 < 𝒟 and ⟨𝒜⟩ + 2Θ1 − 𝒟 − |B| is an M-matrix, where B = ℒ + 𝒰.
Then the sequence {ϰ(η)} generated by Algorithm 1 converges to the solution ϰ* for any starting vector ϰ(0) ∈ ℝ^n.

Proof.

Since 𝒜 = ℳ − 𝒩 = (ℳ − ℒ + I) − (𝒩 − ℒ + I) is an H-compatible splitting, it holds that

⟨𝒜⟩ ≤ ⟨ℳ − ℒ + I⟩ ≤ diag(ℳ − ℒ + I),

so (ℳ − ℒ + I) is an H+-matrix, and by Lemma 2 it holds that

|(Θ1 + ℳ − ℒ + I)⁻¹| ≤ ⟨Θ1 + ℳ − ℒ + I⟩⁻¹.

From Theorem 6, let T = |(ℳ − ℒ + Θ1 + I)⁻¹|(|𝒩 − ℒ + I| + |Θ1 − 𝒜|). Then, using |Θ1 − 𝒜| ≤ |Θ1 − 𝒟| + |ℒ + 𝒰|,

T = |(ℳ − ℒ + Θ1 + I)⁻¹|[|𝒩 − ℒ + I| + |Θ1 − 𝒜|]
≤ ⟨ℳ − ℒ + Θ1 + I⟩⁻¹[|𝒩 − ℒ + I| + |Θ1 − 𝒜|]
≤ ⟨ℳ − ℒ + Θ1 + I⟩⁻¹[|𝒩 − ℒ + I| + |Θ1 − 𝒟| + |ℒ + 𝒰|]
= I − ⟨ℳ − ℒ + Θ1 + I⟩⁻¹[⟨ℳ − ℒ + Θ1 + I⟩ − |𝒩 − ℒ + I| − |Θ1 − 𝒟| − |ℒ + 𝒰|].

Case 1. Θ1 ≥ 𝒟. Then |Θ1 − 𝒟| = Θ1 − 𝒟 and, since ⟨ℳ − ℒ + Θ1 + I⟩ = Θ1 + ⟨ℳ − ℒ + I⟩,

T ≤ I − ⟨ℳ − ℒ + Θ1 + I⟩⁻¹[⟨ℳ − ℒ + I⟩ − |𝒩 − ℒ + I| + 𝒟 − |ℒ + 𝒰|] = I − 2⟨Θ1 + ℳ − ℒ + I⟩⁻¹⟨𝒜⟩,

where we used ⟨ℳ − ℒ + I⟩ − |𝒩 − ℒ + I| = ⟨𝒜⟩ and 𝒟 − |ℒ + 𝒰| = ⟨𝒜⟩. Since ⟨𝒜⟩ is an M-matrix, there exists a positive vector v > 0 such that

⟨𝒜⟩v > 0.

Therefore

Tv ≤ (I − 2⟨Θ1 + ℳ − ℒ + I⟩⁻¹⟨𝒜⟩)v < v,

and we establish that ρ(T) < 1 by making use of Lemma 4.

Case 2. Θ1 < 𝒟 and ⟨𝒜⟩ + 2Θ1 − 𝒟 − |B| is an M-matrix. Then |Θ1 − 𝒟| = 𝒟 − Θ1 and

T ≤ I − ⟨ℳ − ℒ + Θ1 + I⟩⁻¹[⟨ℳ − ℒ + I⟩ − |𝒩 − ℒ + I| + 2Θ1 − 𝒟 − |ℒ + 𝒰|]
= I − ⟨ℳ − ℒ + Θ1 + I⟩⁻¹[⟨𝒜⟩ + 2Θ1 − 𝒟 − |ℒ + 𝒰|].

Since [⟨𝒜⟩ + 2Θ1 − 𝒟 − |ℒ + 𝒰|] is an M-matrix, there exists a positive vector v > 0 such that

[⟨𝒜⟩ + 2Θ1 − 𝒟 − |ℒ + 𝒰|]v > 0.

Therefore

Tv ≤ (I − ⟨Θ1 + ℳ − ℒ + I⟩⁻¹[⟨𝒜⟩ + 2Θ1 − 𝒟 − |ℒ + 𝒰|])v < v.

We establish that ρ(T) < 1 by making use of Lemma 4. Hence, by Theorem 6, the iteration sequence {ϰ(η)} generated by Algorithm 1 converges to ϰ* for any starting vector ϰ(0). ∎
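The convergence quantity ρ(T) of Theorems 6 and 7 can be checked numerically for a concrete H+-matrix. The sketch below (our own illustration; the function name and the tridiagonal test matrix are assumptions, and the formula for T follows our reconstruction of the garbled symbols) evaluates it for the NAMGS splitting with Θ1 = 𝒟, i.e. Case 1 of Theorem 7:

```python
import numpy as np

def iteration_bound_radius(A, M, N, Theta1):
    """Spectral radius of T = |(M - L + Theta1 + I)^{-1}| (|N - L + I| + |Theta1 - A|),
    the matrix governing convergence in Theorem 6; rho(T) < 1 guarantees that
    Algorithm 1 converges for any starting vector."""
    n = A.shape[0]
    I = np.eye(n)
    L = -np.tril(A, -1)
    Tmat = np.abs(np.linalg.inv(M - L + Theta1 + I)) @ \
           (np.abs(N - L + I) + np.abs(Theta1 - A))
    return max(abs(np.linalg.eigvals(Tmat)))

# H_+ test matrix tridiag(-1, 4, -1), NAMGS splitting, Theta1 = D (Case 1)
n = 12
A = 4*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
M, N = np.tril(A), np.tril(A) - A
rho = iteration_bound_radius(A, M, N, np.diag(np.diag(A)))
assert rho < 1
```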

5. Numerical examples

In this section, several numerical examples are provided to demonstrate how effective our suggested methods are. IT stands for the number of iteration steps, while CPU represents the amount of time utilized on the CPU in seconds. Consider the LCP (σ,𝒜), which always provides a unique solution. Let 𝒜=P1+δ1I and σ=𝒜λ, where λ=(1,2,,1,2,)Tn is the unique solution of Equation (1). Let ϰ(0)=(1,0,1,0,)Tn be initial vector. The proposed methods (NAMGS and NAMSOR) are compared to the modulus-based Gauss-Seidel (MGS) method and the successive over-relaxation (MSOR) method [34], which are effective in solving LCP(σ,𝒜). For all computations, Matlab version 2021a is used on an Acer desktop equipped with an Intel Core i7-8700 processor running at 3.2 GHz 3.19 GHz, and 16.00 GB of RAM. Tables 1, 2 and 3 provide the numerical results of the new accelerated modulus-based iteration methods and the modulus-based matrix splitting method described in [34].

Example 8.

The system matrix 𝒜 is formed by the expression 𝒜 = P1 + δ1I, where δ1 is a positive real parameter and I1 denotes the identity matrix of order m. Here P1 = Tridiag(−I1, 𝒯, −I1) ∈ ℝ^{n×n} is the block tridiagonal matrix with diagonal blocks 𝒯 and off-diagonal blocks −I1, and 𝒯 = Tridiag(−1, 4, −1) ∈ ℝ^{m×m} is the tridiagonal matrix with diagonal entries 4 and off-diagonal entries −1.
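The block tridiagonal test matrix of Example 8 is conveniently assembled with Kronecker products. The following sketch (our own construction, with a hypothetical function name; the experiments in the paper were run in Matlab, not Python) builds the sparse matrix 𝒜 and the corresponding σ:

```python
import numpy as np
from scipy.sparse import identity, kron, diags

def example8_matrix(m, delta1=4.0):
    """Example 8 system matrix A = P1 + delta1*I of order n = m^2, with
    P1 = Tridiag(-I1, T, -I1) and T = tridiag(-1, 4, -1)."""
    T = diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(m, m))
    S = diags([-1.0, -1.0], [-1, 1], shape=(m, m))   # block off-diagonal pattern
    I1 = identity(m)
    P1 = kron(I1, T) + kron(S, I1)
    return (P1 + delta1 * identity(m * m)).tocsr()

A = example8_matrix(m=100)            # n = 10000, as in Table 1
lam_star = np.tile([1.0, 2.0], A.shape[0] // 2)
sigma = -A @ lam_star                 # sigma chosen so lam_star solves LCP(sigma, A)
```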

n                 10000      40000      160000     640000     1000000
MGS (α = 1)
  IT              42         43         44         45         45
  CPU             0.026006   0.071892   0.36788    0.9905     1.59
  Res             8.391e-06  8.630e-06  8.768e-06  8.855e-06  9.9128e-06
NAMGS (α1 = 1)
  IT              18         18         19         19         19
  CPU             0.013853   0.033635   0.12218    0.47709    0.70497
  Res             5.098e-06  7.3423e-06 4.272e-06  6.069e-06  6.7921e-06
MSOR (α = 0.85)
  IT              19         19         20         21         21
  CPU             0.013312   0.034213   0.12642    0.473      0.73996
  Res             4.325e-06  8.943e-06  6.945e-06  5.351e-06  6.7001e-06
NAMSOR (α1 = 0.91)
  IT              13         14         14         15         15
  CPU             0.010234   0.027824   0.090817   0.34313    0.52105
  Res             5.763e-06  2.763e-06  5.265e-06  2.561e-06  3.1766e-06
Table 1. Results for the MGS and MSOR methods [34] and the NAMGS and NAMSOR methods, δ1 = 4.
Example 9.

The system matrix 𝒜 is formed by the expression 𝒜 = P1 + δ1I, where δ1 is a positive real parameter and I1 denotes the identity matrix of order m. Here P1 = Tridiag(−1.5I1, 𝒯, −0.5I1) ∈ ℝ^{n×n} is the block tridiagonal matrix with diagonal blocks 𝒯, subdiagonal blocks −1.5I1 and superdiagonal blocks −0.5I1, and 𝒯 = Tridiag(−1.5, 4, −0.5) ∈ ℝ^{m×m} is the tridiagonal matrix with diagonal entries 4, subdiagonal entries −1.5 and superdiagonal entries −0.5.

n                 10000      40000      160000     640000     1000000
MGS (α = 1)
  IT              27         28         28         29         29
  CPU             0.015293   0.04906    0.26455    0.63857    0.9857
  Res             7.385e-06  6.193e-06  8.809e-06  7.332e-06  8.2027e-06
NAMGS (α1 = 1)
  IT              13         13         14         14         15
  CPU             0.010722   0.026676   0.091557   0.32604    0.54924
  Res             5.291e-06  9.578e-06  4.257e-06  8.154e-06  2.3597e-06
MSOR (α = 0.88)
  IT              15         16         16         16         17
  CPU             0.012181   0.028711   0.10672    0.36204    0.58685
  Res             6.344e-06  3.485e-06  5.645e-06  9.671e-06  3.7163e-06
NAMSOR (α1 = 0.88)
  IT              9          10         10         10         10
  CPU             0.0072254  0.019398   0.066067   0.23302    0.34536
  Res             5.874e-06  1.640e-06  3.335e-06  6.727e-06  8.4226e-06
Table 2. Results for the MGS and MSOR methods [34] and the NAMGS and NAMSOR methods, δ1 = 4.
Example 10.

The system matrix 𝒜 is formed by the expression 𝒜 = P1 + δ1I, where δ1 is a positive real parameter and I1 denotes the identity matrix of order m. Here P1 = Tridiag(I1, 𝒯, I1) ∈ ℝ^{n×n} is the block tridiagonal matrix with diagonal blocks 𝒯 and off-diagonal blocks I1, and 𝒯 = Tridiag(1, 4, 1) ∈ ℝ^{m×m} is the tridiagonal matrix with diagonal entries 4 and off-diagonal entries 1.

n                 10000      40000      160000     640000     1000000
MGS (α = 1)
  IT              44         46         47         48         48
  CPU             0.021707   0.075858   0.38078    1.0475     1.6284
  Res             9.7023e-06 7.152e-06  7.306e-06  7.414e-06  8.3004e-06
NAMGS (α1 = 1)
  IT              21         22         22         23         23
  CPU             0.013821   0.038337   0.13761    0.50668    0.79966
  Res             8.262e-06  5.740e-06  8.187e-06  5.625e-06  6.2949e-06
MSOR (α = 0.87)
  IT              20         21         21         22         22
  CPU             0.014358   0.039287   0.12957    0.48699    0.7618
  Res             5.683e-06  4.798e-06  9.742e-06  7.990e-06  9.9999e-06
NAMSOR (α1 = 0.94)
  IT              17         17         18         19         19
  CPU             0.01243    0.029982   0.11366    0.42354    0.66273
  Res             4.841e-06  8.427e-06  5.388e-06  3.473e-06  4.2525e-06
Table 3. Results for the MGS and MSOR methods [34] and the NAMGS and NAMSOR methods, δ1 = 4.

From Tables 1, 2 and 3, we observe that the proposed NAMGS and NAMSOR methods require fewer iteration steps and less CPU time, and hence have greater computational efficiency, than the MGS and MSOR methods in [34], respectively.

6. Conclusion

This article introduces a class of new accelerated modulus-based iteration methods for the solution of large and sparse LCP(σ,𝒜) problems using matrix splitting. The iteration form maintains the sparsity and size of the matrix 𝒜 during the iteration process. Additionally, we establish convergence conditions when the system matrix 𝒜 is a P-matrix or an H+-matrix. Finally, the efficacy of the proposed methods is demonstrated through various numerical examples.

Acknowledgements.

The first author would like to thank the University Grants Commission (UGC), Government of India, for funding the SRF fellowship Program No. 1068/(CSIR-UGC NET DEC. 2017).

References