A new preconditioned Richardson iterative method

Hassan Jamali and Reza Pourkani
(Date: May 26, 2024; accepted: November 14, 2024; published online: December 18, 2024.)
Abstract.

This paper proposes a new iterative technique for solving a linear system $Ax=y$ based on the Richardson iterative method. Then, using Chebyshev polynomials, we modify the proposed method to accelerate its convergence rate. We also present the results of several numerical experiments that demonstrate the efficiency and effectiveness of the proposed methods compared with existing, state-of-the-art methods.

Key words and phrases:
iterative method, Richardson iteration, convergence rate, Chebyshev polynomials.
2005 Mathematics Subject Classification:
Primary 65F10; Secondary 65F08.
Department of Mathematics, Faculty of Mathematical Sciences, Vali-e-Asr University of Rafsanjan, Rafsanjan, Iran, e-mail: jamali@vru.ac.ir, r.poorkani@gmail.com.

1. Introduction and preliminaries

Linear systems arise in various scientific and engineering applications, including differential equations, signal processing, and optimization [4, 5, 6, 7, 8, 12]. Therefore, solving linear systems efficiently is crucial in both theoretical and practical contexts. Iterative methods for solving linear systems have been studied extensively because they are more efficient than direct methods for large-scale problems. Furthermore, iterative methods have the advantage of being easily adaptable to different types of linear operators, which makes them versatile and applicable to a broad range of problems. Such operators frequently arise in practical applications, including image processing and machine learning.

In the present work, we propose a new iterative method for solving linear systems with bounded, self-adjoint operators and then present a new, modified version of the process by using the Chebyshev polynomials. The method utilizes a flexible preconditioner that adapts to the structure of the operator and exploits its self-adjointness. The resulting algorithms have favorable convergence properties and require fewer iterations than some existing methods. Also, we present the results of some numerical experiments that demonstrate the efficiency and effectiveness of our methods compared to the existing, state-of-the-art methods.

Consider the linear system

(1) Ax=y

in a Hilbert space $\mathcal{H}$, where $A$ is a bounded, self-adjoint operator. The basic idea of an iterative method is to start from an initial guess $x_0$ and then apply a sequence of updates, refining the approximate solution at each iteration. There are many different iterative methods, each having its advantages and disadvantages, depending on the properties of the equation to be solved.

The most straightforward approach to an iterative solution is to rewrite (1) as a linear fixed-point iteration. If we consider the map $\Phi(x)=Ax+x-y$ on $\mathcal{H}$, then $x$ is a solution of $Ax=y$ if and only if $x$ is a fixed point of $\Phi$. Thus, it seems natural to consider fixed-point theorems.

One of the best and most widely used methods to do this is the Richardson iterative method [12]. The abstract Richardson iterative method has the form

(2) $x_k = x_{k-1} + \alpha r_{k-1},$

where $0<\alpha<\frac{2}{\|A\|}$ and $r_k$ is the residual vector $y-Ax_k$. Note that we can rewrite (2) as

$x_k = (I-\alpha A)x_{k-1} + \alpha y.$
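For concreteness, the following minimal NumPy sketch implements this plain Richardson iteration (the function name, the zero initial guess, and the residual-based stopping rule are our own choices, not taken from the paper):

```python
import numpy as np

def richardson(A, y, alpha, tol=1e-3, max_iter=100_000):
    """Plain Richardson iteration: x_k = x_{k-1} + alpha*(y - A x_{k-1})."""
    x = np.zeros_like(y, dtype=float)
    for k in range(1, max_iter + 1):
        r = y - A @ x                 # residual r_{k-1}
        if np.linalg.norm(r) < tol:   # simple residual-based stopping rule
            return x, k - 1
        x = x + alpha * r
    return x, max_iter
```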

The Conjugate Gradient method is a more sophisticated method that uses information about the residual error to guide the iterative updates [12]. The update for the kth iteration is given by

$x_k = x_{k-1} + \alpha_k p_k,$

where $\alpha_k$ is the step size and $p_k$ is the search direction, given by

$p_k = r_{k-1} + \beta_k p_{k-1},$

and

$r_k = y - Ax_k$

is the residual, with $\beta_k$ determined by the Conjugate Gradient method.
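For later comparison, a textbook CG loop for a symmetric positive definite matrix can be sketched as follows (again our own illustration, not the implementation used in the numerical experiments of Section 4):

```python
import numpy as np

def conjugate_gradient(A, y, tol=1e-3, max_iter=10_000):
    """Textbook CG loop for a symmetric positive definite matrix A."""
    x = np.zeros_like(y, dtype=float)
    r = y - A @ x                      # initial residual
    p = r.copy()                       # initial search direction
    rs_old = r @ r
    for k in range(1, max_iter + 1):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)      # step size alpha_k
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            return x, k
        p = r + (rs_new / rs_old) * p  # beta_k = rs_new / rs_old
        rs_old = rs_new
    return x, max_iter
```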

However, it is possible to precondition the linear equation (1) by multiplying both sides by an operator $B$ to obtain $BAx=By$, so that the convergence of the iterative methods is improved. In this case, the residual $By-BAx_k$ better reflects the actual error. Preconditioning is a very effective technique for solving differential equations, integral equations, and related problems [2, 9, 10].

In general, iterative methods are powerful tools for solving large systems of linear equations. By choosing an appropriate iterative method based on the properties of the operator under consideration, it is possible to obtain fast and accurate solutions to a wide range of linear systems.

2. Preconditioning the problem based on Richardson’s method

Based on the properties of the operator $A$ in equation (1) (boundedness and invertibility), there exist positive constants $c_1$ and $c_2$ such that, for each $u\in\mathcal{H}$,

(3) $c_1\|u\|^2 \le \|Au\|^2 \le c_2\|u\|^2.$

Although positive definiteness of $A$ is essential for methods such as the Conjugate Gradient method, our method does not require this property. This becomes evident in the numerical results of Section 4.

It is not difficult to show that the optimal parameters $c_1$ and $c_2$ in (3) are $\lambda_{\min}(A^2)$ and $\lambda_{\max}(A^2)$, respectively. By the properties of the operator $A$, constants $c_1$ and $c_2$ satisfying (3) always exist, and it is not necessary to use the optimal values. Consequently, precise knowledge of the eigenvalues of $A^2$ is not required; an approximation is sufficient.
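For a small dense matrix, the optimal constants can simply be read off from the spectrum of $A^2$; the following NumPy helper (our own, for illustration) does exactly that:

```python
import numpy as np

def spectral_constants(A):
    """Optimal constants in (3) for a (small, dense) self-adjoint matrix A:
    c1 = lambda_min(A^2), c2 = lambda_max(A^2)."""
    eigs = np.linalg.eigvalsh(A @ A)   # eigenvalues of A^2, ascending
    return eigs[0], eigs[-1]
```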
Now, based on the Richardson iterative method, we consider the iteration

(4) $x_k = x_{k-1} + \frac{4}{c_1+c_2}\left(I - \frac{1}{c_1+c_2}A^2\right)A(y - Ax_{k-1}).$

In this case, the following lemma holds.

Lemma 1.

Let $A$ be a bounded and self-adjoint operator on a Hilbert space $\mathcal{H}$. Then

$\left\|I - \frac{4}{c_1+c_2}\left(I - \frac{1}{c_1+c_2}A^2\right)A^2\right\| \le \left(\frac{c_2-c_1}{c_2+c_1}\right)^2,$

where c1 and c2 are the constants used in (3).

Proof.

Since $A$ is self-adjoint, for each $x\in\mathcal{H}$ we obtain

$\left\langle\left(I-\frac{2}{c_1+c_2}A^2\right)x,\,x\right\rangle = \|x\|^2 - \frac{2}{c_1+c_2}\langle A^2x, x\rangle$
$= \|x\|^2 - \frac{2}{c_1+c_2}\langle Ax, Ax\rangle = \|x\|^2 - \frac{2}{c_1+c_2}\|Ax\|^2$
$\le \|x\|^2 - \frac{2c_1}{c_1+c_2}\|x\|^2 = \left(\frac{c_2-c_1}{c_2+c_1}\right)\|x\|^2,$

where the inequality follows from (3).

Similarly, we can prove that, for every $x\in\mathcal{H}$,

$-\left(\frac{c_2-c_1}{c_2+c_1}\right)\|x\|^2 \le \left\langle\left(I-\frac{2}{c_1+c_2}A^2\right)x,\,x\right\rangle.$

Therefore,

(5) $\left\|I - \frac{2}{c_1+c_2}A^2\right\| \le \frac{c_2-c_1}{c_2+c_1}.$

Finally, inequality (5) allows us to conclude that

$\left\|I - \frac{4}{c_1+c_2}\left(I-\frac{1}{c_1+c_2}A^2\right)A^2\right\| = \left\|I - \frac{4}{c_1+c_2}A^2 + \frac{4}{(c_1+c_2)^2}A^4\right\|$
$= \left\|\left(I - \frac{2}{c_1+c_2}A^2\right)^2\right\|$
$\le \left\|I - \frac{2}{c_1+c_2}A^2\right\|^2$
$\le \left(\frac{c_2-c_1}{c_2+c_1}\right)^2,$

which completes the proof. ∎

Note that the optimal convergence rate of the Richardson iterative method, obtained by letting $\alpha=\frac{2}{\lambda_{\min}(A)+\lambda_{\max}(A)}$, is $\frac{\lambda_{\max}(A)-\lambda_{\min}(A)}{\lambda_{\max}(A)+\lambda_{\min}(A)}$.
In the following theorem, we show that the convergence rate of the iterative method (4) is $\left(\frac{c_2-c_1}{c_2+c_1}\right)^2$.

Theorem 2.

For any initial guess $x_0$ of the solution of (1), the sequence $\{x_k\}_{k=0}^{\infty}$ defined by (4) converges to the exact solution $x$ of (1) with convergence rate equal to $\left(\frac{c_2-c_1}{c_2+c_1}\right)^2$.

Proof.

The definition of $x_k$ in (4), together with $y - Ax_{k-1} = A(x - x_{k-1})$, leads to

$x - x_k = x - x_{k-1} - \frac{4}{c_1+c_2}\left(I - \frac{1}{c_1+c_2}A^2\right)A(y - Ax_{k-1})$
$= x - x_{k-1} - \left(\frac{4}{c_1+c_2}I - \frac{4}{(c_1+c_2)^2}A^2\right)A^2(x - x_{k-1})$
$= \left(I - \frac{4}{c_1+c_2}A^2 + \frac{4}{(c_1+c_2)^2}A^4\right)(x - x_{k-1})$
$= \left(I - \frac{2}{c_1+c_2}A^2\right)^2(x - x_{k-1}).$

Therefore,

$\|x - x_k\| = \left\|\left(I - \frac{2}{c_1+c_2}A^2\right)^2(x - x_{k-1})\right\|$
$\le \left\|\left(I - \frac{2}{c_1+c_2}A^2\right)^2\right\|\,\|x - x_{k-1}\|.$

Thus, the result follows from the previous lemma. ∎

Note: Assume $\bar c_1=\lambda_{\min}(A^2)$ and $\bar c_2=\lambda_{\max}(A^2)$, and let $c_1=\bar c_1-\delta$ and $c_2=\bar c_2+\delta$ for some $\delta>0$. Then $c_1+c_2=\bar c_1+\bar c_2$ and $c_2-c_1=\bar c_2-\bar c_1+2\delta$, so (3) still holds with $c_1$ and $c_2$, and $\left(\frac{\bar c_2-\bar c_1}{\bar c_2+\bar c_1}\right)^2 \le \left(\frac{c_2-c_1}{c_2+c_1}\right)^2$. Therefore, Lemma 1, and hence Theorem 2, remain valid with these perturbed constants.

To summarize the results obtained so far, we present an algorithm that generates an approximate solution to equation (1) with prescribed accuracy.

Algorithm 1. $[A, c_1, c_2, \epsilon] \to x_\epsilon$
1. Let $\rho = \left(\frac{c_2-c_1}{c_2+c_1}\right)^2$.
2. Set $k=0$, $x_0=0$.
3. Set $k=k+1$, $r_{k-1}:=y-Ax_{k-1}$, $v_{k-1}:=Ar_{k-1}$, $w_{k-1}:=Av_{k-1}$,
a) $x_k = x_{k-1} + \frac{4}{c_2+c_1}\left(v_{k-1} - \frac{1}{c_2+c_1}Aw_{k-1}\right)$,
b) $\rho_k = \rho^{k}\,\frac{\|y\|}{\sqrt{c_1}}$.
4. If $\rho_k < \epsilon$, stop and return $x_\epsilon = x_k$ as the approximate solution. Otherwise, go to Step 3.
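A possible NumPy realization of Algorithm 1 is sketched below. This is our own illustrative code, assuming a dense self-adjoint matrix and the a priori bound $\rho_k=\rho^{k}\|y\|/\sqrt{c_1}$ as reconstructed above; it is not the authors' MATLAB implementation, and iteration counts may therefore differ slightly from those reported in Section 4:

```python
import numpy as np

def algorithm1(A, y, c1, c2, eps, max_iter=10_000_000):
    """Sketch of Algorithm 1: iteration (4) with the a priori stopping
    bound rho_k = rho^k * ||y|| / sqrt(c1)."""
    rho = ((c2 - c1) / (c2 + c1)) ** 2
    s = c1 + c2
    x = np.zeros_like(y, dtype=float)
    bound = np.linalg.norm(y) / np.sqrt(c1)   # rho_0
    k = 0
    while bound > eps and k < max_iter:
        k += 1
        r = y - A @ x                         # r_{k-1}
        v = A @ r                             # v_{k-1}
        w = A @ v                             # w_{k-1}
        x = x + (4.0 / s) * (v - (A @ w) / s)
        bound *= rho                          # rho_k
    return x, k
```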

3. Modification by Chebyshev polynomials

In this section, the properties of the Chebyshev polynomials are used to modify the previous algorithm to accelerate the convergence rate.

Considering the iteration (4), let $u_n=\sum_{k=1}^{n}a_{nk}x_k$, where $\sum_{k=1}^{n}a_{nk}=1$. In this case, based on the proof of the previous theorem,

$x - u_n = \sum_{k=1}^{n}a_{nk}x - \sum_{k=1}^{n}a_{nk}x_k = \sum_{k=1}^{n}a_{nk}(x - x_k)$
$= \sum_{k=1}^{n}a_{nk}\left(I - \frac{2}{c_1+c_2}A^2\right)^{2k}(x - x_0).$

By setting $B = \left(I - \frac{2}{c_1+c_2}A^2\right)^2$ and $P_n(x) = \sum_{k=1}^{n}a_{nk}x^k$, we see that

(6) $x - u_n = \sum_{k=1}^{n}a_{nk}B^k(x - x_0) = P_n(B)(x - x_0).$

Since $A$ is invertible and self-adjoint, $A^2$ is a positive definite operator. Also, by the previous lemma, the spectrum of $B$ is a subset of the interval $[-\rho,\rho]$, where $\rho = \left(\frac{c_2-c_1}{c_2+c_1}\right)^2$. Therefore, in view of (6), the spectral theorem leads to

(7) $\|x - u_n\| = \|P_n(B)(x - x_0)\| \le \|P_n(B)\|\,\|x - x_0\|$
$\le \max_{|x|\le\rho}|P_n(x)|\,\|x - x_0\|.$

In the sequel, we aim to minimize $\|x - u_n\|$. We have to find

(8) $\min_{P_n\in\mathcal{P}_n}\,\max_{|x|\le\rho}|P_n(x)|,$

where $\mathcal{P}_n := \{P_n(x) : P_n(x) \text{ is a polynomial of degree } n \text{ with } P_n(1)=1\}$.

We investigate the solution to this problem in terms of the Chebyshev polynomials [3]. These polynomials are defined by

$c_n(x) = \begin{cases}\cos\bigl(n\arccos(x)\bigr), & |x|\le 1,\\ \cosh\bigl(n\cosh^{-1}(x)\bigr), & |x|>1,\end{cases}$

and satisfy the recurrence relations

$c_0(x)=1,\qquad c_1(x)=x,\qquad c_n(x)=2x\,c_{n-1}(x)-c_{n-2}(x),\quad n\ge 2.$
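The recurrence gives a convenient way to evaluate $c_n$ numerically; the short sketch below (our own helper) checks it against the hyperbolic closed form for $|x|>1$:

```python
import numpy as np

def cheb(n, x):
    """Chebyshev polynomial c_n(x) evaluated via the three-term recurrence."""
    if n == 0:
        return 1.0
    c_prev, c_curr = 1.0, x                   # c_0, c_1
    for _ in range(2, n + 1):
        c_prev, c_curr = c_curr, 2 * x * c_curr - c_prev
    return c_curr

# Consistency check against the hyperbolic closed form for |x| > 1:
print(cheb(5, 3.7), np.cosh(5 * np.arccosh(3.7)))   # the two values should agree
```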

The following lemma presents a minimization property of these polynomials which will be used later.

Lemma 3 ([3]).

Let $a<b<1$ and set

$Q_n(x) = \dfrac{c_n\!\left(\frac{2x-a-b}{b-a}\right)}{c_n\!\left(\frac{2-a-b}{b-a}\right)}$

for $x\in[a,b]$. Then, for each $P_n\in\mathcal{P}_n$,

$\max_{a\le x\le b}|Q_n(x)| \le \max_{a\le x\le b}|P_n(x)|.$

Furthermore,

$\max_{a\le x\le b}|Q_n(x)| = \dfrac{1}{c_n\!\left(\frac{2-a-b}{b-a}\right)}.$

This lemma shows that the minimization problem (8) can be solved by setting $a=-\rho$ and $b=\rho$. This leads to

(9) $Q_n(x) = \dfrac{c_n\!\left(\frac{x}{\rho}\right)}{c_n\!\left(\frac{1}{\rho}\right)},$

which solves (8).

Now, we rewrite $u_n$ using the Chebyshev polynomials. First of all, combining (9) with the recurrence relation of the Chebyshev polynomials for $n\ge2$, we obtain

$c_n\!\left(\tfrac{1}{\rho}\right)Q_n(x) = c_n\!\left(\tfrac{x}{\rho}\right) = \tfrac{2x}{\rho}\,c_{n-1}\!\left(\tfrac{x}{\rho}\right) - c_{n-2}\!\left(\tfrac{x}{\rho}\right)$
$= \tfrac{2x}{\rho}\,c_{n-1}\!\left(\tfrac{1}{\rho}\right)Q_{n-1}(x) - c_{n-2}\!\left(\tfrac{1}{\rho}\right)Q_{n-2}(x).$

Replacing $x$ with $B$ and applying the resulting operator identity to $x-x_0$ yields

$c_n\!\left(\tfrac{1}{\rho}\right)Q_n(B)(x-x_0) = \tfrac{2}{\rho}\,c_{n-1}\!\left(\tfrac{1}{\rho}\right)B\,Q_{n-1}(B)(x-x_0) - c_{n-2}\!\left(\tfrac{1}{\rho}\right)Q_{n-2}(B)(x-x_0).$

Since $Q_n$ solves the minimization problem (8), by (6) we have $x-u_n=Q_n(B)(x-x_0)$, and the above equation can be rewritten as

$c_n\!\left(\tfrac{1}{\rho}\right)(x-u_n) = \tfrac{2}{\rho}\,c_{n-1}\!\left(\tfrac{1}{\rho}\right)B(x-u_{n-1}) - c_{n-2}\!\left(\tfrac{1}{\rho}\right)(x-u_{n-2}),$

or equivalently,

$c_n\!\left(\tfrac{1}{\rho}\right)x - c_n\!\left(\tfrac{1}{\rho}\right)u_n$
$= \tfrac{2}{\rho}\,c_{n-1}\!\left(\tfrac{1}{\rho}\right)\left(I-\tfrac{2}{c_1+c_2}A^2\right)^2(x-u_{n-1}) - c_{n-2}\!\left(\tfrac{1}{\rho}\right)(x-u_{n-2})$
$= \tfrac{2}{\rho}\,c_{n-1}\!\left(\tfrac{1}{\rho}\right)x - \tfrac{2}{\rho}\,c_{n-1}\!\left(\tfrac{1}{\rho}\right)\left[u_{n-1} + \left(\tfrac{4}{c_1+c_2}A^2 - \tfrac{4}{(c_1+c_2)^2}A^4\right)(x-u_{n-1})\right]$
$\quad - c_{n-2}\!\left(\tfrac{1}{\rho}\right)x + c_{n-2}\!\left(\tfrac{1}{\rho}\right)u_{n-2}.$

Applying the recurrence relation of the Chebyshev polynomials, $c_n\!\left(\tfrac{1}{\rho}\right) = \tfrac{2}{\rho}\,c_{n-1}\!\left(\tfrac{1}{\rho}\right) - c_{n-2}\!\left(\tfrac{1}{\rho}\right)$, the terms involving $x$ cancel, and for $n\ge2$ we obtain

$c_n\!\left(\tfrac{1}{\rho}\right)u_n = \tfrac{2}{\rho}\,c_{n-1}\!\left(\tfrac{1}{\rho}\right)\left[u_{n-1} + \left(\tfrac{4}{c_1+c_2}A^2 - \tfrac{4}{(c_1+c_2)^2}A^4\right)(x-u_{n-1})\right]$
$\quad - c_{n-2}\!\left(\tfrac{1}{\rho}\right)u_{n-2},$

and hence

(10) $u_n = \tfrac{2}{\rho}\,\tfrac{c_{n-1}(1/\rho)}{c_n(1/\rho)}\left[u_{n-1} + \left(\tfrac{4}{c_1+c_2}A^2 - \tfrac{4}{(c_1+c_2)^2}A^4\right)(x-u_{n-1})\right]$
$\quad - \tfrac{c_{n-2}(1/\rho)}{c_n(1/\rho)}\,u_{n-2}.$

If we set

$\rho_n = \tfrac{2}{\rho}\,\tfrac{c_{n-1}(1/\rho)}{c_n(1/\rho)},$

then according to the properties of the Chebyshev polynomials we obtain

$1-\rho_n = -\tfrac{c_{n-2}(1/\rho)}{c_n(1/\rho)}.$

Therefore, we can rewrite (10) as

$u_n = \rho_n\left[u_{n-1} + \left(\tfrac{4}{c_1+c_2}A^2 - \tfrac{4}{(c_1+c_2)^2}A^4\right)(x-u_{n-1})\right] + (1-\rho_n)\,u_{n-2},$

or equivalently,

$u_n = \rho_n\left[u_{n-1} - u_{n-2} + \left(\tfrac{4}{c_1+c_2}A^2 - \tfrac{4}{(c_1+c_2)^2}A^4\right)(x-u_{n-1})\right] + u_{n-2}.$

This yields

(11) $u_n = \rho_n\left[u_{n-1} - u_{n-2} + \tfrac{4}{c_1+c_2}\left(I - \tfrac{1}{c_1+c_2}A^2\right)A(y - Au_{n-1})\right] + u_{n-2}.$

Also, a straightforward computation gives us the following recursive relation for ρn,

(12) $\rho_n = \left(1 - \tfrac{\rho^2}{4}\,\rho_{n-1}\right)^{-1}.$
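Relation (12) can be checked numerically against the defining ratio $\rho_n=\frac{2}{\rho}\frac{c_{n-1}(1/\rho)}{c_n(1/\rho)}$; the following self-contained sketch (our own check, with an arbitrary value of $\rho$) prints both quantities side by side:

```python
import numpy as np

def cheb(n, x):
    # Chebyshev polynomial via the three-term recurrence
    if n == 0:
        return 1.0
    c_prev, c_curr = 1.0, x
    for _ in range(2, n + 1):
        c_prev, c_curr = c_curr, 2 * x * c_curr - c_prev
    return c_curr

rho = 0.4                 # any value 0 < rho < 1
rho_n = 2.0               # rho_1 = (2/rho) * c_0(1/rho) / c_1(1/rho) = 2
for n in range(2, 8):
    rho_n = 1.0 / (1.0 - 0.25 * rho**2 * rho_n)                    # recursion (12)
    direct = (2.0 / rho) * cheb(n - 1, 1.0 / rho) / cheb(n, 1.0 / rho)
    print(n, rho_n, direct)                                        # columns should match
```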

Now, based on the recursive relation (12), we design the following algorithm to approximately solve equation (1).

Algorithm 2. $[A, c_1, c_2, \epsilon] \to u_\epsilon$
1. Let $\rho = \left(\frac{c_2-c_1}{c_2+c_1}\right)^2$ and $\sigma = \frac{\sqrt{c_1^2+c_2^2}-\sqrt{2c_1c_2}}{\sqrt{c_1^2+c_2^2}+\sqrt{2c_1c_2}}$.
2. Set $u_0 = 0$, $u_1 = \frac{4}{c_2+c_1}\left(I - \frac{1}{c_2+c_1}A^2\right)Ay$, $\rho_1 = 2$, $n = 1$.
3. While $\frac{2\sigma^{n}}{1+\sigma^{2n}}\,\frac{\|y\|}{\sqrt{c_1}} > \epsilon$ do
i) $n = n+1$;
ii) $\rho_n = \left(1 - \frac{\rho^2}{4}\,\rho_{n-1}\right)^{-1}$;
iii) $u_n = \rho_n\left[u_{n-1} - u_{n-2} + \frac{4}{c_1+c_2}\left(I - \frac{1}{c_1+c_2}A^2\right)A(y - Au_{n-1})\right] + u_{n-2}$.
4. Set $u_\epsilon = u_n$.
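A possible NumPy realization of Algorithm 2 is sketched below (our own illustrative code, with the same caveats as for the Algorithm 1 sketch above):

```python
import numpy as np

def algorithm2(A, y, c1, c2, eps, max_iter=1_000_000):
    """Sketch of Algorithm 2: the Chebyshev-accelerated iteration (11)."""
    s = c1 + c2
    rho = ((c2 - c1) / (c2 + c1)) ** 2
    sigma = (np.sqrt(c1**2 + c2**2) - np.sqrt(2 * c1 * c2)) / \
            (np.sqrt(c1**2 + c2**2) + np.sqrt(2 * c1 * c2))

    def correction(u):
        # 4/(c1+c2) * (I - A^2/(c1+c2)) A (y - A u)
        r = A @ (y - A @ u)
        return (4.0 / s) * (r - (A @ (A @ r)) / s)

    u_prev = np.zeros_like(y, dtype=float)     # u_0
    u = correction(u_prev)                     # u_1
    rho_n = 2.0                                # rho_1
    n = 1
    norm_y = np.linalg.norm(y)
    while 2 * sigma**n / (1 + sigma**(2 * n)) * norm_y / np.sqrt(c1) > eps and n < max_iter:
        n += 1
        rho_n = 1.0 / (1.0 - 0.25 * rho**2 * rho_n)               # recursion (12)
        u_new = rho_n * (u - u_prev + correction(u)) + u_prev     # iteration (11)
        u_prev, u = u, u_new
    return u, n
```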

The following theorem establishes the convergence rate of Algorithm 2.

Theorem 4.

If x is the exact solution of (1), then the approximate solution un given in Algorithm 2 satisfies

$\|x - u_n\| \le \frac{2\sigma^{n}}{1+\sigma^{2n}}\,\frac{\|y\|}{\sqrt{c_1}}.$

Also, the output uϵ of Algorithm 2 satisfies

$\|x - u_\epsilon\| < \epsilon.$
Proof.

Letting $x_0 = u_0 = 0$ and noting that $\sqrt{c_1}\,\|x\| \le \|Ax\| = \|y\|$ by (3), Lemma 3 and relation (7) allow us to write

(13) $\|x - u_n\| \le \frac{1}{c_n(1/\rho)}\,\|x - u_0\| = \frac{1}{c_n(1/\rho)}\,\|x\| \le \frac{1}{c_n(1/\rho)}\,\frac{\|y\|}{\sqrt{c_1}}.$

By the definition of the Chebyshev polynomials and a few straightforward calculations, we obtain

$c_n\!\left(\tfrac{1}{\rho}\right) = c_n\!\left(\left(\tfrac{c_2+c_1}{c_2-c_1}\right)^2\right)$
$= \tfrac{1}{2}\left[\left(\tfrac{\sqrt{c_1^2+c_2^2}+\sqrt{2c_1c_2}}{\sqrt{c_1^2+c_2^2}-\sqrt{2c_1c_2}}\right)^{n} + \left(\tfrac{\sqrt{c_1^2+c_2^2}-\sqrt{2c_1c_2}}{\sqrt{c_1^2+c_2^2}+\sqrt{2c_1c_2}}\right)^{n}\right].$

Thus, by setting

$\sigma = \frac{\sqrt{c_1^2+c_2^2}-\sqrt{2c_1c_2}}{\sqrt{c_1^2+c_2^2}+\sqrt{2c_1c_2}},$

we conclude that

$c_n\!\left(\tfrac{1}{\rho}\right) = \tfrac{1}{2}\left(\sigma^{-n} + \sigma^{n}\right) = \frac{1+\sigma^{2n}}{2\sigma^{n}}.$

Combining this equality with relation (13), we obtain the desired result. ∎

Methods that utilize more spectral information are expected to yield better results than those relying solely on matrix–vector products. However, our method does not require the exact eigenvalues; it only assumes that constants $c_1$ and $c_2$ satisfying (3) exist.

4. Numerical results

In this section, we present several examples to evaluate the efficiency and performance of our algorithms. In particular, we compare Algorithm 1 and Algorithm 2 with each other as well as with the Richardson and Conjugate Gradient (CG) methods.

The reported experiments were performed on a 64-bit 2.4 GHz system using MATLAB version 2010.

In the following two examples, we show the efficiency of our novel algorithms by using 3×3 and 5×5 systems, respectively. Both examples use the tolerance threshold ϵ=0.001.

Example 5.

Let

$A=\begin{bmatrix}101 & 80 & 310\\ 80 & 89 & 280\\ 310 & 280 & 1064\end{bmatrix}.$

Since

$A=\begin{bmatrix}1 & 0 & 10\\ 0 & 5 & 8\\ 10 & 8 & 30\end{bmatrix}^{2},$

it is straightforward to verify that $A$ is invertible, self-adjoint, and positive definite. Assuming

$f=\begin{bmatrix}-1\\ 4\\ 6\end{bmatrix}$

and taking $c_1=\lambda_{\min}(A^2)=81$ and $c_2=\lambda_{\max}(A^2)=1511700$, the following results are obtained for the system $Ax=f$.

The exact solution for this system is

$x=\begin{bmatrix}-0.2376\\ 0.1336\\ 0.0397\end{bmatrix}$

and, as mentioned above, the given tolerance threshold is $\epsilon=0.001$. First, we use Algorithm 1 to approximate the solution of this system: it produces the approximate solution after 31232 iterations within $t=0.167103$ sec. Algorithm 2 gives an approximation of the solution within $t=0.0073$ sec. after only 353 iterations.
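The following usage sketch reproduces this setup with the matrix, right-hand side, and constants as reconstructed above; it assumes the algorithm1 and algorithm2 sketches from the previous sections are in scope, and the resulting iteration counts need not match the reported MATLAB figures exactly:

```python
import numpy as np

# Data of Example 5 as reconstructed above; algorithm1 and algorithm2 refer to
# the sketches given earlier (they are not the authors' MATLAB code).
A = np.array([[101.0,  80.0,  310.0],
              [ 80.0,  89.0,  280.0],
              [310.0, 280.0, 1064.0]])
f = np.array([-1.0, 4.0, 6.0])
c1, c2 = 81.0, 1_511_700.0

x_exact = np.linalg.solve(A, f)
x1, k1 = algorithm1(A, f, c1, c2, eps=0.001)
x2, k2 = algorithm2(A, f, c1, c2, eps=0.001)
print(k1, np.linalg.norm(x1 - x_exact))   # iterations and final error of Algorithm 1
print(k2, np.linalg.norm(x2 - x_exact))   # iterations and final error of Algorithm 2
```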

As discussed in Section 2, it is unnecessary to use the optimal values $\lambda_{\min}(A^2)$ and $\lambda_{\max}(A^2)$ for $c_1$ and $c_2$; an approximation is sufficient. Methods such as the Power Method [13] and the Jacobi Method [13] can be employed to approximate the eigenvalues of $A^2$.
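As an illustration, the largest eigenvalue of $A^2$ can be approximated by a few power iterations, and the smallest one by inverse iteration (the inverse variant is our own choice for this small demo; the Jacobi method mentioned above is not shown):

```python
import numpy as np

def power_method(M, iters=1000, seed=0):
    """Largest eigenvalue (in modulus) of a symmetric M by power iteration."""
    v = np.random.default_rng(seed).standard_normal(M.shape[0])
    for _ in range(iters):
        v = M @ v
        v /= np.linalg.norm(v)
    return v @ (M @ v)                       # Rayleigh quotient

def inverse_power_method(M, iters=200, seed=0):
    """Smallest eigenvalue of a symmetric positive definite M (small demo only)."""
    v = np.random.default_rng(seed).standard_normal(M.shape[0])
    for _ in range(iters):
        v = np.linalg.solve(M, v)            # one inverse iteration step
        v /= np.linalg.norm(v)
    return v @ (M @ v)

A = np.array([[101.0, 80.0, 310.0], [80.0, 89.0, 280.0], [310.0, 280.0, 1064.0]])
A2 = A @ A                                   # matrix of Example 5
c2_approx = power_method(A2)                 # ~ lambda_max(A^2) ~ 1511700
c1_approx = inverse_power_method(A2)         # ~ lambda_min(A^2) ~ 81
```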

To demonstrate the efficacy of our algorithms with approximate values of $c_1$ and $c_2$, we employed several approximate values $c_1\approx\lambda_{\min}(A^2)$ and $c_2\approx\lambda_{\max}(A^2)$ in Example 5. The resulting data are summarized in Table 1. In this table, the first column pair corresponds to the optimal values of $c_1$ and $c_2$, while the second and third correspond to approximate values.

Algorithm    | $c_1=81$, $c_2=1511700$       | $c_1=80$, $c_2=1511701$       | $c_1=75$, $c_2=1511725$
             | iterations / Err              | iterations / Err              | iterations / Err
Algorithm 1  | 31232 / $2.1523\times10^{-4}$ | 31661 / $1.6309\times10^{-4}$ | 33934 / $1.2082\times10^{-4}$
Algorithm 2  | 353 / $2.6629\times10^{-4}$   | 360 / $1.9264\times10^{-4}$   | 373 / $1.0464\times10^{-4}$
Table 1. The number of iterations needed to converge to $x$ and the final error Err for Algorithm 1 and Algorithm 2 in Example 5, for $\epsilon=0.001$, with several approximate values of $c_1$ and $c_2$.

As shown in Table 1, the further we deviate from the optimal values $\lambda_{\min}(A^2)$ and $\lambda_{\max}(A^2)$, the more iterations both of our algorithms require, while the final computational error decreases slightly because of the additional iterations. This demonstrates that our algorithms also perform well with approximate values of $c_1$ and $c_2$.

Example 6.

Suppose that B is the 5×5 matrix

B=[1374344160274374799441822164119975602716431572232109243741192232228626537999751092265317214].

Then, similar to the previous example, we conclude that $B$ is self-adjoint, invertible, and positive definite. Using the approximations $c_1=80080$ and $c_2=387614987$ of the exact optimal values $\lambda_{\min}(B^2)=81087$ and $\lambda_{\max}(B^2)=387609486$, and if

f=[780253]

then the exact solution of the system Bx=f is

x=[0.00380.02020.03180.05640.0052].

Applying Algorithm 1 with $\epsilon=0.001$, the number of iterations required to converge to the solution of the system is 5455, within $t=0.031623$ sec., while Algorithm 2 needs only 129 iterations, within $t=0.003826$ sec.

Figure 1. The error function $\mathrm{Err}(k)$ versus the iteration step $k$ for Algorithm 1 (blue curve) and Algorithm 2 (red curve) in Example 6.

Here, the error function is defined by $\mathrm{Err}(k)=\|x_k-x\|$ at each step $k$ of the iterative method, where $x_k$ is the $k$th approximation of the solution and $x$ is the exact solution of the system. We denote the value of this function at the final iteration by Err. For the approximate solution obtained from Algorithm 1 in the previous example, this value is $6.4613\times10^{-4}$, which is slightly less than that of Algorithm 2, with $\mathrm{Err}=6.5223\times10^{-4}$. Nevertheless, these error values indicate that the final approximation obtained from each of our two algorithms is accurate up to four decimal places. Fig. 1 shows how Algorithm 1 and Algorithm 2 converge to the exact solution of the system in Example 6.

Although the Richardson iterative method is computationally light and widely used for solving linear systems, our examples show that in many cases it requires more iterations than our second method to reach the desired solution. For instance, in the following example with $\epsilon=0.001$, Richardson's method requires 30322 iterations and ends with a final error of $\mathrm{Err}=2.1836\times10^{-4}$, whereas our second algorithm requires 23896 iterations with $\mathrm{Err}=1.2679\times10^{-4}$.

Example 7.

Let

D=[10157.4589.457.4661497589.44973500].

It is straightforward to verify that $D$ is invertible and self-adjoint. Assuming

f=[0.128],

c1=0.2918 and c2=13556700, the following results are obtained for the system Dx=f .

The maximum error of the approximate solution, i.e., the norm of the exact solution, is 2.4138. For $\epsilon=0.01$, Richardson's method obtains the approximate solution after 22893 iterations in $t=0.084436$ sec. with $\mathrm{Err}=0.0057$, whereas Algorithm 2 needs 18747 iterations with $t=0.301137$ sec. and $\mathrm{Err}=0.0024$. In this case, Algorithm 1 works slowly but eventually converges to the exact solution of the system. Here, the CG method does not work at all.

The optimal values of $c_1$ and $c_2$ in the previous example are $\lambda_{\min}(D^2)=0.3090$ and $\lambda_{\max}(D^2)=13556690$. As can be seen, in this example there is no need to use the optimal values of $c_1$ and $c_2$. Table 2 presents the number of iterations required to converge to the exact solution of the system in Example 7.

Algorithm                 | $\epsilon=0.01$ | $\epsilon=0.001$ | $\epsilon=0.0001$
The Richardson algorithm  | 22893           | 30322            | 37968
Algorithm 1               | > 1 million     | > 1.5 million    | > 2.5 million
Algorithm 2               | 18747           | 23896            | 29038
The CG algorithm          | no response     | no response      | no response
Table 2. The number of iterations needed to converge to $x$ for the Richardson iterative method, Algorithm 1, Algorithm 2, and the CG method in Example 7, for $\epsilon=0.01$, $0.001$, and $0.0001$.

As shown in Table 2, in this particular example the well-known CG method does not work at all, while our first algorithm works, albeit slowly.

Table 2 also shows that, for every value of $\epsilon$, Algorithm 2 requires fewer iterations than Richardson's method.

According to the above example, if we drop the positive definiteness assumption, the Conjugate Gradient method fails to work. Moreover, if the matrix of the system has negative eigenvalues, the Richardson iterative method fails as well. The following examples illustrate such systems.

Algorithm                 | $\epsilon=0.01$ | $\epsilon=0.001$ | $\epsilon=0.0001$
The Richardson algorithm  | no response     | no response      | no response
Algorithm 1               | 2               | 58               | 115
Algorithm 2               | 2               | 11               | 19
The CG algorithm          | no response     | no response      | no response
Table 3. The number of iterations needed to converge to $x$ for the Richardson iterative method, Algorithm 1, Algorithm 2, and the CG method in Example 8, for $\epsilon=0.01$, $0.001$, and $0.0001$.
Algorithm                 | $\epsilon=0.01$ | $\epsilon=0.001$ | $\epsilon=0.0001$
The Richardson algorithm  | no response     | no response      | no response
Algorithm 1               | 0.000033        | 0.000329         | 0.000674
Algorithm 2               | 0.000171        | 0.000263         | 0.000391
The CG algorithm          | no response     | no response      | no response
Table 4. The run-time (in seconds) of the Richardson iterative method, Algorithm 1, Algorithm 2, and the Conjugate Gradient method for $\epsilon=0.01$, $0.001$, and $0.0001$ in Example 8.
Example 8.

Assume that

D=[1346811470360811470171804380360843804196]

and

f=[82514]

then the eigenvalues of $D$ are all negative. Therefore, the parameter $\alpha$ in Richardson's method is negative, so the method cannot be used to solve the system $Dx=f$. In this case, however, with $c_1=\lambda_{\min}(D^2)=8122414$ and $c_2=\lambda_{\max}(D^2)=799751706$, both of our algorithms properly converge to an approximate solution of the system.

In addition, since $D$ is not positive definite, the CG method does not respond.

Table 3 shows detailed information about the number of iterations required to converge to the exact solution of the system for different tolerance thresholds $\epsilon$. A summary of the run-times of our algorithms is given in Table 4.

Algorithm    | Number of iterations | Run-time (sec.) | Err
Algorithm 1  | 60                   | 0.000741        | $5.894\times10^{-4}$
Algorithm 2  | 12                   | 0.000277        | $6.8232\times10^{-4}$
Table 5. Run-time, number of iterations, and final error Err needed for Algorithm 1 and Algorithm 2 to converge, using the perturbed values $c_1=7923000$ and $c_2=799841230$ in Example 8 for $\epsilon=0.001$.
Algorithm    | Number of iterations | Run-time (sec.) | Err
Algorithm 1  | 12                   | 0.222026        | $5.0480\times10^{-4}$
Algorithm 2  | 6                    | 0.007273        | $5.0355\times10^{-4}$
Table 6. Run-time, number of iterations, and final error Err needed for Algorithm 1 and Algorithm 2 to converge, using the perturbed values $c_1=3698$ and $c_2=27918$ in Example 9 for $\epsilon=0.001$.

In Fig. 2, we show how our algorithms converge to the exact solution during successive iterations. We use a broken-line diagram to compare the convergence of our algorithms at each step $k$. As seen in Fig. 2, Algorithm 2 converges faster than Algorithm 1. Also, the CG algorithm performs only one step and fails to converge. Note that this diagram corresponds to the error threshold $\epsilon=0.001$.

Here is another example in which Richardson’s method fails to converge due to a negative parameter α.

Example 9.

Suppose that in the linear system $Ax=y$ the matrix $A$ is given by

[10450165012020162060]

and

y=[82210].

Then $\lambda_{\min}(A)=-168.6671$ and $\lambda_{\max}(A)=53.8876$, so $\alpha<0$, which means that the Richardson iterative method fails to converge in this case. The maximum error of the approximate solution is equal to the norm of the exact solution, which is 0.3623. Let $\epsilon=0.001$. Then, using $c_1=\lambda_{\min}(A^2)=3766.2$ and $c_2=\lambda_{\max}(A^2)=27404$, Algorithm 1 converges in 16 steps with final error $\mathrm{Err}=1.8055\times10^{-4}$. Also, Algorithm 2 converges in 8 steps, and its final error is $\mathrm{Err}=1.8868\times10^{-4}$. For the CG method this value equals $\mathrm{Err}=0.0074$, which is greater than the threshold $\epsilon=0.001$; thus, in this case, the CG method fails to converge and performs only one step. Fig. 3 compares the convergence speed of our algorithms with each other and with the Conjugate Gradient method in this example.

To demonstrate that an approximation of the optimal values of $c_1$ and $c_2$ suffices for both of our algorithms, we obtained approximate values of $c_1$ and $c_2$ in the two examples above by applying eigenvalue approximation methods, in particular the Power Method [13]. We then re-executed Example 8 and Example 9 using these approximations with the tolerance $\epsilon=0.001$. The results are summarized in Table 5 for Example 8 and in Table 6 for Example 9.

As illustrated in Table 5 and Table 6, our algorithms exhibit favorable performance when using the approximate values of $c_1$ and $c_2$.

Figure 2. The error function $\mathrm{Err}(k)$ versus the iteration step $k$ for Algorithm 1, Algorithm 2, and the CG algorithm in Example 8.
Figure 3. The error function $\mathrm{Err}(k)$ versus the iteration step $k$ for Algorithm 1 and Algorithm 2 in Example 9.
Example 10.

Let A be the following matrix,

$A=\begin{pmatrix} -2 & 1 & & & \\ 1 & -2 & 1 & & \\ & \ddots & \ddots & \ddots & \\ & & 1 & -2 & 1\\ & & & 1 & -2 \end{pmatrix},$

which is a finite difference discretization of the Laplace equation of dimension 150 [11], and let $y=(1,2,3,\ldots,150)^{T}$. Then $A$ is self-adjoint and invertible but not positive definite; hence, the CG method does not work at all in this case. Since the eigenvalues of $A$ are all negative, the Richardson method cannot be applied either. Our first method converges in $t=334.3829$ sec. after 239591 iterations, while our second algorithm converges to the solution of the system $Ax=y$ in just 1417 iterations within 23.09 sec., with final error $\mathrm{Err}=7.8411\times10^{-4}$ for $\epsilon=0.001$. The final error for our first algorithm is $\mathrm{Err}=7.7815\times10^{-4}$.

Here, $c_1=1.8735\times10^{-7}$ and $c_2=15.9965$ are the optimal values satisfying equation (3).
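A sketch of this experiment, with the matrix and right-hand side as reconstructed above and the algorithm2 helper from Section 3, might look as follows (timings and iteration counts will not match the reported MATLAB results exactly):

```python
import numpy as np

# Example 10 setup as reconstructed: the 150x150 second-difference matrix with
# -2 on the diagonal and 1 on the off-diagonals, and y = (1, 2, ..., 150)^T.
# algorithm2 refers to the earlier sketch.
n = 150
A = (np.diag(-2.0 * np.ones(n))
     + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1))
y = np.arange(1.0, n + 1)

eigs = np.linalg.eigvalsh(A @ A)
c1, c2 = eigs[0], eigs[-1]                 # ~1.87e-7 and ~16.0
u, iters = algorithm2(A, y, c1, c2, eps=0.001)
print(iters, np.linalg.norm(u - np.linalg.solve(A, y)))
```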

5. Conclusions

In this paper, we proposed two new iterative methods for solving an operator equation $Ax=y$ with $A$ bounded, self-adjoint, and invertible. Our first method uses an iterative relation with convergence rate equal to $\left(\frac{\lambda_{\max}(A^2)-\lambda_{\min}(A^2)}{\lambda_{\max}(A^2)+\lambda_{\min}(A^2)}\right)^2$, and the second method uses Chebyshev polynomials to accelerate the convergence rate of the first method.

Although the Richardson and Conjugate Gradient methods are among the most popular methods used in various applications, they are inefficient for a wide range of linear systems, whereas our algorithms work well in such cases. Both the Richardson and Conjugate Gradient methods have limitations in their fields of application. For instance, Richardson's algorithm relies on the assumption that the parameter $\alpha$ is positive, and so fails to work when $\alpha\le 0$; nevertheless, our algorithms work properly in most of these cases, and our second iterative method needs fewer iterations than Richardson's method to converge. Likewise, positive definiteness of the operator is an essential condition for the CG method, which fails to respond if the operator of the system $Ax=y$ is not positive definite, whereas our algorithms converge as long as the operator is invertible and self-adjoint.

Since iterative methods have various applications in other branches of science, research on these methods and their development can greatly help researchers in different scientific fields. We hope that this work will motivate the further development of such methods and assist researchers in the field of the numerical solution of linear systems.

References