A new preconditioned Richardson iterative method
Abstract.
This paper proposes a new iterative technique for solving a linear system with a bounded, self-adjoint operator, together with a modified version based on Chebyshev polynomials.
Key words and phrases: iterative method, Richardson iteration, convergence rate, Chebyshev polynomials.
2005 Mathematics Subject Classification: Primary 65F10; Secondary 65F08.
1. Introduction and preliminaries
Linear systems arise in various scientific and engineering applications, including differential equations, signal processing, and optimization [4, 5, 6, 7, 8, 12]. Therefore, solving linear systems efficiently is crucial in both theoretical and practical contexts. Iterative methods for solving linear systems have been extensively studied because they are more efficient than direct methods for large-scale problems. Furthermore, iterative methods have the advantage of being easily adaptable to different types of linear operators, which makes them versatile and applicable to a broad range of problems. Such operators frequently arise in practical applications, including image processing and machine learning.
In the present work, we propose a new iterative method for solving linear systems with bounded, self-adjoint operators and then present a modified version of the process based on Chebyshev polynomials. The method employs a flexible preconditioner that adapts to the structure of the operator and exploits its self-adjointness. The resulting algorithms have favorable convergence properties and require fewer iterations than some existing methods. We also present numerical experiments that demonstrate the efficiency and effectiveness of our methods compared to existing state-of-the-art methods.
Consider the linear system
(1)
in a Hilbert space
The most straightforward approach to an iterative solution is to rewrite (1) as a linear fixed-point iteration. If we consider the function
One of the best and most widely used methods to do this is the Richardson iterative method [12]. The abstract Richardson iterative method has the form
(2)
where
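For concreteness, here is a minimal sketch of the stationary Richardson iteration in the finite-dimensional setting, assuming a matrix `T`, a right-hand side `f`, and a fixed relaxation parameter `alpha` (these names and the stopping rule are ours, not taken from the paper):

```python
import numpy as np

def richardson(T, f, alpha, tol=1e-8, max_iter=10_000):
    """Stationary Richardson iteration: u_{k+1} = u_k + alpha * (f - T u_k)."""
    u = np.zeros(len(f))
    for k in range(max_iter):
        r = f - T @ u                               # current residual
        if np.linalg.norm(r) <= tol * np.linalg.norm(f):
            return u, k                             # relative residual small enough
        u = u + alpha * r                           # relaxation step with fixed alpha
    return u, max_iter
```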
The Conjugate Gradient method is a more sophisticated method that uses information about the residual error to guide the iterative updates [12]. The update for the
where
and
is the residual, with
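For comparison, a standard textbook Conjugate Gradient loop for a symmetric positive definite matrix might look as follows (a generic sketch in our own notation, not necessarily the formulation used in the paper):

```python
import numpy as np

def conjugate_gradient(T, f, tol=1e-8, max_iter=None):
    """Textbook CG for T u = f with T symmetric positive definite."""
    n = len(f)
    u = np.zeros(n)
    r = f - T @ u                     # initial residual
    p = r.copy()                      # initial search direction
    rho = r @ r
    for k in range(max_iter or n):
        Tp = T @ p
        alpha = rho / (p @ Tp)        # step length along p
        u = u + alpha * p
        r = r - alpha * Tp
        rho_new = r @ r
        if np.sqrt(rho_new) <= tol * np.linalg.norm(f):
            return u, k + 1
        p = r + (rho_new / rho) * p   # new conjugate search direction
        rho = rho_new
    return u, max_iter or n
```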
However, it is possible to precondition the linear equation (1) by multiplying both sides by a suitably chosen operator.
In general, iterative methods are powerful tools for solving large systems of linear equations. By choosing an appropriate iterative method based on the properties of the operator under consideration, it is possible to obtain fast and accurate solutions to a wide range of linear systems.
2. Preconditioning the problem based on Richardson’s method
Based on the properties of the operator
(3)
Although the positive definiteness of
It is not difficult to show that the optimal parameters
Now, based on the Richardson iterative method, we consider the iteration
(4)
In this case, the following lemma holds.
Lemma 1.
Let
where
Proof.
Similarly, we can prove that for every
Therefore,
(5)
Finally, inequality (5) allows us to conclude that
which completes the proof. ∎
Note that the optimal convergence rate in the Richardson iterative method, obtained by letting
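For reference, the classical result being alluded to, stated in generic notation (the symbols $m$, $M$ for the spectral bounds and $T$ for the operator are our assumptions), is:

```latex
% Optimal parameter and contraction factor for stationary Richardson iteration
% when the spectrum of the (self-adjoint, positive definite) operator lies in [m, M]:
\[
  \alpha_{\mathrm{opt}} = \frac{2}{m + M},
  \qquad
  \|I - \alpha_{\mathrm{opt}} T\| \le \frac{M - m}{M + m} = \frac{\kappa - 1}{\kappa + 1},
  \qquad \kappa = \frac{M}{m},
\]
% so the error contracts at least by this factor in every step:
\[
  \|u_k - u\| \le \Bigl(\frac{M - m}{M + m}\Bigr)^{k} \, \|u_0 - u\|.
\]
```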
In the following theorem, we show that the convergence rate of the iterative method (4) is
Theorem 2.
Proof.
Note: Assume
To summarize the results obtained so far, we present an algorithm that generates an approximate solution to equation (1) with prescribed accuracy.
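A generic preconditioned Richardson loop with a prescribed stopping tolerance could be sketched as follows; the preconditioner `P`, the parameter `alpha`, and the stopping rule are placeholders and are not claimed to match the paper's Algorithm 1 exactly:

```python
import numpy as np

def preconditioned_richardson(T, f, P, alpha, tol=1e-8, max_iter=10_000):
    """Generic preconditioned Richardson loop: u_{k+1} = u_k + alpha * P (f - T u_k)."""
    u = np.zeros(len(f))
    for k in range(max_iter):
        r = f - T @ u                               # residual of the original system
        if np.linalg.norm(r) <= tol * np.linalg.norm(f):
            return u, k                             # prescribed relative accuracy reached
        u = u + alpha * (P @ r)                     # preconditioned relaxation step
    return u, max_iter
```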
3. Modification by Chebyshev polynomials
In this section, the properties of the Chebyshev polynomials are used to modify the previous algorithm to accelerate the convergence rate.
Considering the iteration (3), let
By setting
(6)
Since
(7)
In the sequel, we aim to minimize
(8)
where
We investigate the solution to this problem in terms of the Chebyshev polynomials [3]. These polynomials are defined by
\[
T_n(x) = \cos\bigl(n \arccos x\bigr), \qquad |x| \le 1, \; n = 0, 1, 2, \dots,
\]
and satisfy the recurrence relations
\[
T_0(x) = 1, \qquad T_1(x) = x, \qquad T_{n+1}(x) = 2x\,T_n(x) - T_{n-1}(x), \quad n \ge 1.
\]
The following lemma presents a minimization property of these polynomials which will be used later.
Lemma 3 ([3]).
Let
for
Furthermore,
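For the reader's convenience, the classical minimization property in question can be stated as follows (generic notation; the interval $[a,b]$ and the normalization point $\gamma \notin [a,b]$ are our assumptions):

```latex
% Classical Chebyshev min-max property: among all polynomials p of degree at most n
% with p(gamma) = 1, the shifted and scaled Chebyshev polynomial has the smallest
% uniform norm on [a, b], and that minimal value is 1 / |T_n((2*gamma - a - b)/(b - a))|.
\[
  \min_{\substack{\deg p \le n \\ p(\gamma) = 1}} \; \max_{x \in [a,b]} |p(x)|
  \;=\;
  \frac{1}{\bigl| T_n\!\bigl(\tfrac{2\gamma - a - b}{b - a}\bigr) \bigr|},
  \qquad \text{attained by} \quad
  p^{*}(x) = \frac{T_n\!\bigl(\tfrac{2x - a - b}{b - a}\bigr)}
                  {T_n\!\bigl(\tfrac{2\gamma - a - b}{b - a}\bigr)} .
\]
```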
This lemma shows that the minimization problem (8) can be solved by setting
(9)
which solves (8).
Now, we rewrite
Replacing
By (6) and the fact that
or equivalently,
Repeated application of the recurrence relations of the Chebyshev polynomials for
and hence
(10)
If we set
then according to the properties of the Chebyshev polynomials we obtain
This yields
(11)
Also, a straightforward computation gives us the following recursive relation for
(12)
Now, based on the recursive relation (12), we design the following algorithm to approximately solve equation (1).
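The recurrence (12) plays the role that the classical Chebyshev-acceleration recursion plays for Richardson's method. A generic implementation of that classical three-term form for a symmetric matrix with spectrum contained in $[m, M]$, $0 < m < M$, might look as follows (a sketch, not the paper's Algorithm 2 verbatim):

```python
import numpy as np

def chebyshev_iteration(T, f, m, M, tol=1e-8, max_iter=10_000):
    """Classical Chebyshev acceleration for T u = f with spectrum of T in [m, M], 0 < m < M."""
    theta = (M + m) / 2.0           # center of the spectral interval
    delta = (M - m) / 2.0           # half-width of the spectral interval
    sigma = theta / delta
    u = np.zeros(len(f))
    r = f - T @ u
    rho = 1.0 / sigma
    d = r / theta                   # first correction
    for k in range(max_iter):
        u = u + d
        r = r - T @ d
        if np.linalg.norm(r) <= tol * np.linalg.norm(f):
            return u, k + 1
        rho_new = 1.0 / (2.0 * sigma - rho)
        d = rho_new * rho * d + (2.0 * rho_new / delta) * r   # three-term Chebyshev update
        rho = rho_new
    return u, max_iter
```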
The following theorem investigates the convergence rate of the Algorithm.
Theorem 4.
If
Also, the output
Proof.
By definition of Chebyshev polynomials and with a few straightforward calculations, we obtain
Thus, by setting
we conclude that
Combining this equality with relation (13), we obtain the desired result. ∎
It is expected that methods utilizing more spectral information yield better results than those relying solely on matrix-vector products. However, the exact eigenvalues are not necessary for our method, as we assume that the coefficients
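In practice, rough spectral bounds usually suffice; for a symmetric matrix they can be estimated cheaply, for instance from Gershgorin disks (a generic auxiliary sketch, not part of the paper's algorithms):

```python
import numpy as np

def gershgorin_bounds(T):
    """Cheap enclosing interval [m, M] for the eigenvalues of a real symmetric matrix T."""
    centers = np.diag(T)
    radii = np.sum(np.abs(T), axis=1) - np.abs(centers)   # off-diagonal row sums
    return float(np.min(centers - radii)), float(np.max(centers + radii))
```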
4. Numerical results
In this section, we present several examples to evaluate the efficiency and performance of our algorithms. We compare our two algorithms, Algorithm 1 and Algorithm 2, with each other and, in some cases, with the well-known Richardson and Conjugate Gradient (CG) algorithms.
The reported experiments were performed on a 64-bit 2.4 GHz system using MATLAB version 2010.
In the following two examples, we show the efficiency of our novel algorithms by using
Example 5.
Let
Since
it is straightforward to verify that
and concluding
The exact solution for this system is
and as mentioned above, the given tolerance threshold is
As discussed in Section 2, it is unnecessary to utilize the optimal values of
To demonstrate the efficacy of our algorithms with approximate values of
| Algorithm | | | | | | |
|---|---|---|---|---|---|---|
| Algorithm 1 | | | | | | |
| Algorithm 2 | | | | | | |
As shown in Table 1, the further we deviate from the optimal values
Example 6.
Suppose that
Then, similar to the previous example, we conclude that
then the exact solution of the system
Applying Algorithm 1 with
Here, the error function can be defined by
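A common choice, assumed here since the precise definition is not reproduced, is the max-norm error against the known exact solution:

```python
import numpy as np

def max_error(u_approx, u_exact):
    """Max-norm error ||u_approx - u_exact||_inf (an assumed error measure)."""
    return float(np.max(np.abs(u_approx - u_exact)))
```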
Although the Richardson iterative method is computationally light and widely used in applications involving the iterative solution of linear systems, our examples show that, in many cases, it requires more iterations to reach the desired solution than our second method. For instance, in the following example and the special case
Example 7.
Let
It is straightforward to verify that
The maximum error for the approximate solution of this system is
The optimal values for
| | | | |
|---|---|---|---|
| The Richardson algorithm | | | |
| Algorithm 1 | | | |
| Algorithm 2 | | | |
| The CG algorithm | No response | No response | No response |
As shown in Table 2, in this particular example the well-known CG method does not work at all, while our first algorithm works, albeit slowly.
Also, we can see in Table 2 that the required number of iterations with different values of
According to the above example, if the condition of positive definiteness is dropped, the Conjugate Gradient algorithm fails to work. Moreover, if the matrix of the system has negative eigenvalues, the Richardson iterative method fails as well; indeed, for a self-adjoint operator the stationary iteration contracts only if $0 < \alpha\lambda < 2$ for every eigenvalue $\lambda$, which is impossible when eigenvalues of both signs are present. The following examples illustrate such systems.
| | | | |
|---|---|---|---|
| The Richardson algorithm | No response | No response | No response |
| Algorithm 1 | | | |
| Algorithm 2 | | | |
| The CG algorithm | No response | No response | No response |
| | | | |
|---|---|---|---|
| The Richardson algorithm | No response | No response | No response |
| Algorithm 1 | | | |
| Algorithm 2 | | | |
| The CG algorithm | No response | No response | No response |
Example 8.
Assume that
and
then the eigenvalues of
In addition, due to the lack of positive definiteness of
| Algorithms | Number of iterations | Run-time |
|---|---|---|
| Algorithm 1 | | |
| Algorithm 2 | | |
| Algorithms | Number of iterations | Run-time |
|---|---|---|
| Algorithm 1 | | |
| Algorithm 2 | | |
In Fig. 2, we show how our algorithms converge to the exact solution over successive iterations. We use a broken-line diagram to compare the convergence of our algorithms at each step
Here is another example in which Richardson’s method fails to converge due to a negative parameter
Example 9.
Suppose that in the linear system
and
Then,
To demonstrate that an approximation of the optimal values
As illustrated in Table 5 and Table 6, our algorithms exhibit favorable performance when utilizing the approximate values of
Example 10.
Let
which is a finite-difference discretization of the Laplace equation of dimension 150 [11], and
Here,
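As an illustration of how such a test problem can be set up, the sketch below builds the standard tridiagonal finite-difference Laplacian of dimension 150 and feeds its known spectral bounds to the `chebyshev_iteration` sketch given earlier; whether this matches the exact discretization used in the example is an assumption on our part:

```python
import numpy as np

n = 150
# Standard 1-D finite-difference Laplacian: tridiagonal with 2 on the diagonal, -1 off it
# (an assumed test matrix; the paper cites [11] for the discretization).
T = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

# Its eigenvalues are known in closed form: 2 - 2*cos(j*pi/(n+1)), j = 1..n.
j = np.arange(1, n + 1)
eigs = 2.0 - 2.0 * np.cos(j * np.pi / (n + 1))
m, M = float(eigs.min()), float(eigs.max())

f = np.ones(n)
u, iters = chebyshev_iteration(T, f, m, M, tol=1e-8)   # sketch from Section 3
print(iters, np.linalg.norm(T @ u - f))
```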
5. Conclusions
In this paper, we proposed two new iterative methods for solving an operator equation with a bounded, self-adjoint operator: a preconditioned Richardson-type iteration and a modification of it based on Chebyshev polynomials.
Although the Richardson and Conjugate Gradient methods are among the most popular methods in various applications, they are inefficient for a wide range of linear systems, and our algorithms performed well in such cases. Both methods also have limitations in their fields of application. For instance, Richardson's algorithm is limited by the assumption that the parameter
Since iterative methods have applications in many branches of science, research on these methods and their development can greatly benefit researchers in different scientific fields. We hope that this work will motivate the further development of such methods and assist those working on the numerical solution of linear systems.
References
- [1]
- [2] S.F. Ashby, T.A. Manteuffel and J.S. Otto, A comparison of adaptive Chebyshev and least squares polynomial preconditioning for Hermitian positive definite linear systems, SIAM J. Sci. Statist. Comput., 13 (1992), pp. 1–29. https://doi.org/10.1137/0913001
- [3] E.W. Cheney, Introduction to Approximation Theory, McGraw-Hill, New York, 1966.
- [4] S. Dahlke, M. Fornasier and T. Raasch, Adaptive frame methods for elliptic operator equations, Adv. Comput. Math., 27 (2007), pp. 27–63.
- [5] I. Daubechies, G. Teschke and L. Vese, Iteratively solving linear inverse problems under general convex constraints, Inverse Probl. Imaging, 1 (2007), pp. 29–46.
- [6] H. Jamali and M. Kolahdouz, Using frames in steepest descent-based iteration method for solving operator equations, Sahand Commun. Math. Anal., 18 (2021), pp. 97–109. https://doi.org/10.22130/scma.2020.123786.771
- [7] H. Jamali and M. Kolahdouz, Some iterative methods for solving operator equations by using fusion frames, Filomat, 36 (2022), pp. 1955–1965. https://doi.org/10.2298/fil2206955j
- [8] H. Jamali and R. Pourkani, Using frames in GMRES-based iteration method for solving operator equations, JMMRC, 13 (2023), no. 2, pp. 107–119.
- [9] C.T. Kelley, A fast multilevel algorithm for integral equations, SIAM J. Numer. Anal., 32 (1995), pp. 501–513.
- [10] C.T. Kelley and E.W. Sachs, Multilevel algorithms for constrained compact fixed point problems, SIAM J. Sci. Comput., 15 (1994), pp. 645–667. https://doi.org/10.1137/0915042
- [11] R.J. LeVeque, Finite Difference Methods for Ordinary and Partial Differential Equations: Steady-State and Time-Dependent Problems, SIAM, 2007.
- [12] Y. Saad, Iterative Methods for Sparse Linear Systems, PWS Press, New York, 2000.
- [13] Y. Saad, Iterative Methods for Sparse Linear Systems, 2nd ed., SIAM, 2011.
- [14]