Abstract
We study the approximation of an eigenpair (an eigenvalue and a corresponding eigenvector) of a linear operator T from X to X, where X is a Banach space.
The eigenpair is regarded as a solution of a nonlinear system obtained by appending a norming function to the usual definition, to which the Chebyshev or the Newton method is then applied.
We take into account that the third Fréchet derivative of the augmented mapping is the null operator, and we give a semilocal convergence result which ensures r-convergence order three (cubic r-order) of the iterates.
We next consider the finite dimensional case and analyse the computation of an eigenpair of a matrix. We also compare in this case the efficiency of the Newton and Chebyshev methods, and find that the latter is more efficient.
Authors
Emil Cătinaş
(Tiberiu Popoviciu Institute of Numerical Analysis, Romanian Academy)
Ion Păvăloiu
(Tiberiu Popoviciu Institute of Numerical Analysis, Romanian Academy)
Keywords
linear operator; eigenvalue; eigenvector; eigenpair; Newton method; Chebyshev method; semilocal convergence; r-convergence order.
Cite this paper as:
E. Cătinaş, I. Păvăloiu, On the Chebyshev method for approximating the eigenvalues of linear operators, Rev. Anal. Numér. Théor. Approx., 25 (1996) nos. 1-2, pp. 43-56.
About this paper
Journal: Rev. Anal. Numér. Théor. Approx.
Print ISSN: 2457-6794
Online ISSN: 2501-059X
On the Chebyshev method for approximating
the eigenvalues of linear operators
1. Introduction
The problem of approximating the eigenvalues of linear operators by the Newton method has been approached in a series of papers ([1], [3], [4], [5]). The Newton method is of special interest here because the operator equation to be solved has a special form, as we shall see. In the following we study the convergence of the Chebyshev method attached to this problem, and we apply the results obtained to the approximation of an eigenvalue and of a corresponding eigenvector of a matrix of real or complex numbers. It is known that the convergence order of the Newton method is usually 2, while the convergence order of the Chebyshev method is usually 3. Taking also into account the number of operations performed at each step, we obtain that the Chebyshev method is more efficient than the Newton method.
Let $X$ be a Banach space over $\mathbb{K}$, where $\mathbb{K}=\mathbb{R}$ or $\mathbb{K}=\mathbb{C}$, and $T\colon X\to X$ a linear operator. It is well known that the scalar $\lambda\in\mathbb{K}$ is an eigenvalue of $T$ if the equation

(1) $Tx-\lambda x=\theta$

has at least one solution $x\neq\theta$, where $\theta$ is the null element of the space $X$. The elements $x\neq\theta$ that satisfy equation (1) are called eigenvectors of the operator $T$, corresponding to the eigenvalue $\lambda$.
For the simultaneous determination of the eigenvalues and eigenvectors of $T$ we can proceed in the following way.
Consider the Banach space $Y=X\times\mathbb{K}$, with the norm given by

(3) $\|(x,\lambda)\|_{Y}=\max\{\|x\|,|\lambda|\},\qquad (x,\lambda)\in Y.$

In this space we consider the operator $F\colon Y\to Y$ given by

(4) $F(x,\lambda)=\big(Tx-\lambda x,\;Gx-1\big),$

where $G\colon X\to\mathbb{K}$ is a given linear functional (the norming function).
If we denote by $\theta_{Y}$ the null element of the space $Y$, then the eigenvalues and the corresponding eigenvectors of the operator $T$ are solutions of the equation

(5) $F(x,\lambda)=\theta_{Y}.$

Obviously, $F$ is not a linear operator.
It can be easily seen that the first order Fréchet derivative of $F$ has the following form [4]:

(6) $F'(x,\lambda)(u,\mu)=\big(Tu-\lambda u-\mu x,\;Gu\big),$

where $(u,\mu)\in Y$ and $(x,\lambda)\in Y$. For the second order derivative of $F$ we obtain the expression

(7) $F''(x,\lambda)(u,\mu)(v,\nu)=\big(-\mu v-\nu u,\;0\big),$

where $(u,\mu),(v,\nu)\in Y$.

The Fréchet derivatives of $F$ of order higher than $2$ are null.
Considering the above forms of the Fréchet derivatives of $F$, in the following we shall study the convergence of the Chebyshev method for operators having the third order Fréchet derivative equal to the null operator.
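Because $F''$ is constant and $F'''$ is null, the second order Taylor expansion of $F$ is exact; a short verification from (4), (6) and (7):

$F(x+u,\lambda+\mu)=\big(T(x+u)-(\lambda+\mu)(x+u),\;G(x+u)-1\big)=F(x,\lambda)+F'(x,\lambda)(u,\mu)+\tfrac12F''(x,\lambda)(u,\mu)^{2},$

the middle expression containing, besides $F(x,\lambda)$, the linear part $(Tu-\lambda u-\mu x,\,Gu)$ and the purely quadratic part $(-\mu u,\,0)=\tfrac12F''(x,\lambda)(u,\mu)^{2}$.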
2. The convergence of the Chebyshev method
The iterative Chebyshev method for solving equation (5) consists in the successive construction of the elements of the sequence $(y_{k})_{k\ge0}$, $y_{k}=(x_{k},\lambda_{k})\in Y$, given by

(8) $y_{k+1}=y_{k}-\Gamma_{k}F(y_{k})-\tfrac12\Gamma_{k}F''(y_{k})\big(\Gamma_{k}F(y_{k})\big)^{2},\qquad k=0,1,\ldots,$

where $\Gamma_{k}=\big[F'(y_{k})\big]^{-1}$.
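For the operator $F$ above, the second order term in (8) has a simple concrete form. Writing $\Gamma_{k}F(y_{k})=(u_{k},\mu_{k})$, relation (7) gives

$F''(y_{k})\big(\Gamma_{k}F(y_{k})\big)^{2}=(-2\mu_{k}u_{k},\;0),$

so each Chebyshev step costs, besides the Newton correction, one additional application of $\Gamma_{k}$, i.e. a second linear solve with the same operator $F'(y_{k})$.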
Let $\beta>0$ and $M>0$ be two real numbers, let $y_{0}\in Y$, $r>0$, and denote $S=\{y\in Y:\|y-y_{0}\|\le r\}$. If $y\in S$ is such that $\Gamma_{y}=[F'(y)]^{-1}$ exists, with $\|\Gamma_{y}\|\le\beta$, and $\|F''(y)\|\le M$,

then $\|\Gamma_{y}F(y)\|\le\beta\|F(y)\|$ and $\big\|\tfrac12\Gamma_{y}F''(y)\big(\Gamma_{y}F(y)\big)^{2}\big\|\le\tfrac12\beta^{3}M\|F(y)\|^{2}$.

Consider the numbers

(9) $\rho_{0}=\|F(y_{0})\|,\qquad c=\beta^{4}M^{2}\Big(\tfrac12+\tfrac18\beta^{2}M\rho_{0}\Big),\qquad d=c^{1/2}\rho_{0}.$
With the above notations, the following theorem holds:
Theorem 1.

If the operator $F$ is three times differentiable on $S$ with $F'''(y)=\Theta_{3}$ for all $y\in S$ ($\Theta_{3}$ being the $3$-linear null operator) and if, moreover, there exist $\beta>0$, $M>0$ such that the following relations hold

i. the operator $F'(y)$ has a bounded inverse $\Gamma_{y}$ for all $y\in S$, with $\|\Gamma_{y}\|\le\beta$, and $\|F''(y)\|\le M$ for all $y\in S$; and

ii. the numbers $c$ and $d$ given by (9) satisfy the relations

$d<1$ and $\beta\rho_{0}\big(1+\tfrac12\beta^{2}M\rho_{0}\big)\tfrac{1}{1-d^{2}}\le r,$

then the following properties hold:

j. the sequence $(y_{k})_{k\ge0}$ given by (8) is convergent;

jj. if $y^{*}=\lim_{k\to\infty}y_{k}$, then $F(y^{*})=\theta_{Y}$, i.e. $y^{*}=(x^{*},\lambda^{*})$ with $\lambda^{*}$ an eigenvalue of $T$ and $x^{*}$ a corresponding eigenvector;

jjj. $\|F(y_{k})\|\le c^{-1/2}d^{\,3^{k}},\quad k=0,1,\ldots;$

jv. $\|y^{*}-y_{k}\|\le\beta\big(1+\tfrac12\beta^{2}M\rho_{0}\big)c^{-1/2}\,\tfrac{d^{\,3^{k}}}{1-d^{2}},\quad k=0,1,\ldots$
Proof.

Denote by $H$ the following mapping:

(10) $H(y)=y-\Gamma_{y}F(y)-\tfrac12\Gamma_{y}F''(y)\big(\Gamma_{y}F(y)\big)^{2},$

where $\Gamma_{y}=[F'(y)]^{-1}$, so that method (8) reads $y_{k+1}=H(y_{k})$. Write, for short, $h=\Gamma_{y}F(y)$ and $w=\tfrac12\Gamma_{y}F''(y)h^{2}$, so that $H(y)-y=-h-w$.

It can be easily seen that for all $y\in S$ the following identity holds (the second order Taylor expansion being exact, since $F'''=\Theta_{3}$):

$F(H(y))=F(y)+F'(y)\big(H(y)-y\big)+\tfrac12F''(y)\big(H(y)-y\big)^{2},$

whence we obtain

(11) $F(H(y))=F''(y)(h,w)+\tfrac12F''(y)(w,w),$

or

(12) $\|F(H(y))\|\le M\|h\|\,\|w\|+\tfrac12M\|w\|^{2}.$

Similarly, by (10) and taking into account the notations we made, we get

(13) $\|H(y)-y\|\le\|h\|+\|w\|\le\beta\|F(y)\|\big(1+\tfrac12\beta^{2}M\|F(y)\|\big).$

Using the hypotheses of the theorem, the inequality (12) and the fact that $\|h\|\le\beta\|F(y)\|$, $\|w\|\le\tfrac12\beta^{3}M\|F(y)\|^{2}$, we obtain the following inequality, valid whenever $\|F(y)\|\le\rho_{0}$:

(14) $\|F(H(y))\|\le\beta^{4}M^{2}\Big(\tfrac12+\tfrac18\beta^{2}M\|F(y)\|\Big)\|F(y)\|^{3}\le c\,\|F(y)\|^{3}.$

Suppose now that the following properties hold:

a) $y_{i}\in S$, $i=0,1,\ldots,k$;

b) (15) $\rho_{i}=\|F(y_{i})\|\le c^{-1/2}d^{\,3^{i}}$, $i=0,1,\ldots,k$.

The inequalities b) and (14) lead us to

(16) $\rho_{k+1}\le c\rho_{k}^{3}\le c\cdot c^{-3/2}d^{\,3^{k+1}}=c^{-1/2}d^{\,3^{k+1}}.$

We have that

(17) $\|y_{k+1}-y_{0}\|\le\sum_{i=0}^{k}\|y_{i+1}-y_{i}\|\le\beta\big(1+\tfrac12\beta^{2}M\rho_{0}\big)c^{-1/2}\sum_{i=0}^{k}d^{\,3^{i}}\le\beta\rho_{0}\big(1+\tfrac12\beta^{2}M\rho_{0}\big)\tfrac{1}{1-d^{2}}\le r,$

so $y_{k+1}\in S$ and the induction may be continued. It follows that $(y_{k})$ is a Cauchy sequence, whence the properties j.–jv. are obtained by passing to the limit.
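The r-order three asserted in jjj. is precisely the effect of iterating the cubic recursion (16); in one line,

$\rho_{k+1}\le c\,\rho_{k}^{3}\ \Longrightarrow\ c^{1/2}\rho_{k+1}\le\big(c^{1/2}\rho_{k}\big)^{3}\ \Longrightarrow\ c^{1/2}\rho_{k}\le\big(c^{1/2}\rho_{0}\big)^{3^{k}}=d^{\,3^{k}}.$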
3. The approximation of the eigenvalues and eigenvectors of matrices
In the following we shall apply the previously studied method to the approximation of eigenvalues and eigenvectors of matrices with real or complex elements.
Let $n\in\mathbb{N}$ and the matrix $A=(a_{ij})_{i,j=1,\ldots,n}$, where $a_{ij}\in\mathbb{K}$, $i,j=1,\ldots,n$.

Using the above notations, we shall consider $X=\mathbb{K}^{n}$ and $Y=\mathbb{K}^{n}\times\mathbb{K}=\mathbb{K}^{n+1}$. Any solution of the equation

(19) $Ax-\lambda x=\theta,\qquad x_{s}=1,$

$s$ being fixed, where $1\le s\le n$ and $x=(x_{1},\ldots,x_{n})$, will lead us to an eigenvalue of $A$ and to a corresponding eigenvector. For the sake of simplicity denote $x_{n+1}=\lambda$ and $u=(x_{1},\ldots,x_{n},x_{n+1})\in\mathbb{K}^{n+1}$, so that equation (19) is represented by the system

(20) $f_{i}(u)=0,\qquad i=1,\ldots,n+1,$

where

(21) $f_{i}(u)=\sum_{j=1}^{n}a_{ij}x_{j}-x_{n+1}x_{i},\qquad i=1,\ldots,n,$

and

(22) $f_{n+1}(u)=x_{s}-1.$
Let $u\in\mathbb{K}^{n+1}$ and $f=(f_{1},\ldots,f_{n+1})$. Then the first order Fréchet derivative of the operator $f$ at $u$ has the following form

(23) $f'(u)=\begin{pmatrix} A-x_{n+1}I & -x\\ e_{s}^{\top} & 0 \end{pmatrix},$

where $I$ is the unit matrix of order $n$ and $e_{s}$ its $s$-th column. If we write $h=(h_{1},\ldots,h_{n+1})$, $v=(v_{1},\ldots,v_{n+1})\in\mathbb{K}^{n+1}$, then for the second order Fréchet derivative we get

(24) $f''(u)\,h\,v=-\big(h_{n+1}v_{1}+v_{n+1}h_{1},\ \ldots,\ h_{n+1}v_{n}+v_{n+1}h_{n},\ 0\big).$

Denote by $\Gamma_{k}$ the inverse of the matrix attached to the operator $f'(u_{k})$ and $v^{k}=\Gamma_{k}f(u_{k})$. Let $g^{k}=\tfrac12f''(u_{k})\,v^{k}v^{k}$. We obtain the following representation

(25) $g^{k}=-v^{k}_{n+1}\big(v^{k}_{1},\ \ldots,\ v^{k}_{n},\ 0\big).$

From this relation we can easily deduce the equalities

(26) $g^{k}_{i}=-v^{k}_{n+1}v^{k}_{i},\quad i=1,\ldots,n,\qquad g^{k}_{n+1}=0.$
Supposing that $u_{k}$ is an approximation of the solution of system (20), the next approximation given by method (8) is obtained by

(27) $u_{k+1}=u_{k}-v^{k}-\Gamma_{k}g^{k},\qquad k=0,1,\ldots,$

i.e. at each step we solve two linear systems with the same matrix: one with the right-hand side $f(u_{k})$ and one with the right-hand side $g^{k}$.
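As an illustration, here is a minimal numerical sketch of one step of iteration (27) (Python with NumPy; the normalization is $x_{s}=1$ as in (22), and the matrix `a` below is only a hypothetical example, not the one from Section 5):

```python
import numpy as np

def chebyshev_step(a, u, s=0):
    """One step of iteration (27) for the eigenpair system (20)-(22)."""
    n = a.shape[0]
    x, lam = u[:n], u[n]
    # residual f(u), cf. (21)-(22)
    f = np.empty(n + 1)
    f[:n] = a @ x - lam * x
    f[n] = x[s] - 1.0
    # Jacobian f'(u), cf. (23)
    J = np.zeros((n + 1, n + 1))
    J[:n, :n] = a - lam * np.eye(n)
    J[:n, n] = -x
    J[n, s] = 1.0
    v = np.linalg.solve(J, f)        # Newton correction
    g = np.zeros(n + 1)              # g = (1/2) f''(u) v v, cf. (25)
    g[:n] = -v[n] * v[:n]
    w = np.linalg.solve(J, g)        # second solve, same matrix
    return u - v - w

# hypothetical 2x2 example: eigenpairs (3, (1,1)) and (1, (1,-1))
a = np.array([[2.0, 1.0], [1.0, 2.0]])
u = np.array([1.0, 0.5, 2.5])        # (x_1, x_2, lambda)
for _ in range(5):
    u = chebyshev_step(a, u)
print(u)                              # should approach (1, 1, 3)
```

Dropping the term `w` gives exactly the Newton method discussed in Section 4; in a careful implementation both solves reuse a single Gauss elimination of `J`, which is the point of the operation counts below.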
Consider $\mathbb{K}^{n+1}$ with the norm of an element $u=(u_{1},\ldots,u_{n+1})$ given by the equality

(28) $\|u\|=\max_{1\le i\le n+1}|u_{i}|,$

and consequently, for a matrix $B=(b_{ij})_{i,j=1,\ldots,n+1}$,

(29) $\|B\|=\max_{1\le i\le n+1}\sum_{j=1}^{n+1}|b_{ij}|.$

It can be easily seen that $\|f''(u)\|\le2$ for all $u\in\mathbb{K}^{n+1}$. Let $u_{0}\in\mathbb{K}^{n+1}$ be an initial approximation of the solution of system (20). Consider a real number $r>0$ and the set $S=\{u\in\mathbb{K}^{n+1}:\|u-u_{0}\|\le r\}$. Denote by

$\beta=\sup_{u\in S}\|\Gamma_{u}\|,$

where $\Gamma_{u}$ is the inverse of the matrix attached to the operator $f'(u)$.
Taking into account the results obtained in Section 2, the following consequence holds:
Corollary 2.

If $u_{0}$ and $r$ are chosen such that the matrix attached to the operator $f'(u)$ is nonsingular for all $u\in S$, and the following inequalities hold:

$d=c^{1/2}\rho_{0}<1 \qquad\text{and}\qquad \beta\rho_{0}\big(1+\beta^{2}\rho_{0}\big)\tfrac{1}{1-d^{2}}\le r,$

where $\rho_{0}=\|f(u_{0})\|$ and $c=2\beta^{4}\big(1+\tfrac12\beta^{2}\rho_{0}\big)$ (the values given by (9) for $M=2$), then the following properties are true:

j. the sequence $(u_{k})_{k\ge0}$ generated by (27) is convergent;

jj. if $u^{*}=\lim_{k\to\infty}u_{k}$, then $f(u^{*})=0$, i.e. $\lambda^{*}=u^{*}_{n+1}$ is an eigenvalue of $A$ and $x^{*}=(u^{*}_{1},\ldots,u^{*}_{n})$ a corresponding eigenvector;

jjj. $\|f(u_{k})\|\le c^{-1/2}d^{\,3^{k}},\quad k=0,1,\ldots;$

jv. $\|u^{*}-u_{k}\|\le\beta\big(1+\beta^{2}\rho_{0}\big)c^{-1/2}\,\tfrac{d^{\,3^{k}}}{1-d^{2}},\quad k=0,1,\ldots$
Remark.

If the radius $r$ of the ball $S$ is given and there exists $\Gamma_{u_{0}}$, the inverse of the matrix attached to $f'(u_{0})$, with $\beta_{0}=\|\Gamma_{u_{0}}\|$ and $2\beta_{0}r<1$, then, since

$\|f'(u)-f'(u_{0})\|\le2\|u-u_{0}\|\le2r$

for all $u\in S$, the Banach lemma shows that $\Gamma_{u}$ exists for all $u\in S$, and in the above Corollary, taking into account the proof of Theorem 1, we can take

$\beta=\frac{\beta_{0}}{1-2\beta_{0}r}.$
4. A comparison to the Newton method
Note that if in (27) we neglect the term $\Gamma_{k}g^{k}$, then we get the Newton method:

$u_{k+1}=u_{k}-\Gamma_{k}f(u_{k}),\qquad k=0,1,\ldots$
In order to compare the Newton method and the Chebyshev method, we shall consider that the linear systems which appear in both methods are solved by the Gauss method.

While the Newton method requires at each step the solving of a single linear system, the Chebyshev method requires the solving of two linear systems with the same matrix, the right-hand side of the second one depending on the solution of the first. We shall therefore adapt the Gauss method in order to perform as few multiplication and division operations as possible. When comparing the two methods we shall neglect the number of additions and subtractions.

The solving of a given linear system $Bz=b$ (where we have denoted $B=(b_{ij})_{i,j=1,\ldots,n}$, $b,z\in\mathbb{K}^{n}$) using the Gauss method consists in two stages. In the first stage, the given system is transformed into an equivalent one whose matrix of coefficients is upper triangular. In the second stage the unknowns are determined by backward substitution.
The first stage. There are performed $n-1$ steps, at each one annihilating the elements of the current column lying below the main diagonal.

We write the initial system in the form

$B^{(1)}z=b^{(1)},\qquad B^{(1)}=\big(b^{(1)}_{ij}\big)_{i,j=1,\ldots,n},\quad b^{(1)}=\big(b^{(1)}_{1},\ldots,b^{(1)}_{n}\big).$

Suppose that $b^{(1)}_{11}\neq0$, this element being called the first pivot. The first line of the system is multiplied by $m_{i1}=-b^{(1)}_{i1}/b^{(1)}_{11}$ and is added to the $i$-th one, $i=2,\ldots,n$, which becomes, after performing $n+1$ multiplication or division operations,

$b^{(2)}_{ij}=b^{(1)}_{ij}+m_{i1}b^{(1)}_{1j},\quad j=2,\ldots,n,\qquad b^{(2)}_{i}=b^{(1)}_{i}+m_{i1}b^{(1)}_{1}.$

After such transformations the system becomes

$B^{(2)}z=b^{(2)},$

with zero elements below the main diagonal in the first column. Hence at the first step there were performed $(n-1)(n+1)$ operations.

In the same manner, at the $k$-th step we have the system

$B^{(k)}z=b^{(k)},$

whose first $k-1$ columns have zero elements below the main diagonal. Supposing $b^{(k)}_{kk}\neq0$ (the $k$-th pivot) and performing $(n-k)(n-k+2)$ operations we get

$b^{(k+1)}_{ij}=b^{(k)}_{ij}+m_{ik}b^{(k)}_{kj},\qquad b^{(k+1)}_{i}=b^{(k)}_{i}+m_{ik}b^{(k)}_{k},\qquad m_{ik}=-b^{(k)}_{ik}/b^{(k)}_{kk},\quad i,j=k+1,\ldots,n.$

At each step $k$, the elements below the $k$-th pivot vanish, so they are not needed any more in the solving of the system. The corresponding memory locations are used for keeping the coefficients

$m_{ik}=-b^{(k)}_{ik}/b^{(k)}_{kk},\qquad i=k+1,\ldots,n,$

which, of course, will be needed only for solving another system $Bz'=g$, with $g$ depending on $z$, the solution of $Bz=b$.
At the first stage there are performed

$\sum_{k=1}^{n-1}(n-k)(n-k+2)=\frac{n^{3}}{3}+\frac{n^{2}}{2}-\frac{5n}{6}$

multiplication and division operations.

The second stage. Given the upper triangular system

$B^{(n)}z=b^{(n)},$

the solution is computed in the following way:

$z_{n}=b^{(n)}_{n}/b^{(n)}_{nn},\qquad z_{i}=\Big(b^{(n)}_{i}-\sum_{j=i+1}^{n}b^{(n)}_{ij}z_{j}\Big)\Big/b^{(n)}_{ii},\quad i=n-1,\ldots,1.$

At this stage there are performed $\frac{n(n+1)}{2}$ operations. In both stages, there are totally performed

$\omega_{1}(n)=\frac{n^{3}}{3}+n^{2}-\frac{n}{3}$

multiplication and division operations.
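As an elementary check of the total count:

$\sum_{k=1}^{n-1}(n-k)(n-k+2)+\frac{n(n+1)}{2}=\sum_{j=1}^{n-1}\big(j^{2}+2j\big)+\frac{n(n+1)}{2}=\frac{(n-1)n(2n-1)}{6}+n(n-1)+\frac{n(n+1)}{2}=\frac{n^{3}}{3}+n^{2}-\frac{n}{3}.$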
In the case when we also solve the system $Bz'=g$, where the vector $g$ depends on the solution $z$ of $Bz=b$, we first apply the Gauss method to the system $Bz=b$ and at the first stage we keep below the main diagonal the coefficients $m_{ik}$ by which the pivots were multiplied.

Then we apply to the vector $g$ the transformations performed on the vector $b$ when solving $Bz=b$.

Denote $g^{(1)}=g$.

At the first step

$g^{(2)}_{i}=g^{(1)}_{i}+m_{i1}g^{(1)}_{1},\qquad i=2,\ldots,n.$

At the $k$-th step

$g^{(k+1)}_{i}=g^{(k)}_{i}+m_{ik}g^{(k)}_{k},\qquad i=k+1,\ldots,n.$

At the $(n-1)$-th step

$g^{(n)}_{n}=g^{(n-1)}_{n}+m_{n,n-1}g^{(n-1)}_{n-1}.$

There were performed $\frac{n(n-1)}{2}$ operations.

Now the second stage of the Gauss method is applied to the system $B^{(n)}z'=g^{(n)}$.

In addition to the case of a single linear system, in this case there were performed $\frac{n(n-1)}{2}+\frac{n(n+1)}{2}=n^{2}$ operations, getting

$\omega_{1}(n)+n^{2}=\frac{n^{3}}{3}+2n^{2}-\frac{n}{3}$

operations, and taking into account that the right-hand side $g$ is computed by (25) with $n$ multiplications, we add $n$ more operations, obtaining

$\omega_{2}(n)=\frac{n^{3}}{3}+2n^{2}+\frac{2n}{3}.$
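For instance, with these counts, for systems of order $n=10$:

$\omega_{1}(10)=\frac{1000}{3}+100-\frac{10}{3}=430,\qquad \omega_{2}(10)=\frac{1000}{3}+200+\frac{20}{3}=540,$

so at this size the second system adds about a quarter to the cost of one step, a proportion that decreases as $n$ grows, the dominant term $n^{3}/3$ being common to both.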
Remark. At the first stage, if for some $k$ we have $b^{(k)}_{kk}=0$, then an element $b^{(k)}_{lk}\ne0$, $l>k$, must be found, and the lines $k$ and $l$ in $B^{(k)}$ and $b^{(k)}$ must be swapped.

In order to avoid the accumulation of errors, a partial or total pivoting strategy is recommended even if $b^{(k)}_{kk}\ne0$ for all $k$.

For partial pivoting, the pivot at the $k$-th step is chosen such that

$\big|b^{(k)}_{lk}\big|=\max_{k\le i\le n}\big|b^{(k)}_{ik}\big|.$

The interchange of lines can be avoided by using a permutation vector $\pi$, which is first initialized so that $\pi_{i}=i$, $i=1,\ldots,n$. The elements in $B$ and $b$ are then referred to as $b_{\pi_{i}j}$ and $b_{\pi_{i}}$, and swapping the lines $k$ and $l$ is done by swapping the $k$-th and $l$-th elements of $\pi$.

For the Chebyshev method, the use of the vector $\pi$ cannot be avoided by the actual interchanging of the lines, because we must keep track of the permutations made, in order to apply them in the same order to the vector $g$.

Moreover, we need two extra vectors $t$ and $g$. In $t$ we store the transpositions applied to the lines of $B$, which are then successively applied to $g$. At the first stage of the Gauss method, when the $k$-th pivot is $b^{(k)}_{lk}$ and $l\ne k$, the $k$-th and $l$-th elements in $\pi$ are swapped, and we assign $t_{k}=l$ to indicate that at the $k$-th step we applied the transposition $(k,l)$.

After computing the solution $z$ of $Bz=b$, we initialize the vector $g$ by (25), the permutation vector $\pi$ by $\pi_{i}=i$, and then we successively apply to $g$ the transforms operated on $b$, taking into account the eventual transpositions.
The algorithm for solving the second linear system in the Chebyshev method is as follows.
for k := 1 to n-1 do
begin
  if t_k <> 0
  then {at the k-th step the transposition}
  begin {(k, t_k) has been applied to B}
    aux := g_k; g_k := g_{t_k}; g_{t_k} := aux
  end;
  for i := k+1 to n do g_i := g_i + m_{ik} * g_k
end;
for i := n downto 1 do {the solution is now computed}
begin
  for j := i+1 to n do g_i := g_i - b_{ij} * z'_j; {now z'_j, j > i, are known}
  z'_i := g_i / b_{ii}
end.
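For readers who prefer an executable form, here is a compact Python rendering of the same algorithm (a sketch under the storage convention above: the eliminated matrix keeps the multipliers $m_{ik}$ below the diagonal, and $t_{k}=0$ means that no transposition was recorded at step $k$):

```python
import numpy as np

def solve_second_system(LU, t, g):
    """Solve the second system B z' = g by reusing the elimination of B.

    LU : eliminated matrix; on and above the diagonal the triangular
         factor B^(n), below it the stored multipliers m_ik (already
         negated, i.e. rows were updated as row_i += m_ik * row_k).
    t  : t[k] = l if lines k and l were swapped at step k, else 0
         (swaps always satisfy l > k, so 0 is unambiguous).
    g  : right-hand side; left unmodified, a copy is transformed.
    """
    n = LU.shape[0]
    g = g.astype(float).copy()
    # first stage: replay the transpositions and eliminations on g
    for k in range(n - 1):
        if t[k]:
            g[k], g[t[k]] = g[t[k]], g[k]
        g[k + 1:] += LU[k + 1:, k] * g[k]
    # second stage: back substitution with the triangular part
    z = np.zeros(n)
    for i in range(n - 1, -1, -1):
        z[i] = (g[i] - LU[i, i + 1:] @ z[i + 1:]) / LU[i, i]
    return z
```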
We adopt as the efficiency measure of an iterative method the number

$E=p^{1/\omega},$

where $p$ is the convergence order and $\omega$ is the number of operations needed at each step.

We obtain

$E_{N}=2^{1/\omega_{N}},\qquad \omega_{N}=\omega_{1}(n+1),$

for the Newton method and

$E_{C}=3^{1/\omega_{C}},\qquad \omega_{C}=\omega_{2}(n+1),$

for the Chebyshev method, the linear systems solved at each step being of order $n+1$.

It can be easily seen that we have $E_{C}>E_{N}$ for $n\ge3$, i.e. the Chebyshev method is more efficient than the Newton method.
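A quick numerical check of this comparison (a sketch; it uses the per-step operation counts $\omega_{1}$, $\omega_{2}$ derived above and ignores the cost of evaluating $f$, which is common to both methods):

```python
def omega1(n):   # one linear system of order n (Gauss, mult/div count)
    return n**3 / 3 + n**2 - n / 3

def omega2(n):   # two systems with the same matrix, plus the rhs (25)
    return n**3 / 3 + 2 * n**2 + 2 * n / 3

for n in (2, 3, 4, 8, 16):
    m = n + 1                        # the eigenpair systems have order n+1
    e_newton = 2 ** (1 / omega1(m))
    e_chebyshev = 3 ** (1 / omega2(m))
    print(n, round(e_newton, 6), round(e_chebyshev, 6), e_chebyshev > e_newton)
```

With these counts the Chebyshev method overtakes the Newton method from $n=3$ on, and the gap between the two indices closes toward parity as $n$ grows.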
5. Numerical example
Consider a real matrix $A$ of order $4$ having the eigenvalue $\lambda^{*}=-2$ with corresponding eigenvector $x^{*}=(1,-1,-1,-1)^{\top}$ (the normalization being $x_{1}=1$, i.e. $s=1$).

Taking the initial value $u_{0}=(1.0,\,-1.5,\,-2.0,\,-1.5,\,-1.0)$ and applying the two methods we obtain the following results:
Newton method
k | $x_1$ | $x_2$ | $x_3$ | $x_4$ | $\lambda$
---|---|---|---|---|---
0 | 1.0 | -1.500 000 000 0 | -2.000 000 000 0 | -1.500 000 000 0 | -1.000 000 000 0
1 | 1.0 | -0.900 000 000 0 | -0.800 000 000 00 | -0.900 000 000 00 | -1.600 000 000 0
2 | 1.0 | -1.012 500 000 0 | -1.025 000 000 0 | -1.012 500 000 0 | -2.050 000 000 0
3 | 1.0 | -1.000 152 439 0 | -1.000 304 878 0 | -1.000 152 439 0 | -2.000 609 756 1
4 | 1.0 | -1.000 000 023 2 | -1.000 000 046 5 | -1.000 000 023 2 | -2.000 000 092 9
5 | 1.0 | -1.000 000 000 0 | -1.000 000 000 0 | -1.000 000 000 0 | -2.000 000 000 0
Chebyshev method
k | $x_1$ | $x_2$ | $x_3$ | $x_4$ | $\lambda$
---|---|---|---|---|---
0 | 1.0 | -1.500 000 000 0 | -2.000 000 000 0 | -1.500 000 000 0 | -1.000 000 000 0
1 | 1.0 | -0.972 000 000 00 | -0.944 000 000 00 | -0.972 000 000 00 | -1.888 000 000 0
2 | 1.0 | -0.999 950 001 89 | -0.999 900 003 77 | -0.999 950 001 89 | -1.999 800 007 5
3 | 1.0 | -1.000 000 000 0 | -1.000 000 000 0 | -1.000 000 000 0 | -2.000 000 000 0
References

[1] M.P. Anselone, L.B. Rall, The solution of characteristic value-vector problems by Newton method, Numer. Math. 11 (1968), pp. 38–45.

[2] P.G. Ciarlet, Introduction à l'analyse numérique matricielle et à l'optimisation, Masson, Paris, 1990.

[3] F. Chatelin, Valeurs propres de matrices, Masson, Paris, 1988.

[4] L. Collatz, Functionalanalysis und Numerische Mathematik, Springer-Verlag, Berlin, 1964.

[5] V.S. Kartîşov, F.L. Iuhno, O nekotorîh Modifikaţiah Metoda Niutona dlea Resenia Nelineinoi Spektralnoi Zadaci, J. Vîcisl. matem. i matem. fiz. 33 (1973) no. 9, pp. 1403–1409.

[6] I. Păvăloiu, Sur les procédés itératifs à un ordre élevé de convergence, Mathematica (Cluj) 12 (35) (1970) no. 2, pp. 309–324.

[7] J.F. Traub, Iterative Methods for the Solution of Equations, Prentice-Hall Inc., Englewood Cliffs, N.J., 1964.
Received March 7, 1996.