Some numerical aspects in the approximation of eigenpairs of matrices by the Newton method

Abstract

The eigenpairs of matrices may be approximated using the Newton method. The nonlinear mapping in such a case may be chosen in different ways, depending on the norming function. We investigate the difficulties of solving the linear systems from such iterations by evaluating the condition number at the solution for each of the resulting Jacobians in the case of two test matrices.

Authors

Emil Cătinaş, Tiberiu Popoviciu Institute of Numerical Analysis, Romanian Academy

Ion Păvăloiu, Tiberiu Popoviciu Institute of Numerical Analysis, Romanian Academy


Paper coordinates

E. Cătinaş, I. Păvăloiu, Some numerical aspects in the approximation of eigenpairs of matrices by the Newton method, Acta Technica Napocensis, Series: Applied Mathematics and Mechanics, 42 (1999) vol. 1, pp. 41-44.


Paper (preprint) in HTML form


SOME NUMERICAL ASPECTS IN THE APPROXIMATION OF EIGENPAIRS OF MATRICES BY THE NEWTON METHOD

E. CĂTINAŞ, I. PĂVĂLOIU

Abstract. The eigenpairs of matrices may be approximated using the Newton method. The nonlinear mapping in such a case may be chosen in different ways, depending on the norming function. We investigate the difficulties of solving the linear systems from such iterations by evaluating the condition number at the solution for each of the resulting Jacobians in the case of two test matrices.

1. THE EIGENPAIRS OF MATRICES AS ZEROES OF NONLINEAR MAPPINGS

Denote $V=\mathbb{K}^n$ and let $A=(a_{ij})\in\mathbb{K}^{n\times n}$, where $\mathbb{K}=\mathbb{R}$ or $\mathbb{C}$. We recall that the scalar $\lambda\in\mathbb{K}$ is an eigenvalue of $A$ if there exists $v\in V$, $v\neq 0$, such that

$$Av-\lambda v=0. \tag{1}$$

The vector $v$ is called an eigenvector corresponding to the eigenvalue $\lambda$. Since for an eigenvalue $\lambda$ the eigenpair $(v,\lambda)$ is not uniquely determined, it is necessary to impose a supplementary condition. As in [9] we shall consider a "norming" function $G:V\to\mathbb{K}$, $G(0)\neq 1$, and besides (1) the equation

$$G(v)-1=0.$$

The function $G$ may be chosen in different ways:

$$G(v)=v^{(i_0)}, \quad \text{for some fixed } i_0\in\{1,\ldots,n\}, \tag{I}$$
$$G(v)=\tfrac{1}{2}\|v\|_2^2, \tag{II}$$

where $\|\cdot\|_2$ stands for the Euclidean norm.

In the paper [2] we proposed the choice

$$G(v)=\tfrac{1}{2n}\|v\|_2^2. \tag{III}$$

Considering $\mathbb{K}^{n+1}=V\times\mathbb{K}$ endowed with the max norm $\|\cdot\|_\infty$, we are led to the nonlinear mapping $F:\mathbb{K}^{n+1}\to\mathbb{K}^{n+1}$,

$$F(x)=\begin{pmatrix} Av-\lambda v\\ G(v)-1 \end{pmatrix}, \qquad x=\begin{pmatrix} v\\ \lambda \end{pmatrix}\in\mathbb{K}^{n+1}.$$

Its first $n$ components are

$$F_1(x)=(a_{11}-x^{(n+1)})x^{(1)}+a_{12}x^{(2)}+\cdots+a_{1n}x^{(n)},$$
$$\vdots$$
$$F_n(x)=a_{n1}x^{(1)}+a_{n2}x^{(2)}+\cdots+(a_{nn}-x^{(n+1)})x^{(n)}.$$

The last component is determined by the choice of $G$. We get for each corresponding case:

$$F_{n+1}(x)=x^{(i_0)}-1, \tag{I}$$
$$F_{n+1}(x)=\tfrac{1}{2}(x^{(1)})^2+\tfrac{1}{2}(x^{(2)})^2+\cdots+\tfrac{1}{2}(x^{(n)})^2-1, \tag{II}$$
$$F_{n+1}(x)=\tfrac{1}{2n}(x^{(1)})^2+\tfrac{1}{2n}(x^{(2)})^2+\cdots+\tfrac{1}{2n}(x^{(n)})^2-1. \tag{III}$$

It can be seen now that the eigenpairs of $A$ may be regarded as the solutions of the nonlinear system

$$F(x)=0.$$
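Since the computations in Section 3 were carried out in Matlab, a minimal Matlab sketch of the evaluation of $F$ may help fix the notation; this is not the authors' code, the function name eigF and the argument choice are illustrative, and only the real case is covered.

    function Fx = eigF(A, x, choice, i0)
    % Evaluate F(x) for x = [v; lambda] and the norming choice 'I', 'II' or 'III';
    % i0 is used only for choice 'I'.  Real case only.
      n   = size(A, 1);
      v   = x(1:n);
      lam = x(n+1);
      if     strcmp(choice, 'I'),   g = v(i0) - 1;            % G(v) = v(i0)
      elseif strcmp(choice, 'II'),  g = sum(v.^2)/2 - 1;      % G(v) = ||v||_2^2 / 2
      else                          g = sum(v.^2)/(2*n) - 1;  % G(v) = ||v||_2^2 / (2n)
      end
      Fx = [A*v - lam*v; g];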

For solving such a problem we may use the Newton method, which requires the derivative of $F$. It is known that the first and the second order derivatives of $F$ are given by

$$F'(x_0)h=\begin{pmatrix} A-\lambda_0 I_n & -v_0\\ e_{i_0}^t & 0\end{pmatrix}h, \qquad F''(x_0)hk=\begin{pmatrix} -\alpha w-\beta u\\ 0\end{pmatrix}, \tag{I}$$
$$F'(x_0)h=\begin{pmatrix} A-\lambda_0 I_n & -v_0\\ v_0^t & 0\end{pmatrix}h, \qquad F''(x_0)hk=\begin{pmatrix} -\alpha w-\beta u\\ w^t u\end{pmatrix}, \tag{II}$$
$$F'(x_0)h=\begin{pmatrix} A-\lambda_0 I_n & -v_0\\ \tfrac{1}{n}v_0^t & 0\end{pmatrix}h, \qquad F''(x_0)hk=\begin{pmatrix} -\alpha w-\beta u\\ \tfrac{1}{n}w^t u\end{pmatrix}, \tag{III}$$

for all $x_0=\begin{pmatrix} v_0\\ \lambda_0\end{pmatrix}$, $h=\begin{pmatrix} u\\ \alpha\end{pmatrix}$, $k=\begin{pmatrix} w\\ \beta\end{pmatrix}\in\mathbb{K}^{n+1}$, where $e_i$ is the $i$-th vector from the canonical basis.
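The Jacobians above translate directly into code; the following Matlab sketch (again illustrative, real case, with the assumed name eigFprime) assembles $F'(x)$ for a given norming choice.

    function J = eigFprime(A, x, choice, i0)
    % Jacobian F'(x) at x = [v; lambda] for the norming choice 'I', 'II' or 'III';
    % i0 is used only for choice 'I'.  Real case only.
      n   = size(A, 1);
      v   = x(1:n);
      lam = x(n+1);
      if     strcmp(choice, 'I'),   last = [zeros(1,i0-1), 1, zeros(1,n-i0)];  % e_{i0}^t
      elseif strcmp(choice, 'II'),  last = v';                                 % v^t
      else                          last = v'/n;                               % (1/n) v^t
      end
      J = [A - lam*eye(n), -v; last, 0];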

Some direct computations show that the norms of the second derivatives of $F$ are given by

$$\|F''\|=2, \tag{I}$$
$$\|F''\|=n, \tag{II}$$
$$\|F''\|=2, \tag{III}$$

regardless of $x\in\mathbb{K}^{n+1}$, since $F$ is a polynomial of degree 2.
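For instance, for choice III the value can be checked directly in the max norm (a verification sketch based on the formulas above):

$$\|F''(x)hk\|_\infty=\max\Big\{\max_{1\le i\le n}|{-\alpha w^{(i)}-\beta u^{(i)}}|,\ \tfrac{1}{n}|w^t u|\Big\}\le\max\{2,1\}\,\|h\|_\infty\|k\|_\infty=2\,\|h\|_\infty\|k\|_\infty,$$

with equality attained, e.g., for $u=w=e_1$, $\alpha=\beta=1$; the same argument gives the value $n$ for choice II, where the last component $w^t u$ can reach $n$ for $u=w=(1,\ldots,1)^t$.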

Different Newton-type methods for solving such nonlinear systems were studied in several papers [9], [6], [12], [14], [1], [2] and [3]. It is interesting to note that the Rayleigh quotient method is equivalent to a certain Newton method ("the scaled Newton method" [7]).

2. THE ROLE OF CERTAIN QUANTITIES IN THE CONVERGENCE OF THE NEWTON METHODS

We are interested in how the different quantities specific to a nonlinear mapping influence the convergence of the Newton iterates.

The following result was obtained by Deuflhard and Potra.

Theorem 1 [5]. Let $F:D\subset\mathbb{K}^{n+1}\to\mathbb{K}^{n+1}$ be a nonlinear operator defined on the convex domain $D$. Suppose that $F$ is Fréchet differentiable on $D$, that $F'(x)$ is invertible for all $x\in D$, and that

$$\|F'(x)^{-1}\big(F'(x+t(y-x))-F'(x)\big)(y-x)\|\le t\,\omega\,\|y-x\|^2,$$

for all $x,y\in D$, $t\in[0,1]$. Moreover, assume that $x^*\in D$ is a solution of the nonlinear system $F(x)=0$. Let $x_0\in D$ be such that $\bar{B}_{\|x_0-x^*\|}(x^*)=\{x\in\mathbb{K}^{n+1}:\|x-x^*\|\le\|x_0-x^*\|\}\subset D$ and $x_0\in B_{2/\omega}(x^*)$. Then the Newton iterates

$$F'(x_k)s_k=-F(x_k),$$
$$x_{k+1}=x_k+s_k,\qquad k=0,1,\ldots,$$

remain in the open ball $B_{\|x_0-x^*\|}(x^*)$, converge to $x^*$ and satisfy

$$\|x_{k+1}-x^*\|\le\tfrac{\omega}{2}\,\|x_k-x^*\|^2,\qquad k=0,1,\ldots$$
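A minimal Matlab sketch of the Newton iterates of Theorem 1 for the eigenproblem of Section 1, assuming the illustrative functions eigF and eigFprime given there (the name newton_eigpair and the arguments tol, kmax are ours):

    function x = newton_eigpair(A, x0, choice, i0, tol, kmax)
    % Newton iterates F'(x_k) s_k = -F(x_k), x_{k+1} = x_k + s_k,
    % for F defined by A and the norming choice; x0 = [v0; lambda0].
      x = x0;
      for k = 1:kmax
        s = -eigFprime(A, x, choice, i0) \ eigF(A, x, choice, i0);  % Newton step
        x = x + s;
        if norm(s, inf) <= tol, break, end
      end

For example, x = newton_eigpair(A, [v0; lam0], 'III', 1, 1e-10, 20) would refine an approximate eigenpair $(v_0,\lambda_0)$.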

Similar but somewhat weaker results were obtained by Rheinboldt [10], Dennis and Schnabel [4] and by Ortega and Rheinboldt [8].

Remark. If $\|F'(x)^{-1}\|\le\beta$ and $\|F''(x)\|\le K$ for all $x\in D$, one can take $\omega=\beta K$.

As explicitly noted by Dennis and Schnabel [4], the smaller $\beta$ and $K$ are, the faster the convergence of the iterates. On the other hand, as $\beta$ and $K$ decrease, the radius of the attraction ball becomes larger.

We see that in the case of large matrices, choice III yields a much smaller constant $K$ than choice II (choice I also yields a small $K$, but its use may not be theoretically safe [2]). Of course, when taking the norming function III the constant $\beta$ may grow compared to the corresponding one for II, but we do not analyse this aspect here.

There is another important aspect in the practical use of the Newton method. When the iterates converge, the matrices $F'(x_k)$ tend to $F'(x^*)$. The condition number of a nonsingular matrix $A$, defined as $\kappa(A)=\|A\|\,\|A^{-1}\|$, is a measure of the sensitivity of a linear system $Ax=b$ to perturbations. The larger $\kappa(A)$ is, the more inaccurate the results obtained by direct methods may be.
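The quantity studied in the next section, $\kappa(F'(x^*))$ at the computed eigenpairs, can be evaluated for instance as follows; this is only a sketch of the experiment, not the authors' program: mmread.m is the Matrix Market Matlab reader, the file name is a placeholder, and the infinity norm is used in cond since the paper does not state which norm was taken.

    A = full(mmread('pores_1.mtx'));            % Matrix Market reader mmread.m
    n = size(A, 1);
    [V, D]   = eig(A);
    lam      = diag(D);
    idx      = find(abs(imag(lam)) < 1e-12);    % keep the real eigenvalues (simple tolerance)
    kappaII  = zeros(length(idx), 1);
    kappaIII = zeros(length(idx), 1);
    for j = 1:length(idx)
      v = real(V(:, idx(j)));
      l = real(lam(idx(j)));
      vII  = sqrt(2)   * v / norm(v);           % scaled so that (1/2)||v||_2^2    = 1
      vIII = sqrt(2*n) * v / norm(v);           % scaled so that (1/(2n))||v||_2^2 = 1
      kappaII(j)  = cond([A - l*eye(n), -vII;  vII',    0], inf);   % choice II
      kappaIII(j) = cond([A - l*eye(n), -vIII; vIII'/n, 0], inf);   % choice III
    end
    semilogy(1:length(idx), kappaII, 'o-', 1:length(idx), kappaIII, '+-')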

The Newton methods for eigenproblems were also considered in [13], in the context of preconditioning techniques.

The following section will be devoted to the numerical study of the condition numbers of two test matrices.

3. NUMERICAL ASPECTS

We shall consider two test matrices (these matrices are available from MatrixMarket at the following address: http://math.nist.gov/MatrixMarket/) in order to study the condition numbers. The programs were written in Matlab 5.0 and were run on a PC.

Pores1 matrix. This matrix arises from oil reservoir simulation. It is real, unsymmetric, of dimension 30, and has 20 real eigenvalues. We have plotted in Fig. 1 the evaluations of the condition numbers of $F'$ at the real solutions (the eigenpairs were computed by Matlab).

Fig. 1. Condition numbers of $F'(x^*)$ at the real eigenpairs of the Pores1 matrix.

One can see that in this case the condition numbers are very close. At the ninth eigenvalue, $F'(x^*)$ is slightly better conditioned with choice III than with choice II.

Fidap002 matrix. This real symmetric matrix of dimension $n=441$ arises from finite element modeling. Its eigenvalues are all simple and range from $7\cdot10^{-8}$ to $3\cdot10^{6}$. Figure 2 contains the plot of the difference between the condition numbers computed with choices II and III.

Fig. 2. Difference between the condition numbers of $F'(x^*)$ computed with choices II and III for the Fidap002 matrix.

One can see that in this case the differences between the two condition numbers are on the whole small, but for a few eigenpairs they are large, in favour of choice II. However, we cannot conclude that either choice yields better overall condition numbers.

REFERENCES

1. E. Cătinaş and I. Păvăloiu, On the Chebyshev Method for Approximating the Eigenvalues of Linear Operators, Rev. Anal. Numér. Théor. Approx., 25 (1996) no. 1-2, pp. 43-56.

2. E. Cătinaş and I. Păvăloiu, On a Chebyshev-Type Method for Approximating the Solutions of Polynomial Operator Equations of Degree 2, Approximation and Optimization. Proceedings of the International Conference on Approximation and Optimization, Romania (ICAOR), Cluj-Napoca, July 29 - August 1, 1996, vol. I, pp. 219-226.

3. E. Cătinaş and I. Păvăloiu, On Approximating the Eigenvalues and Eigenvectors of Linear Continuous Operators, Rev. Anal. Numér. Théor. Approx., 26 (1997) no. 1-2, pp. 19-27.

4. J.E. Dennis, Jr. and R.B. Schnabel, Numerical Methods for Unconstrained Optimization and Nonlinear Equations, Prentice-Hall Series in Computational Mathematics, Englewood Cliffs, NJ, 1983.

5. P. Deuflhard and F.A. Potra, Asymptotic Mesh Independence of Newton-Galerkin Methods via a Refined Mysovskii Theorem, SIAM J. Numer. Anal., 29 (1992) no.5, pp. 1395-1412.

6. J.J. Dongarra, C.B. Moler and J.H. Wilkinson, Improving the Accuracy of the Computed Eigenvalues and Eigenvectors, SIAM J. Numer. Anal., 20 (1983) no. 1, pp. 23-45.

7. S.M. Grzegórski, On the Scaled Newton Method for the Symmetric Eigenvalue Problem, Computing, 45 (1990), pp. 277-282.

8. J.M. Ortega and W.C. Rheinboldt, Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, New York, 1970.

9. G. Peters and J.H. Wilkinson, Inverse Iteration, Ill-Conditioned Equations and Newton’s Method, SIAM Review, 21 (1979) no. 3, pp. 339-360.

10. W.C. Rheinboldt, An Adaptive Continuation Process for Solving Systems of Nonlinear Equations, Polish Academy of Science, Banach Ctr. Publ., 3 (1977), pp. 129-142.

11. M.C. Santos, A Note on the Newton Iteration for the Algebraic Eigenvalue Problem, SIAM J. Matrix Anal. Appl., 9 (1988) no. 4, pp. 561-569.

12. R.A. Tapia and L.D. Whitley, The Projected Newton Method has Order $1+\sqrt{2}$ for the Symmetric Eigenvalue Problem, SIAM J. Numer. Anal., 25 (1988) no. 6, pp. 1376-1382.

13. K. Wu, Y. Saad and A. Stathopoulos, Inexact Newton Preconditioning Techniques for Eigenvalue Problems, Lawrence Berkeley National Laboratory report number 41382 and Minnesota Supercomputing Institute report number UMSI 98-10, 1998.

14. T. Yamamoto, Error Bounds for Computed Eigenvalues and Eigenvectors, Numer. Math. 34 (1980), pp. 189-199.

Numerical aspects in the approximation of the eigenvalues and eigenvectors of matrices by Newton's method

The eigenvalues and eigenvectors of matrices may be approximated by Newton's method. The nonlinear mapping in this case may be chosen in several ways, depending on the norming function. In the present note we investigate the difficulties of solving the linear systems arising in such iterations by evaluating the condition number at the solution for each of the resulting Jacobians in the case of some test matrices.

Senior Researcher III Emil Cătinaş, "T. Popoviciu" Institute of Numerical Analysis, P.O. Box 68, 3400 Cluj-Napoca, Romania (ecatinas@ictp.math.ubbcluj.ro)

Senior Researcher I Ion Păvăloiu, "T. Popoviciu" Institute of Numerical Analysis, P.O. Box 68, 3400 Cluj-Napoca, Romania (pavaloiu@ictp.math.ubbcluj.ro)

1999
