Abstract

We present a semilocal convergence result for a Newton-type method applied to a polynomial operator equation of degree 2.

The method consists, in fact, in evaluating the Jacobian only at every second step, and it has r-convergence order at least 3. We apply the method in order to approximate the eigenpairs of matrices.

We perform some numerical examples on test matrices and compare the method with the Chebyshev method. The norming function we have proposed in a previous paper yields better convergence of the iterates than the classical norming function for both methods.

Authors

Emil Cătinaş
(Tiberiu Popoviciu Institute of Numerical Analysis)

Ion Păvăloiu
(Tiberiu Popoviciu Institute of Numerical Analysis)

Keywords

nonlinear equations; abstract polynomial equations of degree 2; r-convergence order.

Cite this paper as:

E. Cătinaş, I. Păvăloiu, On a third order iterative method for solving polynomial operator equations, Rev. Anal. Numér. Théor. Approx., 31 (2002) no. 1, pp. 21-28. https://doi.org/10.33993/jnaat311-705

PDF

PDF-LaTeX file (on the journal website).

About this paper

Print ISSN

1222-9024

Online ISSN

2457-8126



Paper (preprint) in HTML form

ON A THIRD ORDER ITERATIVE METHOD FOR SOLVING POLYNOMIAL OPERATOR EQUATIONS∗


Emil Cătinaş and Ion Păvăloiu
(Date: November 9, 2001.)
Abstract.

We present a semilocal convergence result for a Newton-type method applied to a polynomial operator equation of degree 2. The method consists, in fact, in evaluating the Jacobian only at every second step, and it has r-convergence order at least 3.

We apply the method in order to approximate the eigenpairs of matrices. We perform some numerical examples on test matrices and compare the method with the Chebyshev method. The norming function we have proposed in a previous paper yields better convergence of the iterates than the classical norming function for both methods.

This work was supported by the Romanian Academy under grant GAR 45/2002.
“T. Popoviciu” Institute of Numerical Analysis, P.O. Box 68-1, 3400 Cluj-Napoca, Romania ({ecatinas,pavaloiu}@ictp.acad.ro).
AMS Subject Classification

65H10.

Keywords

two-step Newton method, Chebyshev method, eigenpair problems.

1. Introduction

Let $F:X\to X$ be a nonlinear mapping, where $(X,\|\cdot\|)$ is a Banach space, and consider the equation

(1) $F(x)=0.$

We shall assume that $F$ is a polynomial operator of degree 2, i.e., it is indefinitely differentiable on $X$, with $F^{(i)}(x)=\theta_i$ for all $x\in X$ and $i\geq 3$, where $\theta_i$ is the null $i$-linear operator.

Besides (1) we shall also consider another equation, equivalent to it:

(2) $x-G(x)=0,$

where $G:X\to X$. More precisely, we shall assume that the solutions of (1) coincide with the solutions of (2) and vice versa.

In [15] it was shown that the iterations

(3) $x_{k+1}=G(x_k)-F'(x_k)^{-1}F\bigl(G(x_k)\bigr),\qquad k=0,1,\ldots,\quad x_0\in X,$

have convergence order one unit higher than the convergence order of the iterates $x_{k+1}=G(x_k)$.

Obviously, if we take as $G$ the Newton operator, i.e.,

(4) $G(x)=x-F'(x)^{-1}F(x),$

then we obtain a method with convergence order at least 3.

In the present paper we shall study the convergence of the iterations (3), with G given by (4), i.e.,

(5) $x_{k+1}=x_k-F'(x_k)^{-1}F(x_k)-F'(x_k)^{-1}F\bigl(x_k-F'(x_k)^{-1}F(x_k)\bigr),$

in order to solve the polynomial equation (1).

By (5), for a known approximation $x_k$, the next approximation $x_{k+1}$ may be determined as follows:

1. Solve $F'(x_k)\,s=-F(x_k)$ and set $u=x_k+s$.
2. Solve $F'(x_k)\,t=-F(u)$ and set $x_{k+1}=u+t$.

We notice that each iteration step requires solving two linear systems, but with the same linear operator, $F'(x_k)$. The iterations may be viewed as given by the Newton method in which the Jacobian is evaluated only at every second step.
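The two steps above can be sketched as follows; this is an illustrative implementation, not the authors' code, and the quadratic test system $F(x)=(x_1^2-2,\;x_1x_2-1)$ is a hypothetical example. In practice one would factor $F'(x_k)$ once and reuse the factorization for both solves.

```python
import numpy as np

def two_step_newton(F, Fprime, x0, tol=1e-12, maxit=50):
    # Method (5): the Jacobian is evaluated (and, in practice, factored)
    # only once per iteration, then used for two linear solves.
    x = np.asarray(x0, dtype=float)
    for _ in range(maxit):
        J = Fprime(x)                       # F'(x_k), kept for both substeps
        u = x + np.linalg.solve(J, -F(x))   # solve F'(x_k) s = -F(x_k)
        x = u + np.linalg.solve(J, -F(u))   # solve F'(x_k) t = -F(u)
        if np.linalg.norm(F(x)) <= tol:
            break
    return x

# hypothetical second degree polynomial test system
F = lambda x: np.array([x[0]**2 - 2.0, x[0]*x[1] - 1.0])
Fprime = lambda x: np.array([[2*x[0], 0.0], [x[1], x[0]]])
root = two_step_newton(F, Fprime, np.array([1.0, 1.0]))
```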

A possible advantage of the above method over the Chebyshev method

$x_{k+1}=x_k-F'(x_k)^{-1}F(x_k)-\tfrac12\,F'(x_k)^{-1}F''(x_k)\bigl(F'(x_k)^{-1}F(x_k)\bigr)^2,$

which has the same convergence order, is that it does not require the second derivative of $F$, which may have a complicated form.
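For comparison, one Chebyshev step can be sketched as below, assuming the bilinear operator $F''(x)$ is supplied as an $n\times n\times n$ array; the quadratic test system $F(x)=(x_1^2-2,\;x_1x_2-1)$ is again a hypothetical example (for it, $F''$ is constant).

```python
import numpy as np

def chebyshev_step(F, Fprime, Fsecond, x):
    # One Chebyshev iteration. Fsecond(x) returns F''(x) as an (n, n, n)
    # array B, so that F''(x) h k = einsum("ijk,j,k->i", B, h, k).
    J = Fprime(x)
    d = np.linalg.solve(J, F(x))             # Newton correction F'(x)^{-1} F(x)
    B = Fsecond(x)
    c = np.linalg.solve(J, np.einsum("ijk,j,k->i", B, d, d))
    return x - d - 0.5 * c

F = lambda x: np.array([x[0]**2 - 2.0, x[0]*x[1] - 1.0])
Fprime = lambda x: np.array([[2*x[0], 0.0], [x[1], x[0]]])
B = np.zeros((2, 2, 2))
B[0, 0, 0] = 2.0                             # d^2 F_1 / dx_1^2
B[1, 0, 1] = B[1, 1, 0] = 1.0                # d^2 F_2 / dx_1 dx_2 (symmetric)
Fsecond = lambda x: B

x = np.array([1.5, 0.8])
for _ in range(8):
    x = chebyshev_step(F, Fprime, Fsecond, x)
```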

The study of such methods for second degree polynomial equations is important, since such equations often arise in practice. We mention the eigenvalue problem (see, e.g., [21], [20]), some integral equations (see, e.g., [2], [8]), etc.

We shall apply this study to the approximation of the eigenpairs of the matrices, and we shall consider some numerical examples for some test matrices. We shall also compare the method (5) to the Chebyshev method.

2. A semilocal convergence result

Since F is a second degree polynomial, one can easily show that

(6) $F(x)=F(y)+F'(y)(x-y)+\tfrac12\,F''(y)(x-y)^2,\qquad\forall\,x,y\in X.$

Assuming that $F'(x)^{-1}$ exists, denote by $\varphi(x)$ the following expression:

(7) $\varphi(x)=-F'(x)^{-1}F(x)-F'(x)^{-1}F\bigl(x-F'(x)^{-1}F(x)\bigr).$

Taking into account (6), we get

(8) $F\bigl(x-F'(x)^{-1}F(x)\bigr)=\tfrac12\,F''(x)\bigl(F'(x)^{-1}F(x)\bigr)^2.$

For the given norm $\|\cdot\|$, by (6)–(8) it follows that

(9) $\bigl\|F(x)+F'(x)\varphi(x)+\tfrac12\,F''(x)\varphi(x)^2\bigr\|\leq\tfrac12\,\|F''(x)\|^2\|F'(x)^{-1}\|^4\|F(x)\|^3+\tfrac18\,\|F''(x)\|^3\|F'(x)^{-1}\|^6\|F(x)\|^4.$

Analogously, by (7) and (8), we get for all $x\in X$

(10) $\|\varphi(x)\|\leq\|F'(x)^{-1}\|\,\|F(x)\|+\tfrac12\,\|F''(x)\|\,\|F'(x)^{-1}\|^3\|F(x)\|^2.$

Since the bilinear operator $F''(x)$ does not depend on $x$, denote $K=\|F''(x)\|$.

Let $x_0\in X$, $r>0$, denote $\beta_0=\|F'(x_0)^{-1}\|$, and assume that $\beta_0Kr<1$. Using the Banach lemma, it easily follows that for all $x\in\bar{B}_r(x_0)=\{x\in X:\|x-x_0\|\leq r\}$ the inverse $F'(x)^{-1}$ exists, and

(11) $\|F'(x)^{-1}\|\leq\dfrac{\beta_0}{1-\beta_0Kr}=:\beta.$

With the above notations, by (9) and (10) it follows for any $x\in\bar{B}_r(x_0)$ that

(12) $\bigl\|F(x)+F'(x)\varphi(x)+\tfrac12\,F''(x)\varphi(x)^2\bigr\|\leq\tfrac12\,\beta^4K^2\bigl(1+\tfrac14\,\beta^2K\|F(x)\|\bigr)\|F(x)\|^3,$

(13) $\|\varphi(x)\|\leq\beta\bigl(1+\tfrac12\,\beta^2K\|F(x)\|\bigr)\|F(x)\|.$

Now take

(14) $a_0=\tfrac12\,\beta^4K^2\bigl(1+\tfrac14\,\beta^2K\|F(x_0)\|\bigr),\qquad b_0=\beta\bigl(1+\tfrac12\,\beta^2K\|F(x_0)\|\bigr).$

From (12), taking into account (5) and (6), we get

(15) $\|F(x_1)\|=\bigl\|F(x_0)+F'(x_0)(x_1-x_0)+\tfrac12\,F''(x_0)(x_1-x_0)^2\bigr\|=\bigl\|F(x_0)+F'(x_0)\varphi(x_0)+\tfrac12\,F''(x_0)\varphi(x_0)^2\bigr\|\leq a_0\|F(x_0)\|^3.$

Relations (5) and (13) imply

(16) $\|x_1-x_0\|\leq b_0\,\|F(x_0)\|.$

We obtain the following result regarding the convergence of (5).

Theorem 1.

If the mapping $F$, the initial approximation $x_0\in X$, and the real numbers $\beta_0$, $K$, $r$ satisfy:

i. $F$ is a second degree polynomial;

ii. $F'(x_0)^{-1}$ exists, with $\|F'(x_0)^{-1}\|=\beta_0$ and $\beta_0Kr<1$;

iii. $q=\sqrt{a_0}\,\|F(x_0)\|<1$;

iv. $\dfrac{b_0\,q}{\sqrt{a_0}\,(1-q^2)}\leq r$,

then the following statements are true:

j. the sequence $(x_k)_{k\geq0}$ given by (5) is well defined;

jj. equation (1) has (at least) a solution $x^\ast\in\bar{B}_r(x_0)$;

jjj. one has the estimates

(17) $\|x^\ast-x_k\|\leq\dfrac{b_0\,q^{3^k}}{\sqrt{a_0}\,(1-q^2)},$

(18) $\|x_{k+1}-x_k\|\leq b_0\,\|F(x_k)\|,\qquad k=0,1,\ldots$
Proof.

Assumptions iii. and iv., together with relation (16), imply that $x_1\in\bar{B}_r(x_0)$.

Relation (15) may also be written as

(19) $\|F(x_1)\|\leq\tfrac{1}{\sqrt{a_0}}\,q^3,$

which, by iii., leads to $\|F(x_1)\|<\|F(x_0)\|$. Denoting

$a_1=\tfrac12\,\beta^4K^2\bigl(1+\tfrac14\,\beta^2K\|F(x_1)\|\bigr),\qquad b_1=\beta\bigl(1+\tfrac12\,\beta^2K\|F(x_1)\|\bigr),$

then, obviously, $a_1<a_0$ and $b_1<b_0$.

Assume now the following relations:

a) $x_i\in\bar{B}_r(x_0)$, $i=1,\ldots,k$;

b) $\|F(x_i)\|\leq\tfrac{1}{\sqrt{a_0}}\,q^{3^i}$, $i=1,\ldots,k$;

c) $a_i<a_{i-1}$ and $b_i<b_{i-1}$, $i=1,\ldots,k$.

We shall prove that

(20) $x_{k+1}\in\bar{B}_r(x_0),\qquad\|F(x_{k+1})\|\leq\tfrac{1}{\sqrt{a_0}}\,q^{3^{k+1}},\qquad a_{k+1}<a_k,\qquad b_{k+1}<b_k,$

where

$a_i=\tfrac12\,\beta^4K^2\bigl(1+\tfrac14\,\beta^2K\|F(x_i)\|\bigr),\qquad b_i=\beta\bigl(1+\tfrac12\,\beta^2K\|F(x_i)\|\bigr),\qquad i=1,\ldots,k+1.$

From (5) and (7), we deduce

$\|x_{k+1}-x_k\|\leq\|\varphi(x_k)\|\leq b_k\,\|F(x_k)\|\leq b_k\,\tfrac{1}{\sqrt{a_0}}\,q^{3^k}.$

But $b_k<b_0$ and hence

$\|x_{k+1}-x_k\|\leq\tfrac{b_0}{\sqrt{a_0}}\,q^{3^k}.$

This implies

$\|x_{k+1}-x_0\|\leq\sum_{i=0}^{k}\|x_{i+1}-x_i\|\leq\tfrac{b_0}{\sqrt{a_0}}\sum_{i=0}^{k}q^{3^i}\leq\tfrac{b_0\,q}{\sqrt{a_0}\,(1-q^2)}\leq r,$

i.e., $x_{k+1}\in\bar{B}_r(x_0)$.

From (5) and (12) we obtain $\|F(x_{k+1})\|\leq a_k\|F(x_k)\|^3$ and, taking into account that $a_k\|F(x_k)\|^2<1$, we get $\|F(x_{k+1})\|<\|F(x_k)\|$, i.e., $a_{k+1}<a_k$ and $b_{k+1}<b_k$.

Hence

$\|F(x_{k+1})\|\leq\tfrac{1}{\sqrt{a_0}}\,q^{3^{k+1}}.$

Now we show that the sequence $(x_k)_{k\geq0}$ is fundamental. From the above relations we have

(21) $\|x_{k+m}-x_k\|\leq\sum_{i=k}^{k+m-1}\|x_{i+1}-x_i\|\leq\tfrac{b_0}{\sqrt{a_0}}\sum_{i=k}^{k+m-1}q^{3^i}\leq\tfrac{b_0\,q^{3^k}}{\sqrt{a_0}\,(1-q^2)},$

for all $m,k\in\mathbb{N}$. Since $q<1$, the sequence is a Cauchy sequence. Denote $x^\ast=\lim_{k\to\infty}x_k$.

By (21), letting $m\to\infty$ we get

$\|x^\ast-x_k\|\leq\tfrac{b_0\,q^{3^k}}{\sqrt{a_0}\,(1-q^2)},\qquad k=0,1,\ldots.$

The continuity of $F$ implies that $F(x^\ast)=0$. Obviously, $x^\ast\in\bar{B}_r(x_0)$. ∎
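The cubic decrease of the errors predicted by (17) can be observed numerically; the scalar quadratic $F(x)=x^2-2$ below is a hypothetical test problem, not taken from the paper.

```python
import numpy as np

# Observe the r-order 3 of method (5) on F(x) = x^2 - 2, root sqrt(2).
F  = lambda x: x * x - 2.0
Fp = lambda x: 2.0 * x

x, root, errs = 1.5, np.sqrt(2.0), []
for _ in range(4):
    J = Fp(x)            # F'(x_k), frozen for both substeps
    u = x - F(x) / J
    x = u - F(u) / J
    errs.append(abs(x - root))
# each full iteration roughly cubes the error, e_{k+1} ~ C e_k^3
```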

3. Application and numerical examples

We shall study this method when applied to approximate the eigenpairs of matrices.

Denote $V=\mathbb{K}^n$ and let $A\in\mathbb{K}^{n\times n}$, where $\mathbb{K}=\mathbb{R}$ or $\mathbb{C}$. For computing the eigenpairs of $A$ one may consider a norming function $G:V\to\mathbb{K}$ with $G(0)\neq1$. The eigenvalues $\lambda\in\mathbb{K}$ and the eigenvectors $v\in V$ of $A$ are the solutions of the nonlinear system

$F(x)=\begin{pmatrix}Av-\lambda v\\G(v)-1\end{pmatrix}=0,$

where $x=\begin{pmatrix}v\\\lambda\end{pmatrix}\in V\times\mathbb{K}=\mathbb{K}^{n+1}$, $(x^{(1)},x^{(2)},\ldots,x^{(n)})=v$ and $x^{(n+1)}=\lambda$. The first $n$ components of $F$, namely $F_i$, $i=1,\ldots,n$, are given by

$F_i(x)=a_{i1}x^{(1)}+\cdots+a_{i,i-1}x^{(i-1)}+\bigl(a_{ii}-x^{(n+1)}\bigr)x^{(i)}+a_{i,i+1}x^{(i+1)}+\cdots+a_{in}x^{(n)}.$

The standard choice for $G$ is

$G(v)=\alpha\|v\|_2^2,$

with $\alpha=\tfrac12$. We have proposed in [4] (see also [7]) the choice $\alpha=\tfrac1{2n}$, which has shown a better behavior of the iterates than the standard choice.

In both cases we can write

$F_{n+1}(x)=\alpha\bigl((x^{(1)})^2+\cdots+(x^{(n)})^2\bigr)-1.$

The first and the second order derivatives of $F$ are given by

$F'(x)h=\begin{pmatrix}a_{11}-x^{(n+1)}&a_{12}&\cdots&a_{1n}&-x^{(1)}\\a_{21}&a_{22}-x^{(n+1)}&\cdots&a_{2n}&-x^{(2)}\\\vdots&&\ddots&&\vdots\\a_{n1}&a_{n2}&\cdots&a_{nn}-x^{(n+1)}&-x^{(n)}\\2\alpha x^{(1)}&2\alpha x^{(2)}&\cdots&2\alpha x^{(n)}&0\end{pmatrix}\begin{pmatrix}h^{(1)}\\h^{(2)}\\\vdots\\h^{(n)}\\h^{(n+1)}\end{pmatrix},$

and

$F''(x)hk=\begin{pmatrix}-k^{(n+1)}&0&\cdots&0&-k^{(1)}\\0&-k^{(n+1)}&\cdots&0&-k^{(2)}\\\vdots&&\ddots&&\vdots\\0&0&\cdots&-k^{(n+1)}&-k^{(n)}\\2\alpha k^{(1)}&2\alpha k^{(2)}&\cdots&2\alpha k^{(n)}&0\end{pmatrix}\begin{pmatrix}h^{(1)}\\h^{(2)}\\\vdots\\h^{(n)}\\h^{(n+1)}\end{pmatrix},$

where $x=(x^{(i)})_{i=1,\ldots,n+1}$, $h=(h^{(i)})_{i=1,\ldots,n+1}$, $k=(k^{(i)})_{i=1,\ldots,n+1}\in\mathbb{K}^{n+1}$.
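The system $F$ and its Jacobian above translate directly into code; the sketch below (not the authors' Matlab programs) applies method (5) to a small diagonal matrix chosen only for illustration, with $\alpha=\tfrac1{2n}$ (choice II).

```python
import numpy as np

# Eigenpair system F and Jacobian F' with norming G(v) = alpha*(v_1^2+...+v_n^2).
def make_F(A, alpha):
    n = A.shape[0]
    def F(x):
        v, lam = x[:n], x[n]
        return np.concatenate([A @ v - lam * v, [alpha * (v @ v) - 1.0]])
    def Fprime(x):
        v, lam = x[:n], x[n]
        J = np.zeros((n + 1, n + 1))
        J[:n, :n] = A - lam * np.eye(n)   # block a_ij - x^(n+1) delta_ij
        J[:n, n] = -v                     # last column: -x^(i)
        J[n, :n] = 2.0 * alpha * v        # last row: 2 alpha x^(i)
        return J
    return F, Fprime

A = np.diag([1.0, 2.0, 5.0])              # illustrative test matrix
alpha = 1.0 / (2.0 * A.shape[0])          # choice II
F, Fprime = make_F(A, alpha)

# method (5), starting near the eigenpair (sqrt(6) e_1, 1)
x = np.array([2.4, 0.3, 0.1, 0.9])        # (v_0, lambda_0)
for _ in range(20):
    J = Fprime(x)
    u = x + np.linalg.solve(J, -F(x))
    x = u + np.linalg.solve(J, -F(u))
    if np.linalg.norm(F(x)) < 1e-12:
        break
```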

We shall consider two test matrices from the Harwell–Boeing collection (available from the MatrixMarket at http://math.nist.gov/MatrixMarket/) in order to study the behavior of the method (5) and of the Chebyshev method for approximating the eigenpairs. The programs were written in Matlab. As in [21], we used the Matlab operator '\' for solving the linear systems.

Fidap002 matrix. This real symmetric matrix of dimension $n=441$ arises from finite element modeling. Its eigenvalues are all simple and range from $-7\cdot10^{8}$ to $3\cdot10^{6}$. As in [21], we have chosen to study the smallest eigenvalue, which is well separated. The initial approximations were taken $\lambda_0=\lambda^\ast+10^{2}=-6.9996\cdot10^{8}+100$, and for the initial vector $v_0$ we perturbed the solution $v^\ast$ (computed by Matlab and then properly scaled to fulfill the norming equation) with random vectors having the components uniformly distributed on $(-\varepsilon,\varepsilon)$, $\varepsilon=0.5$. The following results are typical for the runs made (we have considered a common perturbation vector); Table 1 contains the norms of the vectors $F(x_k)$. For the choice I we took $\alpha=\tfrac12$ in $G$, while for the choice II, $\alpha=\tfrac1{2n}$.

It is interesting to note that the norm of $F$ (even at the computed solution) does not decrease below $10^{-8}$.

Table 1. The Fidap002 matrix.

             Choice I                          Choice II
 k   Method (5)    Chebyshev method   Method (5)    Chebyshev method
 0   1.0003e+02    1.7467e+09         2.5107e+00    1.7467e+09
 1   1.0078e+01    8.7335e+08         1.2553e+00    8.7335e+08
 2   3.4853e+01    1.6462e+02         5.2877e-02    3.5641e-03
 3   2.1368e+00    2.3373e+01         6.1804e-05    4.9048e-06
 4   4.4761e-01    6.3521e-01         5.9858e-07    4.4309e-06
 5   6.5214e-03    9.2238e-03
 6   1.5617e-05    2.2961e-05
 7   5.9605e-07    8.5647e-08

Sherman1 matrix. This matrix arises from oil reservoir simulation. It is real, unsymmetric, of dimension 1000, and all its eigenvalues are real. We have chosen to study the smallest eigenvalue $\lambda^\ast=-5.0449$, which is not well separated (the closest eigenvalue is $-4.9376$). The initial approximation was taken $\lambda_0=\lambda^\ast-0.002$ and for the initial vector $v_0$ we considered $\varepsilon=0.01$.

The following results are typical for the runs made (we have considered again the same random perturbation vector for the four initial approximations).

Table 2. The Sherman1 matrix.

             Choice I                          Choice II
 k   Method (5)    Chebyshev method   Method (5)    Chebyshev method
 0   7.7177e-01    7.7177e-01         7.7638e-01    7.7638e-01
 1   6.9242e-05    3.2237e-02         1.2188e-06    3.2248e-05
 2   2.3593e-13    4.9948e-04         2.7541e-14    5.1973e-10
 3   7.1171e-16    1.2466e-07                       4.0207e-14
 4                 7.8143e-15
 5                 9.1655e-16

For this particular matrix and eigenvalue, the Chebyshev method has shown a greater sensitivity to the size of the perturbations than method (5): increasing $\varepsilon$ leads to the loss of convergence of the Chebyshev iterates, while method (5) still converges.

Though for the Sherman1 matrix method (5) displayed a better behavior than the Chebyshev method, extensive tests must be performed before affirming that the former is superior. In any case, choice II has again shown itself the more advantageous to use.

References

  • [1] P.M. Anselone and L.B. Rall, The solution of characteristic value-vector problems by Newton's method, Numer. Math., 11 (1968), pp. 38–45.
  • [2] I.K. Argyros, Quadratic equations and applications to Chandrasekhar’s and related equations, Bull. Austral. Math. Soc., 38 (1988), pp. 275–292.
  • [3] E. Cătinaş and I. Păvăloiu, On the Chebyshev method for approximating the eigenvalues of linear operators, Rev. Anal. Numér. Théor. Approx., 25 (1996) nos. 1–2, pp. 43–56.
  • [4] E. Cătinaş and I. Păvăloiu, On a Chebyshev-type method for approximating the solutions of polynomial operator equations of degree 2, Proceedings of International Conference on Approximation and Optimization, Cluj-Napoca, July 29 – August 1, 1996, vol. 1, pp. 219–226.
  • [5] E. Cătinaş and I. Păvăloiu, On approximating the eigenvalues and eigenvectors of linear continuous operators, Rev. Anal. Numér. Théor. Approx., 26 (1997) nos. 1–2, pp. 19–27.
  • [6] E. Cătinaş and I. Păvăloiu, On some interpolatory iterative methods for the second degree polynomial operators (I), Rev. Anal. Numér. Théor. Approx., 27 (1998) no. 1, pp. 33-45.
  • [7] E. Cătinaş and I. Păvăloiu, On some interpolatory iterative methods for the second degree polynomial operators (II), Rev. Anal. Numér. Théor. Approx., 28 (1999) no. 2, pp. 133-143.
  • [8] L. Collatz, Functionalanalysis und Numerische Mathematik, Springer-Verlag, Berlin, 1964.
  • [9] A. Diaconu, On the convergence of an iterative method of Chebyshev type, Rev. Anal. Numér. Théor. Approx. 24 (1995) nos. 1–2, pp. 91–102.
  • [10] J.J. Dongarra, C.B. Moler and J.H. Wilkinson, Improving the accuracy of the computed eigenvalues and eigenvectors, SIAM J. Numer. Anal., 20 (1983) no. 1, pp. 23–45.
  • [11] S.M. Grzegórski, On the scaled Newton method for the symmetric eigenvalue problem, Computing, 45 (1990), pp. 277–282.
  • [12] V.S. Kartîşov, and F.L. Iuhno, O nekotorîh Modifikaţiah Metoda Niutona dlea Resenia Nelineinoi Spektralnoi Zadaci, J. Vîcisl. matem. i matem. fiz., 33 (1973) no. 9, pp. 1403–1409 (in Russian).
  • [13] J.M. Ortega, and W.C. Rheinboldt, Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, New York, 1970.
  • [14] I. Păvăloiu, Sur les procédés itératifs à un order élevé de convergence, Mathematica (Cluj), 12(35) (1970) no. 2, pp. 309–324.
  • [15] I. Păvăloiu, Introduction to the Theory of Approximating the Solutions of Equations, Ed. Dacia, Cluj-Napoca, Romania, 1986 (in Romanian).
  • [16] I. Păvăloiu and E. Cătinaş, Remarks on some Newton and Chebyshev-type methods for approximating the eigenvalues and eigenvectors of matrices, Computer Science Journal of Moldova, 7 (1999) no. 1, pp. 3–17.
  • [17] G. Peters, and J.H. Wilkinson, Inverse iteration, ill-conditioned equations and Newton’s method, SIAM Review, 21 (1979) no. 3, pp. 339–360.
  • [18] M.C. Santos, A note on the Newton iteration for the algebraic eigenvalue problem, SIAM J. Matrix Anal. Appl., 9 (1988) no. 4, pp. 561–569.
  • [19] R.A. Tapia and L.D. Whitley, The projected Newton method has order 1+√2 for the symmetric eigenvalue problem, SIAM J. Numer. Anal., 25 (1988) no. 6, pp. 1376–1382.
  • [20] F. Tisseur, Newton’s method in floating point arithmetic and iterative refinement of generalized eigenvalue problems, SIAM J. Matrix Anal. Appl., 22 (2001) no. 4, pp. 1038–1057.
  • [21] K. Wu, Y. Saad, and A. Stathopoulos, Inexact Newton preconditioning techniques for large symmetric eigenvalue problems, Electronic Transactions on Numerical Analysis, 7 (1998) pp. 202–214.
  • [22] T. Yamamoto, Error bounds for computed eigenvalues and eigenvectors, Numer. Math., 34 (1980), pp. 189–199.