ON A THIRD ORDER ITERATIVE METHOD FOR SOLVING POLYNOMIAL OPERATOR EQUATIONS

Abstract. We present a semilocal convergence result for a Newton-type method applied to a polynomial operator equation of degree 2. The method in fact consists in evaluating the Jacobian only at every second step, and it has r-convergence order at least 3. We apply the method to the approximation of eigenpairs of matrices, performing numerical experiments on some test matrices and comparing the method to the Chebyshev method. For both methods, the norming function we have proposed in a previous paper leads to faster convergence of the iterates than the classical norming function.


INTRODUCTION
Let F : X → X be a nonlinear mapping, where (X, ‖·‖) is a Banach space, and consider the equation

(1) F(x) = 0.
We shall assume that F is a polynomial operator of degree 2, i.e., F is infinitely differentiable on X, with F^{(i)}(x) = θ_i for all x ∈ X and i ≥ 3, where θ_i denotes the null i-linear operator.
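In particular, since F^{(3)} = θ_3, the Taylor expansion of F terminates after the second order term, and for any x_0 ∈ X we have the exact representation

F(x) = F(x_0) + F′(x_0)(x − x_0) + (1/2) F″(x_0)(x − x_0)², x ∈ X,

with F″ constant on X.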
Besides (1), we shall also consider an equivalent equation

(2) x − G(x) = 0,

where G : X → X. More precisely, we shall assume that the solutions of (1) coincide with the solutions of (2) and vice versa.
In [15] it was shown that the iterations

(3) x_{k+1} = G(x_k) − F′(x_k)^{−1} F(G(x_k)), k = 0, 1, ..., x_0 ∈ X,

have convergence order one unit higher than the convergence order of the iterates x_{k+1} = G(x_k).
Obviously, if we take as G the Newton operator, i.e.,

(4) G(x) = x − F′(x)^{−1} F(x),

then we obtain a method with convergence order at least 3.
In the present paper we shall study the convergence of the iterations (3) with G given by (4), i.e.,

(5) y_k = x_k − F′(x_k)^{−1} F(x_k), x_{k+1} = y_k − F′(x_k)^{−1} F(y_k), k = 0, 1, ..., x_0 ∈ X,

in order to solve the polynomial equation (1). By (5), for some known approximation x_k, the next approximation x_{k+1} may be determined by solving the two linear equations

F′(x_k)(y_k − x_k) = −F(x_k), F′(x_k)(x_{k+1} − y_k) = −F(y_k).

We notice that at each iteration step we need to solve two linear equations, but with the same linear operator, F′(x_k). The iterations may thus be viewed as given by the Newton method in which the Jacobian is evaluated only at every second step.
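For illustration, here is a minimal Matlab sketch of method (5) (the function handles F and dF for F and F′, and the names twostep_newton, tol, maxit, are ours, not taken from the original programs); the key point is that a single LU factorization of F′(x_k) serves both substeps:

function x = twostep_newton(F, dF, x, tol, maxit)
% Sketch of iteration (5): the Jacobian F'(x_k) is factorized once
% per step and reused in both linear solves.
for k = 1:maxit
    [L, U, P] = lu(dF(x));          % single factorization per step
    y = x - U \ (L \ (P * F(x)));   % first substep: Newton correction
    x = y - U \ (L \ (P * F(y)));   % second substep, same operator
    if norm(F(x)) <= tol
        break
    end
end
end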
A possible advantage of the above method over the Chebyshev method, which has the same convergence order, is that it does not require the second derivative of F, which may have a complicated form.
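For comparison, we recall that the Chebyshev method may be written in its classical form as

x_{k+1} = x_k − Γ_k F(x_k) − (1/2) Γ_k F″(x_k)(Γ_k F(x_k))², Γ_k = F′(x_k)^{−1}, k = 0, 1, ...,

which also requires two linear solves with the operator F′(x_k) per step, but needs in addition the explicit action of the bilinear operator F″(x_k).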
We shall apply this study to the approximation of eigenpairs of matrices, and we shall consider numerical examples on some test matrices. We shall also compare method (5) to the Chebyshev method.
THE CONVERGENCE RESULT

Theorem 1. If the mapping F, the initial approximation x_0 ∈ X and the real numbers β_0, K, r satisfy:

iv. b_0 q / (√a_0 (1 − q²)) ≤ r,

then the following statements are true:

j. the sequence (x_k)_{k≥0} given by (5) is well defined;
jj. equation (1) has (at least) a solution x* ∈ B̄_r(x_0);
jjj. one has the estimates ‖x* − x_k‖ ≤ b_0 q^{2k+1} / (√a_0 (1 − q²)), k = 0, 1, ....

Proof. Assumptions iii. and (6) imply that x_1 ∈ B̄_r(x_0). Relation (15) may also be written in a form which, by ii., leads to ‖F(x_1)‖ < ‖F(x_0)‖. Assume now, inductively, that x_k ∈ B̄_r(x_0) and ‖F(x_k)‖ < ‖F(x_0)‖; we shall prove that the same relations hold at the step k + 1. From (5) and (7) we deduce a bound for ‖x_{k+1} − x_k‖; since b_k < b_0, this implies ‖x_{k+1} − x_0‖ ≤ r, i.e., x_{k+1} ∈ B̄_r(x_0).
From (5) and (12) we obtain ‖F(x_{k+1})‖ ≤ a_k ‖F(x_k)‖³, and, taking into account that √a_k ‖F(x_k)‖ < 1, we get ‖F(x_{k+1})‖ < ‖F(x_k)‖, i.e., a_{k+1} < a_k and b_{k+1} < b_k. Now we show that the sequence (x_k)_{k≥0} is fundamental. From the above relations we have

(21) ‖x_{k+m} − x_k‖ ≤ b_0 q^{2k+1} / (√a_0 (1 − q²)),

for all m, k ∈ N. Since q < 1, we get that the sequence is Cauchy. Denote x* = lim_{k→∞} x_k. Letting m → ∞ in (21), we obtain the estimates in jjj. The continuity of F implies that F(x*) = 0. Obviously, x* ∈ B̄_r(x_0).

APPLICATION AND NUMERICAL EXAMPLES
We shall study this method when applied to approximate the eigenpairs of matrices.
Denote V = K^n and let A ∈ K^{n×n}, where K = R or C. For computing the eigenpairs of A one may consider a norming function G : V → K with G(0) ≠ 1. The eigenvalues λ ∈ K and the eigenvectors v ∈ V of A are then the solutions of the nonlinear system F(x) = 0, where x = (v, λ) ∈ V × K, i.e., (x^{(1)}, x^{(2)}, ..., x^{(n)}) = v and x^{(n+1)} = λ. The first n components of F, F_i, i = 1, ..., n, are given by

F_i(x) = Σ_{j=1}^n a_{ij} x^{(j)} − x^{(n+1)} x^{(i)},

while the last component is F_{n+1}(x) = G(v) − 1. The standard choice for G is G(v) = ½ ‖v‖₂². We have proposed in [4] (see also [7]) the choice α = 1/(2n), which has shown a better behavior of the iterates than the standard choice.
In both cases we can write G(v) = α Σ_{j=1}^n (x^{(j)})². The first and the second order derivatives of F are then given, in block form, by

F′(x) = [ A − x^{(n+1)} I_n , −v ; 2α v^T , 0 ],

F″(x)h = [ −h^{(n+1)} I_n , −h_v ; 2α h_v^T , 0 ], h = (h_v, h^{(n+1)}) ∈ V × K,

so that F″ is constant and F^{(i)}(x) = θ_i for i ≥ 3, i.e., F is indeed a polynomial operator of degree 2.
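For concreteness, F and F′ above may be coded as follows (a sketch for the real case K = R; the names evalF, evalJ and alpha are ours):

function f = evalF(A, x, alpha)
% F(x) for the eigenproblem, x = [v; lambda], G(v) = alpha*sum(v.^2).
n = length(x) - 1;
v = x(1:n); lambda = x(n+1);
f = [A*v - lambda*v; alpha*(v.'*v) - 1];
end

function J = evalJ(A, x, alpha)
% F'(x) = [A - lambda*I_n, -v; 2*alpha*v', 0].
n = length(x) - 1;
v = x(1:n); lambda = x(n+1);
J = [A - lambda*eye(n), -v; 2*alpha*v.', 0];
end

Choice I below corresponds to alpha = 1/2 and choice II to alpha = 1/(2n).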
We shall consider two test matrices from the Harwell–Boeing collection in order to study the behavior of the method (5) and of the Chebyshev method for approximating the eigenpairs. The programs were written in Matlab. As in [21], we used the Matlab operator '\' for solving the linear systems.
Fidap002 matrix. This real symmetric matrix of dimension n = 441 arises from finite element modeling. Its eigenvalues are all simple and range from −7·10⁸ to 3·10⁶. As in [21], we have chosen to study the smallest eigenvalue, which is well separated. The initial approximation was taken λ_0 = λ* + 10² = −6.9996·10⁸ + 100, and for the initial vector v_0 we perturbed the solution v* (computed by Matlab and then properly scaled to fulfill the norming equation) with random vectors having the components uniformly distributed on (−ε, ε), ε = 0.5. The following results are typical for the runs made (we have considered a common perturbation vector); Table 1 contains the norms ‖F(x_k)‖. For choice I we took α = 1/2 in G, while for choice II, α = 1/(2n). It is interesting to note that the norm of F (even at the computed solution) does not decrease below 10⁻⁸.
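In Matlab, such a perturbed initial vector may be generated as follows (vstar and eps0 being our illustrative names for v* and ε):

v0 = vstar + eps0*(2*rand(n, 1) - 1);   % components uniform on (-eps0, eps0)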
Sherman1 matrix. This matrix arises from oil reservoir simulation. It is real, unsymmetric, of dimension 1000, and all its eigenvalues are real. We have chosen to study the smallest eigenvalue λ* = −5.0449, which is not well separated (the closest eigenvalue is −4.9376). The initial approximation was taken λ_0 = λ* − 0.002, and for the initial vector v_0 we considered ε = 0.01. The following results are typical for the runs made (we have considered again the same random perturbation vector for the four initial approximations). For this particular matrix and eigenvalue, the Chebyshev method has shown a greater sensitivity to the size of the perturbations than method (5): increasing ε leads to the loss of convergence of the Chebyshev iterates, while method (5) still converges.
Though for the Sherman1 matrix method (5) displayed a better behavior than the Chebyshev method, some extensive tests must be performed before affirming that the former method is superior. In any case, choice II has again shown itself the more advantageous to use.