ON THE CONVERGENCE OF ITERATES TO FIXED POINTS OF ANALYTIC OPERATORS

Abstract. The results in this study deal with the question: given that an analytic operator has a fixed point, when is it true that iterates (under the operator) of nearby points converge to the fixed point? We take advantage of the analyticity of the operator to show that it is possible to enlarge the convergence radius for the method of successive substitutions or Newton's method. A numerical example is finally given to show that under our conditions there exists a wider choice of initial guesses than before.


INTRODUCTION
A fixed point x* of an operator F which maps a convex subset D of a Banach space X into X is by definition a solution of the equation
(1) x = F(x).
One of the earliest and most useful fixed point theorems is the Picard fixed point theorem (or contraction mapping principle) which states that a contraction operator F of a Banach space into itself has a unique fixed point [4], [8], [9], [10].
An iteration method for solving equation (1) consists of the construction of a sequence {x_n} (n ≥ 0) which converges to x*, given a suitable starting value x_0, and a procedure for calculating the value of x_{n+1} once x_n is known. The form of equation (1) immediately suggests the classical method of successive substitutions, in which one chooses some x_0 ∈ D and uses the relationship
(2) x_{n+1} = F(x_n) (n ≥ 0)
to construct the remainder of the sequence {x_n} (n ≥ 0).

The results proved here are concerned with the following question: given that the operator F has a fixed point x*, when is it true that method (2) converges to x*? Such a question is clearly of interest in numerical analysis, since many numerical problems can be reduced to the problem of locating fixed points. Sufficient conditions for the convergence of method (2) are well known [4], [8]. Here we assume that the operator F is analytic on D. We provide local convergence results for method (2) as well as the convergence radius. It turns out that under our assumptions the convergence radius can be enlarged compared with earlier results. Finally, we provide a numerical example to show that our results allow a wider choice of initial guesses x_0 than before [7]-[13]. This observation is important and can find applications in steplength selection in predictor-corrector continuation procedures [1]-[5], [14].
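As a small illustration (not part of the original analysis), method (2) can be sketched in a few lines. The map F(x) = (x + 2)^(1/3) and the starting value below are illustrative choices: the fixed point of this F is the real root of x^3 = x + 2, and |F'| < 1 near it, so nearby iterates converge.

```python
def successive_substitutions(F, x0, steps=50):
    """Method (2): iterate x_{n+1} = F(x_n) starting from x0."""
    x = x0
    for _ in range(steps):
        x = F(x)
    return x

# Illustrative contraction: F(x) = (x + 2)**(1/3) has its fixed point at the
# real root of x**3 = x + 2 (~1.52138), which attracts nearby iterates.
x_star = successive_substitutions(lambda x: (x + 2.0) ** (1.0 / 3.0), 1.0)
```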

CONVERGENCE ANALYSIS
It is convenient to define: We can show the following local convergence theorem for method (2) involving analytic operators.
Theorem 1. Let F : D ⊆ X → X be an analytic operator and let x* ∈ D be a fixed point of F. Moreover, assume that there exists α such that
(4)
and
(5)
where,
Then the method of successive substitutions {x_n} (n ≥ 0) generated by (2) remains in U(x*, r*) for all n ≥ 0 and converges to x* for any x_0 ∈ U(x*, r*). Moreover, the following error bounds hold for all n ≥ 0:
(8)
where,
(9)
Proof. The analyticity of F gives
(10)
Using (2), (3), and x* = F(x*) we successively obtain
(11)
and
(12)
We notice that, by hypothesis, β_1 < β_0 = 1. Then, as
and in general β_{n+1} < β_n for all n ≥ 0. Now we proceed by induction. Taking into account (11) and (12) for n = 0, we have
Let us assume now that
Then, by (12) and taking into account
and the induction is complete. Consequently,
As β_1 < 1, the sequence {x_n} converges to x*. Note that β ∈ [0, 1) by the choice of r*. That completes the proof of Theorem 1.
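The geometric character of the error bounds in Theorem 1 can be observed numerically. The sketch below uses an illustrative analytic map, F = cos (not from the paper): the ratios of successive errors settle near |F'(x*)|, the dominant contraction factor.

```python
import math

F = math.cos  # analytic, with an attracting fixed point x* ~ 0.739085

# Compute the fixed point to machine precision by iterating well past
# the accuracy needed below.
x_star = 0.5
for _ in range(200):
    x_star = F(x_star)

# Track the errors of iterates started from a nearby point.
x, errors = 1.0, []
for _ in range(10):
    x = F(x)
    errors.append(abs(x - x_star))

# Successive error ratios approach |F'(x*)| = sin(x*) ~ 0.6736,
# consistent with linear (geometric) convergence.
ratios = [errors[i + 1] / errors[i] for i in range(len(errors) - 1)]
```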
The above result was based on the assumption that the sequence (13) is bounded by γ. In the case where the assumption of boundedness does not necessarily hold, we have the following local alternative.
Theorem 2. Let F : D ⊆ X → X be an analytic operator and let x* ∈ D be a fixed point of F. Moreover, assume that there exist α, δ with α ∈ [0, 1), δ ∈ (α, 1) such that (4) holds. Then the method of successive substitutions {x_n} (n ≥ 0) generated by (2) remains in U(x*, r_0) for all n ≥ 0 and converges to x* for any x_0 ∈ U(x*, r_0). Moreover, the following error bounds hold for all n ≥ 0:
(17)
Proof. Using mathematical induction on n, we see that the left-hand side inequality in (17) follows from (12) and (13), whereas the right-hand side follows from (14) and (15). The rest of the proof follows from the choice of δ (δ ∈ [0, 1)) and the estimate
(18)
That completes the proof of Theorem 2.
Remark 2. The results obtained here can be modified to hold for other iterative methods. In the case of Newton's method for approximating a solution x* of equation (20), method (2) is used with
(21) F(x) = x − G'(x)^{-1} G(x),
provided that the inverse of the Fréchet derivative G'(x) exists (x ∈ D). We can have
and F'(x*) = 0, i.e., α = 0. In order to compare our results with earlier ones, let us assume that x* is a simple zero of G and
It was shown in [10] (see also [14]) that the convergence radius for Newton's method is given by
(25)
Moreover, in [12, p. 259] the radius of convergence is given by
(26)
where,
(27)
In the example at the end of the study we can use Theorem 1 to show that
(28) r_N < r* and r_wz < r*.
That is, our Theorem 1 provides a wider choice of initial guesses x_0 than before. This observation is important and also finds applications in steplength selection in predictor-corrector continuation procedures [1]-[5], [14].
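To illustrate why α = 0 for Newton's method, here is a sketch with an illustrative scalar equation, G(x) = x^2 − 2 (not from the paper): at a simple zero x*, the derivative of the Newton operator F(x) = x − G(x)/G'(x) vanishes, which is what permits the larger convergence radius.

```python
# Newton operator F for G(x) = x**2 - 2, which has the simple zero x* = sqrt(2).
def F(x):
    G, dG = x ** 2 - 2.0, 2.0 * x
    return x - G / dG

x_star = 2.0 ** 0.5

# A central finite difference confirms F'(x*) = 0, i.e. alpha = 0 in Remark 2.
h = 1e-5
dF = (F(x_star + h) - F(x_star - h)) / (2.0 * h)
```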
Remark 3. The condition on the analyticity of F on D can be dropped. Indeed, using the well-known spectral radius formula, A. M. Ostrowski [9] proved that there exists a neighborhood N of x* such that lim_{n→∞} x_n = x* for every x_0 ∈ N, provided that the spectral radius of the derivative of F at x* is less than 1. However, this result does not give us information on how to construct N. We finally complete this section with a more general result whose proof is left as an exercise, as it is similar to those of Theorems 1 and 2.
Theorem 3. Let F(x) be analytic at its fixed point x* in the sense that the power series is normally convergent in some neighborhood of x*. Assume that |F'(x*)| < 1 and that F^{(k)}(x*) ≠ 0 for some k ≥ 2. Then the function ψ(r) is defined for r sufficiently small; the equation ψ(r) = 1 has a positive root r_0; and for any x_0 ∈ U(x*, r_0), the sequence of iterates {x_n} satisfies

APPLICATIONS
Remark 4. As noted in [1]- [5], [8]- [14] the local results obtained here can be used for projection methods such as Arnoldi's, the generalized minimum residual method (GMRES), the generalized conjugate residual method (GCR), and for continued Newton-like/finite-difference projection methods.
Remark 5. The local results obtained here can also be used to solve equations of the form G(x) = 0, where G satisfies the autonomous differential equation [4], [8]:
(29) G'(x) = T(G(x)),
where T : Y → X is a known, continuously sufficiently many times Fréchet-differentiable operator. Since G'(x*) = T(G(x*)) = T(0), we can apply the results obtained here without actually knowing the solution x* of equation (20).
We complete our study with such an example. Define the function G on D by
(30) G(x) = e^x − 1.
Then it can easily be seen that we can set T(x) = x + 1 in (29). Using (23)-(27) we obtain α_0 = 1, = e, γ* = .5, r_N = .245253, and r_wz = .171572875. Note also that all hypotheses of Remark 3 are satisfied; in particular, the spectral radius of F'(x*) is 0. Moreover, if F is defined by (21) we still get (31), which allows us to use Newton's method with convergence radius r* instead of r_N (or r_wz). Finally, we note that method (2) converges even if x_0 = 1.
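The claim that the iteration converges even from x_0 = 1, well outside r_N = .245253 and r_wz = .171572875, can be checked numerically. The sketch below assumes the scalar setting of the example, G(x) = e^x − 1 with the simple zero x* = 0, and iterates the Newton operator (21).

```python
import math

# G(x) = exp(x) - 1 has the simple zero x* = 0; note G'(x) = exp(x) = G(x) + 1,
# so T(x) = x + 1 satisfies the autonomous differential equation (29).
def newton_step(x):
    # Newton operator (21): F(x) = x - G'(x)**-1 * G(x)
    return x - (math.exp(x) - 1.0) / math.exp(x)

x = 1.0  # starting guess outside both r_N ~ 0.245253 and r_wz ~ 0.171572875
for _ in range(10):
    x = newton_step(x)

# The iterates nevertheless converge rapidly to x* = 0.
```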