LOCAL CONVERGENCE OF NEWTON’S METHOD USING KANTOROVICH CONVEX MAJORANTS

Abstract. We are concerned with the problem of approximating a solution of an operator equation using Newton's method. Recently, in the elegant work by Ferreira and Svaiter [6], a semilocal convergence analysis was provided which makes clear the relationship of the majorant function with the operator involved. However, these results cannot, in their present form, provide information about the local convergence of Newton's method. Here we rectify this problem by using two flexible majorant functions. The radius of convergence is also found. Finally, we show that, under the same computational cost, our radius of convergence is larger, and the error estimates on the distances involved are finer, than the corresponding ones in [1], [11]–[13].


INTRODUCTION
In this study we are concerned with the problem of approximating a solution x* of the equation

(1.1) F(x) = 0,

where F is a continuously Fréchet-differentiable operator defined on a convex subset D of a Banach space X with values in a Banach space Y.
A large number of problems in applied mathematics and also in engineering are solved by finding the solutions of certain equations. For example, dynamic systems are mathematically modeled by difference or differential equations, and their solutions usually represent the states of the systems. For the sake of simplicity, assume that a time-invariant system is driven by the equation ẋ = Q(x), for some suitable operator Q, where x is the state. Then the equilibrium states are determined by solving equation (1.1). Similar equations are used in the case of discrete systems. The unknowns of engineering equations can be functions (difference, differential, and integral equations), vectors (systems of linear or nonlinear algebraic equations), or real or complex numbers (single algebraic equations with single unknowns). Except in special cases, the most commonly used solution methods are iterative: starting from one or several initial approximations, a sequence is constructed that converges to a solution of the equation. Iteration methods are also applied for solving optimization problems. In such cases, the iteration sequences converge to an optimal solution of the problem at hand. Since all of these methods have the same recursive structure, they can be introduced and discussed in a general framework.
The most popular method for generating a sequence {x_n} approximating x* is undoubtedly Newton's method:

(1.2) x_{n+1} = x_n − F′(x_n)^{−1} F(x_n), n ≥ 0 (x_0 ∈ D).

Here F′(x) ∈ L(X, Y), the space of bounded linear operators from X into Y, denotes the Fréchet derivative of the operator F [4], [8].
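In one dimension, where the inverse of the derivative is simply division, iteration (1.2) takes the following form. This is a minimal illustrative sketch (the example F(x) = x² − 2 is ours, not from the paper):

```python
def newton(F, dF, x0, tol=1e-12, max_iter=50):
    """Newton iteration (1.2): x_{n+1} = x_n - F'(x_n)^{-1} F(x_n).

    In one dimension the inverse of the derivative is just division.
    """
    x = x0
    for _ in range(max_iter):
        step = F(x) / dF(x)
        x = x - step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton's method did not converge")

# Approximate the positive zero of F(x) = x^2 - 2, i.e. sqrt(2).
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.0)
```

Starting from x_0 = 1, the iterates 1.5, 1.41667, 1.41422, … converge rapidly to √2.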
There is an extensive literature on local as well as semilocal convergence theorems for Newton's method; see, e.g., [1]–[4] and the references therein.
In particular, we are motivated by the elegant work of Ferreira and Svaiter [6], where a semilocal convergence analysis was provided for Kantorovich's theorem [8] which makes clear the relationship of the majorant function and the operator F. However, the main result (see Theorem 2 in [6]) cannot, in its present form, provide information about the local convergence of Newton's method. Here, we rectify this problem. We introduce two flexible majorant functions to provide a local convergence analysis for Newton's method (1.2). The radius of convergence is also given.
Finally, we show that, under the same computational cost, for special choices of the majorant functions involved, our radius of convergence is larger, and the error estimates on the distances involved are finer, than the corresponding ones in [1], [11]–[13].
(1) For any s_0 ∈ int(I), the map s ↦ (g(s_0) − g(s))/(s_0 − s), s ∈ I, s ≠ s_0, is increasing, and its one-sided limits as s → s_0 exist in (−∞, +∞).
(2) If s, t, v ∈ I with s < v and s ≤ t ≤ v, then g(t) − g(s) ≤ (g(v) − g(s)) (t − s)/(v − s).
We now state a portion of a theorem (see Theorem 2 in [6]) due to Ferreira and Svaiter, needed for what follows:
Theorem 2.2. Let F : D ⊆ X → Y be a continuous operator, continuously Fréchet-differentiable on int(D). Take x_0 ∈ int(D) with F′(x_0)^{−1} ∈ L(Y, X). Suppose there exist R > 0 and a continuously differentiable function f : [0, R) → (−∞, +∞) satisfying hypotheses (H1)–(H5), where in particular
(H5) f is convex and strictly increasing, and f(t) = 0 for some t ∈ (0, R).
Then f has a smallest zero t* in (0, R); the sequences generated by Newton's method (1.2) for solving f(t) = 0 and F(x) = 0, with starting points t_0 = 0 and x_0, respectively, are well defined; {t_n} is strictly increasing, is contained in [0, t*), and converges to t*; {x_n} is contained in U(x_0, t*) and converges to a point x* in U(x_0, t*), which is the unique zero of F in U(x_0, t*).
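Property (2) above is the standard chord inequality for convex functions. A quick numerical sanity check with the convex choice g(t) = t² (an illustration of ours, not taken from [6]):

```python
# Sanity check of property (2) for the convex function g(t) = t^2:
# for s <= t <= v with s < v,
#   g(t) - g(s) <= (g(v) - g(s)) * (t - s) / (v - s).
def g(t):
    return t * t

ok = all(
    g(t) - g(s) <= (g(v) - g(s)) * (t - s) / (v - s) + 1e-12
    for s, v in [(0.0, 1.0), (0.3, 2.0), (-1.0, 1.5)]
    for t in [s + k * (v - s) / 10.0 for k in range(11)]
)
```

The small tolerance 1e-12 only guards against floating-point rounding; the inequality itself holds exactly for any convex g.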
Theorem 2.2 provides a semilocal convergence result for Newton's method, and in this form it cannot give us information about the local convergence of Newton's method. Indeed, when x_0 = x*, hypotheses (H2) and (H3) contradict each other. In what follows we rectify this problem.
We state the main local convergence result for Newton's method (1.2):
Theorem 2.3. Let F : D ⊆ X → Y be a continuous operator, continuously Fréchet-differentiable on int(D). Suppose that there exist x* ∈ int(D) with F′(x*)^{−1} ∈ L(Y, X) and F(x*) = 0, R > 0, and continuously differentiable functions f_0, f : [0, R) → (−∞, +∞), such that U(x*, R) ⊂ D and conditions (2.1) and (2.2) hold for all x, y ∈ U(x*, R) with ‖x − x*‖ + ‖y − x*‖ < R; the functions f_0 and f are convex and strictly increasing and satisfy (2.3); and t* is defined by (2.4)–(2.7). Then the sequence {x_n} generated by Newton's method (1.2) is well defined, remains in U(x*, t*) for all n ≥ 0, and converges to x* Q-linearly, so that (2.8) holds; if, moreover, (2.9) holds, then estimate (2.10) holds for all n ≥ 0. Furthermore, x* is the unique zero of F in U(x*, t*).
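The Q-linear convergence asserted by Theorem 2.3 is easy to observe numerically. A hedged one-dimensional illustration of ours, with F(x) = x² − 2 and x* = √2 (near a simple zero the Newton error ratios in fact decay quadratically, so in particular they stay below 1):

```python
import math

# Track the errors ||x_n - x*|| for Newton's method on F(x) = x^2 - 2.
x_star = math.sqrt(2.0)
x = 1.2
errors = []
for _ in range(4):
    errors.append(abs(x - x_star))
    x = x - (x * x - 2.0) / (2.0 * x)

# Q-linear convergence: successive error ratios stay below 1.
ratios = [errors[i + 1] / errors[i] for i in range(len(errors) - 1)]
```

The computed ratios shrink toward 0, reflecting the quadratic decay of the error near x*.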
From now on we assume that the hypotheses of Theorem 2.3 hold, with the exception of (2.9), which will be assumed to hold only when explicitly stated.
We shall show Theorem 2.3 through a series of lemmas:
Lemma 2.5. Let x, y ∈ U(x*, R). Then estimates (2.12) and (2.13) hold, where the function r_{f_0, f} is given by (2.5).
Proof. Using the convexity of f, hypothesis (2.2), and the definition of the operator E, we obtain in turn a chain of estimates which implies estimate (2.13).
As in [6], let us denote by η_{f_0, f} and N_F the maps defined in (2.17) and (2.18), respectively. According to (2.4) and Lemma 2.4, we have: if x ∈ U(x*, t*), then N_F(x) may not belong to U(x*, t*), or may not even belong to the domain of F′. That is, on U(x*, t*) we can only guarantee that the first iteration is well defined. Therefore, we need additional results to guarantee that the Newton iterations can be repeated indefinitely.
Let us define subsets of U(x*, t*) on which Newton's method (2.18) is "well behaved":
By hypothesis, x_0 ∈ U(x*, t*). Assume that x_k ∈ U(x*, t*) for all k ≤ n. We shall show that x_{n+1} ∈ U(x*, t*). Using (2.21) and Lemma 2.5 with y = x* and x = x_n, we obtain estimates which show that x_{n+1} ∈ U(x*, t*) and that lim_{n→∞} x_n = x*. The proof of estimates (2.8) and (2.10) is given as in [6], with the function f_0 replacing f in the denominators of the estimates involved.
Finally, to show uniqueness in U(x*, t*), let y* be a zero of F in U(x*, t*). Define the linear operator M = ∫₀¹ F′(x* + θ(y* − x*)) dθ. Using (2.1) and estimate (2.12), with x* + θ(y* − x*) ∈ U(x*, t*) replacing x, we conclude that M^{−1} exists. It then follows from the identity (2.24) that x* = y*. This completes the proof of Theorem 2.3.

APPLICATIONS
Example 3.1. Assume that there exists L > 0 such that the Lipschitz condition (3.1) holds. It then follows from (2.16) that we can set r = 2/(3L), which is the radius of convergence obtained by Rheinboldt [11] (see also [4]).
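As an illustration of ours (not from the paper), take F(x) = eˣ − 1 on D = (−1, 1), with solution x* = 0; reading (3.1) as the standard condition ‖F′(x*)^{−1}(F′(x) − F′(y))‖ ≤ L‖x − y‖ (an assumption, since the displayed condition is not reproduced here), one may take L = e, and Newton's method started inside U(x*, 2/(3L)) converges to x*:

```python
import math

# Illustration: F(x) = exp(x) - 1 on D = (-1, 1), with zero x* = 0.
# On D, |F'(x) - F'(y)| = |e^x - e^y| <= e * |x - y|, so take L = e.
L = math.e
r = 2.0 / (3.0 * L)            # Rheinboldt's radius 2/(3L)

# Newton's method started inside U(x*, r) converges to x* = 0.
x = 0.9 * r
for _ in range(25):
    x = x - (math.exp(x) - 1.0) / math.exp(x)
```

After a handful of steps the iterate is at machine precision from x* = 0.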
It follows from (3.1) that there exists L_0 > 0 such that the corresponding center-Lipschitz condition holds, with L_0 ≤ L (see (3.6)). It then follows from (2.16) that we can set r_0 = 2/(2 L_0 + L). Note that if strict inequality holds in (3.6), then it also holds in (3.9), i.e., r_0 > r.
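A sketch of the comparison for the same example F(x) = eˣ − 1. The constants L = e and L_0 = e − 1, and the radius formula 2/(2 L_0 + L), are assumptions of ours in the spirit of [1], since the displayed formulas are not reproduced here:

```python
import math

# Constants for F(x) = exp(x) - 1 near x* = 0 on the ball of radius 1
# (illustrative assumptions, not reproduced from the paper):
#   |F'(x) - F'(y)|  <= e * |x - y|        -> L  = e
#   |F'(x) - F'(x*)| <= (e - 1) * |x|      -> L0 = e - 1
L, L0 = math.e, math.e - 1.0

r_old = 2.0 / (3.0 * L)        # Rheinboldt-type radius 2/(3L)
r_new = 2.0 / (2.0 * L0 + L)   # enlarged radius 2/(2*L0 + L)
```

Since L_0 < L strictly here, the enlarged radius is strictly larger: r_new ≈ 0.325 versus r_old ≈ 0.245.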
We have also used (3.11) to provide a convergence analysis for the Secant method [5] (see also [4]).
Let us also define the function f_0 appropriately. By solving (2.16) we obtain, for analytic operators F, Smale's radius of convergence [12] given in (3.15), and, for twice continuously Fréchet-differentiable operators F, Wang's radius [13] given in (3.16). In what follows we shall show that the radii given by (3.15) and (3.16) can be enlarged.
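The radii (3.15) and (3.16) are expressed in terms of Smale's γ, which for a scalar analytic f at a simple zero x* is γ = sup_{k≥2} |f^(k)(x*)/(k! f′(x*))|^{1/(k−1)}. A sketch for the concrete choice f(t) = t² − 2 (example ours, not from [12]):

```python
import math

# Smale's gamma at a simple zero x* of an analytic f:
#   gamma = sup_{k>=2} | f^(k)(x*) / (k! * f'(x*)) | ** (1/(k-1)).
# For f(t) = t^2 - 2 with x* = sqrt(2), only the k = 2 term is nonzero,
# since all derivatives of order >= 3 vanish.
x_star = math.sqrt(2.0)
f_prime = 2.0 * x_star          # f'(x*)
f_second = 2.0                  # f''(x*)
gamma = abs(f_second / (2.0 * f_prime))   # k = 2 term: |f''/(2! f')|
```

Here γ = 1/(2√2) ≈ 0.354, so any radius proportional to 1/γ can be evaluated explicitly.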
These results are also obtained under the same computational cost, since in practice the evaluation of L (or γ) requires the evaluation of L_0 (or γ_0).
Remark 3.3. As noted in [1], [5], [6], [7], [10], [12], the local results obtained here can be used for projection methods such as Arnoldi's, the generalized minimum residual method (GMRES), and the generalized conjugate residual method (GCR), for combined Newton/finite projection methods, and, in connection with the mesh independence principle, to develop the cheapest and most efficient mesh refinement strategies.
Then, for x = 0, we can set T(x) = x + 1 in (3.27).

2. Let b = 1 − a, and define the scalar polynomial P_a by

(3.20) P_a(t) = 3a²t³ + a(6b − a)t² + (3b² − 2ab − 1)t − b².

By the definition of the polynomial P_a, for fixed a we get

(3.21) P_a(0) = −b² ≤ 0 and P_a(1) = 1.

Using (3.21) and the intermediate value theorem, we conclude that there exists t_a ∈ [0, 1) such that P_a(t_a) = 0. Let t_a denote the minimal number in [0, 1) satisfying P_a(t_a) = 0, and define

(3.22) t*_a = (1 − t_a)/γ.

Then t*_1 ≤ t*_a.
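The minimal zero t_a of P_a in [0, 1) guaranteed by the intermediate value theorem can be located numerically. A sketch of ours, assuming the constant term of P_a is −b² (consistent with P_a(0) = −b² and P_a(1) = 1):

```python
def P(a, t):
    """P_a(t) = 3 a^2 t^3 + a (6b - a) t^2 + (3b^2 - 2ab - 1) t - b^2,
    with b = 1 - a (constant term -b^2 assumed, matching P_a(0) = -b^2)."""
    b = 1.0 - a
    return (3.0 * a ** 2 * t ** 3 + a * (6.0 * b - a) * t ** 2
            + (3.0 * b ** 2 - 2.0 * a * b - 1.0) * t - b ** 2)

def minimal_root(a, grid=10000, iters=100):
    """Bracket the first sign change of P(a, .) on [0, 1], then bisect."""
    lo, hi = 0.0, 1.0
    for i in range(1, grid + 1):
        t = i / grid
        if P(a, t) > 0.0:       # first grid point where P turns positive
            hi = t
            break
        lo = t
    for _ in range(iters):       # bisection on the bracket [lo, hi]
        mid = 0.5 * (lo + hi)
        if P(a, mid) <= 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

t_half = minimal_root(0.5)       # minimal zero of P_a for a = 1/2
```

The coarse grid scan is only meant to isolate the first sign change; for the parameter values of interest here, P_a is negative at 0 and positive at 1, so a bracket always exists.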