SOLVING EQUATIONS USING NEWTON’S METHOD UNDER WEAK CONDITIONS ON BANACH SPACES WITH A CONVERGENCE STRUCTURE

Abstract. We provide new semilocal results for Newton's method on Banach spaces with a convergence structure. Using a more precise majorizing sequence, we show that, under weaker convergence conditions than before, we can obtain finer error bounds on the distances involved and more precise information on the location of the solution.


INTRODUCTION
In this study we are concerned with the problem of approximating a locally unique zero x* of the operator equation (1), where G is an operator defined on a Banach space X with a convergence structure (to be made precise later) and A is meant to be an approximation of G(x₀)⁻¹ ∈ L(X, X), the space of bounded linear operators from X into X.
A large number of problems in applied mathematics and engineering are solved by finding the solutions of certain equations. For example, dynamic systems are mathematically modeled by difference or differential equations, and their solutions usually represent the states of the systems. For the sake of simplicity, assume that a time-invariant system is driven by the equation ẋ = τ(x) (for some suitable operator τ), where x is the state. Then the equilibrium states are determined by solving equation (1). Similar equations are used in the case of discrete systems. The unknowns of engineering equations can be functions (difference, differential, and integral equations), vectors (systems of linear or nonlinear algebraic equations), or real or complex numbers (single algebraic equations with single unknowns). Except in special cases, the most commonly used solution methods are iterative: starting from one or several initial approximations, a sequence is constructed that converges to a solution of the equation. Iteration methods are also applied to optimization problems; in such cases, the iteration sequences converge to an optimal solution of the problem at hand. Since all of these methods have the same recursive structure, they can be introduced and discussed in a general framework.
Using more precise majorizing sequences, we show that, under weaker conditions than in [1]-[3], [5], [6], Newton's method (see (10)) yields finer error bounds on the distances |x_n − x*| and more precise information on the location of the solution x*.
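Concretely, in the finite-dimensional setting X = R^m the Newton iteration x_{n+1} = x_n − F′(x_n)⁻¹F(x_n) can be sketched as follows; this is a minimal illustration only, and the test function, Jacobian, and tolerances are illustrative stand-ins, not part of the abstract framework above.

```python
# Minimal sketch of Newton's method on X = R^m with the maximum norm.
# The example problem (x^2 - 2 = 0) is purely illustrative.
import numpy as np

def newton(F, dF, x0, tol=1e-12, max_iter=50):
    """Iterate x <- x - dF(x)^{-1} F(x) until the max norm of the step is < tol."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        # Solve the linear system dF(x) s = F(x) rather than forming the inverse.
        step = np.linalg.solve(dF(x), F(x))
        x = x - step
        if np.max(np.abs(step)) < tol:  # maximum-norm stopping criterion
            break
    return x

# Example: solve x^2 - 2 = 0 starting from x0 = 1.
root = newton(lambda x: np.array([x[0] ** 2 - 2.0]),
              lambda x: np.array([[2.0 * x[0]]]),
              [1.0])
```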

PRELIMINARIES
We will need the following definitions. The set U(a) = {x ∈ X | (x, a) ∈ E} defines a generalized neighborhood of zero.
Let us give some motivational examples for X = R^m with the maximum norm: case (a) corresponds to classical convergence analysis in a Banach space, case (b) allows componentwise analysis and error estimates, and case (c) is used for monotone convergence analysis.
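A minimal sketch of the difference between cases (a) and (b), assuming V = R^m with the componentwise partial order; the vectors below are purely illustrative.

```python
# Case (b) keeps a vector of errors |x_n - x*| = (|x_1 - x*_1|, ..., |x_m - x*_m|)
# in V = R^m, whereas case (a) collapses it to a single maximum-norm bound.
import numpy as np

x_star = np.array([1.0, 2.0, 3.0])    # hypothetical solution
x_n = np.array([1.001, 2.0, 2.9])     # hypothetical Newton iterate

componentwise_error = np.abs(x_n - x_star)   # element of V = R^m (case (b))
uniform_error = np.max(componentwise_error)  # classical max-norm bound (case (a))
```

The componentwise bound records that some coordinates are already far more accurate than the uniform bound suggests.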
The convergence analysis will be based on monotonicity considerations in the space X × V. Let (x_n, e_n) be an increasing sequence in E^N. If e_n → e, we obtain 0 ≤ (x_{n+k} − x_n, e − e_n), and hence, by (C5), {x_n} (n ≥ 0) is a Cauchy sequence. When deriving error estimates, we shall also use sequences e_n = w_0 − w_n with a decreasing sequence {w_n}, where, for m ≥ 0, L(V) denotes the space of m-linear, symmetric, bounded operators on V.

Definition 3. The set of bounds for an operator
if this limit exists.
Then the operator is well defined and continuous.

SEMILOCAL CONVERGENCE ANALYSIS
We can now show the main semilocal convergence result for Newton's method (10).

Theorem 8. Assume there exist a Banach space X with convergence structure (X, V, E) and a point a ∈ C such that the following conditions hold: L₀ is order-convex on [0, a] and satisfies the stated bound for all x ∈ U(a); L is order-convex on [0, a] and satisfies the stated bound. Then the sequence {x_n} (n ≥ 0) generated by Newton's method (10) is well defined and converges to the unique zero x* of F in U(a).

Define sequences {d_n}, {d̄_n}, which are well defined, monotone, and satisfy the estimates below.

Proof. First, we observe that the conditions of the Theorem are satisfied if b replaces a. For n = 1 we have to solve the corresponding equation. By (15), (16) and the order convexity of L we get, for w = b, the required bound. Hence x₁ is well defined and (x₁, b) ∈ E. We also have, by the order convexity of L, the analogous estimate. Assume the sequences x_n, d_n, (x_n, d_n) are well defined and monotone up to index n. We must solve the next equation; using (13), (15) and (16) we get in turn the needed bounds. That is, x_{n+1} is well defined and (32) holds. Moreover, d_{n+1} and d̄_{n+1} are also well defined, and the monotonicity follows. By the definition of d_n, {d̄_n} we get inductively the limit relations, and by (30) x* is a zero of F.
To show uniqueness, consider the modified Newton's method (37). Then the sequence {y_n} converges and (x_n, L*(0)) is monotone in X × V. Assume y* ∈ U(a) is a zero of F. Then we can easily get by induction on n that (38) holds; that is, y_n → y* as n → ∞. However, we have shown y_n → x* as n → ∞. Hence we deduce x* = y*. That completes the proof of the theorem.
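For orientation, the modified Newton method used in the uniqueness argument freezes the derivative at the starting point, y_{n+1} = y_n − F′(y_0)⁻¹F(y_n). A minimal scalar sketch, with an illustrative function and starting point:

```python
# Modified Newton: the derivative is evaluated once at y0 and reused,
# which trades quadratic for linear convergence. F below is illustrative.
def modified_newton(F, dF0, y0, tol=1e-12, max_iter=200):
    """dF0 is the fixed derivative value F'(y0)."""
    y = float(y0)
    for _ in range(max_iter):
        step = F(y) / dF0
        y -= step
        if abs(step) < tol:
            break
    return y

# F(y) = y^2 - 2 with F'(1) = 2: linear convergence to sqrt(2).
y_star = modified_newton(lambda y: y * y - 2.0, 2.0, 1.0)
```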
Remark 9. As in [1], [2], [5], [6], we note that from the proof of Theorem 8 we have the error bound, where for q we may use any solution of L(q) ≤ q. We can obtain better a posteriori error estimates if we use instead the solutions of R_n(q) ≤ q with monotone operators R_n.
Under the hypotheses of Theorem 8, the operator defined below is well defined and monotone. A reasonable choice for q_n is a − d_n. Other ways of choosing a suitable q_n are given in the Lemmas that follow. Therefore, we have the stated estimate. That completes the proof of the Lemma.
Lemma 11. Assume: • the conditions of Theorem 8 hold; • there exists a solution q_n ∈ I_n of R_n(q) ≤ q.
Then the stated conclusion follows. Proof. Using induction we immediately obtain the required inequality; that is, the claimed relation holds, which implies the desired monotonicity. That completes the proof of the Lemma.
The properties of R n imply the existence of R ∞ n (0) which is a reasonable choice for q n in the Lemma above.
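The iterate R_n^∞(0) can be computed by successive substitution: for a monotone map R_n admitting a point q with R_n(q) ≤ q, iterating from 0 yields an increasing sequence converging to the least fixed point. A minimal sketch with a purely illustrative scalar map:

```python
# Successive substitution q <- R(q) starting from 0 for a monotone scalar
# map R: the iterates increase toward the least fixed point R^inf(0).
# The affine map below is illustrative only.
def least_fixed_point(R, q0=0.0, tol=1e-14, max_iter=10000):
    q = q0
    for _ in range(max_iter):
        q_next = R(q)
        if abs(q_next - q) < tol:
            return q_next
        q = q_next
    return q

# Illustrative monotone map R(q) = 0.5 q + 0.25; least fixed point is 0.5.
q_star = least_fixed_point(lambda q: 0.5 * q + 0.25)
```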
Hence we have:

Lemma 12. Assume the conditions of Theorem 8 hold. Then any solution q ∈ I_n of R_n(q) ≤ q implies the stated a posteriori estimate. As suggested in [1], [2], [5], [6], in practice we may want to use a monotone operator satisfying P_n(q) ≤ q ⇒ R_n(q) ≤ q.

Remark 13. (a) If L₀ = L, then our Theorem 8 and Lemmas 10-12 reduce to the corresponding results in [6]. However, if strict inequality holds in (16) or (17), then the error bounds on the distances |x* − x_n| are finer and the information on the location of the solution x* more precise (since d̄_n ≤ d_n and b* ≤ b). Note that these improvements are obtained under the same hypotheses as in [6] (since, in practice, the computation of the operator L requires that of L₀).
(b) One hopes that in general we can find conditions weaker than, say, (18), since the convergence of (20) (and not of (21), which depends on (18)) suffices for the existence of x* (in U(b*)).
As an example we consider the case of a Banach space with a real norm denoted by ‖·‖. To check the conditions of the theorem, assume F′(0) = I and that the Fréchet derivative F′ of the operator F can be estimated by some monotone operator p : [0, a] → R such that ‖F′(x) − F′(y)‖ ≤ p(‖x − y‖)‖x − y‖ for all x, y ∈ U(a).
To show that sequence (20) converges under weaker conditions than (39) in this case, assume there exists a monotone operator p₁ : [0, a] → R such that ‖F′(x) − F′(0)‖ ≤ p₁(‖x‖)‖x‖ for all x ∈ U(a).
Then it can easily be seen that sequence (20) converges, provided that (40) holds, which is weaker than (39). Note also that p₁ ≤ p holds in general and that p/p₁ can be arbitrarily large [3]. Hence, the above justifies the claims made in the Introduction and in the Remark above.
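As a hedged numerical illustration of why the centered condition is weaker, consider the illustrative choice F(x) = eˣ − 1 on [0, 1]: the full Lipschitz constant of F′ there is e, while the centered constant bounding ‖F′(x) − F′(0)‖ is only e − 1.

```python
# For F(x) = exp(x) - 1 on [0, 1] (an illustrative example, not one taken
# from the paper): |F'(x) - F'(y)| <= e |x - y|, while the centered bound
# |F'(x) - F'(0)| = e^x - 1 <= (e - 1) x holds since e^x - 1 is convex and
# lies below its chord (e - 1) x on [0, 1].
import math

ell = math.e        # full Lipschitz constant of F' on [0, 1]
p1 = math.e - 1.0   # centered constant: |F'(x) - F'(0)| <= p1 * x

# Spot-check the centered bound on a grid of points in [0, 1].
ok = all(math.exp(x) - 1.0 <= p1 * x + 1e-12
         for x in [i / 100 for i in range(101)])
```

Here the centered constant is strictly smaller; in other examples the ratio of the two constants can be made arbitrarily large, which is the point of condition (40).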