NEWTON’S METHOD AND REGULARLY SMOOTH OPERATORS

ABSTRACT. A semilocal convergence analysis for Newton's method in a Banach space setting is provided in this study. Using a combination of regularly smooth and center regularly smooth conditions on the operator involved, we obtain more precise majorizing sequences than in [7]. It then follows that, under the same computational cost and the same or weaker hypotheses than in [7], the following benefits are obtained: a larger convergence domain; finer estimates on the distances involved; and at least as precise information on the location of the solution of the corresponding equation. Numerical examples are given to further validate the results obtained in this study.


INTRODUCTION
In this study we are concerned with the problem of approximating a locally unique solution x* of the equation

(1) F(x) = 0,

where F is a Fréchet-differentiable operator defined on an open convex subset D of a Banach space X with values in a Banach space Y.

A large number of problems in applied mathematics and in engineering are solved by finding the solutions of certain equations. For example, dynamic systems are mathematically modeled by difference or differential equations, and their solutions usually represent the states of the systems. For the sake of simplicity, assume that a time-invariant system is driven by the equation ẋ = Q(x), for some suitable operator Q, where x is the state. Then the equilibrium states are determined by solving equation (1). Similar equations are used in the case of discrete systems. The unknowns of engineering equations can be functions (difference, differential, and integral equations), vectors (systems of linear or nonlinear algebraic equations), or real or complex numbers (single algebraic equations with single unknowns). Except in special cases, the most commonly used solution methods are iterative: starting from one or several initial approximations, a sequence is constructed that converges to a solution of the equation. Iteration methods are also applied for solving optimization problems; in such cases, the iteration sequences converge to an optimal solution of the problem at hand. Since all of these methods have the same recursive structure, they can be introduced and discussed in a general framework.
Newton's method

(2) x_{n+1} = x_n − F′(x_n)^{−1} F(x_n) (n ≥ 0), x_0 ∈ D,

is undoubtedly the most popular iterative procedure for generating a sequence approximating x*. There is an extensive literature on local as well as semilocal convergence results for Newton's method under various assumptions [1]–[5], [6]–[10], [12].
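For intuition, Newton's method (2) in its simplest scalar form can be sketched as follows (the test equation x^2 − 2 = 0 and the function names are our illustrative choices, not taken from this study):

```python
# Newton's method (2): x_{n+1} = x_n - F'(x_n)^{-1} F(x_n).
# Scalar sketch; F and dF below are illustrative choices, not from the paper.
def newton(F, dF, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = F(x) / dF(x)          # F'(x)^{-1} F(x) in the scalar case
        x -= step
        if abs(step) < tol:          # stop once the correction is negligible
            return x
    raise RuntimeError("Newton's method did not converge")

# Approximates the locally unique solution x* = sqrt(2) of x^2 - 2 = 0.
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.0)
```

Starting from x_0 = 1, the iteration converges quadratically to sqrt(2) in a handful of steps.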
The hypothesis of w-smoothness

(3) ‖F′(x) − F′(y)‖ ≤ w(‖x − y‖) for all x, y ∈ D,

has been used to provide a semilocal convergence analysis for Newton's method, where w : [0, ∞) → [0, ∞) is a continuous non-decreasing function which vanishes at zero and is positive elsewhere [1], [4], [5], [7], [12]. In the case w(r) = cr, condition (3) reduces to the common Lipschitz hypothesis, whereas for w(r) = c r^p, p ∈ (0, 1), we obtain the Hölder assumption. Recently, in [3]–[5], we introduced the center w-smoothness condition

(4) ‖F′(x) − F′(x_0)‖ ≤ w_0(‖x − x_0‖) for all x ∈ D,

where w_0 is a function with the same properties as w.
Note that condition (3) implies (4). Using the weaker condition (4) (which is what is really needed for finding bounds on ‖F′(x_n)^{−1} F′(x_0)‖) instead of condition (3) leads to more precise majorizing sequences, which in turn are used to provide, under the same hypotheses, a finer semilocal convergence analysis with the following advantages over the earlier mentioned works: a larger convergence domain; finer error bounds on the distances involved; and at least as precise information on the location of the solution x*. Note that the above advantages are obtained under the same computational cost, since in practice finding the function w requires finding w_0 as well. In this study we show that the above advantages hold true if the operator F is w-regularly smooth on D [7] (to be made precise in Definition 1).
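To see why (4) can be strictly sharper than (3), consider a sketch with an example operator of our own choosing (not from this study): F(x) = e^x − 1 on D = [−1, 1] with x_0 = 0, so that F′(x) = e^x. The best Lipschitz constant in (3) with w(r) = cr is c = e, while the best center constant in (4) with w_0(r) = c_0 r is c_0 = e − 1 < c.

```python
import math

# Sketch (example operator ours, not from the paper): F(x) = exp(x) - 1 on
# D = [-1, 1], x0 = 0, so F'(x) = exp(x).  Estimate numerically
#   c  in (3):  |F'(x) - F'(y)|  <= c  |x - y|   for all x, y in D,
#   c0 in (4):  |F'(x) - F'(x0)| <= c0 |x - x0|  for all x in D.
def estimate_constants(dF, x0, lo, hi, n=201):
    pts = [lo + (hi - lo) * i / (n - 1) for i in range(n)]
    c = max(abs(dF(x) - dF(y)) / abs(x - y)
            for x in pts for y in pts if x != y)
    c0 = max(abs(dF(x) - dF(x0)) / abs(x - x0) for x in pts if x != x0)
    return c, c0

c, c0 = estimate_constants(math.exp, 0.0, -1.0, 1.0)
# Here c approaches e while c0 = e - 1, so the center condition is tighter.
```

The sampled estimates confirm c_0 < c, which is exactly the gap the finer analysis exploits.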
Numerical examples are also provided to further validate the results obtained in this study.
We need the following definition.

Definition 1. Given w ∈ N, we say that F is w-regularly smooth on D if there exists an h ∈ [0, h(F)] such that condition (5) holds for all x, y ∈ D. The operator F is regularly smooth on D if it is w-regularly smooth there for some w ∈ N.
Throughout this study, w^{−1} denotes the function whose closed epigraph cl{(s, t) : s ≥ 0 and t ≥ w^{−1}(s)} is symmetric to the closure of the subgraph of w with respect to the axis t = s. Due to the convexity of w^{−1}, each w-regularly smooth operator is also w-smooth, but not necessarily vice versa [7]. Several properties of operators F that are w-regularly smooth can be found in [7].
It follows from condition (5) that, for x = x_0 fixed, there exists a function w_0 with the same properties as w such that condition (6) holds for all y ∈ D. Moreover, w_0(s) ≤ w(s) holds in general, and the ratio w(s)/w_0(s) can be arbitrarily large [3]–[5]. It is convenient for us to introduce suitable notation. The superscript + denotes the non-negative part of a real number: a^+ := max{a, 0}. We denote, for all p ≥ 0 and r ≥ 0:
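The claim that w(s)/w_0(s) can be arbitrarily large is easy to illustrate with a sketch (our own example, not from this study): take F′(x) = e^x on D = [−R, R] with x_0 = 0. The sharpest Lipschitz constant is c(R) = e^R, while the sharpest center constant is c_0(R) = (e^R − 1)/R, so the ratio c/c_0 behaves like R and grows without bound.

```python
import math

# Sketch (example ours, not from the paper): F'(x) = exp(x) on D = [-R, R], x0 = 0.
# Sharpest constants: c(R) = e^R in (3) and c0(R) = (e^R - 1)/R in (4);
# the ratio R*e^R/(e^R - 1) behaves like R, so w/w0 can be arbitrarily large.
def constant_ratio(R):
    c = math.exp(R)
    c0 = (math.exp(R) - 1.0) / R
    return c / c0

ratios = [constant_ratio(R) for R in (1.0, 5.0, 10.0, 20.0)]
# The ratios grow roughly like R itself.
```

Enlarging the domain therefore makes the center condition (6) progressively cheaper than (5).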
In order to apply Newton's method (2) to equation (1), note that (1) is equivalent to the equation F_0(x) = 0, and the Newton iterations for F and F_0 starting at x_0 are identical. Let h be a lower bound for h(F_0), and let w ∈ N, w_0 ∈ N satisfy (5) and (6), respectively, with F_0 instead of F and x = x_0. Then F_0 is w-regularly smooth and center-w_0-smooth on D.
Condition (15) can be replaced by

(16) t_n < w_0^{−1}(1) for all n ≥ 0.

We can now state the main semilocal convergence theorem for Newton's method (2) for operators F_0 that are w-regularly smooth on D.
Theorem 2. Let the operator F_0 be w-regularly smooth and center-w_0-regularly smooth on D. Assume that hypotheses (15) and (17) hold. Then the sequence {x_n} generated by Newton's method (2) (with F_0 replacing F) is well defined, remains in U(x_0, t^*) for all n ≥ 0, and converges to a solution x^* of the equation F_0(x) = 0. Moreover, the following estimates hold true for all n ≥ 0, where g_{h,2}^{−1} stands for the inverse of the restriction of the function g_h on the interval indicated there.

Proof. The proof is similar to the corresponding one given in Theorem 4.3 in [7, p. 83]. Simply use (6) instead of (5) (used in [7]) in the derivation of the estimate (21). To avoid duplication, we only sketch the above-mentioned differences in the proofs.

It follows by Lemma 2.2 in [7] that:
Hence, we obtain by (21), (29), and (30) that (31) holds, which completes the induction for (24). It now follows that (32) holds. It also follows from (32) that the sequence {x_n} is Cauchy (since {t_n} is a convergent sequence) in the Banach space X, and as such it converges to some x^* ∈ U(x_0, t^*). Estimate (20) follows from (32) (i.e., (19)) by using standard majorization techniques [4], [9]. Moreover, estimate (22) is simply (30) in the limit. The uniqueness part of the proof, being identical to the corresponding one in [7], is omitted. That completes the proof of the Theorem.

Next, consider the quantity given by (10) replacing k (given by (11)) in (13). Moreover, replace condition (17) by the weaker condition (16) (with t_n^2 replacing t_n in (16)). It then follows from the proof of Theorem 2 that, with the above changes, the conclusions of this theorem hold true, with the exception of the uniqueness part, which holds true on U(x_0, t_2^*), t_2^* = lim_{n→∞} t_n^2. That is, we arrive at:

Theorem 5. Let the operator F_0 be w-regularly smooth and center-w_0-regularly smooth on D. Assume that x_0 ∈ D satisfies the corresponding conditions. Then the sequence {x_n} generated by Newton's method (2) is well defined, remains in U(x_0, t_2^*) for all n ≥ 0, and converges to a solution x^* of the equation F_0(x) = 0. Moreover, the corresponding estimates hold true for all n ≥ 0. Furthermore, the solution x^* is unique in U(x_0, w_0^{−1}(1)).

Proof. In view of the proof of Theorem 2, we only need to show the uniqueness part. Let y^* be a solution of the equation F_0(x) = 0 in U(x_0, w_0^{−1}(1)). Using (6) for x = x_0, y = y^* + θ(x^* − y^*), θ ∈ [0, 1], and F replaced by F_0, as in (27), we conclude that x^* = y^*. That completes the proof of the theorem.
In the next result we compare the majorizing sequences {t_n}, {t_n^1}, and {t_n^2}. The following estimates hold true for all n ≥ 0:

Proof. The proof follows immediately by induction on n ≥ 0, using the definitions of the "t" sequences and (8).
That completes the proof of the Proposition. Note also that sufficient convergence conditions other than the ones given here for the satisfaction of conditions (16), (17), (33) (or (15)), which are weaker in general, have been given by us in [4], [5] for operators F that are w-smooth. Clearly, those conditions can replace (16), (17), (33) in the above results, provided that the operator F is w-regularly smooth.
As an application, we compare the "t" iterations in the interesting case when

(41) w_0(s) = c_0 s and w(s) = c s.
Using (41) and the definitions of the "t" iterations, we obtain the explicit sequences (42) and (43). Condition (17) is then satisfied provided that the famous Newton–Kantorovich hypothesis cη ≤ 1/2, where η = ‖F′(x_0)^{−1} F(x_0)‖, holds true [3], [5], [9]. It then also follows that the sequences {t_n}, {t_n^1} given by (42) and (43) converge to t^* and t_1^*, respectively, so that the other conclusions of Theorem 2 also hold true.
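The practical effect of using c_0 < c can be sketched numerically. The recurrences below follow the standard Kantorovich-type pattern (the classical sequence uses c throughout, while the finer one uses the center constant c_0 in the denominator); they are our reading of (42)–(43), so the exact definitions in the text take precedence, and the numerical values of c, c_0, η are illustrative choices satisfying cη ≤ 1/2.

```python
# Sketch of Kantorovich-type majorizing sequences under (41): w0(s) = c0*s, w(s) = c*s.
#   classical: t_{n+1} = t_n + c*(t_n - t_{n-1})**2 / (2*(1 - c  * t_n))
#   finer:     t_{n+1} = t_n + c*(t_n - t_{n-1})**2 / (2*(1 - c0 * t_n))
# Our reading of (42)-(43); the exact definitions in the text take precedence.
def majorizing_limit(c, c_center, eta, steps=25):
    t_prev, t = 0.0, eta
    for _ in range(steps):
        t_prev, t = t, t + c * (t - t_prev) ** 2 / (2.0 * (1.0 - c_center * t))
    return t  # approximates the limit t*

c, c0, eta = 1.0, 0.5, 0.4                  # illustrative values with c*eta <= 1/2
t_star = majorizing_limit(c, c, eta)        # classical sequence (constant c only)
t_star_fine = majorizing_limit(c, c0, eta)  # finer sequence (center constant c0)
# With c0 < c, the finer limit is smaller: tighter error bounds and a larger
# guaranteed convergence domain, as claimed in the text.
```

For these values the classical limit agrees with the closed-form Kantorovich value (1 − sqrt(1 − 2cη))/c, while the finer sequence settles strictly below it.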