APPROXIMATING SOLUTIONS OF EQUATIONS USING NEWTON’S METHOD WITH A MODIFIED NEWTON’S METHOD ITERATE AS A STARTING POINT

ABSTRACT
In this study we are concerned with the problem of approximating a locally unique solution of an equation in a Banach space setting using Newton's and the modified Newton's method. We provide weaker convergence conditions for both methods than before [6]–[8]. Then we combine Newton's method with the modified Newton's method to approximate locally unique solutions of operator equations in a Banach space setting. Finer error estimates, a larger convergence domain, and more precise information on the location of the solution are obtained under the same or weaker hypotheses than before [6]–[8]. Numerical examples are also provided.


INTRODUCTION
In this study we are concerned with the problem of approximating a locally unique solution x* of the nonlinear equation

(1) F(x) = 0,

where F is a Fréchet-differentiable operator defined on an open convex subset D of a Banach space X with values in a Banach space Y.

A large number of problems in applied mathematics and also in engineering are solved by finding the solutions of certain equations. For example, dynamic systems are mathematically modeled by difference or differential equations, and their solutions usually represent the states of the systems. For the sake of simplicity, assume that a time-invariant system is driven by the equation ẋ = Q(x) for some suitable operator Q, where x is the state. Then the equilibrium states are determined by solving equation (1). Similar equations are used in the case of discrete systems. The unknowns of engineering equations can be functions (difference, differential, and integral equations), vectors (systems of linear or nonlinear algebraic equations), or real or complex numbers (single algebraic equations with single unknowns). Except in special cases, the most commonly used solution methods are iterative: starting from one or several initial approximations, a sequence is constructed that converges to a solution of the equation. Iteration methods are also applied to optimization problems; in such cases, the iteration sequences converge to an optimal solution of the problem at hand. Since all of these methods have the same recursive structure, they can be introduced and discussed in a general framework.
The most popular methods for approximating x* are undoubtedly Newton's method

(2) x_{n+1} = x_n − F'(x_n)^{−1} F(x_n) (n ≥ 0, x_0 ∈ D),

and the modified Newton's method

(3) y_{n+1} = y_n − F'(y_0)^{−1} F(y_n) (n ≥ 0, y_0 = x_0 ∈ D).

There is an extensive literature on semilocal as well as local convergence results for both methods under various hypotheses. Such results can be found in [4], [6], [7], and the references therein.
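In scalar form the two iterations can be sketched as follows (an illustrative sketch: in the paper's Banach-space setting, division by F'(x) is application of the inverse operator F'(x)^{−1}, and the stopping rule is ours):

```python
def newton(F, dF, x0, tol=1e-12, max_iter=50):
    """Newton's method (2): x_{n+1} = x_n - F'(x_n)^{-1} F(x_n)."""
    x = x0
    for _ in range(max_iter):
        step = F(x) / dF(x)        # F'(x_n)^{-1} F(x_n) in the scalar case
        x -= step
        if abs(step) < tol:
            break
    return x

def modified_newton(F, dF, y0, tol=1e-12, max_iter=500):
    """Modified Newton's method (3): y_{n+1} = y_n - F'(y_0)^{-1} F(y_n)."""
    y, d0 = y0, dF(y0)             # the derivative is computed once, at y_0
    for _ in range(max_iter):
        step = F(y) / d0
        y -= step
        if abs(step) < tol:
            break
    return y
```

Method (3) trades the repeated evaluation and inversion of F' for a linear (geometric) convergence rate, while method (2) converges quadratically near the solution.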
Here we show that, in the case of the modified method (3), condition (7) can be replaced for convergence purposes by (13), and condition (6) by the weaker condition (10). Finer error estimates on the distances involved, a larger convergence domain, and more precise information on the location of the solution than in earlier results [6] are also obtained this way (see Theorem 3 for method (2) and Theorem 5 for method (3)). Using (13) (whenever (7) or (12) do not hold), we can employ method (3) for a finite number of steps, say N, until condition (7) (or (12)) is satisfied for x_0 = y_N. Then the faster method (2) takes over from method (3).
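The switch-over strategy just described can be sketched in a few lines. The concrete test used below (a scaled Kantorovich quantity ℓ·|F'(y)^{−1}F(y)| ≤ 1/2 standing in for condition (7)) and the scalar setting are illustrative assumptions:

```python
def hybrid(F, dF, y0, ell, tol=1e-12, max_pre=100, max_iter=100):
    """Run the modified method (3) until a Kantorovich-type test holds,
    then hand over to Newton's method (2).

    `ell` is an assumed (affine-invariant) Lipschitz constant for F';
    the test ell * |F'(y)^{-1} F(y)| <= 1/2 stands in for condition (7).
    """
    y, d0 = y0, dF(y0)
    for _ in range(max_pre):              # phase 1: method (3)
        if ell * abs(F(y) / dF(y)) <= 0.5:
            break                         # condition satisfied at x0 = y_N
        y -= F(y) / d0                    # derivative frozen at y0
    x = y                                 # switch-over point x0 = y_N
    for _ in range(max_iter):             # phase 2: method (2)
        step = F(x) / dF(x)
        x -= step
        if abs(step) < tol:
            break
    return x
```

For instance, for F(x) = x³ − 0.49 from y_0 = 1 with ell = 2(2 − 0.49), the test fails initially, one modified step is taken, and Newton's method then converges quadratically from the switch-over point.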

SEMILOCAL CONVERGENCE ANALYSIS
The following semilocal convergence result for methods (2) and (3) can be found in [7]:

Theorem 1. Assume there exist x_0 ∈ D and constants ℓ > 0, η > 0 such that the Lipschitz condition (6) and the Newton–Kantorovich hypothesis (7) hold. Then the sequences {y_n}, {x_n} are well defined, remain in U(x_0, s*) for all n ≥ 0, and converge to a unique solution x* of the equation F(x) = 0 in U(x_0, s*). Moreover, the corresponding error estimates hold.

There is a plethora of estimates on the distances involved [6], [7]. However, we decided to list only the estimates related to what we need in this study. In the case of Newton's method (2) we showed in [2], [3] the following improvement of Theorem 1.

Theorem 3 ([2], [3]). Let F : D ⊆ X → Y be a differentiable operator. Assume there exist x_0 ∈ D and constants ℓ_0 > 0, ℓ > 0, η ≥ 0 such that the center-Lipschitz condition (10), the Lipschitz condition (6), and condition (12) hold. Then the sequence {x_n} (n ≥ 0) generated by Newton's method (2) is well defined, remains in U(x_0, t*) for all n ≥ 0, and converges to a unique solution x* of the equation F(x) = 0 in U(x_0, t*).
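For orientation, the classical affine-invariant Kantorovich hypotheses take the standard form below; conditions (5)–(9) of [6], [7] are presumably of this shape (our notation, a reference sketch rather than the paper's exact displays):

```latex
% Classical affine-invariant Kantorovich setting (a reference sketch):
\|F'(x_0)^{-1}F(x_0)\| \le \eta, \qquad
\|F'(x_0)^{-1}\bigl(F'(x)-F'(y)\bigr)\| \le \ell\,\|x-y\|, \quad x, y \in D,
\\[4pt]
h := \ell\,\eta \le \tfrac{1}{2}, \qquad
s^{*} = \frac{1-\sqrt{1-2h}}{\ell}.
```

In this notation, condition (7) is the requirement h ≤ 1/2, and s* is the radius of the ball containing the iterates.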
Moreover, the following estimates hold for all n ≥ 0:

Remark 4. Note also that (15) and (16) hold as strict inequalities if ℓ_0 < ℓ [2]–[4]. Moreover, condition (7) implies condition (12), but not vice versa unless ℓ_0 = ℓ. That is, at the same computational cost we managed to weaken (7), since in practice the computation of ℓ also requires the computation of ℓ_0. Furthermore, in Example 9 we show that (12) holds but condition (7) is violated.
Concerning the semilocal convergence of the modified Newton's method, we show that (13) replaces condition (7).

Theorem 5. Assume there exist x_0 ∈ D and constants ℓ_0 > 0, η > 0 such that the center-Lipschitz condition (10) and condition (19) hold. Then the sequence {y_n} (n ≥ 0) generated by the modified Newton's method (3) is well defined, remains in U(x_0, s_0*) for all n ≥ 0, and converges to a unique solution x* of the equation F(x) = 0 in U(x_0, s_0*). Moreover, the following estimates hold for all n ≥ 0:

Proof. We shall show that the assumptions of the contraction mapping principle are satisfied for the operator

(20) P(x) = x − F'(x_0)^{−1} F(x).

Then we can obtain an identity for P(x) − P(y) which, together with (10), implies the contraction estimate (21). Consequently, P is a contraction operator on the ball U(x_0, s_0*). To complete the proof, it remains to show that P maps U(x_0, s_0*) into itself; this follows in turn from (20) by the choice of s_0*. That completes the proof of Theorem 5.
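The identity and estimate invoked in the proof can be written out as follows, assuming (10) is the center-Lipschitz condition ‖F'(x_0)^{−1}(F'(x) − F'(x_0))‖ ≤ ℓ_0‖x − x_0‖ (a sketch of the standard argument; the paper's exact display (21) may differ in constants):

```latex
P(x) - P(y)
  = F'(x_0)^{-1}\!\int_0^1 \bigl[F'(x_0) - F'\bigl(y + t(x-y)\bigr)\bigr](x-y)\,dt,
\\[4pt]
\|P(x) - P(y)\|
  \le \ell_0 \int_0^1 \|y + t(x-y) - x_0\|\,dt\,\|x-y\|
  \le \ell_0\, s_0^{*}\,\|x-y\| = q_0\,\|x-y\|,
\qquad x, y \in \overline{U}(x_0, s_0^{*}),
```

so that q_0 = ℓ_0 s_0^* < 1 by the choice of s_0^*.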
Remark 6. Note that by (21) the operator P satisfies a Lipschitz condition with constant q_0 in the ball U(x_0, s_0*). The modified Newton's method thus converges at the rate of a geometric progression with quotient q_0. The above analysis of method (3) relates to the simplest case. More subtle arguments (see, e.g., Kantorovich and Akilov [6]) show that Theorem 5 remains valid if the sign < in (19) is replaced by ≤. Therefore, from now on we can replace (19) by (13) in Theorem 5.

Define the integer N (the number of modified Newton steps) by the corresponding formulas for L_0 ≠ 0, and for L_0 = 0, where [r] denotes the integer part of the real number r.

Proposition 8. Set x_0 = y_N, and assume the remaining hypotheses stated above. Then the following hold: Newton's method (2), starting at x_0, converges to a unique solution x** of the equation F(x) = 0, and the corresponding error estimates hold.

Proof. Using Theorem 5 and the estimates above, it follows by Theorems 1 and 3, with L_0, L, η_N, r_N, r̄_N replacing ℓ_0, ℓ, η, s*, t* respectively, that there exists a unique solution x** of the equation F(x) = 0.

In Example 9 below, choose y_0 = 1. Using (5), (6), (10) and (28), we obtain the corresponding constants. The Newton–Kantorovich hypothesis (7) then fails for all a ∈ (0, 1/2). That is, according to Theorem 1 there is no guarantee that either method (2) or (3), starting at x_0 = y_0 = 1, converges to x*. However, according to Theorem 3, condition (12) becomes (31). Using condition (13) we can do even better, since (33) improves the choice for a given by (32). However, only linear, and not quadratic, convergence is then guaranteed.
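The flavour of Example 9 can be checked numerically. The scaled constants used below, ℓ = 2(2 − a) and η = (1 − a)/3 on D = [a, 2 − a], are the standard values for this example and are an assumption here:

```python
# Example 9 in numbers: F(x) = x^3 - a on D = [a, 2 - a], starting point
# y0 = 1.  For every a in (0, 1/2) the Kantorovich quantity h = l * eta
# exceeds 1/2, yet the modified Newton method (3) still converges.
a = 0.49
F = lambda x: x ** 3 - a
dF = lambda x: 3 * x ** 2

eta = abs(F(1.0) / dF(1.0))   # = (1 - a)/3
l = 2 * (2 - a)               # assumed scaled Lipschitz constant of F' on D
h = l * eta                   # = 2(2 - a)(1 - a)/3 > 1/2 on (0, 1/2)

# Modified Newton: the derivative is frozen at y0 = 1.
y = 1.0
for _ in range(200):
    y -= F(y) / dF(1.0)
```

For a = 0.49 one finds h ≈ 0.513 > 1/2, so Theorem 1 gives no guarantee; the iteration above nevertheless reaches the root a^{1/3}.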
Our motivation for introducing condition (10) instead of (6) for the convergence of the modified Newton's method (3) can also be seen in the following examples.
Example 10. Define the scalar function F so that

F'(x) = x^{1/i} + c_1,

where c_1, c_2 are real parameters and i > 2 is an integer. Then F' is not Lipschitz on D. However, the center-Lipschitz condition (10) holds, as a direct computation shows.

Example 11. We consider the integral equation (37). Here, f is a given continuous function satisfying the positivity condition stated there, λ is a real parameter, and the kernel G is continuous and positive in [a, b] × [a, b]. For example, when G(s, t) is the Green kernel, the corresponding integral equation is equivalent to a boundary value problem. These types of problems have been considered in [2]–[7].
Equations of the form (37) generalize equations of the type studied in [4], [6], [7]. Instead of (37) we can try to solve the equation F(u) = 0, where the operator F and its domain Ω are defined accordingly. The norm we consider is the max-norm.
The derivative F' is given by the corresponding Fréchet-derivative formula. First of all, we notice that F' does not satisfy a Lipschitz-type condition in Ω. Let us consider, for instance, [a, b] = [0, 1], G(s, t) = 1 and y(t) = 0. Then F'(y)v(s) = v(s). If F' were Lipschitz, then inequality (39) would hold for all x ∈ Ω and for some constant L_2. But this is not true. Consider, for example, the functions x_j defined there. If these are substituted into (39), the resulting inequality fails as j → ∞. Therefore, condition (6) fails in this case. However, condition (10) holds. To show this, let x_0(t) = f(t), α = min_s ∫_a^b G(s, t) dt, and ℓ_0 = ‖F'(x_0)^{−1}K‖ for the corresponding linear operator K. Finally, note that condition (13) is satisfied for sufficiently small λ.
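The contrast between the Lipschitz condition (6) and the center-Lipschitz condition (10) can be seen numerically already in the scalar setting of Example 10. The values i = 3, x_0 = 0.5, c_1 = 0 below are illustrative choices, not taken from the paper:

```python
# F'(x) = x**(1/i) + c1 on (0, 1] is not Lipschitz: difference quotients
# of F' blow up near 0.  But F' is center-Lipschitz about a point x0 > 0.
i, c1, x0 = 3, 0.0, 0.5
dF = lambda x: x ** (1 / i) + c1

# Lipschitz quotients |F'(2h) - F'(h)| / h along pairs approaching 0:
q_full = [(dF(2 * h) - dF(h)) / h for h in (1e-2, 1e-4, 1e-6)]  # unbounded

# Center quotients |F'(x) - F'(x0)| / |x - x0| on a grid of (0, 1]:
q_center = max(abs(dF(x) - dF(x0)) / abs(x - x0)
               for x in (k / 1000 for k in range(1, 1001)) if x != x0)
```

The entries of q_full grow without bound (roughly like h^{−2/3}), while q_center stays below the secant bound x_0^{(1−i)/i} ≈ 1.587, so a finite constant ℓ_0 exists for (10).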

LOCAL CONVERGENCE ANALYSIS
In order to cover the local convergence of methods (2) and (3), we state the following theorem [7]:

Theorem 12. Let F : D ⊆ X → Y be a differentiable operator. Assume there exist x* ∈ D and a constant K > 0 such that the corresponding local Lipschitz conditions hold, where

(43) r_RN = 2/(3K).

Then (a) the sequence {x_n} generated by Newton's method (2) is well defined, remains in U(x*, r_RN) for all n ≥ 0, and converges to x* provided that x_0 ∈ U(x*, r_RN), with the error estimate (44). If

(46) r_RM = 2/(5K),

then (b) the sequence {y_n} generated by the modified Newton's method (3) is well defined, remains in U(x*, r_RM) for all n ≥ 0, and converges to x* provided that x_0 ∈ U(x*, r_RM), with the error estimate (47).

If the radii satisfy r_AM ≤ r_AN, then according to (a) of Theorem 12 we can set the convergence radius accordingly. In the case K_0 = K, again according to (a) of Theorem 12, we can set the radius as indicated.
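The radii r_RN = 2/(3K) and r_RM = 2/(5K) can be exercised on a toy scalar example. The function F(x) = x + x²/2 with x* = 0 and K = 1 is our illustrative choice, not the paper's:

```python
# Local convergence radii from Theorem 12, tried on F(x) = x + x^2/2.
# Here x* = 0, F'(x) = 1 + x, F'(x*) = 1, and
# |F'(x*)^{-1}(F'(x) - F'(y))| = |x - y|, so K = 1.
F = lambda x: x + x * x / 2
dF = lambda x: 1 + x
K = 1.0
r_RN, r_RM = 2 / (3 * K), 2 / (5 * K)

# Newton's method (2) from a start inside U(x*, r_RN):
x = 0.99 * r_RN
for _ in range(60):
    x -= F(x) / dF(x)

# Modified Newton's method (3) from a start inside U(x*, r_RM):
y = 0.99 * r_RM
d0 = dF(y)
for _ in range(400):
    y -= F(y) / d0
```

Both iterations converge to x* = 0 from anywhere inside their respective balls, with the modified method taking noticeably more steps, in line with its linear rate.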

Finally, we observe that r_RN < r_AM can hold, since K/K_0 can be arbitrarily large [2]. The ideas presented here can be extended to other Newton-type iterative methods [1], [3], [4], [5] along the same lines.

CONCLUSION
The Newton–Kantorovich hypothesis (7), famous for its simplicity and clarity, is the crucial sufficient semilocal convergence condition for both the quadratically convergent Newton's method (2) and the linearly convergent modified Newton's method (3) [6]. There exist simple numerical examples showing that both methods converge even when condition (7) is violated [4], [6]. In fact, it is common practice, even when condition (7) is violated for a certain initial guess, to still use Newton's method (2) for a few iterates until condition (7) is satisfied. However, this approach is a shot in the dark, and it does not work in general [4], [7]. Here we have introduced an approach that works when condition (7) is violated. First, we showed that condition (13) is a weaker sufficient convergence hypothesis for the modified Newton's method (3) than (7). That is, we extended the convergence region of method (3). Then, if (7) is violated but (13) holds, we start with the slower method (3) until, after a finite number of steps, we reach an iterate for which condition (7) also holds. We then continue with the faster method (2) from this iterate.
Moreover, if inclusion (27) holds, then by the uniqueness of the solution x* in U(y_0, s_0*) we deduce x* = x**. That completes the proof of Proposition 8.

Let us provide an example.

Example 9. Let X = Y = ℝ, D = [a, 2 − a], a ∈ (0, 1/2), and define the scalar function F on D by

(28) F(x) = x³ − a.