ON NEWTON’S METHOD FOR SUBANALYTIC EQUATIONS

ABSTRACT. We present local and semilocal convergence results for Newton's method for approximating solutions of subanalytic equations. The local convergence results are given under weaker conditions than in earlier studies such as [9], [10], [14], [15], [24], [25], [26], resulting in a larger convergence ball and a smaller ratio of convergence. In the semilocal convergence case, contravariant conditions not used before are employed to show the convergence of Newton's method. Numerical examples illustrating the advantages of our approach are also presented in this study.


1. INTRODUCTION
In this study we are concerned with the problem of approximating a solution x* of the equation

(1) F(x) = 0,

where F is a continuous mapping from a subset D of R^n into R^n. Many problems in computational sciences and other disciplines can be brought into the form (1) using mathematical modeling [3], [7], [8], [9], [14], [16], [17], [22], [24]-[28]. In general, the solutions of equation (1) cannot be found in closed form; therefore, iterative methods are used to obtain approximate solutions of (1). In numerical functional analysis, finding a solution x* of equation (1) is essentially connected to variants of Newton's method. Newton's method is defined by

(2) x_{k+1} = x_k − F′(x_k)^{−1} F(x_k), k = 0, 1, 2, . . . ,

where x_0 is an initial point and F is a continuously Fréchet differentiable function on D, i.e., F is a smooth function.
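As a minimal numerical sketch of the smooth iteration x_{k+1} = x_k − F′(x_k)^{−1}F(x_k) (our own illustration, not from the paper; the test system x² + y² = 2, x = y with solution x* = (1, 1) is an assumption chosen for demonstration):

```python
import numpy as np

def newton(F, J, x0, tol=1e-10, max_iter=50):
    """Newton's method: x_{k+1} = x_k - F'(x_k)^{-1} F(x_k)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = F(x)
        if np.linalg.norm(fx) < tol:
            break
        # Solve the linear system F'(x) d = -F(x) instead of inverting F'(x).
        d = np.linalg.solve(J(x), -fx)
        x = x + d
    return x

# Hypothetical test system: F(x, y) = (x^2 + y^2 - 2, x - y), solution x* = (1, 1).
F = lambda v: np.array([v[0]**2 + v[1]**2 - 2.0, v[0] - v[1]])
J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [1.0, -1.0]])
root = newton(F, J, [2.0, 0.5])
```

Note that the Jacobian is never inverted explicitly; each step solves one linear system, which is the standard implementation choice.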
The study of the convergence of iterative procedures is usually based on two types of analysis: semilocal and local. The semilocal convergence analysis gives, based on information around an initial point, conditions ensuring the convergence of the iterative procedure, while the local analysis finds, based on information around a solution, estimates of the radii of convergence balls. There exist many studies dealing with the local and semilocal convergence analysis of Newton's method (2) under various Lipschitz-type conditions on F. We refer the reader to the studies cited in this paper and the references therein for results of this type.
However, in many interesting applications F is not a smooth function [3], [7], [8], [23], [24], [26], [28]. In particular, we are interested in the case when F is a semismooth function. Then, we define Newton's method by

(3) x_{k+1} = x_k − Λ(x_k)^{−1} F(x_k), k = 0, 1, 2, . . . ,

where x_0 ∈ R^n is an initial point and Λ(x_k) ∈ ∂F(x_k) is an element of the generalized Jacobian of F as defined by Clarke [14]. We present local as well as semilocal convergence results under weaker conditions than in earlier studies such as [9], [10], [14], [15], [24], [25], [26]. In the case of local convergence, our convergence ball is larger and the ratio of convergence smaller than before [9], [10], [14], [15], [24]-[26]. These advantages are also obtained under weaker hypotheses. Such improved convergence results are important in computational mathematics, since this way we have a wider choice of initial guesses and we compute fewer iterates in order to obtain a desired error tolerance. The rest of the paper is organized as follows: in order to make the paper as self-contained as possible, we provide the definitions of semismooth, semianalytic and subanalytic functions as well as earlier results in Section 2. The semilocal and local convergence analysis of Newton's method is given in Section 3. Finally, the numerical examples illustrating the theoretical results are given in the concluding Section 4.
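The semismooth iteration x_{k+1} = x_k − Λ(x_k)^{−1}F(x_k), with Λ(x_k) a selection from Clarke's generalized Jacobian, can be sketched on a simple piecewise smooth scalar equation (a hypothetical example of our own, not taken from the paper): F(x) = |x| + 2x − 1, whose generalized Jacobian at 0 is the interval [1, 3].

```python
def F(x):
    # Piecewise smooth, nondifferentiable at 0; unique root x* = 1/3.
    return abs(x) + 2.0 * x - 1.0

def clarke_element(x):
    """One element Lambda(x) of Clarke's generalized Jacobian of F.
    For x > 0 it is 3, for x < 0 it is 1; at x = 0 the generalized
    Jacobian is the interval [1, 3] and any selection may be used."""
    return 2.0 + (1.0 if x >= 0 else -1.0)

def semismooth_newton(x0, tol=1e-12, max_iter=50):
    x = float(x0)
    for _ in range(max_iter):
        fx = F(x)
        if abs(fx) < tol:
            break
        x = x - fx / clarke_element(x)   # x_{k+1} = x_k - Lambda(x_k)^{-1} F(x_k)
    return x

root = semismooth_newton(-5.0)
```

Starting from x_0 = −5, the first step uses the slope 1 of the left branch and the subsequent steps the slope 3 of the right branch, reaching the root x* = 1/3 in a few iterations.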

2. SEMISMOOTH, SEMIANALYTIC AND SUBANALYTIC FUNCTIONS
In order to make the paper as self-contained as possible, we state some standard definitions and results. In [24], Mifflin introduced the concept of semismoothness for functionals; later, in [26], L. Qi and J. Sun extended this concept to functions of several variables. In fact, they showed that semismoothness is equivalent to the uniform convergence of directional derivatives in all directions.
Definition 1. (see [14, p. 70]) Let F : R^n → R^n be a locally Lipschitz continuous function. The limiting Jacobian of F at x ∈ R^n is defined as ∂_B F(x) = { lim_{k→∞} F′(x_k) : x_k → x, x_k ∈ D_F }, where D_F denotes the set of points at which F is differentiable; the generalized Jacobian ∂F(x) is the convex hull of ∂_B F(x).

Definition 2. [26] We say that F is semismooth at x ∈ R^n if F is locally Lipschitzian at x and the limit lim_{Λ ∈ ∂F(x+th′), h′→h, t↓0} Λh′ exists for every h ∈ R^n.

Note that convex functions and smooth functions are semismooth. Further, products and sums of semismooth functions are semismooth (see [10]). Moreover, semismoothness of F at x implies that F(x + h) − F(x) − Λh = o(‖h‖) for any Λ ∈ ∂F(x + h).

Definition 3. [15] A subset X of R^n is semianalytic if for each a ∈ R^n there is a neighbourhood U of a and real analytic functions f_{i,j} on U such that X ∩ U = ⋃_i ⋂_j { x ∈ U : f_{i,j}(x) σ_{i,j} 0 }, where each σ_{i,j} is one of the relations <, =, >.

Remark 4. X is said to be semialgebraic if U = R^n and the f_{i,j} are polynomials.
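Semismoothness of f at x* yields the estimate f(x) − f(x*) − Λ(x)(x − x*) = o(‖x − x*‖). A small numerical check (a hypothetical example of our own) with f(x) = |x| + x², semismooth at x* = 0 as a sum of semismooth functions, shows the residual decaying like |x|², i.e. o(|x|):

```python
def f(x):
    return abs(x) + x * x              # sum of semismooth functions, semismooth at x* = 0

def clarke(x):
    # An element of the generalized Jacobian of f at x != 0: sign(x) + 2x.
    return (1.0 if x > 0 else -1.0) + 2.0 * x

# Semismoothness: f(x) - f(x*) - Lambda(x)(x - x*) = o(|x - x*|) as x -> x* = 0.
ratios = []
for x in [1e-1, 1e-2, 1e-3, 1e-4]:
    residual = f(x) - f(0.0) - clarke(x) * (x - 0.0)   # equals -x^2 here
    ratios.append(abs(residual) / abs(x))              # shrinks like |x|
```

The ratios |residual|/|x| decrease roughly proportionally to |x|, confirming the little-o behaviour numerically.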
Definition 5. [15] A subset X of R^n is subanalytic if each a ∈ R^n admits a neighborhood U such that X ∩ U is the projection of a relatively compact semianalytic set: there is a bounded semianalytic set A in R^{n+p} such that X ∩ U = π(A), where π : R^{n+p} → R^n is the canonical projection.

Definition 6. Let X be a subset of R^n. A function F : X → R^n is semianalytic (resp. subanalytic) if its graph is semianalytic (resp. subanalytic).
It can be seen that the class of semianalytic (resp. subanalytic) sets is closed under elementary set operations; further, the closure, the interior and the connected components of a semianalytic (resp. subanalytic) set are semianalytic (resp. subanalytic). However, the image of a bounded semianalytic set by a semianalytic function need not be semianalytic (see [23], [13]); this is why subanalytic functions are introduced. If X is a subanalytic and relatively compact set, the image of X by a subanalytic function is subanalytic (see [23], [9]). Further, if F and g are subanalytic continuous functions defined on a compact subanalytic set K, then F + g is subanalytic.
Proposition. ([10]) Let F : R^n → R^n be locally Lipschitz and subanalytic. Then, there exists a positive rational number γ such that ‖F(y) − F(x) − Λ(y)(y − x)‖ ≤ C_x ‖y − x‖^{1+γ} for all y close to x, where Λ(y) is any element of ∂F(y) and C_x is a positive constant.

3. CONVERGENCE
We present semilocal and local convergence results for Newton's method. First, we present a semilocal result. Let U(x, ρ) and Ū(x, ρ) denote, respectively, the open and closed balls in R^n with center x and radius ρ > 0.
Let us assume that (6) holds for all i ≤ k and x_i ∈ Ū(x_0, r). Then, by simply using x_{i−1}, x_i in place of x_0, x_1 in (8)-(9), we get that ‖x_{i+1} − x_0‖ ≤ (1 − α^{i+1}) η / (1 − α) ≤ η / (1 − α) ≤ r, which completes the induction for (6) and shows x_{i+1} ∈ Ū(x_0, r). It follows that the sequence {x_k} is complete in R^n and as such it converges to some x* ∈ Ū(x_0, r) (since Ū(x_0, r) is a closed set). By letting i → ∞ in (10) we obtain (7). Notice that (3) does not necessarily imply that Λ is Lipschitz, and this condition cannot be avoided if one wants to show convergence.
(c) Notice that, due to the estimate (3), our results may reduce to the ones in [26]; otherwise, our Theorem 10 is an extension of the one in [26]. Moreover, it is an improvement in the case γ_0 = γ = 0, since our results are given in affine invariant form. The advantages of affine invariant over non-affine invariant results are well known in the literature [17].
(e) It was shown in [10] that if F : D ⊆ R^n → R^n is locally Lipschitz and subanalytic, then (12) always holds. Therefore, (2) holds for K = K_1 K_2.
Next, we present a local convergence result for Newton's method.
Proof. We have that x_1 ∈ U(x*, R) by the choice of R. Then, using the estimate x_1 − x* = x_0 − x* − Λ(x_0)^{−1}(F(x_0) − F(x*)) and (14), we get, by the choice of R, the bound which shows (15) and (16) for n = 0. Suppose that (15) and (16) hold for each k ≤ n. Then, we have that

(17) x_{n+1} − x* = x_n − x* − Λ(x_n)^{−1}(F(x_n) − F(x*)),

so by (14) and (17) we get that (15) and (16) hold for all n, and that lim_{k→∞} x_k = x*.
(ii) If λ < λ_1, the new error bounds on the distances ‖x_n − x*‖ are tighter and the ratio of convergence is smaller. This means that, in practice, fewer iterates are required to achieve a given error tolerance; hence, the applicability of Newton's method is expanded at lower computational cost. Notice also that the computation of the constant λ requires the computation of the constant λ_1 as a special case (see Example 4.2).

Proof.
By hypothesis, x_0 ∈ Q. Suppose x_k ∈ Q. Then, using Newton's method (3), we get the corresponding approximation. Using (20), (21), (22) we get in turn, since x_k ∈ Q, that x_{k+1} ∈ Q. Notice also that ‖F(x_{k+1})‖^γ < ‖F(x_k)‖^γ < (1 + γ)/µ. Then, we have in turn that the sequence {s_k} is decreasing; hence, we obtain lim_{k→∞} s_k = 0, which implies lim_{k→∞} ‖F(x_k)‖ = 0. The set Q is bounded, so there exists an accumulation point x* ∈ Q of the sequence {x_k} such that F(x*) = 0.
If F is Fréchet differentiable and D is a convex set, then, due to the estimate (23), by repeating the proof of Theorem 14 using (23) we arrive at the following semilocal convergence result for Newton's method (2) under contravariant conditions.
for some µ_1 > 0 and each x, y ∈ D. Define the set Q_1 accordingly. Then, the sequence {x_k} generated by Newton's method (2) is well defined, remains in Q_1 for each n = 0, 1, 2, . . ., and converges to some x* ∈ Q_1 such that F(x*) = 0. Moreover, the sequence {‖F(x_k)‖} converges to zero.

Remark 16. If γ = 1, Theorem 15 reduces to the corresponding theorem in [26]. However, there are examples where γ ≠ 1 (see Example 17); in this case the results in [26] cannot apply.

4. NUMERICAL EXAMPLES
We present numerical examples to illustrate the theoretical results. First, we present an example under contravariant conditions.
Then, using the Fréchet derivative of F, we obtain the corresponding estimates for any x, y ∈ D; we also have that ‖F(x)‖ ≤ 2 for each x ∈ D. Therefore, we can choose γ = 1/2 and µ_1 = 3. Then, the conclusions of Theorem 15 hold and Newton's method converges to x* = (1, 1).
Next, we present an example for the local convergence case.
Then, we have that x* = 0. Using (24), we get in turn that we can choose λ_1 = e^2 − 3 and β = 1. Therefore, we deduce that the new convergence ball is larger and the new ratio of convergence smaller than the old convergence ball and the old ratio of convergence. Finally, the convergence of Newton's method is guaranteed by Theorem 12, provided that x_0 ∈ U(x*, R).
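Since the displayed formulas of this example are not reproduced above, the following stand-in (our own sketch; the function f(x) = e^x − 1 with x* = 0 and the radius R = 0.5 are assumptions, not the paper's data) merely illustrates how one can check empirically that Newton's method converges from every initial point in a ball U(x*, R):

```python
import math

def newton_scalar(f, df, x0, tol=1e-12, max_iter=100):
    """Scalar Newton iteration; returns the final iterate and a success flag."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x, True
        x -= fx / df(x)
    return x, abs(f(x)) < tol

f = lambda x: math.exp(x) - 1.0    # hypothetical smooth example, solution x* = 0
df = lambda x: math.exp(x)

R = 0.5                            # assumed convergence radius for this sketch
for x0 in [k * R / 10 for k in range(-10, 11)]:
    x, ok = newton_scalar(f, df, x0)
    assert ok and abs(x) < 1e-8    # every start in [-R, R] converges to x* = 0
```

Such a sweep over initial points does not prove a radius of convergence, but it is a quick sanity check that a theoretically derived ball is not contradicted numerically.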