<!DOCTYPE html>
<html lang="en">
<head>
<script>
  MathJax = { 
    tex: {
		    inlineMath: [['\\(','\\)']]
	} }
</script>
<script type="text/javascript" src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-chtml.js">
</script>
<meta name="generator" content="plasTeX" />
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1" />
<title>On Newton’s method for subanalytic equations</title>
<link rel="stylesheet" href="/var/www/clients/client1/web1/web/files/jnaat-files/journals/1/articles/styles/theme-white.css" />
</head>

<body>

<div class="wrapper">

<div class="content">
<div class="content-wrapper">


<div class="main-text">


<div class="titlepage">
<h1>On Newton’s method for subanalytic equations</h1>
<p class="authors">
<span class="author">Ioannis K. Argyros\(^\ast \), Santhosh George\(^\S \)</span>
</p>
<p class="date">September 23, 2014.</p>
</div>
<p>\(^\ast \)Cameron University, Department of Mathematical Sciences, Lawton, OK 73505, USA, e-mail: <span class="tt">ioannisa@cameron.edu</span>. </p>
<p>\(^\S \)National Institute of Technology Karnataka, Department of Mathematical and Computational Sciences, India-575 025, e-mail: <span class="tt">sgeorge@nitk.ac.in</span>. </p>

<div class="abstract"><p> We present local and semilocal convergence results for Newton’s method in order to approximate solutions of subanalytic equations. The local convergence results are given under weaker conditions than in earlier studies such as <span class="cite">
	[
	<a href="#9" >9</a>
	]
</span>, <span class="cite">
	[
	<a href="#10" >10</a>
	]
</span>, <span class="cite">
	[
	<a href="#14" >14</a>
	]
</span>, <span class="cite">
	[
	<a href="#15" >15</a>
	]
</span>, <span class="cite">
	[
	<a href="#24" >24</a>
	]
</span>, <span class="cite">
	[
	<a href="#25" >25</a>
	]
</span>, <span class="cite">
	[
	<a href="#26" >26</a>
	]
</span>, resulting in a larger convergence ball and a smaller ratio of convergence. In the semilocal convergence case, contravariant conditions not used before are employed to show the convergence of Newton’s method. Numerical examples illustrating the advantages of our approach are also presented in this study. </p>
<p><b class="bf">MSC.</b> 65H10, 65G99, 65K10, 47H17, 49M15. </p>
<p><b class="bf">Keywords.</b> Newton’s methods, convergence ball, local-semilocal convergence, subanalytic functions. </p>
</div>
<h1 id="a0000000002">1 Introduction</h1>
<p>In this study we are concerned with the problem of approximating a solution \(x^\ast \) of the equation </p>
<div class="equation" id="1.1">
<p>
  <div class="equation_content">
    \begin{equation} \label{1.1} F(x)=0,\end{equation}
  </div>
  <span class="equation_label">1</span>
</p>
</div>
<p>where \(F\) is a continuous mapping from a subset \(D\) of \(\mathbb {R}^n\) into \(\mathbb {R}^n.\) </p>
<p>Many problems in computational sciences and other disciplines can be brought in a form like (<a href="#1.1">1</a>) using mathematical modeling <span class="cite">
	[
	<a href="#3" >3</a>
	]
</span>, <span class="cite">
	[
	<a href="#7" >7</a>
	]
</span>, <span class="cite">
	[
	<a href="#8" >8</a>
	]
</span>, <span class="cite">
	[
	<a href="#9" >9</a>
	]
</span>, <span class="cite">
	[
	<a href="#14" >14</a>
	]
</span>, <span class="cite">
	[
	<a href="#16" >16</a>
	]
</span>, <span class="cite">
	[
	<a href="#17" >17</a>
	]
</span>, <span class="cite">
	[
	<a href="#22" >22</a>
	]
</span>,&#160;<span class="cite">
	[
	<a href="#24" >24</a>
	]
</span>–<span class="cite">
	[
	<a href="#28" >28</a>
	]
</span>. In general, the solutions of equation (<a href="#1.1">1</a>) cannot be found in closed form. Therefore, iterative methods are used for obtaining approximate solutions of (<a href="#1.1">1</a>). In Numerical Functional Analysis, finding a solution \(x^\ast \) of equation (<a href="#1.1">1</a>) is essentially connected to variants of Newton’s method. Newton’s method is defined by </p>
<div class="equation" id="1.2">
<p>
  <div class="equation_content">
    \begin{equation} \label{1.2} x_{k+1}=x_k-F'(x_k)^{-1}F(x_k), \qquad \textnormal{for each}\, \, k=0,1,2,\ldots , \end{equation}
  </div>
  <span class="equation_label">2</span>
</p>
</div>
<p>where \(x_0\) is an initial point and \(F\) is a continuously Fréchet differentiable function on \(D\), i.e., \(F\) is a smooth function. </p>
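<p>For readers who wish to experiment, the smooth iteration (2) can be sketched in a few lines; the test system, its Jacobian and the starting point below are illustrative choices, not examples from this paper.</p>

```python
import numpy as np

def newton(F, J, x0, tol=1e-10, max_iter=50):
    """Classical Newton iteration x_{k+1} = x_k - F'(x_k)^{-1} F(x_k),
    with F smooth and J(x) = F'(x) its Jacobian."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        # Solve the linear system F'(x_k) d = -F(x_k) rather than inverting.
        x = x + np.linalg.solve(J(x), -Fx)
    return x

# Hypothetical smooth system: F(x, y) = (x^2 + y^2 - 2, x - y), root (1, 1).
F = lambda v: np.array([v[0] ** 2 + v[1] ** 2 - 2.0, v[0] - v[1]])
J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [1.0, -1.0]])
x_star = newton(F, J, [2.0, 0.5])
```

<p>Starting from \((2, 0.5)\), the iterates converge rapidly to the solution \((1,1)\).</p>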
<p>The study of the convergence of iterative procedures usually distinguishes two types of analysis: semilocal and local. The semilocal convergence analysis is based on information around an initial point and gives conditions ensuring the convergence of the iterative procedure, while the local one is based on information around a solution and provides estimates of the radii of convergence balls. There exist many studies which deal with the local and semilocal convergence analysis of Newton’s method (<a href="#1.2">2</a>) under various Lipschitz-type conditions on \(F'.\) We refer the reader to <span class="cite">
	[
	<a href="#1" >1</a>
	, 
	<a href="#2" >2</a>
	, 
	<a href="#3" >3</a>
	, 
	<a href="#4" >4</a>
	, 
	<a href="#5" >5</a>
	, 
	<a href="#6" >6</a>
	, 
	<a href="#7" >7</a>
	, 
	<a href="#8" >8</a>
	, 
	<a href="#9" >9</a>
	, 
	<a href="#10" >10</a>
	, 
	<a href="#11" >11</a>
	, 
	<a href="#12" >12</a>
	, 
	<a href="#13" >13</a>
	, 
	<a href="#14" >14</a>
	, 
	<a href="#15" >15</a>
	, 
	<a href="#16" >16</a>
	, 
	<a href="#17" >17</a>
	, 
	<a href="#18" >18</a>
	, 
	<a href="#19" >19</a>
	, 
	<a href="#20" >20</a>
	, 
	<a href="#21" >21</a>
	, 
	<a href="#22" >22</a>
	, 
	<a href="#23" >23</a>
	, 
	<a href="#24" >24</a>
	, 
	<a href="#25" >25</a>
	, 
	<a href="#26" >26</a>
	, 
	<a href="#27" >27</a>
	, 
	<a href="#28" >28</a>
	]
</span> and the references therein for this type of results. </p>
<p>However, in many interesting applications \(F\) is not a smooth function <span class="cite">
	[
	<a href="#3" >3</a>
	]
</span>, <span class="cite">
	[
	<a href="#7" >7</a>
	]
</span>, <span class="cite">
	[
	<a href="#8" >8</a>
	]
</span>, <span class="cite">
	[
	<a href="#23" >23</a>
	]
</span>, <span class="cite">
	[
	<a href="#24" >24</a>
	]
</span>, <span class="cite">
	[
	<a href="#26" >26</a>
	]
</span>, <span class="cite">
	[
	<a href="#28" >28</a>
	]
</span>. In particular, we are interested in the case when \(F\) is a semismooth function. Then, we define Newton’s method </p>
<div class="equation" id="1.3">
<p>
  <div class="equation_content">
    \begin{equation} \label{1.3} x_{k+1}=x_k-\Lambda (x_k)^{-1}F(x_k),\qquad \textnormal{for each}\, \, k=0,1,2,\ldots , \end{equation}
  </div>
  <span class="equation_label">3</span>
</p>
</div>
<p>where \(x_0\in \mathbb {R}^n\) is an initial point and \(\Lambda (x_k)\in \partial F(x_k),\) where \(\partial F(x_k)\) is the generalized Jacobian of \(F\) at \(x_k\) as defined by Clarke <span class="cite">
	[
	<a href="#14" >14</a>
	]
</span>. We present local as well as semilocal convergence results under weaker conditions than in earlier studies such as <span class="cite">
	[
	<a href="#9" >9</a>
	]
</span>, <span class="cite">
	[
	<a href="#10" >10</a>
	]
</span>, <span class="cite">
	[
	<a href="#14" >14</a>
	]
</span>, <span class="cite">
	[
	<a href="#15" >15</a>
	]
</span>, <span class="cite">
	[
	<a href="#24" >24</a>
	]
</span>, <span class="cite">
	[
	<a href="#25" >25</a>
	]
</span>, <span class="cite">
	[
	<a href="#26" >26</a>
	]
</span>. In the case of local convergence, our convergence ball is larger and the ratio of convergence smaller than before <span class="cite">
	[
	<a href="#9" >9</a>
	]
</span>, <span class="cite">
	[
	<a href="#10" >10</a>
	]
</span>, <span class="cite">
	[
	<a href="#14" >14</a>
	]
</span>, <span class="cite">
	[
	<a href="#15" >15</a>
	]
</span>, <span class="cite">
	[
	<a href="#24" >24</a>
	]
</span>–<span class="cite">
	[
	<a href="#26" >26</a>
	]
</span>. These advantages are also obtained under weaker hypotheses. Such improved convergence results are important in computational mathematics, since they allow a wider choice of initial guesses and require fewer iterates to obtain a desired error tolerance. </p>
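<p>A minimal sketch of the generalized-Jacobian iteration (3) for a scalar nonsmooth equation may help fix ideas; the piecewise linear function and the Jacobian selection below are illustrative assumptions, not taken from the paper.</p>

```python
def semismooth_newton(F, Lam, x0, tol=1e-12, max_iter=50):
    """Iteration (3): x_{k+1} = x_k - Lam(x_k)^{-1} F(x_k), where Lam(x)
    is any chosen element of the generalized Jacobian of F at x."""
    x = float(x0)
    for _ in range(max_iter):
        fx = F(x)
        if abs(fx) < tol:
            break
        x = x - fx / Lam(x)
    return x

# Hypothetical example: F(x) = 2x + |x| - 3 is locally Lipschitz,
# nondifferentiable at 0, with unique root x* = 1.
F = lambda x: 2.0 * x + abs(x) - 3.0
# Selection from Clarke's generalized Jacobian: {3} for x > 0, {1} for
# x < 0, and any value in [1, 3] at x = 0 (here we pick 2).
Lam = lambda x: 3.0 if x > 0 else (1.0 if x < 0 else 2.0)
x_star = semismooth_newton(F, Lam, x0=-2.0)
```

<p>Because \(F\) is piecewise linear, the iteration lands on the root exactly after finitely many steps.</p>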
<p>The rest of the paper is organized as follows: Section 2 provides the definitions of semismooth, semianalytic and subanalytic functions, as well as earlier results. The semilocal and local convergence analysis of Newton’s method is given in Section 3. Finally, numerical examples illustrating the theoretical results are given in the concluding Section 4.  </p>
<h1 id="a0000000003">2 Semismooth, Semianalytic and Subanalytic Functions</h1>
<p> In order to make the paper as self-contained as possible, we state some standard definitions and results. In <span class="cite">
	[
	<a href="#24" >24</a>
	]
</span>, Mifflin introduced the concept of semismoothness for functionals; later, in <span class="cite">
	[
	<a href="#26" >26</a>
	]
</span>, L. Qi and J. Sun extended this concept to functions of several variables. In fact, they showed that semismoothness is equivalent to the uniform convergence of directional derivatives in all directions. </p>
<p><div class="definition_thmwrapper " id="a0000000004">
  <div class="definition_thmheading">
    <span class="definition_thmcaption">
    Definition
    </span>
    <span class="definition_thmlabel">1</span>
  </div>
  <div class="definition_thmcontent">
  <p><span class="rm">(see <span class="cite">
	[
	<a href="#14" >14</a>
	, 
	p. 70
	]
</span>)</span> Let \(F: \mathbb {R}^n\rightarrow \mathbb {R}^n\) be a locally Lipschitz continuous function. The limiting Jacobian of \(F\) at \(x\in \mathbb {R}^n\) is defined as </p>
<div class="displaymath" id="a0000000005">
  \[ \partial ^\circ F(x)=\{ A\in L(\mathbb {R}^n, \mathbb {R}^n): \exists \,  u^k\in D_F,\,  u^k\rightarrow x,\,  F'(u^k)\rightarrow A, \,  k\rightarrow \infty \}  \]
</div>
<p> where \(D_F\) denotes the set of points of differentiability of \(F.\) The Clarke Jacobian of \(F\) at \(x\in \mathbb {R}^n,\) denoted \(\partial F(x),\) is defined as the closed convex hull of \(\partial ^\circ F(x).\) </p>

  </div>
</div> <div class="definition_thmwrapper " id="a0000000006">
  <div class="definition_thmheading">
    <span class="definition_thmcaption">
    Definition
    </span>
    <span class="definition_thmlabel">2</span>
  </div>
  <div class="definition_thmcontent">
  <p><span class="cite">
	[
	<a href="#26" >26</a>
	]
</span> We say that \(F\) is semismooth at \(x\in \mathbb {R}^n\) if \(F\) is locally Lipschitz at \(x\) and </p>
<div class="displaymath" id="a0000000007">
  \[  \lim _{V\in \partial F(x+th'), h'\rightarrow h,\,  t\downarrow 0} \{ Vh'\}   \]
</div>
<p> exists for any \(h\in \mathbb {R}^n.\) </p>

  </div>
</div> Note that convex functions and smooth functions are semismooth. Further, products and sums of semismooth functions are semismooth (see <span class="cite">
	[
	<a href="#10" >10</a>
	]
</span>). Moreover, semismoothness of \(F\) implies </p>
<div class="displaymath" id="a0000000008">
  \[  \lim _{h'\rightarrow h, \,  t\downarrow 0} \tfrac {F(x+th')-F(x)}{t} =\lim _{V\in \partial F(x+th'), h'\rightarrow h,\,  t\downarrow 0} \{ Vh'\} .  \]
</div>
<p> <div class="definition_thmwrapper " id="a0000000009">
  <div class="definition_thmheading">
    <span class="definition_thmcaption">
    Definition
    </span>
    <span class="definition_thmlabel">3</span>
  </div>
  <div class="definition_thmcontent">
  <p><span class="cite">
	[
	<a href="#15" >15</a>
	]
</span> A subset \(X\) of \(\mathbb {R}^n\) is semianalytic if for each \(a\in \mathbb {R}^n\) there is a neighbourhood \(U\) of \(a\) and real analytic functions \(f_{i,j}\) on \(U\) such that </p>
<div class="displaymath" id="a0000000010">
  \[ X\cap U=\bigcup _{i=1}^r\bigcap _{j=1}^{s_i}\{ x\in U|f_{i,j}\varepsilon _{i,j} 0\}  \]
</div>
<p> where \(\varepsilon _{i,j}\in \{ {\lt}, {\gt}, =\} .\) </p>

  </div>
</div> <div class="remark_thmwrapper " id="a0000000011">
  <div class="remark_thmheading">
    <span class="remark_thmcaption">
    Remark
    </span>
    <span class="remark_thmlabel">4</span>
  </div>
  <div class="remark_thmcontent">
<p>\(X\) is said to be semialgebraic if one can take \(U=\mathbb {R}^n\) and the \(f_{i,j}\) to be polynomials. </p>

  </div>
</div> <div class="definition_thmwrapper " id="a0000000012">
  <div class="definition_thmheading">
    <span class="definition_thmcaption">
    Definition
    </span>
    <span class="definition_thmlabel">5</span>
  </div>
  <div class="definition_thmcontent">
  <p><span class="cite">
	[
	<a href="#15" >15</a>
	]
</span> A subset \(X\) of \(\mathbb {R}^n\) is subanalytic if each \(a\in \mathbb {R}^n\) admits a neighborhood \(U\) such that \(X\cap U\) is a projection of a relatively compact semianalytic set: there is a bounded semianalytic set \(A\) in \(\mathbb {R}^{n+p}\) such that \(X\cap U=\Pi (A),\) where \(\Pi :\mathbb {R}^{n+p}\rightarrow \mathbb {R}^{n}\) is the projection. </p>

  </div>
</div> <div class="definition_thmwrapper " id="a0000000013">
  <div class="definition_thmheading">
    <span class="definition_thmcaption">
    Definition
    </span>
    <span class="definition_thmlabel">6</span>
  </div>
  <div class="definition_thmcontent">
<p>Let \(X\) be a subset of \(\mathbb {R}^n.\) A function \(F: X\rightarrow \mathbb {R}^{n}\) is semianalytic (resp. subanalytic) if its graph is semianalytic (resp. subanalytic). </p>

  </div>
</div> </p>
<p>It can be seen that the class of semianalytic (resp. subanalytic) sets is closed under elementary set operations; further, the closure, the interior and the connected components of a semianalytic (resp. subanalytic) set are semianalytic (resp. subanalytic). However, the image of a bounded semianalytic set by a semianalytic function need not be semianalytic, and the class is not stable under algebraic operations (see <span class="cite">
	[
	<a href="#23" >23</a>
	]
</span>, <span class="cite">
	[
	<a href="#13" >13</a>
	]
</span>). That is why subanalytic functions are introduced. If \(X\) is a subanalytic and relatively compact set, the image of \(X\) by a subanalytic function is subanalytic (see <span class="cite">
	[
	<a href="#23" >23</a>
	]
</span>, <span class="cite">
	[
	<a href="#9" >9</a>
	]
</span>). Further, if \(F\) and \(g\) are subanalytic continuous functions defined on a compact subanalytic set \(K,\) then \(F+g\) is subanalytic. </p>
<p>For examples and properties of subanalytic or semianalytic functions we refer the interested reader to <span class="cite">
	[
	<a href="#1" >1</a>
	]
</span>, <span class="cite">
	[
	<a href="#14" >14</a>
	]
</span>, <span class="cite">
	[
	<a href="#16" >16</a>
	]
</span>, <span class="cite">
	[
	<a href="#17" >17</a>
	]
</span>, <span class="cite">
	[
	<a href="#24" >24</a>
	]
</span>, <span class="cite">
	[
	<a href="#25" >25</a>
	]
</span>, <span class="cite">
	[
	<a href="#27" >27</a>
	]
</span>, <span class="cite">
	[
	<a href="#28" >28</a>
	]
</span>. The following Propositions and Remark can be found in <span class="cite">
	[
	<a href="#10" >10</a>
	]
</span>. <div class="proposition_thmwrapper " id="a0000000014">
  <div class="proposition_thmheading">
    <span class="proposition_thmcaption">
    Proposition
    </span>
    <span class="proposition_thmlabel">7</span>
  </div>
  <div class="proposition_thmcontent">
  <p><span class="cite">
	[
	<a href="#10" >10</a>
	]
</span> If \(F: X\subset \mathbb {R}^n\rightarrow \mathbb {R}^n\) is a subanalytic locally Lipschitz mapping, then for all \(x\in X\) </p>
<div class="displaymath" id="a0000000015">
  \[ \| F(x+d)-F(x)-F'(x;d)\| =o_x(\| d\| ). \]
</div>

  </div>
</div> <div class="remark_thmwrapper " id="a0000000016">
  <div class="remark_thmheading">
    <span class="remark_thmcaption">
    Remark
    </span>
    <span class="remark_thmlabel">8</span>
  </div>
  <div class="remark_thmcontent">
  <p><span class="cite">
	[
	<a href="#28" >28</a>
	]
</span> A subanalytic function \(t\rightarrow o_x(t)\) admits a Puiseux development; so there exist a constant \(c {\gt} 0,\) a real number \(\epsilon {\gt} 0\) and a rational number \(\gamma {\gt} 0\) such that \(\| F(x+d)-F(x)-F'(x;d)\| =c\| d\| ^\gamma \) whenever \(\| d\| \leq \epsilon .\) </p>

  </div>
</div> <div class="proposition_thmwrapper " id="a0000000017">
  <div class="proposition_thmheading">
    <span class="proposition_thmcaption">
    Proposition
    </span>
    <span class="proposition_thmlabel">9</span>
  </div>
  <div class="proposition_thmcontent">
  <p><span class="cite">
	[
	<a href="#10" >10</a>
	]
</span> Let \(F: \mathbb {R}^n\rightarrow \mathbb {R}^n\) be locally Lipschitz and subanalytic. Then there exists a positive rational number \(\gamma \) such that </p>
<div class="displaymath" id="a0000000018">
  \[ \| F(y)-F(x)-\Lambda (y)(y-x)\| \leq C_x\| y-x\| ^{1+\gamma } \]
</div>
<p> where \(y\) is close to \(x,\) \(\Lambda (y)\) is any element of \(\partial F(y)\) and \(C_x\) is a positive constant. </p>

  </div>
</div>  </p>
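<p>The exponent \(\gamma \) in Proposition 9 can be observed numerically: for a sample locally Lipschitz function (an illustrative choice, not one from the paper) the residual \(\| F(y)-F(x)-\Lambda (y)(y-x)\| \) decays like \(\| y-x\| ^{1+\gamma },\) and a log-log slope recovers \(1+\gamma .\)</p>

```python
import math

# Hypothetical sample function: F(x) = x|x| + 2x - 3 is locally Lipschitz
# (C^1 but not C^2 at 0); a natural Jacobian selection is Lam(y) = 2|y| + 2.
F = lambda x: x * abs(x) + 2.0 * x - 3.0
Lam = lambda y: 2.0 * abs(y) + 2.0

# Residual of Proposition 9 at x = 0 for shrinking steps h.
x = 0.0
hs = [10.0 ** (-k) for k in range(1, 6)]
res = [abs(F(x + h) - F(x) - Lam(x + h) * h) for h in hs]

# Successive log-log slopes estimate 1 + gamma; here the residual is
# exactly h^2, so the slopes are 2 and hence gamma = 1.
slopes = [(math.log(res[i + 1]) - math.log(res[i]))
          / (math.log(hs[i + 1]) - math.log(hs[i])) for i in range(len(hs) - 1)]
```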
<h1 id="a0000000019">3 Convergence</h1>
<p> We present semilocal and local convergence results for Newton’s method, beginning with the semilocal case. Let \(U(x,\rho ), \bar{U}(x, \rho )\) denote the open and closed balls in \(\mathbb {R}^n\) with center \(x\) and radius \(\rho {\gt} 0.\) <div class="theorem_thmwrapper " id="T3.1">
  <div class="theorem_thmheading">
    <span class="theorem_thmcaption">
    Theorem
    </span>
    <span class="theorem_thmlabel">10</span>
  </div>
  <div class="theorem_thmcontent">
<p> Let \(F:D\subseteq \mathbb {R}^n \rightarrow \mathbb {R}^n\) be locally Lipschitz and subanalytic on \(D;\) suppose that any \(\Lambda (x)\in \partial F(x),\,  x\in D,\) is nonsingular; </p>
<div class="equation" id="3.1">
<p>
  <div class="equation_content">
    \begin{equation} \label{3.1} \| \Lambda (x_0)^{-1}F(x_0)\| \leq \eta ,\qquad \textnormal{for some}\, \,  x_0\in D; \end{equation}
  </div>
  <span class="equation_label">1</span>
</p>
</div>
<div class="displaymath" id="3.2">
  \begin{eqnarray} \label{3.2} \| \Lambda (y)^{-1}[F(y)-F(x)-\Lambda (y)(y-x)]\| & \leq & K\| y-x\| ^{1+\gamma }, \end{eqnarray}
</div>
<p> for each \(x,y\in D\, \, \textnormal{and some}\, \, \gamma \geq 0;\) </p>
<div class="displaymath" id="3.3">
  \begin{eqnarray} \label{3.3} \| \Lambda (y)^{-1}(\Lambda (y)-\Lambda (x))(y-x)\| & \leq & M\| y-x\| ^{1+\gamma _0}, \end{eqnarray}
</div>
<p> for each \(x,y\in D\, \, \textnormal{and some}\, \, \gamma _0\geq 0;\) </p>
<div class="equation" id="3.4">
<p>
  <div class="equation_content">
    \begin{equation} \label{3.4} 0\leq \alpha :=K\eta ^\gamma +M\eta ^{\gamma _0} < 1 \end{equation}
  </div>
  <span class="equation_label">4</span>
</p>
</div>
<p>and for </p>
<div class="displaymath" id="3.5">
  \begin{align}  \label{3.5} r=& \tfrac {\eta }{1-\alpha } \end{align}
</div>
<div class="displaymath" id="a0000000020">
  \[ \bar{U}(x_0, r) \subseteq D. \]
</div>
<p> Then, the sequence \(\{ x_k\} \) generated by Newton’s method <span class="rm">(<a href="#1.3">3</a>)</span> is well defined, remains in \(U(x_0, r)\) for each \(k =0, 1, 2, \ldots \) and converges to a solution \(x^\ast \in \bar{U}(x_0,r)\) of equation \(F(x)=0.\) Moreover, the following estimates hold:</p>
<div class="equation" id="3.7">
<p>
  <div class="equation_content">
    \begin{equation} \label{3.7} \| x_{k+1}-x_k\| \leq \alpha \| x_k-x_{k-1}\| , \qquad \textnormal{for each}\, \, \, k=1,2,\ldots \end{equation}
  </div>
  <span class="equation_label">6</span>
</p>
</div>
<p>and </p>
<div class="equation" id="3.8">
<p>
  <div class="equation_content">
    \begin{equation} \label{3.8} \| x_k-x^\ast \| \leq \tfrac {\alpha ^k\eta }{1-\alpha }, \qquad \textnormal{for each}\, \, \, k=0,1,2,\ldots . \end{equation}
  </div>
  <span class="equation_label">7</span>
</p>
</div>

  </div>
</div> </p>
<p><div class="proof_wrapper" id="a0000000021">
  <div class="proof_heading">
    <span class="proof_caption">
    Proof
    </span>
    <span class="expand-proof">▼</span>
  </div>
  <div class="proof_content">
  
  </div>
</div> It follows from (<a href="#3.1">1</a>), (<a href="#3.4">4</a>), (<a href="#3.5">5</a>) and Newton’s method for \(k=0\) that </p>
<div class="displaymath" id="a0000000022">
  \[ \| x_1-x_0\| =\| \Lambda (x_0)^{-1}F(x_0)\| \leq \eta \leq \tfrac {\eta }{1-\alpha }=r. \]
</div>
<p> Hence, \(x_1\in \bar{U}(x_0,r).\) Using Newton’s method (<a href="#1.3">3</a>) for \(k=0,\) we get the approximation </p>
<div class="displaymath" id="a0000000023">
  \begin{eqnarray} \nonumber F(x_1)& =& F(x_1)-F(x_0)-\Lambda (x_0)(x_1-x_0)\\ \label{3.9} & =& [F(x_1)-F(x_0)-\Lambda (x_1)(x_1-x_0)] +[\Lambda (x_1)-\Lambda (x_0)](x_1-x_0). \end{eqnarray}
</div>
<p> Then, since \(x_1\in D,\) we have that \(\Lambda (x_1)^{-1}\in L(\mathbb {R}^n,\mathbb {R}^n).\) In view of (<a href="#3.1">1</a>), (<a href="#3.2">2</a>), (<a href="#3.3">3</a>), (<a href="#3.4">4</a>) and (<a href="#3.9">8</a>) we get that </p>
<div class="displaymath" id="a0000000024">
  \begin{eqnarray} \nonumber \| x_2-x_1\| & =& \| \Lambda (x_1)^{-1}F(x_1)\| \\ \nonumber & \leq & \| \Lambda (x_1)^{-1}(F(x_1)-F(x_0)-\Lambda (x_1)(x_1-x_0))\| \\ \nonumber & & +\| \Lambda (x_1)^{-1}(\Lambda (x_1)-\Lambda (x_0))(x_1-x_0)\| \\ \nonumber & \leq & K\| x_1-x_0\| ^{1+\gamma }+M\| x_1-x_0\| ^{1+\gamma _0}\\ \nonumber & =& (K\| x_1-x_0\| ^\gamma +M\| x_1-x_0\| ^{\gamma _0})\| x_1-x_0\| \\ \nonumber & \leq & (K\eta ^\gamma +M\eta ^{\gamma _0})\| x_1-x_0\| =\alpha \| x_1-x_0\|  \end{eqnarray}
</div>
<p> and </p>
<div class="displaymath" id="a0000000025">
  \begin{eqnarray} \nonumber \| x_2-x_0\| & \leq & \| x_2-x_1\| +\| x_1-x_0\| \leq (\alpha +1)\| x_1-x_0\| \\ \label{3.11} & =& \tfrac {1-\alpha ^2}{1-\alpha }\| x_1-x_0\| \leq \tfrac {1-\alpha ^2}{1-\alpha }\eta \leq r, \end{eqnarray}
</div>
<p> which shows that (<a href="#3.7">6</a>) holds for \(k=1\) and \(x_2\in \bar{U}(x_0,r).\) </p>
<p>Let us assume that (<a href="#3.7">6</a>) holds for all \(i\leq k\) and \(x_i\in \bar{U}(x_0,r).\) Then, by simply using \(x_{i-1}, x_i\) in place of \(x_0, x_1\) in (<a href="#3.9">8</a>)–(<a href="#3.11">9</a>) we get that </p>
<div class="displaymath" id="a0000000026">
  \[ \| x_{i+1}-x_i\| \leq \alpha \| x_i-x_{i-1}\| , \]
</div>
<p> so </p>
<div class="displaymath" id="a0000000027">
  \[ \| x_{i+1}-x_0\| \leq \tfrac {1-\alpha ^{i+1}}{1-\alpha }\| x_1-x_0\| \leq \tfrac {1-\alpha ^{i+1}}{1-\alpha }\eta \leq r, \]
</div>
<p> which completes the induction for (<a href="#3.7">6</a>) and shows \(x_{i+1}\in \bar{U}(x_0,r).\) It follows that the sequence \(\{ x_k\} \) is a Cauchy sequence in \(\mathbb {R}^n\) and as such it converges to some \(x^\ast \in \bar{U}(x_0,r)\) (since \( \bar{U}(x_0,r)\) is a closed set). By letting \(i\rightarrow \infty \) in the estimate </p>
<div class="displaymath" id="a0000000028">
  \[ \| \Lambda (x_i)^{-1}F(x_i)\| =\| x_{i+1}-x_i\| \leq \alpha ^{i+1}\eta  \]
</div>
<p> and since \(\Lambda (x_i)^{-1}\in L(\mathbb {R}^n,\mathbb {R}^n),\) we get that \(F(x^\ast )=0.\) We also have that </p>
<div class="displaymath" id="a0000000029">
  \begin{eqnarray} \nonumber \| x_{k+i}-x_k\| & \leq & \| x_{k+i}-x_{k+i-1}\| +\| x_{k+i-1}-x_{k+i-2}\| +\ldots +\| x_{k+1}-x_k\| \\ \nonumber & \leq &  (\alpha ^{k+i-1}+\alpha ^{k+i-2}+\ldots +\alpha ^k)\| x_1-x_0\| \\ \label{3.14} & =& \alpha ^k\tfrac {1-\alpha ^i}{1-\alpha }\| x_1-x_0\| \leq \alpha ^k\tfrac {1-\alpha ^i}{1-\alpha }\eta . \end{eqnarray}
</div>
<p> By letting \(i\rightarrow \infty \) in (<a href="#3.14">10</a>) we obtain (<a href="#3.8">7</a>). <span class="qed">□</span> </p>
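<p>The quantities of Theorem 10 are straightforward to evaluate in practice; with hypothetical constants \(K, M, \eta , \gamma , \gamma _0\) (illustrative values only, not derived from a specific \(F\)), one can check condition (4), form the radius \(r\) of (5) and tabulate the a priori bounds (7).</p>

```python
# Hypothetical constants for the semilocal conditions (1)-(3);
# illustrative values only, not derived from a specific F.
K, M = 0.4, 0.3
gamma, gamma0 = 0.5, 0.5
eta = 0.25

alpha = K * eta ** gamma + M * eta ** gamma0  # condition (4) requires alpha < 1
assert alpha < 1.0
r = eta / (1.0 - alpha)                       # radius r of (5)

# A priori bounds (7): ||x_k - x*|| <= alpha^k * eta / (1 - alpha).
bounds = [alpha ** k * eta / (1.0 - alpha) for k in range(6)]
```

<p>Here \(\alpha =0.35\) and the bounds decrease geometrically, reflecting the linear a priori rate of (7).</p>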
<p><div class="remark_thmwrapper " id="a0000000031">
  <div class="remark_thmheading">
    <span class="remark_thmcaption">
    Remark
    </span>
    <span class="remark_thmlabel">11</span>
  </div>
  <div class="remark_thmcontent">
<p>(a) Condition (<a href="#3.3">3</a>) does not necessarily imply that \(\Lambda \) is Lipschitz, and it cannot be avoided if one wants to show convergence. </p>
<p>(b) If one uses </p>
<div class="equation" id="3.15">
<p>
  <div class="equation_content">
    \begin{equation} \label{3.15} \| \Lambda (y)^{-1}\| \leq K_1 \end{equation}
  </div>
  <span class="equation_label">11</span>
</p>
</div>
<div class="displaymath" id="3.16">
  \begin{eqnarray} \label{3.16} \| F(y)-F(x)-\Lambda (y)(y-x)\|  & \leq &  K_2\| y-x\| ^{1+\gamma }, \end{eqnarray}
</div>
<p> for each \(x,y\in D, \, \, \textnormal{and some}\, \, \gamma \geq 0;\) </p>
<div class="displaymath" id="3.17">
  \begin{eqnarray} \label{3.17} \| (\Lambda (y)-\Lambda (x))(y-x)\|  & \leq &  M_1\| y-x\| ^{1+\gamma _0}, \end{eqnarray}
</div>
<p> for each \(x,y\in D,\) and some \(\gamma _0\geq 0,\) then, we have the estimates </p>
<div class="displaymath" id="a0000000032">
  \begin{eqnarray*}  \| \Lambda (y)^{-1}[F(y)-F(x)-\Lambda (y)(y-x)]\| & \leq & \| \Lambda (y)^{-1}\| \\ & & \times \| F(y)-F(x)-\Lambda (y)(y-x)\| \\ & \leq & K_1K_2\| y-x\| ^{1+\gamma }\\ \| \Lambda (y)^{-1}[\Lambda (y)-\Lambda (x)](y-x)\| & \leq &  \| \Lambda (y)^{-1}\| \| [\Lambda (y)-\Lambda (x)](y-x)\| \\ & \leq & K_1M_1\| y-x\| ^{1+\gamma _0}. \end{eqnarray*}
</div>
<p> Set \(K=K_1K_2,\,  M=K_1M_1.\) If (<a href="#3.1">1</a>), (<a href="#3.2">2</a>) are replaced by (<a href="#3.15">11</a>), (<a href="#3.16">12</a>) and (<a href="#3.17">13</a>), then the conclusions of Theorem <a href="#T3.1">10</a> hold in this stronger, though non-affine invariant, setting. </p>
<p>(c) Notice that due to the estimate </p>
<div class="displaymath" id="a0000000033">
  \[ \| \Lambda (y)^{-1}[\Lambda (y)-\Lambda (x)](y-x)\| \leq \| \Lambda (y)^{-1}[\Lambda (y)-\Lambda (x)]\| \| y-x\| , \]
</div>
<p> \(M\) in (<a href="#3.3">3</a>) can be chosen to be an upper bound on \(\| \Lambda (y)^{-1}[\Lambda (y)-\Lambda (x)]\| .\) That is </p>
<div class="displaymath" id="a0000000034">
  \[ \| \Lambda (y)^{-1}[\Lambda (y)-\Lambda (x)]\| \leq M. \]
</div>
<p> Notice also that \(\| \Lambda (y)-\Lambda (x)\| \leq M_2\) holds if e.g. \(\Lambda \) is continuous. Then, we can choose e.g. \(M= K_1 M_2\) for \(\gamma _0=0.\) </p>
<p>(d) If \(\gamma _0=\gamma =0\) and (<a href="#3.1">1</a>)–(<a href="#3.3">3</a>) are given in non-affine invariant form, then Theorem <a href="#T3.1">10</a> reduces to the corresponding one in <span class="cite">
	[
	<a href="#26" >26</a>
	]
</span>. Otherwise, our Theorem <a href="#T3.1">10</a> is an extension of the one in <span class="cite">
	[
	<a href="#26" >26</a>
	]
</span>. Moreover, it is an improvement in the case \(\gamma _0=\gamma =0,\) since our results are given in affine invariant form. The advantages of affine over non-affine invariant form results are well known in the literature <span class="cite">
	[
	<a href="#17" >17</a>
	]
</span>. </p>
<p>(e) It was shown in <span class="cite">
	[
	<a href="#10" >10</a>
	]
</span> that if \(F: D\subseteq \mathbb {R}^n\rightarrow \mathbb {R}^n\) is locally Lipschitz and subanalytic, then (<a href="#3.16">12</a>) always holds. Therefore, (<a href="#3.2">2</a>) holds for \(K=K_1K_2.\) <span class="qed">□</span></p>

  </div>
</div> Next, we present a local convergence result for Newton’s method. <div class="theorem_thmwrapper " id="T3.3">
  <div class="theorem_thmheading">
    <span class="theorem_thmcaption">
    Theorem
    </span>
    <span class="theorem_thmlabel">12</span>
  </div>
  <div class="theorem_thmcontent">
  <p> Suppose \(F: D\subseteq \mathbb {R}^n\rightarrow \mathbb {R}^n\) is locally Lipschitz and subanalytic; there exists a regular point \(x^\ast \in D\) such that \(F(x^\ast )=0;\) for any \(\Lambda (x)\in \partial F(x), \,  x\in D,\,  \Lambda (x)\) is nonsingular; </p>
<div class="displaymath" id="3.18">
  \begin{eqnarray} \label{3.18} \| \Lambda (y)^{-1}[F(y)-F(x^\ast )-\Lambda (y)(y-x^\ast )]\| & \leq &  \lambda \| y-x^\ast \| ^{1+\beta } \end{eqnarray}
</div>
<p> for each \(y \in D\) and some \(\lambda {\gt}0,\, \, \beta {\gt}0,\) and for </p>
<div class="displaymath" id="a0000000035">
  \[ R=\min \{ \tfrac {1}{\lambda }, \tfrac {1}{\lambda ^{1/\beta }}\}  \]
</div>
<div class="displaymath" id="a0000000036">
  \[ \bar{U}(x^\ast ,R)\subseteq D. \]
</div>
<p> Then, the sequence \(\{ x_n\} \) generated by Newton’s method <span class="rm">(<a href="#1.3">3</a>)</span> converges to \(x^\ast \) provided that \(x_0\in U(x^\ast , R).\) Moreover, the following estimates hold for each \(n=0,1,2,\ldots \) </p>
<div class="equation" id="3.21">
<p>
  <div class="equation_content">
    \begin{equation} \label{3.21} \| x_{n+1}-x^\ast \| \leq \lambda \| x_n-x^\ast \| ^{1+\beta }\leq \| x_n-x^\ast \|  < R \end{equation}
  </div>
  <span class="equation_label">15</span>
</p>
</div>
<p>and </p>
<div class="equation" id="3.22">
<p>
  <div class="equation_content">
    \begin{equation} \label{3.22} \| x_{n+1}-x^\ast \| \leq \lambda ^{-\frac{1}{\beta }}(\lambda ^{\frac{1}{\beta }}\| x_0-x^\ast \| )^{(1+\beta )^{n+1}}. \end{equation}
  </div>
  <span class="equation_label">16</span>
</p>
</div>

  </div>
</div> <div class="proof_wrapper" id="a0000000037">
  <div class="proof_heading">
    <span class="proof_caption">
    Proof
    </span>
    <span class="expand-proof">▼</span>
  </div>
  <div class="proof_content">
  
  </div>
</div> We have that \(x_0\in U(x^\ast , R)\) by hypothesis. Then, using the estimate </p>
<div class="displaymath" id="a0000000038">
  \[ x_1-x^\ast =-\Lambda (x_0)^{-1}[F(x_0)-F(x^\ast )-\Lambda (x_0)(x_0-x^\ast )] \]
</div>
<p> and (<a href="#3.18">14</a>), we get that </p>
<div class="displaymath" id="a0000000039">
  \begin{eqnarray*} \nonumber \| x_1-x^\ast \| & =& \| \Lambda (x_0)^{-1}[F(x_0)-F(x^\ast )-\Lambda (x_0)(x_0-x^\ast )]\| \\ & \leq & \lambda \| x_0-x^\ast \| ^{1+\beta }\leq \| x_0-x^\ast \|  {\lt} R \end{eqnarray*}
</div>
<p> by the choice of \(R,\) which shows (<a href="#3.21">15</a>) and (<a href="#3.22">16</a>) for \(n=0.\) Suppose that (<a href="#3.21">15</a>) and (<a href="#3.22">16</a>) hold for each \(k\leq n.\) Then, we have that </p>
<div class="equation" id="3.25">
<p>
  <div class="equation_content">
    \begin{equation} \label{3.25} x_{k+1}-x^\ast =-\Lambda (x_k)^{-1}[F(x_k)-F(x^\ast )-\Lambda (x_k)(x_k-x^\ast )] \end{equation}
  </div>
  <span class="equation_label">16</span>
</p>
</div>
<p>so by (<a href="#3.18">14</a>) and (<a href="#3.25">16</a>) we get that </p>
<div class="displaymath" id="a0000000040">
  \begin{eqnarray} \nonumber \| x_{k+1}-x^\ast \| & =& \| \Lambda (x_k)^{-1}[F(x_k)-F(x^\ast )-\Lambda (x_k)(x_k-x^\ast )]\| \\ \nonumber & \leq & \lambda \| x_k-x^\ast \| ^{1+\beta }\leq \lambda (\lambda \| x_{k-1}-x^\ast \| ^{1+\beta })^{1+\beta } \\ \nonumber & =& \lambda ^{1+(1+\beta )}\| x_{k-1}-x^\ast \| ^{(1+\beta )^2}\\ \nonumber & \vdots & \\ \nonumber & =& \lambda ^{1+(1+\beta )+\ldots +(1+\beta )^{k}}\| x_0-x^\ast \| ^{(1+\beta )^{k+1}}\\ \nonumber & =& \lambda ^{-\frac{1}{\beta }}\big(\lambda ^{\frac{1}{\beta }}\| x_0-x^\ast \| \big)^{(1+\beta )^{k+1}} \end{eqnarray}
</div>
<p> which shows (<a href="#3.21">15</a>), (<a href="#3.22">16</a>) for all \(n\) and that \(\lim _{k\rightarrow \infty }x_k=x^\ast .\) <div class="proof_wrapper" id="a0000000041">
  <div class="proof_heading">
    <span class="proof_caption">
    Proof
    </span>
    <span class="expand-proof">▼</span>
  </div>
  <div class="proof_content">
  
  </div>
</div> </p>
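<p>As a quick numerical check of the local convergence result (our own illustrative sketch, not part of the original argument): for \(\beta =1\) the estimates above give \(\| x_k-x^\ast \| \leq \lambda ^{-1}(\lambda \| x_0-x^\ast \| )^{2^k}.\) The scalar equation \(F(x)=e^x-1\) with \(x^\ast =0\) and \(\lambda =e-2\) (the constant computed in Example <a href="#a0000000069">18</a>) satisfies this bound along the Newton iterates:</p>

```python
import math

# Newton's method for F(x) = exp(x) - 1, root x* = 0.
# Local error bound for beta = 1: |x_k| <= (lam*|x0|)**(2**k) / lam,
# with lam = e - 2 as computed in Example 18 of the paper.
lam = math.e - 2
x0 = 0.5                      # |x0 - x*| < R = 1/lam, so the bound applies
x = x0
for k in range(6):
    bound = (lam * abs(x0)) ** (2 ** k) / lam
    # small slack absorbs floating-point roundoff at k = 0
    assert abs(x) <= bound + 1e-15, (k, x, bound)
    x = x - (math.exp(x) - 1) / math.exp(x)   # Newton step
```

<p>The iterates fall far below the theoretical bound after a few steps, reflecting the quadratic (\(1+\beta =2\)) order of convergence.</p>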
<p><div class="remark_thmwrapper " id="a0000000042">
  <div class="remark_thmheading">
    <span class="remark_thmcaption">
    Remark
    </span>
    <span class="remark_thmlabel">13</span>
  </div>
  <div class="remark_thmcontent">
  <p>Condition (<a href="#3.18">14</a>) certainly holds if it is replaced by the stronger condition </p>
<div class="equation" id="3.27">
<p>
  <div class="equation_content">
    \begin{equation} \label{3.27} \| \Lambda (y)^{-1}[F(y)-F(x)-\Lambda (y)(y-x)]\| \leq \lambda _1\| y-x\| ^{1+\beta },\, \, \textnormal{for each}\, \, x, y\in D. \end{equation}
  </div>
  <span class="equation_label">17</span>
</p>
</div>
<p>In this case however, </p>
<div class="equation" id="3.28">
<p>
  <div class="equation_content">
    \begin{equation} \label{3.28} \lambda \leq \lambda _1 \end{equation}
  </div>
  <span class="equation_label">18</span>
</p>
</div>
<p>holds in general and \(\frac{\lambda _1}{\lambda }\) can be arbitrarily large <span class="cite">
	[
	<a href="#3" >3</a>
	, 
	<a href="#7" >7</a>
	, 
	<a href="#8" >8</a>
	]
</span>. Moreover, if \(\lambda _1=\lambda \) and (<a href="#3.18">14</a>) is replaced by (<a href="#3.27">17</a>), then our result reduces to the corresponding one in <span class="cite">
	[
	<a href="#26" >26</a>
	]
</span>. Otherwise (i.e., if \(\lambda {\lt} \lambda _1\)), it constitutes an improvement with advantages: </p>
<p>(i) (<a href="#3.18">14</a>) is weaker than (<a href="#3.27">17</a>). That is, (<a href="#3.27">17</a>) implies (<a href="#3.18">14</a>), but (<a href="#3.18">14</a>) does not necessarily imply (<a href="#3.27">17</a>). </p>
<p>(ii) If \(\lambda {\lt} \lambda _1,\) the new error bounds on the distances \(\| x_n-x^\ast \| \) are tighter and the ratio of convergence is smaller. In practice this means that fewer iterates are required to achieve a given error tolerance. Hence, the applicability of Newton’s method is expanded at no additional computational cost, since the computation of constant \(\lambda _1\) requires the computation of constant \(\lambda \) as a special case (see Example <a href="#a0000000069">18</a>). <span class="qed">□</span></p>

  </div>
</div> </p>
<p>Next, we present the corresponding semilocal and local convergence results under contravariant conditions <span class="cite">
	[
	<a href="#3" >3</a>
	, 
	<a href="#6" >6</a>
	, 
	<a href="#7" >7</a>
	, 
	<a href="#17" >17</a>
	, 
	<a href="#25" >25</a>
	]
</span>. <div class="theorem_thmwrapper " id="T3.5">
  <div class="theorem_thmheading">
    <span class="theorem_thmcaption">
    Theorem
    </span>
    <span class="theorem_thmlabel">14</span>
  </div>
  <div class="theorem_thmcontent">
  <p> Suppose: \(F:D\subseteq \mathbb {R}^n\rightarrow \mathbb {R}^n\) is locally Lipschitz subanalytic; for any \(\Lambda (x)\in \partial F(x),\,  x\in D,\,  \Lambda (x)\) is nonsingular; </p>
<div class="equation" id="3.30*">
<p>
  <div class="equation_content">
    \begin{equation} \label{3.30*}\| F(y)-F(x)-\Lambda (y)(y-x)\| \leq \mu _0\| \Lambda (x)(y-x)\| ^{1+\gamma } \end{equation}
  </div>
  <span class="equation_label">19</span>
</p>
</div>
<p>for some \(\mu _0 {\gt} 0,\, \gamma {\gt}0\) and each \(x, y\in D;\) </p>
<div class="equation" id="3.31*">
<p>
  <div class="equation_content">
    \begin{equation} \label{3.31*}\| (\Lambda (y)-\Lambda (x))(y-x)\| \leq \mu _1\| \Lambda (x)(y-x)\| ^{1+\gamma } \end{equation}
  </div>
  <span class="equation_label">20</span>
</p>
</div>
<p>for some \(\mu _1 {\gt} 0\) and each \(x, y\in D;\) Define the set \(Q\) by </p>
<div class="displaymath" id="a0000000043">
  \[  Q=\{ x\in D:\| F(x)\| ^\gamma {\lt} \tfrac {1+\gamma }{\mu }\} ,  \]
</div>
<p> and let \(Q\) be bounded, where \(\mu =\mu _0+\mu _1;\,  x_0\in D\) is such that </p>
<div class="displaymath" id="a0000000044">
  \[ \| F(x_0)\| \leq \xi  \]
</div>
<p> and </p>
<div class="displaymath" id="a0000000045">
  \[ \xi \mu ^{\frac{1}{\gamma }} {\lt} (1+\gamma )^{\frac{1}{\gamma }} \]
</div>
<p> (i.e., \(x_0\in Q\)). Then, sequence \(\{ x_k\} \) generated for \(x_0\in Q\) by Newton’s method <span class="rm">(<a href="#1.3">3</a>)</span> is well defined, remains in \(Q\) for each \(k=0,1,2,\ldots \) and converges to some \(x^\ast \in Q\) such that \(F(x^\ast )=0.\) Moreover, sequence \(\{ F(x_k)\} \) converges to zero and satisfies </p>
<div class="displaymath" id="a0000000046">
  \[ \| F(x_{k+1})\| \leq \mu \| F(x_k)\| ^{1+\gamma },\qquad \textnormal{for each}\, \,  k=0,1,2,\ldots  \]
</div>

  </div>
</div> <div class="proof_wrapper" id="a0000000047">
  <div class="proof_heading">
    <span class="proof_caption">
    Proof
    </span>
    <span class="expand-proof">▼</span>
  </div>
  <div class="proof_content">
  
  </div>
</div> By hypothesis \(x_0\in Q.\) Suppose \(x_k\in Q.\) Then, using Newton’s method (<a href="#1.3">3</a>) we get the approximation </p>
<div class="displaymath" id="3.29">
  \begin{eqnarray} \label{3.29} F(x_{k+1})& =& [F(x_{k+1})-F(x_k)-\Lambda (x_{k+1})(x_{k+1}-x_k)]\\ \nonumber & & +[\Lambda (x_{k+1})-\Lambda (x_k)](x_{k+1}-x_k). \end{eqnarray}
</div>
<p> Using (<a href="#3.30*">19</a>), (<a href="#3.31*">20</a>), (<a href="#3.29">21</a>) and the identity \(\Lambda (x_k)(x_{k+1}-x_k)=-F(x_k)\) from Newton’s method (<a href="#1.3">3</a>), we get in turn that </p>
<div class="displaymath" id="a0000000048">
  \begin{eqnarray} \nonumber \| F(x_{k+1})\| & \leq & \| F(x_{k+1})-F(x_k)-\Lambda (x_{k+1})(x_{k+1}-x_k)\| \\ \nonumber & & +\| [\Lambda (x_{k+1})-\Lambda (x_k)](x_{k+1}-x_k)\| \\ \nonumber & \leq & \mu _0\| \Lambda (x_k)(x_{k+1}-x_k)\| ^{1+\gamma }+\mu _1\| \Lambda (x_k)(x_{k+1}-x_k)\| ^{1+\gamma }\\ \nonumber & =& \mu \| \Lambda (x_k)(x_{k+1}-x_k)\| ^{1+\gamma }=\mu \| F(x_k)\| ^{1+\gamma }. \end{eqnarray}
</div>
<p> Since \(x_k\in Q,\) we have that </p>
<div class="displaymath" id="a0000000049">
  \[ \| F(x_k)\| ^\gamma {\lt} \tfrac {1+\gamma }{\mu }. \]
</div>
<p> Therefore, we get that </p>
<div class="displaymath" id="a0000000050">
  \begin{eqnarray*}  \| F(x_{k+1})\| & \leq & \mu \| F(x_{k})\| ^{1+\gamma } {\lt} \mu \tfrac {\| F(x_{k})\| }{\mu }=\| F(x_{k})\| . \end{eqnarray*}
</div>
<p> Notice also that </p>
<div class="displaymath" id="a0000000051">
  \[ \| F(x_{k+1})\| ^\gamma {\lt} \| F(x_{k})\| ^\gamma {\lt} \tfrac {1+\gamma }{\mu }. \]
</div>
<p> We also have the implication </p>
<div class="displaymath" id="a0000000052">
  \[ x_{k+1} \in Q \Rightarrow \{ x_k\} \subseteq Q. \]
</div>
<p> Set </p>
<div class="displaymath" id="a0000000053">
  \[ s_k=\mu ^{\frac{1}{\gamma }}\| F(x_k)\| . \]
</div>
<p> Then, we have in turn </p>
<div class="displaymath" id="a0000000054">
  \begin{eqnarray*}  s_{k+1}& \leq & \tfrac {1}{1+\gamma }s_k^{1+\gamma }\\ & \leq & \ldots \leq \tfrac {1}{1+\gamma }\big(\tfrac {1}{1+\gamma }\big)^{\frac{(1+\gamma )[(1+\gamma )^k-1]}{\gamma }}s_0^{(1+\gamma )^{1+k}}\\ & =& (1+\gamma )^{\frac{1}{\gamma }}\Big(\tfrac {s_0}{(1+\gamma )^{\frac{1}{\gamma }}}\Big)^{(1+\gamma )^{k+1}}. \end{eqnarray*}
</div>
<p> But we have that </p>
<div class="displaymath" id="a0000000055">
  \[ s_0=\xi \mu ^{\frac{1}{\gamma }} {\lt} (1+\gamma )^{\frac{1}{\gamma }}. \]
</div>
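<p>For clarity, we note (an added step) that the last two displays combine via the substitution \(t_k=s_k/(1+\gamma )^{1/\gamma }:\)</p>

```latex
% The recursion s_{k+1} \le \tfrac{1}{1+\gamma} s_k^{1+\gamma} becomes, with
% t_k = s_k (1+\gamma)^{-1/\gamma},
t_{k+1}=\frac{s_{k+1}}{(1+\gamma)^{1/\gamma}}
  \le \frac{s_k^{1+\gamma}}{(1+\gamma)^{1+\frac{1}{\gamma}}}
  =\left(\frac{s_k}{(1+\gamma)^{1/\gamma}}\right)^{1+\gamma}=t_k^{1+\gamma},
% so t_k \le t_0^{(1+\gamma)^k}, and t_0 = s_0(1+\gamma)^{-1/\gamma} < 1
% forces t_k \to 0, i.e. s_k \to 0.
```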
<p> Hence, we obtain \(\lim _{k\rightarrow \infty }s_k =0,\) which implies \(\lim _{k\rightarrow \infty }\| F(x_k)\| =0.\) The set \(Q\) is bounded, so there exists an accumulation point \(x^\ast \in Q\) of sequence \(\{ x_k\} \) such that, by the continuity of \(F,\) \(F(x^\ast )=0.\) <div class="proof_wrapper" id="a0000000056">
  <div class="proof_heading">
    <span class="proof_caption">
    Proof
    </span>
    <span class="expand-proof">▼</span>
  </div>
  <div class="proof_content">
  
  </div>
</div> </p>
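<p>The tail of the proof can also be checked numerically: whenever \(s_0 {\lt} (1+\gamma )^{\frac{1}{\gamma }},\) the majorizing recursion \(s_{k+1}=\tfrac {1}{1+\gamma }s_k^{1+\gamma }\) drives \(s_k\) to zero. A brief sketch (ours; the value \(\gamma =\tfrac {1}{2}\) and the starting value are arbitrary choices):</p>

```python
gamma = 0.5                              # sample exponent, gamma > 0
limit = (1 + gamma) ** (1 / gamma)       # threshold (1+gamma)^(1/gamma) = 2.25
s = 0.99 * limit                         # any s0 strictly below the threshold
for _ in range(80):
    s = s ** (1 + gamma) / (1 + gamma)   # majorizing recursion from the proof
# s_k = limit * t_k with t_{k+1} <= t_k^(1+gamma) and t_0 < 1, so s_k -> 0
assert s < 1e-12
```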
<p>If \(F\) is Fréchet differentiable and \(D\) is a convex set, then due to the estimate </p>
<div class="equation" id="3.36*">
<p>
  <div class="equation_content">
    \begin{equation} \label{3.36*}F(x_{k+1})=\int _0^1[F'(x_k+\theta (x_{k+1}-x_k))-F'(x_k)](x_{k+1}-x_k)d\theta \end{equation}
  </div>
  <span class="equation_label">22</span>
</p>
</div>
<p>by repeating the proof of Theorem <a href="#T3.5">14</a> using (<a href="#3.36*">22</a>), we arrive at the following semilocal convergence result for Newton’s method (<a href="#1.2">2</a>) under contravariant conditions. <div class="theorem_thmwrapper " id="T3.6">
  <div class="theorem_thmheading">
    <span class="theorem_thmcaption">
    Theorem
    </span>
    <span class="theorem_thmlabel">15</span>
  </div>
  <div class="theorem_thmcontent">
  <p> Suppose: \(F:D\subseteq \mathbb {R}^n\rightarrow \mathbb {R}^n\) is Fréchet-differentiable; for any \(x\in D,\,  F'(x)^{-1}\in L(\mathbb {R}^n);\) </p>
<div class="displaymath" id="a0000000057">
  \[ \| (F'(y)-F'(x))(y-x)\| \leq \mu _1\| F'(x)(y-x)\| ^{1+\gamma } \]
</div>
<p> for some \(\mu _1 {\gt} 0\) and each \(x, y\in D;\) Define the set \(Q_1\) by </p>
<div class="displaymath" id="a0000000058">
  \[ Q_1=\{ x\in D: \| F(x)\| ^\gamma {\lt} \tfrac {1+\gamma }{\mu _1}\} ; \]
</div>
<p> \(x_0\in D\) is such that </p>
<div class="displaymath" id="a0000000059">
  \[ \| F(x_0)\| \leq \xi  \]
</div>
<p> and </p>
<div class="displaymath" id="a0000000060">
  \[ \xi \mu _1^{\frac{1}{\gamma }} {\lt} (1+\gamma )^{\frac{1}{\gamma }}. \]
</div>
<p> Then, sequence \(\{ x_k\} \) generated by Newton’s method (<a href="#1.2">2</a>) is well defined, remains in \(Q_1\) for each \(k=0,1,2,\ldots \) and converges to some \(x^\ast \in Q_1\) such that \(F(x^\ast )=0.\) Moreover, sequence \(\{ F(x_k)\} \) converges to zero and satisfies </p>
<div class="displaymath" id="a0000000061">
  \[ \| F(x_{k+1})\| \leq \mu _1\| F(x_k)\| ^{1+\gamma },\qquad \textnormal{for each}\, \,  k=0,1,2,\ldots . \]
</div>

  </div>
</div> <div class="remark_thmwrapper " id="a0000000062">
  <div class="remark_thmheading">
    <span class="remark_thmcaption">
    Remark
    </span>
    <span class="remark_thmlabel">16</span>
  </div>
  <div class="remark_thmcontent">
  <p>If \(\gamma =1,\) Theorem <a href="#T3.6">15</a> reduces to the corresponding Theorem in <span class="cite">
	[
	<a href="#26" >26</a>
	]
</span>. However, there are examples where \(\gamma \neq 1\) (see Example <a href="#ex1">17</a>). Then, in this case the results in <span class="cite">
	[
	<a href="#26" >26</a>
	]
</span> cannot be applied.<span class="qed">□</span></p>

  </div>
</div> </p>
<h1 id="a0000000063">4 Numerical Examples</h1>
<p> We present numerical examples to illustrate the theoretical results. First, we present an example under contravariant conditions. </p>
<p><div class="example_thmwrapper " id="ex1">
  <div class="example_thmheading">
    <span class="example_thmcaption">
    Example
    </span>
    <span class="example_thmlabel">17</span>
  </div>
  <div class="example_thmcontent">
  <p> Let \(X=Y=\mathbb {R}^2,\, \, D=\{ x=(x_1,x_2):\frac{1}{2} \leq x_i\leq 2, i=1,2\} \) equipped with the max-norm and define \(F\) on \(D\) for \(x= (x_1,x_2)\) by </p>
<div class="displaymath" id="a0000000064">
  \[ F(x)=\left[\begin{array}{c} x_1^{\frac{3}{2}}-2x_1+x_2\\ x_1+x_2^{\frac{3}{2}}-2x_2 \end{array}\right]. \]
</div>
<p> Then, the Fréchet-derivative is given by </p>
<div class="displaymath" id="a0000000065">
  \[ F'(x)=\left[ \begin{array}{cc} \frac{3}{2}x_1^{\frac{1}{2}}-2& 1\\ 1& \frac{3}{2}x_2^{\frac{1}{2}}-2 \end{array}\right].  \]
</div>
<p> Therefore, for any \(x, y\in D,\) we have in turn that </p>
<div class="displaymath" id="a0000000066">
  \begin{eqnarray*}  \| F'(x)-F'(y)\| & =& \left\| \left[ \begin{array}{cc} \tfrac {3}{2}(x_1^{\frac{1}{2}}-y_1^{\frac{1}{2}})& 0\\ 0& \tfrac {3}{2}(x_2^{\frac{1}{2}}-y_2^{\frac{1}{2}}) \end{array}\right]\right\| \\ & =& \tfrac {3}{2}\max \{ |x_1^{\frac{1}{2}}-y_1^{\frac{1}{2}}|, |x_2^{\frac{1}{2}}-y_2^{\frac{1}{2}}|\} \\ & \leq & \tfrac {3}{2}\max \{ |x_1-y_1|^{\frac{1}{2}}, |x_2-y_2|^{\frac{1}{2}}\} \\ & \leq & \tfrac {3}{2}[\max \{ |x_1-y_1|, |x_2-y_2|\} ]^{\frac{1}{2}} =\tfrac {3}{2}\| x-y\| ^{\frac{1}{2}}. \end{eqnarray*}
</div>
<p> We also have that \(\| F'(x)\| \leq 2\) for each \(x\in D.\) Then, we get that </p>
<div class="displaymath" id="a0000000067">
  \[ \| (F'(x)-F'(y))(x-y)\| \leq \tfrac {3\sqrt{2}}{8}\| F'(x)(x-y)\| ^{\frac{3}{2}}. \]
</div>
<p> Therefore, we can choose \(\gamma =\frac{1}{2},\,  \mu _1=\frac{3\sqrt{2}}{8}\) and </p>
<div class="displaymath" id="a0000000068">
  \[ Q_1=\{ x\in D:\| F(x)\| {\lt} 8\} . \]
</div>
<p> Then, the conclusions of Theorem <a href="#T3.6">15</a> hold and Newton’s method converges to \(x^\ast =(1,1).\) <span class="qed">□</span></p>
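<p>The convergence to \(x^\ast =(1,1)\) can be observed directly. The following sketch (our own illustration; the starting point \((1.8,1.8)\in D\) is an arbitrary choice) runs Newton’s method with the Fréchet derivative above, solving each \(2\times 2\) linear system by Cramer’s rule:</p>

```python
import math

def F(x1, x2):
    # the map of Example 17 on D = [1/2, 2]^2
    return (x1 ** 1.5 - 2 * x1 + x2, x1 + x2 ** 1.5 - 2 * x2)

def J(x1, x2):
    # Frechet derivative computed in the example, stored row-wise as (a, b, c, d)
    return (1.5 * math.sqrt(x1) - 2, 1.0, 1.0, 1.5 * math.sqrt(x2) - 2)

x1, x2 = 1.8, 1.8                     # starting point in D (an assumed choice)
for _ in range(25):
    f1, f2 = F(x1, x2)
    a, b, c, d = J(x1, x2)
    det = a * d - b * c               # solve J * step = -F by Cramer's rule
    x1 -= (f1 * d - b * f2) / det
    x2 -= (a * f2 - f1 * c) / det
assert max(abs(x1 - 1), abs(x2 - 1)) < 1e-12   # converged to x* = (1, 1)
```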

  </div>
</div> Next, we present an example for the local convergence case. <div class="example_thmwrapper " id="a0000000069">
  <div class="example_thmheading">
    <span class="example_thmcaption">
    Example
    </span>
    <span class="example_thmlabel">18</span>
  </div>
  <div class="example_thmcontent">
  <p>Let \(X=Y=\mathbb {R},\, D={U}(0,1)\) and define function \(F\) on \(D\) by </p>
<div class="equation" id="4.3">
<p>
  <div class="equation_content">
    \begin{equation} \label{4.3} F(x)=e^x-1. \end{equation}
  </div>
  <span class="equation_label">23</span>
</p>
</div>
<p>Then, we have that \(x^\ast =0.\) Using (<a href="#4.3">23</a>) we get in turn that </p>
<div class="displaymath" id="a0000000070">
  \begin{align*}  F'(y)^{-1}(F(y)-F(x^\ast )-F'(y)(y-x^\ast ))=& e^{-y}(e^y-1-e^yy)\\ =& 1-y-(1-y+\tfrac {y^2}{2!}-\tfrac {y^3}{3!}+\tfrac {y^4}{4!}-\ldots )\\ =& -(\tfrac {1}{2!}-\tfrac {y}{3!}+\tfrac {y^2}{4!}-\ldots )y^2 \end{align*}
</div>
<p> so, </p>
<div class="displaymath" id="a0000000071">
  \begin{eqnarray*}  \| F'(y)^{-1}(F(y)-F(x^\ast )-F'(y)(y-x^\ast ))\| & =& |\tfrac {1}{2!}-\tfrac {y}{3!}+\tfrac {y^2}{4!}-\ldots ||y|^2\\ & \leq & (\tfrac {1}{2!}+\tfrac {|y|}{3!}+\tfrac {|y|^2}{4!}+\ldots )|y|^2\\ & \leq & (\tfrac {1}{2!}+\tfrac {1}{3!}+\tfrac {1}{4!}+\ldots )|y|^2\\ & =& (e-2)|y|^2. \end{eqnarray*}
</div>
<p> Hence, we can choose \(\lambda =e-2\) and \(\beta =1.\) Moreover, we have that </p>
<div class="displaymath" id="a0000000072">
  \begin{eqnarray*} & & F'(y)^{-1}(F(y)-F(x)-F'(y)(y-x))=\\ & =& e^{-y}(e^y-1-e^x+1-e^y(y-x))\\ & =& 1+x-y-e^{x-y}\\ & =& 1+x-y-[1+(x-y)+\tfrac {(x-y)^2}{2!}+\tfrac {(x-y)^3}{3!}+\tfrac {(x-y)^4}{4!}+\ldots ]\\ & =& -(\tfrac {1}{2!}+\tfrac {x-y}{3!}+\tfrac {(x-y)^2}{4!}+\ldots )(x-y)^2 \end{eqnarray*}
</div>
<p> so, </p>
<div class="displaymath" id="a0000000073">
  \begin{align*}  \| F'(y)^{-1}(F(y)-F(x)-F'(y)(y-x))\| =&  |\tfrac {1}{2!}+\tfrac {x-y}{3!}+\tfrac {(x-y)^2}{4!}+\ldots ||y-x|^2\\ \leq & (\tfrac {1}{2!}+\tfrac {|x-y|}{3!}+\tfrac {|x-y|^2}{4!}+\ldots )|y-x|^2\\ \leq & (\tfrac {1}{2!}+\tfrac {2}{3!}+\tfrac {2^2}{4!}+\ldots )|y-x|^2\\ \leq & (e^2-3)|y-x|^2. \end{align*}
</div>
<p> Hence, we can choose \(\lambda _1=e^2-3\) and \(\beta =1.\) Therefore, we obtain </p>
<div class="displaymath" id="a0000000074">
  \[ R_1=\tfrac {1}{\lambda _1}=0.227839421 {\lt} 1.392211191=\tfrac {1}{\lambda }=R. \]
</div>
<p> That is, we deduce that the new convergence ball is larger and the new ratio of convergence is smaller than the old convergence ball and the old ratio of convergence, respectively. Finally, the convergence of Newton’s method is guaranteed by Theorem <a href="#T3.3">12</a> provided that \(x_0\in U(x^\ast , R).\) <span class="qed">□</span></p>
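<p>To illustrate the enlarged ball numerically, the sketch below (ours; the starting point is an arbitrary choice) runs Newton’s method for \(F(x)=e^x-1\) from \(x_0=1.3,\) which lies inside \(U(x^\ast ,R)\) but well outside \(U(x^\ast ,R_1):\)</p>

```python
import math

x = 1.3                       # |x0 - x*| = 1.3: inside R ~ 1.392, outside R_1 ~ 0.228
errors = []
for _ in range(10):
    x = x - (math.exp(x) - 1) / math.exp(x)   # Newton step for F(x) = e^x - 1
    errors.append(abs(x))                     # distance to x* = 0
assert errors[-1] < 1e-12     # converges, as only the new radius R guarantees
# monotone decrease of the error (up to floating-point roundoff)
assert all(e2 <= e1 + 1e-15 for e1, e2 in zip(errors, errors[1:]))
```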

  </div>
</div> </p>
<p><small class="footnotesize">  </small></p>
<div class="bibliography">
<h1>Bibliography</h1>
<dl class="bibliography">
  <dt><a name="1">1</a></dt>
  <dd><p><a href ="https://doi.org/10.1016/s0377-0427(03)00420-5 "> <i class="sc">S. Amat, S. Busquier, J.M. Gutiérrez</i>, <i class="it">Geometric constructions of iterative functions to solve nonlinear equations</i>, J. Comput. Appl. Math., <b class="bf">157</b> (2003), 197–205. <img src="img-0001.png" alt="\includegraphics[scale=0.1]{ext-link.png}" style="width:12.0px; height:10.700000000000001px" />
</a> </p>
</dd>
  <dt><a name="2">2</a></dt>
  <dd><p><a href ="https://doi.org/10.1016/j.jmaa.2004.04.008 "> <i class="sc">I.K. Argyros</i>, <i class="it">A unifying local-semilocal convergence analysis and applications for two-point Newton-like methods in Banach spaces</i>, J. Math. Anal. Appl., <b class="bf">298</b> (2004), 374–397. <img src="img-0001.png" alt="\includegraphics[scale=0.1]{ext-link.png}" style="width:12.0px; height:10.700000000000001px" />
</a> </p>
</dd>
  <dt><a name="3">3</a></dt>
  <dd><p><i class="sc">I.K. Argyros</i>, <i class="it">Computational theory of iterative methods. Series: Studies in Computational Mathematics</i>, 15, Editors: C.K. Chui and L. Wuytack, Elsevier Publ. Co. New York, U.S.A, 2007. </p>
</dd>
  <dt><a name="4">4</a></dt>
  <dd><p><a href ="https://doi.org/10.1007/s12190-008-0140-6 "> <i class="sc">I.K. Argyros</i>, <i class="it">Concerning the convergence of Newton’s method and quadratic majorants</i>, J. Appl. Math. Comput., <b class="bf">29</b> (2009), 391–400. <img src="img-0001.png" alt="\includegraphics[scale=0.1]{ext-link.png}" style="width:12.0px; height:10.700000000000001px" />
</a> </p>
</dd>
  <dt><a name="5">5</a></dt>
  <dd><p><a href ="https://doi.org/10.1090/s0025-5718-2010-02398-1 "> <i class="sc">I.K. Argyros</i>, <i class="it">A semilocal convergence analysis for directional Newton methods</i>, Math. Comput., <b class="bf">80</b> (2011), 327–343. <img src="img-0001.png" alt="\includegraphics[scale=0.1]{ext-link.png}" style="width:12.0px; height:10.700000000000001px" />
</a> </p>
</dd>
  <dt><a name="6">6</a></dt>
  <dd><p><a href ="https://doi.org/10.1016/j.jco.2011.12.003 "> <i class="sc">I.K. Argyros, S. Hilout</i>, <i class="it">Weaker conditions for the convergence of Newton’s method</i>, J. Complexity, <b class="bf">28</b> (2012), 364–387. <img src="img-0001.png" alt="\includegraphics[scale=0.1]{ext-link.png}" style="width:12.0px; height:10.700000000000001px" />
</a> </p>
</dd>
  <dt><a name="7">7</a></dt>
  <dd><p><i class="sc">I.K. Argyros, Y.J. Cho, S. Hilout</i>, <i class="it">Numerical methods for equations and its applications</i>, CRC Press/Taylor and Francis Publ., New York, 2012. </p>
</dd>
  <dt><a name="8">8</a></dt>
  <dd><p><i class="sc">I.K. Argyros, S. Hilout</i>, <i class="it">Computational methods in nonlinear analysis</i>, World Scientific Publ. Comp., New Jersey, USA 2013. </p>
</dd>
  <dt><a name="9">9</a></dt>
  <dd><p><a href ="https://doi.org/10.1007/bf02699126 "> <i class="sc">E. Bierstone, P.D. Milman</i>, <i class="it">Semianalytic and subanalytic sets</i>, Publ. Math. Inst. Hautes Études Sci., <b class="bf">67</b> (1988), 5–42. <img src="img-0001.png" alt="\includegraphics[scale=0.1]{ext-link.png}" style="width:12.0px; height:10.700000000000001px" />
</a> </p>
</dd>
  <dt><a name="10">10</a></dt>
  <dd><p><a href ="https://doi.org/10.1007/s10107-007-0166-9 "> <i class="sc">J. Bolte, A. Daniilidis, A.S. Lewis</i>, <i class="it">Tame functions are semismooth</i>, Math. Programming (series B), <b class="bf">117</b> (2009), 5–19. <img src="img-0001.png" alt="\includegraphics[scale=0.1]{ext-link.png}" style="width:12.0px; height:10.700000000000001px" />
</a> </p>
</dd>
  <dt><a name="11">11</a></dt>
  <dd><p><a href ="https://doi.org/10.1007/bf02241866 "> <i class="sc">V. Candela, A. Marquina</i>, <i class="it">Recurrence relations for rational cubic methods I: The Halley method</i>, Computing, <b class="bf">44</b> (1990), 169–184. <img src="img-0001.png" alt="\includegraphics[scale=0.1]{ext-link.png}" style="width:12.0px; height:10.700000000000001px" />
</a> </p>
</dd>
  <dt><a name="12">12</a></dt>
  <dd><p><a href ="https://doi.org/10.1007/bf02238803 "> <i class="sc">V. Candela, A. Marquina</i>, <i class="it">Recurrence relations for rational cubic methods II: The Chebyshev method</i>, Computing, <b class="bf">45</b> (1990), 355–367. <img src="img-0001.png" alt="\includegraphics[scale=0.1]{ext-link.png}" style="width:12.0px; height:10.700000000000001px" />
</a> </p>
</dd>
  <dt><a name="13">13</a></dt>
  <dd><p><a href ="https://doi.org/10.1016/j.camwa.2011.01.034 "> <i class="sc">C. Chun, P. Stanica, B. Neta</i>, <i class="it">Third order family of methods in Banach spaces</i>, Computers Math. Appl., <b class="bf">61</b> (2011), 1665–1675. <img src="img-0001.png" alt="\includegraphics[scale=0.1]{ext-link.png}" style="width:12.0px; height:10.700000000000001px" />
</a> </p>
</dd>
  <dt><a name="14">14</a></dt>
  <dd><p><i class="sc">F.H. Clarke</i>, <i class="it">Optimization and Nonsmooth Analysis</i>, Society for Industrial and Applied Mathematics, 1990. </p>
</dd>
  <dt><a name="15">15</a></dt>
  <dd><p><a href =" https://doi.org/10.1080/02331939208843840 "> <i class="sc">J.P. Dedieu</i>, <i class="it">Penalty functions in subanalytic optimization</i>, Optimization, <b class="bf">26</b> (1992), 27–32. <img src="img-0001.png" alt="\includegraphics[scale=0.1]{ext-link.png}" style="width:12.0px; height:10.700000000000001px" />
</a> </p>
</dd>
  <dt><a name="16">16</a></dt>
  <dd><p><i class="sc">J.E. Dennis Jr, R.B. Schnabel</i>, <i class="it">Numerical methods for unconstrained optimization and nonlinear equations</i>, Prentice-Hall, Englewood Cliffs, 1982. </p>
</dd>
  <dt><a name="17">17</a></dt>
  <dd><p><i class="sc">P. Deuflhard</i>, <i class="it">Newton methods for nonlinear problems: Affine invariance and Adaptive Algorithms</i>, Berlin: Springer-Verlag, 2004. </p>
</dd>
  <dt><a name="18">18</a></dt>
  <dd><p><a href ="https://doi.org/10.1016/s0898-1221(98)00168-0 "> <i class="sc">J.M. Gutiérrez, M.A. Hernández</i>, <i class="it">Recurrence relations for the super-Halley method</i>, Computers Math. Appl., <b class="bf">36</b> (1998), 1–8. <img src="img-0001.png" alt="\includegraphics[scale=0.1]{ext-link.png}" style="width:12.0px; height:10.700000000000001px" />
</a> </p>
</dd>
  <dt><a name="19">19</a></dt>
  <dd><p><a href ="https://doi.org/10.1016/s0377-0427(97)00076-9 "> <i class="sc">J.M. Gutiérrez, M.A. Hernández</i>, <i class="it">Third-order iterative methods for operators with bounded second derivative</i>, J. Comput. Appl. Math., <b class="bf">82</b> (1997), 171–183. <img src="img-0001.png" alt="\includegraphics[scale=0.1]{ext-link.png}" style="width:12.0px; height:10.700000000000001px" />
</a> </p>
</dd>
  <dt><a name="20">20</a></dt>
  <dd><p><a href =" https://doi.org/10.1016/s0377-0427(99)00347-7 "> <i class="sc">M.A. Hernández, M.A. Salanova</i>, <i class="it">Modification of the Kantorovich assumptions for semilocal convergence of the Chebyshev method</i>, Journal of Computational and Applied Mathematics, <b class="bf">126</b> (2000), 131–143. <img src="img-0001.png" alt="\includegraphics[scale=0.1]{ext-link.png}" style="width:12.0px; height:10.700000000000001px" />
</a> </p>
</dd>
  <dt><a name="21">21</a></dt>
  <dd><p><a href =" https://doi.org/10.1016/s0898-1221(00)00286-8 "> <i class="sc">M.A. Hernández</i>, <i class="it">Chebyshev’s approximation algorithms and applications</i>, Computers Math. Applic., <b class="bf">41</b> (2001), 433–455. <img src="img-0001.png" alt="\includegraphics[scale=0.1]{ext-link.png}" style="width:12.0px; height:10.700000000000001px" />
</a> </p>
</dd>
  <dt><a name="22">22</a></dt>
  <dd><p><i class="sc">L.V. Kantorovich, G.P. Akilov</i>, <i class="it">Functional Analysis</i>, Pergamon Press, Oxford, 1982. </p>
</dd>
  <dt><a name="23">23</a></dt>
  <dd><p><i class="sc">S. Lojasiewicz</i>, <i class="it">Ensembles semi-analytiques</i>, IHES Mimeographed notes, 1964. </p>
</dd>
  <dt><a name="24">24</a></dt>
  <dd><p><a href ="https://doi.org/10.1137/0315061"> <i class="sc">R. Mifflin</i>, <i class="it">Semi-smooth and semi-convex functions in constrained optimization</i>, SIAM J. Control and Optimization, <b class="bf">15</b> (1977), 959–972. <img src="img-0001.png" alt="\includegraphics[scale=0.1]{ext-link.png}" style="width:12.0px; height:10.700000000000001px" />
</a> </p>
</dd>
  <dt><a name="25">25</a></dt>
  <dd><p><i class="sc">J.M. Ortega, W.C. Rheinboldt</i>, <i class="it">Iterative Solution of Nonlinear Equations in Several Variables</i>, Academic press, New York, 1970. </p>
</dd>
  <dt><a name="26">26</a></dt>
  <dd><p><a href =" https://doi.org/10.1007/bf01581275 "> <i class="sc">L. Qi, J. Sun</i>, <i class="it">A non-smooth version of Newton’s method</i>, Mathematical Programming, <b class="bf">58</b> (1993), 353–367. <img src="img-0001.png" alt="\includegraphics[scale=0.1]{ext-link.png}" style="width:12.0px; height:10.700000000000001px" />
</a> </p>
</dd>
  <dt><a name="27">27</a></dt>
  <dd><p><i class="sc">R.T. Rockafellar</i>, <i class="it">Favorable classes of Lipschitz-continuous functions in subgradient optimization</i>, in E. Nurminski ed., Nondifferentiable Optimization (Pergamon Press, New York, 1982), 125–143. </p>
</dd>
  <dt><a name="28">28</a></dt>
  <dd><p><a href ="https://doi.org/10.1215/s0012-7094-96-08416-1 "> <i class="sc">L. Van Den Dries, C. Miller</i>, <i class="it">Geometric categories and o-minimal structures</i>, Duke. Math. J., <b class="bf">84</b> (1996), 497–540. <img src="img-0001.png" alt="\includegraphics[scale=0.1]{ext-link.png}" style="width:12.0px; height:10.700000000000001px" />
</a> </p>
</dd>
</dl>


</div>
</div> <!--main-text -->
</div> <!-- content-wrapper -->
</div> <!-- content -->
</div> <!-- wrapper -->

<nav class="prev_up_next">
</nav>

<script type="text/javascript" src="/var/www/clients/client1/web1/web/files/jnaat-files/journals/1/articles/js/jquery.min.js"></script>
<script type="text/javascript" src="/var/www/clients/client1/web1/web/files/jnaat-files/journals/1/articles/js/plastex.js"></script>
<script type="text/javascript" src="/var/www/clients/client1/web1/web/files/jnaat-files/journals/1/articles/js/svgxuse.js"></script>
</body>
</html>