<!DOCTYPE html>
<html lang="en">
<head>
<script>
  MathJax = {
    tex: {
      inlineMath: [['\\(','\\)']]
    }
  };
</script>
<script type="text/javascript" src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-chtml.js">
</script>
<meta name="generator" content="plasTeX" />
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1" />
<title>Local convergence of Newton’s method using Kantorovich convex majorants</title>
<link rel="stylesheet" href="/var/www/clients/client1/web1/web/files/jnaat-files/journals/1/articles/styles/theme-white.css" />
</head>

<body>

<div class="wrapper">

<div class="content">
<div class="content-wrapper">


<div class="main-text">

<div class="titlepage">
<h1>Local convergence of Newton’s method using Kantorovich convex majorants</h1>
<p class="authors">
<span class="author">Ioannis K. Argyros\(^\ast \)</span>
</p>
<p class="date">December 23, 2007.</p>
</div>
<p>\(^\ast \) Cameron University, Department of Mathematical Sciences, Lawton, OK 73505, USA, e-mail: <span class="tt">iargyros@cameron.edu</span> </p>

<div class="abstract"><p> We are concerned with the problem of approximating a solution of an operator equation using Newton’s method. Recently in the elegant work by Ferreira and Svaiter <span class="cite">
	[
	<a href="#6" >6</a>
	]
</span> a semilocal convergence analysis was provided which makes clear the relationship of the majorant function with the operator involved. However, these results cannot, in their present form, provide information about the local convergence of Newton’s method. Here we rectify this problem by using two flexible majorant functions. The radius of convergence is also found. Finally, under the same computational cost, we show that our radius of convergence is larger, and the error estimates on the distances involved are finer, than the corresponding ones in <span class="cite">
	[
	<a href="#1" >1</a>
	]
</span>, <span class="cite">
	[
	<a href="#11" >11</a>
	]
</span>–<span class="cite">
	[
	<a href="#13" >13</a>
	]
</span>. </p>
<p><b class="bf">MSC.</b> 65G99, 65K10, 47H17, 49M15, 90C30. </p>
<p><b class="bf">Keywords.</b> Newton’s method, Banach space, Kantorovich’s majorants, convex function, local/semilocal convergence, Fréchet–derivative, radius of convergence. </p>
</div>
<h1 id="a0000000002">1  Introduction</h1>
<p> In this study we are concerned with the problem of approximating a solution \(x^\star \) of equation </p>
<div class="equation" id="1.1">
<p>
  <div class="equation_content">
    \begin{equation} \label{1.1} F(x) =0, \end{equation}
  </div>
  <span class="equation_label">1.1</span>
</p>
</div>
<p> where \(F\) is a continuously Fréchet–differentiable operator defined on a convex subset \(\mathcal D\) of a Banach space \(\mathcal X\) with values in a Banach space \(\mathcal Y\). </p>
<p>A large number of problems in applied mathematics and also in engineering are solved by finding the solutions of certain equations. For example, dynamic systems are mathematically modeled by difference or differential equations, and their solutions usually represent the states of the systems. For the sake of simplicity, assume that a time–invariant system is driven by the equation \(\dot{x} =Q(x)\), for some suitable operator \(Q\), where \(x\) is the state. Then the equilibrium states are determined by solving equation (<a href="#1.1">1.1</a>). Similar equations are used in the case of discrete systems. The unknowns of engineering equations can be functions (difference, differential, and integral equations), vectors (systems of linear or nonlinear algebraic equations), or real or complex numbers (single algebraic equations with single unknowns). Except in special cases, the most commonly used solution methods are iterative: starting from one or several initial approximations, a sequence is constructed that converges to a solution of the equation. Iteration methods are also applied to solving optimization problems. In such cases, the iteration sequences converge to an optimal solution of the problem at hand. Since all of these methods have the same recursive structure, they can be introduced and discussed in a general framework. </p>
<p>The most popular method for generating a sequence \(\{ x_n \} \) approximating \(x^\star \) is undoubtedly Newton’s method: </p>
<div class="equation" id="1.2">
<p>
  <div class="equation_content">
    \begin{equation}  \label{1.2} x_0 \in \mathcal D,\qquad x_{n+1}=x_n - F'(x_n)^{-1}\, F(x_n) \quad (n \geq 0). \end{equation}
  </div>
  <span class="equation_label">1.2</span>
</p>
</div>
<p> Here \(F'(x) \in \mathcal L(\mathcal X, \mathcal Y)\), the space of bounded linear operators from \(\mathcal X\) into \(\mathcal Y\), denotes the Fréchet–derivative of operator \(F\) <span class="cite">
	[
	<a href="#4" >4</a>
	]
</span>, <span class="cite">
	[
	<a href="#8" >8</a>
	]
</span>. </p>
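<p>In finite dimensions, where \(F'(x)\) is the Jacobian matrix, one step of (<a href="#1.2">1.2</a>) amounts to solving a linear system rather than forming an inverse. A minimal sketch (the operator \(F\) below is a hypothetical illustration, not taken from the paper):</p>

```python
import numpy as np

def newton(F, J, x0, tol=1e-12, max_iter=50):
    """Newton's method (1.2): x_{n+1} = x_n - F'(x_n)^{-1} F(x_n).

    The inverse is never formed explicitly; each step solves the
    linear system F'(x_n) s = -F(x_n) for the Newton correction s.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        s = np.linalg.solve(J(x), -F(x))
        x = x + s
        if np.linalg.norm(s) < tol:
            break
    return x

# Hypothetical example: F(x, y) = (x^2 + y^2 - 1, x - y),
# whose positive solution is (sqrt(2)/2, sqrt(2)/2).
F = lambda v: np.array([v[0]**2 + v[1]**2 - 1.0, v[0] - v[1]])
J = lambda v: np.array([[2.0*v[0], 2.0*v[1]], [1.0, -1.0]])
root = newton(F, J, [1.0, 0.5])
```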
<p>There is an extensive literature on local as well as semilocal convergence theorems for Newton’s method, see, e.g. <span class="cite">
	[
	<a href="#1" >1</a>
	]
</span>–<span class="cite">
	[
	<a href="#4" >4</a>
	]
</span>, and the references there. </p>
<p>In particular, we are motivated by the elegant work of Ferreira and Svaiter <span class="cite">
	[
	<a href="#6" >6</a>
	]
</span> where a semilocal convergence analysis was provided for Kantorovich’s theorem <span class="cite">
	[
	<a href="#8" >8</a>
	]
</span> which makes clear the relationship between the majorant function and the operator \(F\). However, the main result (see Theorem 2 in <span class="cite">
	[
	<a href="#6" >6</a>
	]
</span>) cannot, in its present form, provide information about the local convergence of Newton’s method. Here, we rectify this problem. We introduce two flexible majorant functions to provide a local convergence analysis for Newton’s method (<a href="#1.2">1.2</a>). The radius of convergence is also given. </p>
<p>Finally, under the same computational cost, we show that for special choices of the majorant functions involved, our radius of convergence is larger, and the error estimates on the distances involved are finer, than the corresponding ones in <span class="cite">
	[
	<a href="#1" >1</a>
	]
</span>, <span class="cite">
	[
	<a href="#11" >11</a>
	]
</span>–<span class="cite">
	[
	<a href="#13" >13</a>
	]
</span>. </p>
<h1 id="a0000000003">2 Local convergence analysis of Newton’s method (<a href="#1.2">1.2</a>)</h1>
<p> We need a result from convex analysis <span class="cite">
	[
	<a href="#9" >9</a>
	]
</span>, <span class="cite">
	[
	<a href="#10" >10</a>
	]
</span>: <div class="prop_thmwrapper " id="P.2.1">
  <div class="prop_thmheading">
    <span class="prop_thmcaption">
    Proposition
    </span>
    <span class="prop_thmlabel">2.1</span>
  </div>
  <div class="prop_thmcontent">
  <p> Let \(\mathcal I\subset (-\infty , +\infty ) \) be an interval, and \(g:\,  \mathcal I\longrightarrow (-\infty , +\infty ) \) be convex. </p>
<ol class="enumerate">
<li><p>For any \(s_0 \in {\rm int}\,  (\mathcal I)\), the correspondence \( \displaystyle s \longrightarrow \displaystyle \tfrac {g(s_0) -g(s)}{s_0 - s}\), \(s \in \mathcal I\), \(s \neq s_0\),&#8195;is increasing, and there exists in \((-\infty , +\infty )\) </p>
<div class="displaymath" id="a0000000004">
  \[  \begin{array}{lll} D^{-}g(s_0)= \displaystyle \lim _{s \rightarrow s_0^-} \displaystyle \tfrac {g(s_0) -g(s)}{s_0 - s}= \displaystyle \sup _{s {\lt} s_0} \displaystyle \tfrac {g(s_0) -g(s)}{s_0 - s}. \end{array}  \]
</div>
</li>
<li><p>If \(s,v,t \in \mathcal I\), \(s {\lt} v\), and \(s \leq t \leq v\), then </p>
<div class="displaymath" id="a0000000005">
  \[  g(t) -g(s) \leq (g(v)- g(s)) \, \,  \displaystyle \tfrac {t-s}{v-s}.  \]
</div>
</li>
</ol>

  </div>
</div> We now state a portion of a theorem (see Theorem 2 in <span class="cite">
	[
	<a href="#6" >6</a>
	]
</span>) due to Ferreira and Svaiter, needed for what follows: <div class="thm_thmwrapper " id="T.2.2">
  <div class="thm_thmheading">
    <span class="thm_thmcaption">
    Theorem
    </span>
    <span class="thm_thmlabel">2.2</span>
  </div>
  <div class="thm_thmcontent">
<p> Let \(F\,  : \,  \,  \mathcal D\subseteq \mathcal X\longrightarrow \mathcal Y\) be a continuous operator, continuously Fréchet–differentiable on \( {\rm int} \, \,  (\mathcal D) \). Take \(x_0 \in {\rm int} \, \,  (\mathcal D) \) with \(F'(x_0)^{-1} \in \mathcal L(\mathcal Y, \mathcal X)\). Suppose there exist \(R{\gt}0\), and a continuously differentiable function \(f \,  : \,  [0,R)\longrightarrow (-\infty ,+\infty )\), such that \( U(x_0 , R ) =\{ x \in \mathcal X \, :\,  \parallel x -x_0 \parallel {\lt} R \}  \subset \mathcal D\), and </p>
<ul class="itemize">
<li><p>\(\parallel F'(x_0)^{-1} \, \,  (F'(y)  -F'(x))\parallel \leq f'(\parallel y -x \parallel +  \parallel x -  x_0 \parallel )  - f'(\parallel x  -x_0 \parallel )\), for \( x, y \in U(x_0 , R )\), \(\parallel x -x_0 \parallel + \parallel y - x \parallel {\lt} R\), </p>
</li>
  <li><p>\(\parallel F'(x_0)^{-1} \, \,  F(x_0) \parallel \leq f(0),\) </p>
</li>
  <li><p>\(f(0) {\gt} 0,\) </p>
</li>
  <li><p>\(f'(0) = -1,\) </p>
</li>
  <li><p>\(f'\, \,  {\rm is \, \,  convex \, \,  and \, \,  strictly \, \,  increasing \,  \,  and}\, \,  f(t)=0\, \,  {\rm for \, \,  some }\, \,  t \in (0,R).\) </p>
</li>
</ul>
<p>Then \(f\) has a smallest zero \(t_{\star }\) in \((0,R)\), the sequences generated by Newton’s method <span class="rm">(<a href="#1.2">1.2</a>)</span> for solving \(f(t)=0\), and \(F(x)=0\) with starting point \(t_0 =0\) and \(x_0\), respectively, </p>
<div class="displaymath" id="a0000000006">
  \[  t_{n+1} = t_n - f'(t_n)^{-1} \, \,  f(t_n),\qquad x_{n+1} = x_n - F'(x_n) ^{-1} \, \,  F(x_n),\quad n\geq 0  \]
</div>
<p> are well defined, \(\{  t_n \} \) is strictly increasing, is contained in \([0, t_{\star } )\), and converges to \(t_{\star } \), \(\{  x _n \} \) is contained in \(U(x_0 , t_{\star } )\), and converges to a point \(x^\star \) in \(\overline{U} (x_0 , t_{\star } )\), which is the unique zero of \(F\) in \(\overline{U} (x_0 , t_{\star } )\). </p>

  </div>
</div> Theorem <a href="#T.2.2">2.2</a> provides a semilocal convergence result for Newton’s method, and in this form cannot give us information about the local convergence of Newton’s method. Indeed, when \(x_0 =x^\star \), hypotheses (\({{\mathcal H}2}\)) and (\({ {\mathcal H}3}\)) contradict each other. In what follows we rectify this problem. </p>
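<p>For the classical quadratic Kantorovich majorant \(f(t) = \tfrac {L}{2}\, t^2 - t + \beta \) (with \(2 L \beta \leq 1\)), the scalar Newton sequence of Theorem <a href="#T.2.2">2.2</a> can be computed directly; it increases to the smallest zero \(t_{\star } = \big(1 - \sqrt{1 - 2 L \beta }\big)/L\). A small numerical sketch under these assumptions (the constants below are hypothetical):</p>

```python
import math

def majorant_sequence(L, beta, n_steps=25):
    """Scalar Newton iteration t_{k+1} = t_k - f(t_k)/f'(t_k), t_0 = 0,
    for the Kantorovich majorant f(t) = (L/2) t^2 - t + beta.
    Requires 2*L*beta <= 1, so that f has real zeros."""
    f = lambda t: 0.5 * L * t**2 - t + beta
    df = lambda t: L * t - 1.0
    t, seq = 0.0, [0.0]
    for _ in range(n_steps):
        t = t - f(t) / df(t)
        seq.append(t)
    return seq

L, beta = 2.0, 0.2                                    # 2*L*beta = 0.8 < 1
t_star = (1.0 - math.sqrt(1.0 - 2.0 * L * beta)) / L  # smallest zero of f
seq = majorant_sequence(L, beta)
# In exact arithmetic the sequence is strictly increasing and
# converges (quadratically) to t_star.
```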
<p>We state the main local convergence result for Newton’s method (<a href="#1.2">1.2</a>): <div class="thm_thmwrapper " id="T.2.3">
  <div class="thm_thmheading">
    <span class="thm_thmcaption">
    Theorem
    </span>
    <span class="thm_thmlabel">2.3</span>
  </div>
  <div class="thm_thmcontent">
<p> Let \(F\,  : \,  \,  \mathcal D\subseteq \mathcal X\longrightarrow \mathcal Y\) be a continuous operator, continuously Fréchet–differentiable on \( {\rm int} \, \,  (\mathcal D) \). Suppose that there exist: <br />\(x^\star \in {\rm int} \, \,  (\mathcal D)\) with \(F'(x^\star )^{-1} \in \mathcal L(\mathcal Y, \mathcal X)\), and \(F(x^\star )=0\); <br />\(R{\gt}0\), and continuously differentiable functions \(f_0 , f \,  : \,  [0,R)\longrightarrow (-\infty ,+\infty )\), such that \( U(x^\star , R ) \subset \mathcal D\),<br />and </p>
<div class="equation" id="2.1">
<p>
  <div class="equation_content">
    \begin{equation} \label{2.1} \parallel F'(x^\star )^{-1} \, \,  (F'(x) -F'(x^\star ))\parallel \leq f_0'(\parallel x- x^\star \parallel ) -f_0'( 0), \end{equation}
  </div>
  <span class="equation_label">2.1</span>
</p>
</div>
<div class="equation" id="2.2">
<p>
  <div class="equation_content">
    \begin{equation} \label{2.2} \parallel F'(x^\star )^{-1} \, \,  (F'(y) -F'(x))\parallel \leq f'(\parallel y- x \parallel + \parallel x- x^\star \parallel ) -f'( \parallel x- x^\star \parallel ), \end{equation}
  </div>
  <span class="equation_label">2.2</span>
</p>
</div>
<p> for    \(x, y \in U(x^\star , R )\), and \(\parallel x -x^\star \parallel + \parallel y - x \parallel {\lt} R\); <br />functions \(f_0'\) and \(f'\) are convex and strictly increasing with </p>
<div class="equation" id="2.3">
<p>
  <div class="equation_content">
    \begin{equation} \label{2.3} f_0' (0) = f'(0) =-1, \end{equation}
  </div>
  <span class="equation_label">2.3</span>
</p>
</div>
<div class="equation" id="2.4">
<p>
  <div class="equation_content">
    \begin{equation} \label{2.4} f_0' (t) \leq f'(t) \leq 0,\quad t \in [0,R]; \end{equation}
  </div>
  <span class="equation_label">2.4</span>
</p>
</div>
<p> for \(x, y \in {U}(x^\star , R )\), \(0 \leq t {\lt} v {\lt} R\), with \(x \in \overline{U}(x^\star , t )\) and \(\parallel y -x \parallel \leq v -t\), define the function \( r_{\displaystyle f_0, f}\,  : \,  [0,R)^4 \longrightarrow [0, +\infty )\) by </p>
<div class="equation" id="2.5">
<p>
  <div class="equation_content">
    \begin{equation} \label{2.5} r_{\displaystyle f_0, f} = r_{\displaystyle f_0,f} (t, v, \parallel y -x \parallel , \parallel x - x^\star \parallel ) = - \displaystyle \tfrac {e(t,v)\, \,  \parallel y - x \parallel }{(v-t)^2 \, \,  f_0'(\parallel x - x^\star \parallel )}, \end{equation}
  </div>
  <span class="equation_label">2.5</span>
</p>
</div>
<p> and set </p>
<div class="equation" id="2.6">
<p>
  <div class="equation_content">
    \begin{equation} \label{2.6} t^\star = \sup \{  t\in [0,R]\,  : \,  r_{\displaystyle f_0, f} \leq 1\} , \end{equation}
  </div>
  <span class="equation_label">2.6</span>
</p>
</div>
<p> where </p>
<div class="equation" id="2.7">
<p>
  <div class="equation_content">
    \begin{equation} \label{2.7} e(t,v) = f(v) - f(t) - f'(t) \,  (v-t). \end{equation}
  </div>
  <span class="equation_label">2.7</span>
</p>
</div>
<p> Then, sequence \(\{  x_n \} \) generated by Newton’s method <span class="rm">(<a href="#1.2">1.2</a>)</span>, is well defined, remains in \( {U} (x^\star , t^\star )\) for all \(n \geq 0\), and converges to \(x^\star \) Q–linearly, so that </p>
<div class="equation" id="2.8">
<p>
  <div class="equation_content">
    \begin{equation} \label{2.8} \parallel x_{n+1} - x^\star \parallel \leq \displaystyle \tfrac {1}{2} \,  \parallel x_n - x^\star \parallel , \end{equation}
  </div>
  <span class="equation_label">2.8</span>
</p>
</div>
<p> provided that \(x_0 \in {U} (x^\star , t^\star ) \). </p>
<p>Moreover, if </p>
<div class="equation" id="2.9">
<p>
  <div class="equation_content">
    \begin{equation} \label{2.9} f_0 '(t^\star ) < 0, \end{equation}
  </div>
  <span class="equation_label">2.9</span>
</p>
</div>
<p> then the following estimate holds for all \(n \geq 0\) </p>
<div class="equation" id="2.10">
<p>
  <div class="equation_content">
    \begin{equation} \label{2.10} \parallel x_{n+1} - x^\star \parallel \leq \displaystyle \tfrac {D^- f'(t^\star )}{- 2 \,  f_0' (t^\star )} \,  \parallel x_n - x^\star \parallel ^2 . \end{equation}
  </div>
  <span class="equation_label">2.10</span>
</p>
</div>
<p> Furthermore, \(x^\star \) is the unique zero of \(F\) in \({U} (x^\star , t^\star ) \). </p>

  </div>
</div> From now on we assume that the hypotheses of Theorem <a href="#T.2.3">2.3</a> hold, with the exception of (<a href="#2.9">2.9</a>), which will be considered to hold only when explicitly stated. </p>
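<p>The conclusions of Theorem <a href="#T.2.3">2.3</a> are easy to observe numerically. A hypothetical scalar illustration (not taken from the paper): for \(F(x) = x^2 - 2\), \(x^\star = \sqrt{2}\), and a starting point close to \(x^\star \), the distances to \(x^\star \) at least halve at every step, as in (<a href="#2.8">2.8</a>); in fact they decrease quadratically, as in (<a href="#2.10">2.10</a>):</p>

```python
def newton_errors(x0, n_steps=4):
    """Errors |x_k - x*| for Newton's method applied to F(x) = x^2 - 2,
    whose positive zero is x* = sqrt(2).  Here F'(x) = 2x."""
    x_star = 2.0 ** 0.5
    x = x0
    errors = [abs(x - x_star)]
    for _ in range(n_steps):
        x = x - (x * x - 2.0) / (2.0 * x)   # Newton step (1.2)
        errors.append(abs(x - x_star))
    return errors

errors = newton_errors(1.2)
# Each error is at most half the previous one, consistent with (2.8).
```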
<p>We shall show Theorem <a href="#T.2.3">2.3</a> through a series of lemmas: <div class="lem_thmwrapper " id="L.2.4">
  <div class="lem_thmheading">
    <span class="lem_thmcaption">
    Lemma
    </span>
    <span class="lem_thmlabel">2.4</span>
  </div>
  <div class="lem_thmcontent">
  <p> If \( x\in \overline{U}(x^\star , t)\), \(t\in [0, t^\star )\), then </p>
<div class="displaymath" id="a0000000007">
  \[  F'(x)^{-1} \in L(\mathcal Y, \mathcal X), \]
</div>
<p> and </p>
<div class="equation" id="2.11">
<p>
  <div class="equation_content">
    \begin{equation}  \label{2.11} \parallel F'(x )^{-1} \, \,  F'(x^\star ) \parallel \leq -\displaystyle \tfrac {1}{f_0'(\parallel x- x^\star \parallel )} \leq - \displaystyle \tfrac {1}{ f'(t)} . \end{equation}
  </div>
  <span class="equation_label">2.11</span>
</p>
</div>

  </div>
</div> <div class="proof_wrapper" id="a0000000008">
  <div class="proof_heading">
    <span class="proof_caption">
    Proof
    </span>
    <span class="expand-proof">▼</span>
  </div>
  <div class="proof_content">
  
  </div>
</div>Let \( x\in \overline{U}(x^\star , t)\), \(t\in [0, t^\star )\). </p>
<p>Using hypotheses (<a href="#2.1">2.1</a>), (<a href="#2.3">2.3</a>), and (<a href="#2.4">2.4</a>), we obtain in turn </p>
<div class="equation" id="2.12">
<p>
  <div class="equation_content">
    \begin{equation}  \label{2.12} \begin{array}{lll} \parallel F'(x^\star )^{-1} \, \,  (F'(x ) -F'(x^\star )) \parallel &  \leq &  f_0'(\parallel x - x^\star \parallel ) -f_0'(0) \\ &  \leq &  f_0'(\parallel x - x^\star \parallel ) +1 \\ &  \leq &  f_0'( t ) +1 < 1. \end{array} \end{equation}
  </div>
  <span class="equation_label">2.12</span>
</p>
</div>
<p> It follows from (<a href="#2.12">2.12</a>), and the Banach Lemma on invertible operators <span class="cite">
	[
	<a href="#4" >4</a>
	]
</span>, <span class="cite">
	[
	<a href="#8" >8</a>
	]
</span> that \( F'(x)^{-1} \in L(\mathcal Y, \mathcal X)\), so that (<a href="#2.11">2.11</a>) holds true. </p>
<p>That completes the proof of Lemma <a href="#L.2.4">2.4</a>. <div class="lem_thmwrapper " id="L.2.5">
  <div class="lem_thmheading">
    <span class="lem_thmcaption">
    Lemma
    </span>
    <span class="lem_thmlabel">2.5</span>
  </div>
  <div class="lem_thmcontent">
  <p> Let \( x, y \in {U}(x^\star , R)\), and \(0 \leq t {\lt} v {\lt} R\). </p>
<p>If \( x \in \overline{U}(x^\star , t) \), and \(\parallel y - x \parallel \leq v- t\), then for </p>
<div class="displaymath" id="a0000000009">
  \[ E(z,w) = F(w) - F(z) - F'(z) \,  (w - z), \, \,  z \in {U}(x^\star , t), w \in \mathcal D,  \]
</div>
<p> we have the following estimates </p>
<div class="equation" id="2.13">
<p>
  <div class="equation_content">
    \begin{equation}  \label{2.13} \parallel F'(x^\star )^{-1} \, \,  E(x,y) \parallel \leq e(t,v) \,  \displaystyle \tfrac {\parallel y - x \parallel ^2 }{(v-t) ^2}, \end{equation}
  </div>
  <span class="equation_label">2.13</span>
</p>
</div>
<p> and </p>
<div class="equation" id="2.14">
<p>
  <div class="equation_content">
    \begin{equation}  \label{2.14} r_{\displaystyle f_0, f} \leq 1, \end{equation}
  </div>
  <span class="equation_label">2.14</span>
</p>
</div>
<p> where the function \(r_{\displaystyle f_0, f} \) is given by <span class="rm">(<a href="#2.5">2.5</a>)</span>. </p>

  </div>
</div> <div class="proof_wrapper" id="a0000000010">
  <div class="proof_heading">
    <span class="proof_caption">
    Proof
    </span>
    <span class="expand-proof">▼</span>
  </div>
  <div class="proof_content">
  
  </div>
</div>Using the convexity of \(f'\), hypothesis (<a href="#2.2">2.2</a>), and the definition of operator \(E\) we obtain in turn: </p>
<div class="displaymath" id="2.15">
  \begin{align}  \label{2.15} & \parallel F'(x^\star )^{-1} \,  E(x,y) \parallel \leq \\ & \leq \displaystyle \int _0 ^1 \parallel F'(x^\star )^{-1} \,  {\bigg(} F'(x + \theta \,  (y-x)) - F'(x){\bigg)} \parallel \,  \parallel y -x \parallel \,  \,  {\rm d} \theta \nonumber \\ & \leq \displaystyle \int _0 ^1 {\bigg(} f'(\parallel x - x^\star \parallel + \theta \,  \parallel y - x \parallel ) - f'(\parallel x - x^\star \parallel ){\bigg)} \, \,  \parallel y - x \parallel \,  \,  {\rm d} \theta \nonumber \\ & \leq \displaystyle \int _0 ^1 {\bigg(} f' (t + \theta \,  (v-t)) - f'(t) {\bigg)} \, \,  \displaystyle \tfrac {\parallel y - x \parallel ^2}{v -t} \,  \,  {\rm d} \theta \nonumber , \end{align}
</div>
<p> which implies estimates (<a href="#2.13">2.13</a>). </p>
<p>In view of hypothesis (<a href="#2.4">2.4</a>), we get </p>
<div class="equation" id="2.16">
<p>
  <div class="equation_content">
    \begin{equation}  \label{2.16} \displaystyle \tfrac { \displaystyle \int _0 ^1 {\bigg(} f' (t + \theta \,  (v-t)) - f'(t) {\bigg)} \,  {\rm d}\theta }{- f_0' (t)} \leq 1 \end{equation}
  </div>
  <span class="equation_label">2.16</span>
</p>
</div>
<p> which together with (<a href="#2.5">2.5</a>) implies (<a href="#2.14">2.14</a>). That completes the proof of Lemma <a href="#L.2.5">2.5</a>. As in <span class="cite">
	[
	<a href="#6" >6</a>
	]
</span> let us denote by \( \eta _{\displaystyle f_0, f} \) and \(N_F\) the maps: </p>
<div class="equation" id="2.17">
<p>
  <div class="equation_content">
    \begin{equation}  \label{2.17} \begin{array}{rrr} \eta _{\displaystyle f_0, f} \, \,  : \, \,  [0, t^\star ) \times (t,R) & \longrightarrow &  (- \infty , + \infty )\\ (t,v) & \longrightarrow &  t - \displaystyle \tfrac {e(t,v)}{f_0'(t)}, \end{array} \end{equation}
  </div>
  <span class="equation_label">2.17</span>
</p>
</div>
<p> and </p>
<div class="equation" id="2.18">
<p>
  <div class="equation_content">
    \begin{equation}  \label{2.18} \begin{array}{rrl} N_F \, \,  : \, \,  U(x^\star , t^\star ) & \longrightarrow &  \mathcal Y\\ x & \longrightarrow &  x - F'(x)^{-1} \, \,  F(x). \end{array} \end{equation}
  </div>
  <span class="equation_label">2.18</span>
</p>
</div>
<p> According to (<a href="#2.4">2.4</a>) and Lemma <a href="#L.2.4">2.4</a>, we have \(f_0'(t) \neq 0\) and \(F'(x) ^{-1} \in \mathcal L(\mathcal Y, \mathcal X)\), respectively. </p>
<p>Let \(x \in U(x^\star , t^\star )\); then \(N_F (x)\) may not belong to \( {U} (x^\star , t^\star )\), or may even fail to belong to the domain of \(F\). That is, on \( {U} (x^\star , t^\star )\) we can only guarantee that the first iteration is well defined. Therefore, we need additional results to guarantee that the Newton iterations can be repeated indefinitely. </p>
<p>Let us define subsets of \(U(x^\star , t^\star )\) on which the Newton iteration map (<a href="#2.18">2.18</a>) is “well behaved”: </p>
<div class="equation" id="2.19">
<p>
  <div class="equation_content">
    \begin{equation}  \label{2.19} \begin{array}{l} K(t) = K(t,v) = \{  x \in \overline{U} (x^\star , t) ,\,  \,  t \in [0, t^\star ], \, \,  v \in (t,R), \, \,  \\ y \in \overline{U} (x, v-t)\,  : \, \,  \parallel F'(y) ^{-1} \,  E(x,y) \parallel \leq r_{f_0, f} \,  \parallel x- y \parallel \} , \end{array} \end{equation}
  </div>
  <span class="equation_label">2.19</span>
</p>
</div>
<p> and </p>
<div class="equation" id="2.20">
<p>
  <div class="equation_content">
    \begin{equation}  \label{2.20} K = \displaystyle \bigcup _{t \in [0, t^\star ]} K(t) . \end{equation}
  </div>
  <span class="equation_label">2.20</span>
</p>
</div>
<p> <div class="lem_thmwrapper " id="L.2.6">
  <div class="lem_thmheading">
    <span class="lem_thmcaption">
    Lemma
    </span>
    <span class="lem_thmlabel">2.6</span>
  </div>
  <div class="lem_thmcontent">
  <p> If \(t\in [0, t^\star )\), \(v \in (t,R)\), then the following hold true: </p>
<div class="displaymath" id="a0000000011">
  \[  K(t) \subset U(x^\star , t^\star ),  \]
</div>
<p> and </p>
<div class="displaymath" id="a0000000012">
  \[ N_F(K(t)) \subset K(\eta _{f_0, f} (t)). \]
</div>

  </div>
</div> <div class="proof_wrapper" id="a0000000013">
  <div class="proof_heading">
    <span class="proof_caption">
    Proof
    </span>
    <span class="expand-proof">▼</span>
  </div>
  <div class="proof_content">
  
  </div>
</div>Simply replace \(x_0\) by \(x^\star \) in the proof of Lemma 8 in <span class="cite">
	[
	<a href="#6" >6</a>
	]
</span> (see also Proposition 4 in <span class="cite">
	[
	<a href="#6" >6</a>
	]
</span>). That completes the proof of Lemma <a href="#L.2.6">2.6</a>. In view of (<a href="#1.2">1.2</a>) and (<a href="#2.18">2.18</a>) we have: </p>
<div class="equation" id="2.21">
<p>
  <div class="equation_content">
    \begin{equation}  \label{2.21} x_{n+1} = N_F(x_n)\qquad (n \geq 0). \end{equation}
  </div>
  <span class="equation_label">2.21</span>
</p>
</div>
<p> <i class="it">Proof of Theorem <a href="#T.2.3">2.3</a>.</i> According to Lemmas <a href="#L.2.4">2.4</a>–<a href="#L.2.6">2.6</a>, it only remains to show that \(x_n \in U(x^\star , t^\star )\) (\(n \geq 1\)), \(\displaystyle \lim _{n \rightarrow \infty } x_n = x^\star \), and that estimates (<a href="#2.8">2.8</a>) and (<a href="#2.10">2.10</a>) hold true for all \(n \geq 0\). </p>
<p>By hypothesis \(x_0 \in U(x^\star , t^\star )\). Let us assume \(x_k \in U(x^\star , t^\star )\) for all \(k \leq n\). We shall show \(x_{k+1} \in U(x^\star , t^\star )\). Using (<a href="#2.21">2.21</a>), and Lemma <a href="#L.2.5">2.5</a> with \(y = x^\star \), \(x = x_k\), we get </p>
<div class="equation" id="2.22">
<p>
  <div class="equation_content">
    \begin{equation}  \label{2.22} \parallel x_{k+1} - x^\star \parallel \leq \parallel x_{k} - x^\star \parallel < t^\star , \end{equation}
  </div>
  <span class="equation_label">2.22</span>
</p>
</div>
<p> which shows \(x_{k+1} \in U(x^\star , t^\star )\), and \(\displaystyle \lim _{k \rightarrow \infty } x_k = x^\star \). </p>
<p>The proof of estimates (<a href="#2.8">2.8</a>) and (<a href="#2.10">2.10</a>) is given as in <span class="cite">
	[
	<a href="#6" >6</a>
	]
</span> with function \(f_0'\) replacing \(f'\) in the denominator of the estimates involved. </p>
<p>Finally, to show uniqueness in \( {U}(x^\star , t^\star )\), let \(y ^\star \) be a zero of \(F\) in \( {U}(x^\star , t^\star ) \). Define linear operator \(\mathcal{M}\) by </p>
<div class="equation" id="2.23">
<p>
  <div class="equation_content">
    \begin{equation}  \label{2.23} {\mathcal M} = \displaystyle \int _0 ^1 F' ( x^\star + \theta \, (y^\star - x^\star ) ) \,  {\rm d}\theta . \end{equation}
  </div>
  <span class="equation_label">2.23</span>
</p>
</div>
<p> Using (<a href="#2.1">2.1</a>), and estimate (<a href="#2.12">2.12</a>) with \( x^\star + \theta \, (y^\star - x^\star ) \in {U}(x^\star , t^\star ) \) in place of \(x\), we conclude that \({\mathcal M}^{-1} \) exists. It then follows from the identity </p>
<div class="equation" id="2.24">
<p>
  <div class="equation_content">
    \begin{equation}  \label{2.24} F( y^\star ) - F(x^\star ) = \displaystyle {\mathcal M} \, (y ^\star -x^\star ), \end{equation}
  </div>
  <span class="equation_label">2.24</span>
</p>
</div>
<p> that \(x^\star = y^\star \). That completes the proof of Theorem <a href="#T.2.3">2.3</a>. </p>
<h1 id="a0000000014">3 Applications</h1>
<p> <div class="example_thmwrapper " id="E.3.1">
  <div class="example_thmheading">
    <span class="example_thmcaption">
    Example
    </span>
    <span class="example_thmlabel">3.1</span>
  </div>
  <div class="example_thmcontent">
<p> Let us assume there exists \(L {\gt}0\) such that the Lipschitz condition </p>
<div class="equation" id="3.1">
<p>
  <div class="equation_content">
    \begin{equation}  \label{3.1} \parallel F'(x^\star ) ^{-1}\,  (F'(y) - F'(x)) \parallel \leq L\,  \parallel x-y \parallel \quad  {\rm holds \, \,  for \, \,  all}\, \,   x, y \in \overline{U} (x^\star , R) \subseteq \mathcal D. \end{equation}
  </div>
  <span class="equation_label">3.1</span>
</p>
</div>
<p> Define scalar majorant function \(f \, \,  : \, \,  [0,R] \longrightarrow (-\infty , + \infty ) \) by </p>
<div class="equation" id="3.2">
<p>
  <div class="equation_content">
    \begin{equation}  \label{3.2} f(t) = \displaystyle \tfrac {L}{2} \,  t^2 - t + \beta \qquad {\rm for \, \,  some} \, \, \,  \beta \geq 0, \end{equation}
  </div>
  <span class="equation_label">3.2</span>
</p>
</div>
<p> and set </p>
<div class="equation" id="3.3">
<p>
  <div class="equation_content">
    \begin{equation}  \label{3.3} f_0(t) = f(t) \qquad t \in [0,R]. \end{equation}
  </div>
  <span class="equation_label">3.3</span>
</p>
</div>
<p> It then follows from (<a href="#2.16">2.16</a>) that we can set: </p>
<div class="equation" id="3.4">
<p>
  <div class="equation_content">
    \begin{equation}  \label{3.4} t_R ^\star =R = \displaystyle \tfrac {2}{3\,  L}, \end{equation}
  </div>
  <span class="equation_label">3.4</span>
</p>
</div>
<p> which is the radius of convergence obtained by Rheinboldt <span class="cite">
	[
	<a href="#11" >11</a>
	]
</span>, <span class="cite">
	[
	<a href="#4" >4</a>
	]
</span>. </p>
<p>It follows from (<a href="#3.1">3.1</a>) that there exists \(L_0 {\gt} 0\) such that: </p>
<div class="equation" id="3.5">
<p>
  <div class="equation_content">
    \begin{equation}  \label{3.5} \parallel F'(x^\star ) ^{-1}\,  (F'(x) - F'(x^\star )) \parallel \leq L _0 \,  \parallel x-x^\star \parallel , \quad {\rm for \, \,  all} \quad x \in \overline{U} (x^\star , R) . \end{equation}
  </div>
  <span class="equation_label">3.5</span>
</p>
</div>
<p> Clearly </p>
<div class="equation" id="3.6">
<p>
  <div class="equation_content">
    \begin{equation}  \label{3.6} L_0 \leq L \end{equation}
  </div>
  <span class="equation_label">3.6</span>
</p>
</div>
<p> holds and \(\displaystyle \tfrac {L}{L_0}\) can be arbitrarily large <span class="cite">
	[
	<a href="#2" >2</a>
	]
</span>–<span class="cite">
	[
	<a href="#4" >4</a>
	]
</span>. </p>
<p>Let us define function \(f_0\) by </p>
<div class="equation" id="3.7">
<p>
  <div class="equation_content">
    \begin{equation}  \label{3.7} f_0(t) = \displaystyle \tfrac {L_0}{2} \,  t^2 - t + \beta . \end{equation}
  </div>
  <span class="equation_label">3.7</span>
</p>
</div>
<p> It then follows from (<a href="#2.16">2.16</a>) that we can set: </p>
<div class="equation" id="3.8">
<p>
  <div class="equation_content">
    \begin{equation}  \label{3.8} t_A ^\star =R = \displaystyle \tfrac {2}{2\,  L_0 + L}. \end{equation}
  </div>
  <span class="equation_label">3.8</span>
</p>
</div>
<p> By comparing (<a href="#3.4">3.4</a>) with (<a href="#3.8">3.8</a>) we conclude: </p>
<div class="equation" id="3.9">
<p>
  <div class="equation_content">
    \begin{equation}  \label{3.9} t_R ^\star \leq t_A ^\star . \end{equation}
  </div>
  <span class="equation_label">3.9</span>
</p>
</div>
<p> Note that if strict inequality holds in (<a href="#3.6">3.6</a>), then it also holds in (<a href="#3.9">3.9</a>). </p>
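<p>Since \(2 L_0 + L \leq 3 L\) whenever \(L_0 \leq L\), inequality (<a href="#3.9">3.9</a>) follows immediately, and the new radius (<a href="#3.8">3.8</a>) can be up to three times larger than (<a href="#3.4">3.4</a>) as \(L_0 / L \rightarrow 0\). A quick numerical check with hypothetical constants:</p>

```python
def radius_rheinboldt(L):
    """Rheinboldt's radius of convergence (3.4): 2 / (3 L)."""
    return 2.0 / (3.0 * L)

def radius_argyros(L0, L):
    """Radius (3.8) using both constants: 2 / (2 L0 + L)."""
    return 2.0 / (2.0 * L0 + L)

L = 1.0
for L0 in (1.0, 0.5, 0.1):       # hypothetical center-Lipschitz constants
    t_R, t_A = radius_rheinboldt(L), radius_argyros(L0, L)
    assert t_R <= t_A            # inequality (3.9); equality only if L0 = L
```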

  </div>
</div> <div class="example_thmwrapper " id="E.3.2">
  <div class="example_thmheading">
    <span class="example_thmcaption">
    Example
    </span>
    <span class="example_thmlabel">3.2</span>
  </div>
  <div class="example_thmcontent">
  <p> Let \(f \,  : \,  [0,R) \longrightarrow (-\infty , + \infty )\) be a twice continuously differentiable function with \(f'\) convex. Then \(F\) satisfies (<a href="#2.2">2.2</a>) if and only if: </p>
<div class="equation" id="3.10">
<p>
  <div class="equation_content">
    \begin{equation}  \label{3.10} \parallel F'(x^\star ) ^{-1} \, \,  F''(x) \parallel \leq f''(\parallel x - x^\star \parallel ) \quad {\rm for \, \,  all} \, \,  x\in \mathcal D \, \,  {\rm such \, \,  that } \, \,  x \in U(x^\star , R) \end{equation}
  </div>
  <span class="equation_label">3.10</span>
</p>
</div>
<p> (see Lemma 14 in <span class="cite">
	[
	<a href="#6" >6</a>
	]
</span> or <span class="cite">
	[
	<a href="#13" >13</a>
	]
</span>). </p>
<p>Let us define function \(f\) on \([0,R)\) by </p>
<div class="equation" id="3.11">
<p>
  <div class="equation_content">
    \begin{equation}  \label{3.11} f(t) = \displaystyle \tfrac {\gamma \,  t^2}{1 - \gamma \,  t} \  - t + \beta , \end{equation}
  </div>
  <span class="equation_label">3.11</span>
</p>
</div>
<p> where \(R {\lt}\displaystyle \tfrac {1}{\gamma } \) for some \(\gamma {\gt}0\). </p>
<p>If for example \(F\) is an analytic operator, then (<a href="#3.10">3.10</a>) is satisfied for </p>
<div class="equation" id="3.12">
<p>
  <div class="equation_content">
    \begin{equation}  \label{3.12} \gamma ^\star = \displaystyle \sup _{k\geq 2} \parallel \displaystyle \tfrac {F'(x^\star ) ^{-1} F^{(k)} (x^\star ) }{k!} \parallel ^{ \tfrac {1}{k-1}} . \end{equation}
  </div>
  <span class="equation_label">3.12</span>
</p>
</div>
<p> Smale <span class="cite">
	[
	<a href="#12" >12</a>
	]
</span>, and Wang <span class="cite">
	[
	<a href="#13" >13</a>
	]
</span> have used (<a href="#3.11">3.11</a>) to provide a convergence analysis for Newton’s method (<a href="#1.2">1.2</a>). </p>
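<p> As an illustration (not part of the original text), consider the scalar analytic function \(F(x)= {\rm e}^x -1\) with \(x^\star =0\): then \(F^{(k)}(x^\star )=1\) for all \(k\) and \(F'(x^\star )^{-1}=1\), so (<a href="#3.12">3.12</a>) reduces to a supremum over \((1/k!)^{1/(k-1)}\), attained at \(k=2\). </p>

```python
import math

# gamma* of (3.12) for F(x) = exp(x) - 1 at x* = 0: every derivative F^(k)(0) = 1
# and F'(0)^{-1} = 1, so the terms are (1/k!)^(1/(k-1)), which decrease in k.
terms = [(1.0 / math.factorial(k)) ** (1.0 / (k - 1)) for k in range(2, 50)]
gamma_star = max(terms)
print(gamma_star)  # 0.5, the k = 2 term
```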
<p>In particular Wang <span class="cite">
	[
	<a href="#13" >13</a>
	]
</span> showed convergence when \(F\) is only twice Fréchet continuously differentiable, for \(\gamma \) satisfying </p>
<div class="equation" id="3.13">
<p>
  <div class="equation_content">
    \begin{equation}  \label{3.13} \gamma ^\star \leq \gamma . \end{equation}
  </div>
  <span class="equation_label">3.13</span>
</p>
</div>
<p> We have also used (<a href="#3.11">3.11</a>) to provide a convergence analysis for the Secant method <span class="cite">
	[
	<a href="#5" >5</a>
	]
</span> (see also <span class="cite">
	[
	<a href="#4" >4</a>
	]
</span>). </p>
<p>Let us also define function \(f_0\) by </p>
<div class="equation" id="3.14">
<p>
  <div class="equation_content">
    \begin{equation}  \label{3.14} f_0(t) = f(t)\qquad t \in [0,R). \end{equation}
  </div>
  <span class="equation_label">3.14</span>
</p>
</div>
<p> By solving (<a href="#2.16">2.16</a>) we obtain, for analytic operators \(F\), Smale’s radius of convergence <span class="cite">
	[
	<a href="#12" >12</a>
	]
</span>: </p>
<div class="equation" id="3.15">
<p>
  <div class="equation_content">
    \begin{equation}  \label{3.15} t_S ^\star = \displaystyle \tfrac {5 - \sqrt{13}}{6\,  \gamma ^\star }, \end{equation}
  </div>
  <span class="equation_label">3.15</span>
</p>
</div>
<p> and, for twice Fréchet continuously differentiable operators \(F\), Wang’s radius <span class="cite">
	[
	<a href="#13" >13</a>
	]
</span>: </p>
<div class="equation" id="3.16">
<p>
  <div class="equation_content">
    \begin{equation}  \label{3.16} t_W ^\star = \displaystyle \tfrac {5 - \sqrt{13}}{6\,  \gamma }. \end{equation}
  </div>
  <span class="equation_label">3.16</span>
</p>
</div>
<p> In what follows we shall show that we can enlarge the radii given by (<a href="#3.15">3.15</a>) and (<a href="#3.16">3.16</a>). </p>
<p>We can see that for the function \(f\) given by (<a href="#3.11">3.11</a>), condition (<a href="#3.10">3.10</a>), or equivalently (<a href="#2.2">2.2</a>), implies that there exists \(\gamma _0 {\gt} 0\) satisfying </p>
<div class="equation" id="3.17">
<p>
  <div class="equation_content">
    \begin{equation}  \label{3.17} \gamma _0 \leq \gamma , \end{equation}
  </div>
  <span class="equation_label">3.17</span>
</p>
</div>
<p> so that function \(f_0 \,  : \,  [0, \displaystyle \tfrac {1}{\gamma _0}) \longrightarrow (-\infty , + \infty )\) satisfies condition (<a href="#2.1">2.1</a>) for \(R \in [0, \displaystyle \tfrac {1}{\gamma _0}) \). </p>
<p>Note also that \(\displaystyle \tfrac {\gamma }{\gamma _0}\) can be arbitrarily large <span class="cite">
	[
	<a href="#2" >2</a>
	]
</span>–<span class="cite">
	[
	<a href="#4" >4</a>
	]
</span>. It follows from (<a href="#3.17">3.17</a>) that there exists \(a \in [0,1]\) such that </p>
<div class="equation" id="3.18">
<p>
  <div class="equation_content">
    \begin{equation}  \label{3.18} \gamma _0 = a\,  \gamma . \end{equation}
  </div>
  <span class="equation_label">3.18</span>
</p>
</div>
<p> Set </p>
<div class="equation" id="3.19">
<p>
  <div class="equation_content">
    \begin{equation}  \label{3.19} b =1-a , \end{equation}
  </div>
  <span class="equation_label">3.19</span>
</p>
</div>
<p> and define scalar polynomial \(P_a\) by </p>
<div class="equation" id="3.20">
<p>
  <div class="equation_content">
    \begin{equation}  \label{3.20} P_a(t) = 3 \,  a^2 \,  t^3 + a \, (6 \,  b - a) \,  t^2 + (3 \,  b^2 - 2 \,  a \,  b -1) \,  t - b^2. \end{equation}
  </div>
  <span class="equation_label">3.20</span>
</p>
</div>
<p> By the definition of polynomial \(P_a\) and for fixed \(a\), we get </p>
<div class="equation" id="3.21">
<p>
  <div class="equation_content">
    \begin{equation}  \label{3.21} P_a(0) = -b^2 \leq 0, \qquad {\rm and} \quad P_a(1)=1. \end{equation}
  </div>
  <span class="equation_label">3.21</span>
</p>
</div>
<p> Using (<a href="#3.21">3.21</a>) and the intermediate value theorem we conclude that \(P_a\) has a zero in \([0,1)\). Let \(t_a\) denote the minimal number in \([0,1)\) satisfying \(P_a(t_a)=0\). </p>
<p>Define </p>
<div class="equation" id="3.22">
<p>
  <div class="equation_content">
    \begin{equation}  \label{3.22} t_a ^\star = \displaystyle \tfrac {1 - t_a}{ \gamma }. \end{equation}
  </div>
  <span class="equation_label">3.22</span>
</p>
</div>
<p> In particular for \(a=1\), \(t_1 = \displaystyle \tfrac {1+ \sqrt{13}}{6}\), and consequently </p>
<div class="equation" id="3.23">
<p>
  <div class="equation_content">
    \begin{equation}  \label{3.23} t_1 ^\star = \displaystyle \tfrac {5 - \sqrt{13}}{6\,  \gamma }= t_W^\star . \end{equation}
  </div>
  <span class="equation_label">3.23</span>
</p>
</div>
<p> Simple algebra shows that for all \(a \in [0,1]\), \(P_a(t_1) \geq 0\), which implies </p>
<div class="equation" id="3.24">
<p>
  <div class="equation_content">
    \begin{equation}  \label{3.24} t_a \leq t_1 , \end{equation}
  </div>
  <span class="equation_label">3.24</span>
</p>
</div>
<p> and </p>
<div class="equation" id="3.25">
<p>
  <div class="equation_content">
    \begin{equation}  \label{3.25} t_1 ^\star \leq t_a ^\star . \end{equation}
  </div>
  <span class="equation_label">3.25</span>
</p>
</div>
<p> We also note that strict inequality holds in (<a href="#3.24">3.24</a>) and (<a href="#3.25">3.25</a>) for \(a\neq 1\). </p>
<p>As an example, let \(a = \displaystyle \tfrac {1}{2}\). Then we obtain </p>
<div class="displaymath" id="a0000000015">
  \[  t_{1/2} = .65185 {\lt} t_1 = \displaystyle \tfrac {1 + \sqrt{13}}{6} = .76759 ,  \]
</div>
<p> and </p>
<div class="equation" id="3.26">
<p>
  <div class="equation_content">
    \begin{equation}  \label{3.26} t_1 ^\star = \displaystyle \tfrac {.23241}{\gamma } {\lt} \displaystyle \tfrac {.34815}{\gamma }= t_{1/2} ^\star . \end{equation}
  </div>
  <span class="equation_label">3.26</span>
</p>
</div>
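<p> These values can be confirmed by a short computation; the following sketch (not from the paper) finds the root of \(P_{1/2}\) in \((0,1)\) by bisection, using the sign change guaranteed by (<a href="#3.21">3.21</a>). The results agree with the quoted values to four decimal places. </p>

```python
# Bisection check of the example a = 1/2: by (3.21), P_a(0) = -b^2 < 0 and
# P_a(1) = 1 > 0, so a root of P_a lies in (0, 1).
def P(t, a):
    b = 1.0 - a
    return 3*a**2*t**3 + a*(6*b - a)*t**2 + (3*b**2 - 2*a*b - 1)*t - b**2

def minimal_root(a, lo=0.0, hi=1.0, tol=1e-12):
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if P(mid, a) <= 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

t_half = minimal_root(0.5)
t_one = (1.0 + 13.0**0.5) / 6.0     # closed-form value of t_1
print(t_half, 1.0 - t_half, t_one)  # ~0.6519, ~0.3481, ~0.7676
```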
<p> Finally note that clearly if strict inequality holds in (<a href="#2.4">2.4</a>), i.e., in (<a href="#3.6">3.6</a>) or (<a href="#3.17">3.17</a>), then our estimates on \(\parallel x _{n+1} - x^\star \parallel \) (\(n\geq 0\)) are finer (more precise) than the corresponding ones in <span class="cite">
	[
	<a href="#1" >1</a>
	]
</span>, <span class="cite">
	[
	<a href="#11" >11</a>
	]
</span>, <span class="cite">
	[
	<a href="#13" >13</a>
	]
</span> (see e.g. (<a href="#2.10">2.10</a>)). </p>
<p>These results are also obtained under the same computational cost, since in practice the evaluation of \(L\) (or \(\gamma \)) requires that of \(L_0\) (or \(\gamma _0\)). </p>

  </div>
</div> <div class="rem_thmwrapper " id="R.3.3">
  <div class="rem_thmheading">
    <span class="rem_thmcaption">
    Remark
    </span>
    <span class="rem_thmlabel">3.3</span>
  </div>
  <div class="rem_thmcontent">
  <p> As noted in <span class="cite">
	[
	<a href="#1" >1</a>
	]
</span>, <span class="cite">
	[
	<a href="#5" >5</a>
	]
</span>, <span class="cite">
	[
	<a href="#6" >6</a>
	]
</span>, <span class="cite">
	[
	<a href="#7" >7</a>
	]
</span>, <span class="cite">
	[
	<a href="#10" >10</a>
	]
</span>, <span class="cite">
	[
	<a href="#12" >12</a>
	]
</span> the local results obtained here can be used for projection methods such as Arnoldi’s, the generalized minimum residual method (GMRES), the generalized conjugate residual method (GCR), for combined Newton/finite projection methods, and in connection with the mesh independence principle to develop the cheapest and most efficient mesh refinement strategies. </p>

  </div>
</div> <div class="rem_thmwrapper " id="R.3.4">
  <div class="rem_thmheading">
    <span class="rem_thmcaption">
    Remark
    </span>
    <span class="rem_thmlabel">3.4</span>
  </div>
  <div class="rem_thmcontent">
  <p> The local results obtained can also be used to solve equations of the form \(F(x)=0\), where \(F'\) satisfies the autonomous differential equation <span class="cite">
	[
	<a href="#4" >4</a>
	]
</span>, <span class="cite">
	[
	<a href="#8" >8</a>
	]
</span>: </p>
<div class="equation" id="3.27">
<p>
  <div class="equation_content">
    \begin{equation}  \label{3.27} F'(x) =T(F(x)), \end{equation}
  </div>
  <span class="equation_label">3.27</span>
</p>
</div>
<p> where \(T\, :\,  \mathcal Y\longrightarrow \mathcal X\) is a known continuous operator. </p>
<p>Since \(F'(x^\star )=T(F(x^\star )) =T(0)\), we can apply our results without actually knowing the solution \(x^\star \) of the equation \(F(x)=0\). </p>
<p>As an example, let \(\mathcal X= \mathcal Y= (-\infty , + \infty )\), \(\mathcal D= \overline{U} (0,1)\), and define function \(F\) on \(\mathcal D\) by </p>
<div class="equation" id="3.28">
<p>
  <div class="equation_content">
    \begin{equation}  \label{3.28} F(x) = {\rm e}^x -1. \end{equation}
  </div>
  <span class="equation_label">3.28</span>
</p>
</div>
<p> Then, for \(x^\star =0\), we can set \(T(x) = x+1\) in (<a href="#3.27">3.27</a>). </p>
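<p> This choice is easy to verify numerically (an illustrative sketch, not part of the original text): with \(F(x)={\rm e}^x-1\) and \(T(x)=x+1\) we have \(F'(x)={\rm e}^x=F(x)+1=T(F(x))\) for every \(x\), and \(F'(x^\star )=T(0)=1\). </p>

```python
import math

# Check the autonomous equation (3.27) for F(x) = exp(x) - 1 with T(x) = x + 1:
# F'(x) = exp(x) = (exp(x) - 1) + 1 = T(F(x)), and F'(x*) = T(0) = 1 at x* = 0.
F = lambda x: math.exp(x) - 1.0
dF = lambda x: math.exp(x)
T = lambda y: y + 1.0

for x in (-1.0, -0.5, 0.0, 0.5, 1.0):
    assert abs(dF(x) - T(F(x))) < 1e-12
print(T(0.0))  # F'(x*) = T(0) = 1.0
```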

  </div>
</div> </p>
<div class="bibliography">
<h1>Bibliography</h1>
<dl class="bibliography">
  <dt><a name="1">1</a></dt>
  <dd><p><i class="sc">J. Appel, E. De Pascale, J.V. Lysenko</i> and <i class="sc">P.P. Zabrejko</i>, <i class="it">New results on Newton–Kantorovich approximations with applications to nonlinear integral equations</i>, Numer. Funct. Anal. and Optimiz., <b class="bf">18</b>, pp.&#160;1–17, 1997. </p>
</dd>
  <dt><a name="2">2</a></dt>
  <dd><p><i class="sc">I.K. Argyros</i>, <i class="it">On the Newton–Kantorovich hypothesis for solving equations</i>, J. Comput. Appl. Math., <b class="bf">169</b>, pp.&#160;315–332, 2004. </p>
</dd>
  <dt><a name="3">3</a></dt>
  <dd><p><i class="sc">I.K. Argyros</i>, <i class="it">A unifying local–semilocal convergence analysis and applications for two–point Newton–like methods in Banach space</i>, J. Math. Anal. and Appl., <b class="bf">298</b>, pp.&#160;374–397, 2004. </p>
</dd>
  <dt><a name="4">4</a></dt>
  <dd><p><i class="sc">I.K. Argyros</i>, <i class="it">Computational theory of iterative methods</i>, Studies in Computational Mathematics, <b class="bf">15</b>, Elsevier, New York, 2007. </p>
</dd>
  <dt><a name="5">5</a></dt>
  <dd><p><i class="sc">I.K. Argyros</i>, <i class="it">On the convergence of the Secant method under the gamma condition</i>, Cent. Eur. J. Math., <b class="bf">5</b>, pp.&#160;205–214, 2007. </p>
</dd>
  <dt><a name="6">6</a></dt>
  <dd><p><i class="sc">O.P. Ferreira</i> and <i class="sc">B.F. Svaiter </i>, <i class="it">Kantorovich’s majorants principle for Newton’s method</i>, Comput. Optim. and Appl., 2007, in press. </p>
</dd>
  <dt><a name="7">7</a></dt>
  <dd><p><i class="sc">J.M. Gutiérrez, M.A. Hernández</i> and <i class="sc">M.A. Salanova</i>, <i class="it">Accessibility of solutions by Newton’s method</i>, Inter. J. Comput. Math., <b class="bf">57</b>, pp.&#160;239–247, 1995. </p>
</dd>
  <dt><a name="8">8</a></dt>
  <dd><p><i class="sc">L.V. Kantorovich</i> and <i class="sc">G.P. Akilov</i>, <i class="it">Functional analysis in normed spaces</i>, Pergamon Press, Oxford, 1982. </p>
</dd>
  <dt><a name="9">9</a></dt>
  <dd><p><i class="sc">Y. Nesterov </i> and <i class="sc">A. Nemirovskii</i>, <i class="it">Interior–point polynomial algorithms in convex programming</i>, SIAM Studies in Appl. Math., <b class="bf">13</b>, Philadelphia, PA, 1994. </p>
</dd>
  <dt><a name="10">10</a></dt>
  <dd><p><i class="sc">F.A. Potra</i>, <i class="it">The Kantorovich theorem and interior point methods</i>, Mathematical Programming, <b class="bf">102</b>, pp.&#160;47–50, 2005. </p>
</dd>
  <dt><a name="11">11</a></dt>
  <dd><p><i class="sc">W.C. Rheinboldt</i>, <i class="it">An adaptive continuation process for solving systems of nonlinear equations</i>, Banach Center Publ., <b class="bf">3</b>, pp.&#160;129–142, 1975. </p>
</dd>
  <dt><a name="12">12</a></dt>
  <dd><p><i class="sc">S. Smale</i>, <i class="it">Newton method estimates from data at one point. The merging of disciplines: new directions in pure, applied and computational mathematics</i>, (R. Ewing, K. Gross, C. Martin, eds), Springer–Verlag, New York, pp.&#160;185–196, 1986. </p>
</dd>
  <dt><a name="13">13</a></dt>
  <dd><p><i class="sc">X. Wang</i>, <i class="it">Convergence on Newton’s method and inverse function theorem in Banach space</i>, Math. Comput., <b class="bf">68</b>, pp.&#160;169–186, 1999. </p>
</dd>
  <dt><a name="14">14</a></dt>
  <dd><p><i class="sc">T.J. Ypma</i>, <i class="it">Local convergence of inexact Newton Methods</i>, SIAM J. Numer. Anal., <b class="bf">21</b>, pp.&#160;583–590, 1984. </p>
</dd>
</dl>


</div>
</div> <!--main-text -->
</div> <!-- content-wrapper -->
</div> <!-- content -->
</div> <!-- wrapper -->

<nav class="prev_up_next">
</nav>

<script type="text/javascript" src="/var/www/clients/client1/web1/web/files/jnaat-files/journals/1/articles/js/jquery.min.js"></script>
<script type="text/javascript" src="/var/www/clients/client1/web1/web/files/jnaat-files/journals/1/articles/js/plastex.js"></script>
<script type="text/javascript" src="/var/www/clients/client1/web1/web/files/jnaat-files/journals/1/articles/js/svgxuse.js"></script>
</body>
</html>