<!DOCTYPE html>
<html lang="en">
<head>
<script>
  MathJax = { 
    tex: {
		    inlineMath: [['\\(','\\)']]
	} }
</script>
<script type="text/javascript" src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-chtml.js">
</script>
<meta name="generator" content="plasTeX" />
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1" />
<title>A simplified proof of the Kantorovich theorem for solving equations using telescopic series</title>
<link rel="stylesheet" href="/var/www/clients/client1/web1/web/files/jnaat-files/journals/1/articles/styles/theme-white.css" />
</head>

<body>

<div class="wrapper">

<div class="content">
<div class="content-wrapper">


<div class="main-text">

<div class="titlepage">
<h1>A simplified proof of the Kantorovich theorem for solving equations using telescopic series</h1>
<p class="authors">
<span class="author">Ioannis K. Argyros\(^{1}\), Hongmin Ren\(^{2}\)</span>
</p>
<p class="date">September 12, 2012.</p>
</div>
<p>\(^{1}\) Department of Mathematical Sciences, Cameron University, Lawton, Oklahoma 73505-6377, USA, e-mail: <span class="tt">ioannisa@cameron.edu</span>. </p>
<p>\(^{2}\) College of Information and Engineering, Hangzhou Polytechnic, Hangzhou 311402, Zhejiang, PR China, e-mail: <span class="tt">rhm65@126.com</span>. </p>

<div class="abstract"><p> We extend the applicability of the Kantorovich theorem (KT) for solving nonlinear equations using the Newton-Kantorovich method in a Banach space setting. Under the same information, but using elementary scalar telescopic majorizing series, we provide a simpler proof of the (KT) [2], [7]. Our results provide at least as precise information on the location of the solution. Numerical examples are also provided in this study. </p>
<p><b class="bf">MSC.</b> 65J15, 65G99, 47H99, 49M15. </p>
<p><b class="bf">Keywords.</b> Newton-Kantorovich method; Banach space; majorizing series; telescopic series; Kantorovich theorem. </p>
</div>
<h1 id="a0000000002">1 Introduction</h1>
<p>Newton’s method is one of the most fundamental tools in computational analysis, operations research, and optimization [1, 2, 4, 7–11]. Applications can be found in management science, industrial and financial research, data mining, and linear and nonlinear programming. In particular, interior-point algorithms in convex optimization are based on Newton’s method. </p>
<p>The basic idea of Newton’s method is linearization. Suppose \(F:\mathbb {R}\rightarrow \mathbb {R}\) is a differentiable function, and we would like to solve the equation </p>
<div class="equation" id="eq1.1">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq1.1} F(x)=0. \end{equation}
  </div>
  <span class="equation_label">1.1</span>
</p>
</div>
<p>Starting from an initial guess \(x_0\), we can form the linear approximation of \(F\) in a neighborhood of \(x_0\): \(F(x_0+h)\approx F(x_0)+F^\prime (x_0)h\), and solve the resulting linear equation \(F(x_0)+F^\prime (x_0)h=0\), leading to the recurrent method </p>
<div class="equation" id="eq1.2">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq1.2} x_{n+1}=x_n-F^\prime (x_n)^{-1}F(x_n)\quad (n\ge 0). \end{equation}
  </div>
  <span class="equation_label">1.2</span>
</p>
</div>
<p> This is Newton’s method, as proposed in 1669 by I. Newton (for polynomials only). It was J. Raphson who proposed the use of Newton’s method for general functions \(F\). That is why the method is often called the Newton-Raphson method. Later, in 1818, Fourier proved that the method converges quadratically in a neighborhood of the root, while Cauchy (1829, 1847) provided the multidimensional extension of Newton’s method (1.2). In 1948, L. V. Kantorovich published an important paper [7] extending Newton’s method to function spaces (the Newton-Kantorovich method (NKM)). That is, \(F:D\subseteq X\rightarrow Y\), where \(X,Y\) are Banach spaces, and \(D\) is an open convex set [1, 7]. Ever since, thousands of papers have been written in a Banach space setting on the (NKM), as well as on Newton-type methods and their applications. We refer the reader to the publications [1–11] for recent results (see also the references there). </p>
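<p>As an illustration of iteration (1.2) in the scalar case, the following sketch applies Newton’s method to \(F(x)=x^3-q\) with \(q=0.6\) (the function of Example 3.1 below); the tolerance and iteration cap are illustrative choices, not part of the theory.</p>

```python
def newton(F, dF, x0, tol=1e-12, max_iter=50):
    """Newton's method (1.2): x_{n+1} = x_n - F(x_n)/F'(x_n)."""
    x = x0
    for _ in range(max_iter):
        step = F(x) / dF(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# F(x) = x^3 - 0.6, started from x0 = 1 (cf. Example 3.1 below)
root = newton(lambda x: x**3 - 0.6, lambda x: 3 * x**2, 1.0)
print(root)  # the cube root of 0.6
```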
<p>It is stated in the (KT) that Newton’s method (1.2) converges provided that the Kantorovich hypothesis (KH), famous for its simplicity and clarity (see \((C_6)\)), is satisfied. (KH) uses the information \((x_0,F,F^\prime )\). Any successful attempt at weakening (KH) under the same information is extremely important in computational mathematics, since it implies an extension of the applicability of the (NKM). We have already provided conditions weaker than (KH) [1, 2] by introducing the center-Lipschitz condition, which is a special case of the Lipschitz condition. </p>
<p>In this study we present a new proof of the (KT) for Newton’s method using telescopic majorizing sequences. Our proof is simpler than the corresponding one provided by Kantorovich [7]. Section 2 contains the semilocal convergence of Newton’s method (1.2). Numerical examples where our results apply to solve nonlinear equations are provided in Section 3. </p>
<h1 id="a0000000003">2 Semilocal convergence of Newton-Kantorovich method (NKM)</h1>
<p> Let us assume that \(F^\prime (x_0)^{-1}\in L(Y,X)\) exists for some \(x_0\in D\), and that the following conditions hold: </p>
<p>\((C_1)\)&#8195;\(0{\lt}\| F^\prime (x_0)^{-1}\| \le \beta \), </p>
<p>\((C_2)\)&#8195;\(0{\lt}\| F^\prime (x_0)^{-1}F(x_0)\| \le \eta \), </p>
<p>and the Lipschitz continuity condition </p>
<p>\((C_3)\)&#8195;\(\| F^\prime (x)-F^\prime (y)\| \le L\| x-y\| \) &#8195;for some  \(L{\gt}0\), and for all \(x,y\in D\). </p>
<p>It is convenient to define, with \(a_0=b_0=1\), </p>
<div class="equation" id="eq2.1">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq2.1} \gamma =\beta L\eta , \end{equation}
  </div>
  <span class="equation_label">2.1</span>
</p>
</div>
<p> and scalar sequences </p>
<div class="equation" id="eq2.2">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq2.2} a_{n+1}=\frac{a_n}{1-\gamma a_n b_n}, \end{equation}
  </div>
  <span class="equation_label">2.2</span>
</p>
</div>
<div class="equation" id="eq2.3">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq2.3} c_n=\tfrac {1}{2}Lb_n^2, \end{equation}
  </div>
  <span class="equation_label">2.3</span>
</p>
</div>
<div class="equation" id="eq2.4">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq2.4} b_{n+1}=\beta a_{n+1}c_n\eta . \end{equation}
  </div>
  <span class="equation_label">2.4</span>
</p>
</div>
<p> We shall find a connection between (NKM) \(\{ x_n\} \), and scalar sequences \(\{ a_n\} \), \(\{ b_n\} \), \(\{ c_n\} \). Sequences \(\{ a_n\} \), \(\{ b_n\} \) and \(\{ c_n\} \) have been used by Kantorovich [7]. However, we present a new proof for the (KT). </p>
<p><div class="lemma_thmwrapper " id="a0000000004">
  <div class="lemma_thmheading">
    <span class="lemma_thmcaption">
    Lemma
    </span>
    <span class="lemma_thmlabel">2.1</span>
  </div>
  <div class="lemma_thmcontent">
  <p>Under the \((C_1)-(C_3)\) conditions further suppose: </p>
<p>\((C_4)\)&#8195;\(x_n\in D\), </p>
<p>and </p>
<p>\((C_5)\) &#8195;\(\gamma a_n b_n{\lt}1\). </p>
<p>Then, the following estimates hold: </p>
<p>\((I_n)\)&#8195;\(\| F^\prime (x_n)^{-1}\| \le a_n\beta \), </p>
<p>\((II_n)\)&#8195;\(\| x_{n+1}-x_n\| =\| F^\prime (x_n)^{-1}F(x_n)\| \le b_n\eta \), </p>
<p>and </p>
<p>\((III_n)\)&#8195;\(\| F(x_{n+1})\| \le c_n\eta ^2\). </p>

  </div>
</div> <div class="proof_wrapper" id="a0000000005">
  <div class="proof_heading">
    <span class="proof_caption">
    Proof
    </span>
    <span class="expand-proof">▼</span>
  </div>
  <div class="proof_content">
  
  </div>
</div>We shall use induction to show items \((I_n)-(III_n)\). \((I_0)\) and \((II_0)\) follow immediately from the initial conditions. To show \((III_0)\), we use (1.2) for \(n=0\), \((II_0)\), and \((C_3)\) to obtain </p>
<div class="equation" id="eq2.5">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq2.5} \begin{array}{lll} F(x_1)& =& F(x_1)-F(x_0)-F^\prime (x_0)(x_1-x_0)\\ & =& \displaystyle \int ^1_0 [F^\prime (x_0+t(x_1-x_0))-F^\prime (x_0)](x_1-x_0)dt \end{array} \end{equation}
  </div>
  <span class="equation_label">2.5</span>
</p>
</div>
<p> \(\Rightarrow \) </p>
<div class="equation" id="eq2.6">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq2.6} \begin{array}{lll} \| F(x_1)\| & =& \Big\| \displaystyle \int ^1_0 [F^\prime (x_0+t(x_1-x_0))-F^\prime (x_0)](x_1-x_0)dt\Big\| \\ & \le & \displaystyle \int ^1_0 \| [F^\prime (x_0+t(x_1-x_0))-F^\prime (x_0)]\| dt\| x_1-x_0\| \\ & \le &  L\| x_1-x_0\| ^2\displaystyle \int ^1_0 tdt=\tfrac {L}{2}\| x_1-x_0\| ^2\\ & \le & \frac{L}{2}b_0^{2}\eta ^2=c_0\eta ^2. \end{array} \end{equation}
  </div>
  <span class="equation_label">2.6</span>
</p>
</div>
<p> If \(x_{k+1}\in D\; (k\le n)\), then it follows from \((C_3)-(C_5)\) and the induction hypotheses: </p>
<div class="equation" id="eq2.7">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq2.7} \begin{array}{lll} \| F^\prime (x_k)^{-1}\| \| F^\prime (x_{k+1})-F^\prime (x_k)\| & \le &  a_k\beta L\| x_{k+1}-x_k\| \\ & \le &  a_k \beta L b_k\eta =\gamma a_kb_k<1. \end{array} \end{equation}
  </div>
  <span class="equation_label">2.7</span>
</p>
</div>
<p> In view of (2.7) and the Banach lemma on invertible operators [1, 2, 7], \(F^\prime (x_{k+1})^{-1}\in L(Y,X)\), so that </p>
<div class="equation" id="eq2.8">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq2.8} \begin{array}{lll} \| F^\prime (x_{k+1})^{-1}\| & \le &  \frac{\| F^\prime (x_k)^{-1}\| }{1-\| F^\prime (x_k)^{-1}\| \| F^\prime (x_{k+1})-F^\prime (x_k)\| }\\ & \le & \frac{a_k\beta }{1-\gamma a_k b_k}=a_{k+1}\beta , \end{array} \end{equation}
  </div>
  <span class="equation_label">2.8</span>
</p>
</div>
<p> which shows \((I_n)\) for all \(n\ge 0\). </p>
<p>As in (2.5), we have </p>
<div class="equation" id="eq2.9">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq2.9} \begin{array}{lll} F(x_{k+1})& =& F(x_{k+1})-F(x_k)-F^\prime (x_{k})(x_{k+1}-x_k)\\ & =& \displaystyle \int ^1_0 [F^\prime (x_k+t(x_{k+1}-x_k))-F^\prime (x_k)](x_{k+1}-x_k)dt \end{array} \end{equation}
  </div>
  <span class="equation_label">2.9</span>
</p>
</div>
<p> and </p>
<div class="equation" id="eq2.10">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq2.10} \begin{array}{lll} \| F(x_{k+1})\| \le \frac{L}{2}\| x_{k+1}-x_k\| ^{2}\le \frac{L}{2}b_k^{2}\eta ^{2}=c_k\eta ^{2}, \end{array} \end{equation}
  </div>
  <span class="equation_label">2.10</span>
</p>
</div>
<p> which shows \((III_n)\) for all \(n\ge 0\). Moreover, we get by (1.2), (2.8) and (2.10): </p>
<div class="equation" id="eq2.11">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq2.11} \begin{array}{lll} \| x_{k+2}-x_{k+1}\| & =& \| F^\prime (x_{k+1})^{-1}F(x_{k+1})\| \le \| F^\prime (x_{k+1})^{-1}\| \| F(x_{k+1})\| \\ & \le & \beta a_{k+1}c_k\eta ^{2}=b_{k+1}\eta . \end{array} \end{equation}
  </div>
  <span class="equation_label">2.11</span>
</p>
</div>
<p> That completes the induction for \((II_n)\) and the proof of the lemma. </p>
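<p>The estimates \((I_n)-(III_n)\) of Lemma 2.1 can be checked numerically along an actual Newton run; the sketch below uses the data of Example 3.1 below (\(F(x)=x^3-q\), \(q=0.6\), \(\beta =\frac{1}{3}\), \(L=8.4\), \(\eta =\frac{1-q}{3}\)); the small slack added to each inequality only absorbs floating-point rounding.</p>

```python
# Check the estimates (I_n)-(III_n) of Lemma 2.1 along a Newton run
# for F(x) = x^3 - 0.6 from x0 = 1 (the data of Example 3.1).
beta, L, q = 1.0 / 3.0, 8.4, 0.6
eta = (1.0 - q) / 3.0
gamma = beta * L * eta                      # (2.1)

x = 1.0
a, b = 1.0, 1.0                             # a_0 = b_0 = 1
for n in range(6):
    step = (x**3 - q) / (3.0 * x**2)        # F'(x_n)^{-1} F(x_n)
    assert 1.0 / (3.0 * x**2) <= a * beta + 1e-15   # (I_n)
    assert abs(step) <= b * eta + 1e-15             # (II_n)
    x = x - step                            # Newton step (1.2)
    c = 0.5 * L * b * b                     # (2.3), uses b_n
    a = a / (1.0 - gamma * a * b)           # (2.2)
    b = beta * a * c * eta                  # (2.4)
    assert abs(x**3 - q) <= c * eta**2 + 1e-15      # (III_n)
print("Lemma 2.1 estimates verified for 6 Newton steps")
```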
<p>We need to show the convergence of the sequence \(\{ x_n\} \), which reduces to proving that the series \(\sum ^\infty _{k=0}b_k\) converges. To this effect we need the following auxiliary results: </p>
<p><div class="lemma_thmwrapper " id="a0000000006">
  <div class="lemma_thmheading">
    <span class="lemma_thmcaption">
    Lemma
    </span>
    <span class="lemma_thmlabel">2.2</span>
  </div>
  <div class="lemma_thmcontent">
  <p>Suppose: </p>
<p>there exists \(x_0\in D\) such that </p>
<p>\((C_6)\)&#8195;\(2\gamma \le 1\), where \(\gamma \) is given by <span class="rm">(2.1)</span> <br />and Condition \((C_4)\) holds. Then, the following assertions hold: </p>
<p>(a) Scalar sequence \(\{ a_n\} \) increases. In particular, \((C_5)\) holds. </p>
<p>(b) \(\lim _{n\rightarrow \infty }b_n=0\). </p>
<p>(c) </p>
<div class="equation" id="eq2.12">
<p>
  <div class="equation_content">
    \begin{equation}  r:=\sum ^\infty _{k=0}b_k=\frac{2}{1+\sqrt{1-2\gamma }}.\label{eq2.12} \end{equation}
  </div>
  <span class="equation_label">2.12</span>
</p>
</div>
<p> (d) \((C_4)\) holds if \(\overline{U}(x_0,r\eta )\subseteq D\). </p>

  </div>
</div> </p>
<p><div class="proof_wrapper" id="a0000000007">
  <div class="proof_heading">
    <span class="proof_caption">
    Proof
    </span>
    <span class="expand-proof">▼</span>
  </div>
  <div class="proof_content">
  
  </div>
</div>(a) We shall show using induction that \(\{ a_n\} \), \(\{ b_n\} \) and \(\{ c_n\} \) are positive sequences. </p>
<p>In view of the initial conditions, \(a_0\), \(b_0\), \(c_0\) are positive, and \(1-\gamma a_0b_0{\gt}0\). Assume \(a_k\), \(b_k\), \(c_k\) and \(1-\gamma a_k b_k \) are positive for \(k\le n\). By (2.2), \(a_{k+1}{\gt}0\), consequently \(b_{k+1}{\gt}0\). Moreover, \(1-\gamma a_{k+1}b_{k+1}{\gt}0\) by \((C_6)\) (see also [2], [7]). The induction is completed. </p>
<p>Solving (2.2) for \(b_n\), we obtain </p>
<div class="equation" id="eq2.13">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq2.13} b_n=\frac{1}{\gamma }(\frac{1}{a_n}-\frac{1}{a_{n+1}}). \end{equation}
  </div>
  <span class="equation_label">2.13</span>
</p>
</div>
<p> By telescopic sum, we have: </p>
<div class="equation" id="eq2.14">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq2.14} \sum ^{n-1}_{k=0}b_k=\frac{1}{\gamma }(\frac{1}{a_0}-\frac{1}{a_n})=\frac{1}{\gamma }(1-\frac{1}{a_n}),\quad \text{since}\;  a_0=1, \end{equation}
  </div>
  <span class="equation_label">2.14</span>
</p>
</div>
<p> or </p>
<div class="equation" id="eq2.15">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq2.15} a_n=\frac{1}{1-\gamma \sum ^{n-1}_{k=0}b_k}. \end{equation}
  </div>
  <span class="equation_label">2.15</span>
</p>
</div>
<p> But \(1-\gamma \sum ^{n-1}_{k=0}b_k\) decreases, so \(\{ a_n\} \) given by (2.15) increases. Note also that \(a_n\ge a_0=1\). </p>
<p>(b) By (a), \(\{ a_n\} \) is increasing with \(a_n\ge 1\), so that </p>
<div class="equation" id="eq2.16">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq2.16} 0< \frac{1}{a_n}\le 1. \end{equation}
  </div>
  <span class="equation_label">2.16</span>
</p>
</div>
<p> Therefore, \(\{ \frac{1}{a_n}\} \) is monotone and contained in the compact set \([0,1]\), and as such it converges to some limit \(\overline{a}\). By letting \(n\rightarrow \infty \) in (2.13), we get </p>
<div class="equation" id="eq2.17">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq2.17} \lim _{n\rightarrow \infty } b_n=\lim _{n\rightarrow \infty }\frac{1}{\gamma }(\frac{1}{a_n}-\frac{1}{a_{n+1}})=\frac{1}{\gamma }(\overline{a}-\overline{a})=0. \end{equation}
  </div>
  <span class="equation_label">2.17</span>
</p>
</div>
<p> (c) It follows from (b) and (2.14) that there exists \(r=\sum ^\infty _{k=0}b_k\). The value of \(r\) is well known to be given by (2.12) [2], [7]. </p>
<p>(d) We have \(\| x_1-x_0\| \le b_0\eta =\eta \Rightarrow x_1\in U(x_0,r\eta )\). Assume \(x_k\in U(x_0,r\eta )\subseteq D\) for all \(k\le n\). Then, we have by Lemma 2.1 in turn that </p>
<div class="displaymath" id="a0000000008">
  \[  \| x_{k+1}-x_0\| \le \| x_{k+1}-x_k\| +\cdots +\| x_1-x_0\| \le (b_k+\cdots +b_0)\eta {\lt} r\eta ,  \]
</div>
<p> \(\Rightarrow \) </p>
<div class="displaymath" id="a0000000009">
  \[  x_{k+1}\in U(x_0,r\eta )\subseteq D.  \]
</div>
<p> That completes the proof of the lemma. </p>
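<p>The telescoping identities (2.13)–(2.15) can also be verified numerically; in the sketch below \(\gamma =0.4\) is an illustrative value with \(2\gamma \le 1\), and the recursion is written in terms of \(\gamma \) alone, using \(b_{n+1}=\beta a_{n+1}c_n\eta =\tfrac {\gamma }{2}a_{n+1}b_n^2\), which follows from (2.1), (2.3) and (2.4).</p>

```python
# Check the closed form (2.15) against the recursion (2.2),
# with the illustrative value gamma = 0.4 (so 2*gamma <= 1).
gamma = 0.4
a, b = 1.0, 1.0            # a_0 = b_0 = 1
partial = 0.0              # sum_{k=0}^{n-1} b_k
for n in range(10):
    closed = 1.0 / (1.0 - gamma * partial)    # (2.15)
    assert abs(a - closed) < 1e-12
    partial += b
    a_next = a / (1.0 - gamma * a * b)        # (2.2)
    b = 0.5 * gamma * a_next * b * b          # (2.1) + (2.3) + (2.4)
    a = a_next
print("closed form (2.15) agrees with recursion (2.2)")
```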
<p><div class="remark_thmwrapper " id="Remark 2.3">
  <div class="remark_thmheading">
    <span class="remark_thmcaption">
    Remark
    </span>
    <span class="remark_thmlabel">2.3</span>
  </div>
  <div class="remark_thmcontent">
  <p>In view of \((C_3)\) there exists \(L_0{\gt}0\) such that </p>
<div class="displaymath" id="a0000000010">
  \[  \| F^\prime (x)-F^\prime (x_0)\| \le L_0\| x-x_0\| \quad \text{ for\; all} \; x\in D.  \]
</div>
<p> Note that </p>
<div class="displaymath" id="a0000000011">
  \[  L_0\le L  \]
</div>
<p> holds in general and \(\frac{L}{L_0}\) can be arbitrarily large [2]. </p>

  </div>
</div> We can now show the semilocal convergence result for (NKM) (1.2). </p>
<p><div class="theorem_thmwrapper " id="Theorem 2.4">
  <div class="theorem_thmheading">
    <span class="theorem_thmcaption">
    Theorem
    </span>
    <span class="theorem_thmlabel">2.4</span>
  </div>
  <div class="theorem_thmcontent">
  <p>Under the conditions \((C_1)-(C_3)\) and \((C_6)\), further assume </p>
<p>\((C_7)\)&#8195;\(\overline{U}(x_0,r\eta )=\{ x\in X|\| x-x_0\| \le r\eta \} \subseteq D\  \)</p>

  </div>
</div>where \(r\) is given by <span class="rm">(2.12)</span>. </p>
<p>Then, sequence \(\{ x_n\} \) generated by (NKM) <span class="rm">(1.2)</span> is well defined, remains in \(\overline{U}(x_0,r\eta )\) for all \(n\ge 0\) and converges to a solution \(x^\star \in \overline{U}(x_0,r\eta )\) of equation \(F(x)=0\). Moreover, the following estimate holds: </p>
<div class="displaymath" id="eq2.18">
  \begin{align} \label{eq2.18} \| x_n-x^\star \| \le \sum ^\infty _{k=n}b_k\eta {\lt}r\eta . \end{align}
</div>
<p> Furthermore, \(x^\star \) is the only solution of equation \(F(x)=0\) in \(D_1=D_0\cap D\), where \(D_0=U(x_0,\frac{1}{\beta L_0})\). </p>
<p><div class="proof_wrapper" id="a0000000013">
  <div class="proof_heading">
    <span class="proof_caption">
    Proof
    </span>
    <span class="expand-proof">▼</span>
  </div>
  <div class="proof_content">
  
  </div>
</div>It follows from Lemmas 2.1 and 2.2 (see also \((II_n)\)) that \(\{ x_n\} \) is a Cauchy sequence in the Banach space \(X\), and as such it converges to some \(x^\star \). We have \(\lim _{n\rightarrow \infty }b_n=0\), which implies by (2.3) that \(\lim _{n\rightarrow \infty }c_n=0\). By letting \(n\rightarrow \infty \) in (2.10) and using the continuity of the operator \(F\), we obtain \(F(x^\star )=0\). </p>
<p>By \((C_7)\), we obtain </p>
<div class="displaymath" id="eq2.19">
  \begin{align} \label{eq2.19} \| x_{n+1}-x_0\| \le \sum ^n_{k=0}\| x_{k+1}-x_k\| \le \sum ^n_{k=0}b_k\eta {\lt}r\eta . \end{align}
</div>
<p> \(\Rightarrow x_{n+1}\in \overline{U}(x_0,r\eta )\) </p>
<p>\(\Rightarrow x^\star =\lim _{n\rightarrow \infty }x_n\in \overline{U}(x_0,r\eta )\) (since \(\overline{U}(x_0,r\eta )\) is a closed set). </p>
<p>Let \(m{\gt}n\). Then, we have </p>
<div class="displaymath" id="eq2.20">
  \begin{align} \label{eq2.20} \| x_n-x_m\| \le \sum ^{m-1}_{k=n}\| x_k-x_{k+1}\| \le \sum ^{m-1}_{k=n}b_k\eta \le \sum ^\infty _{k=n}b_k\eta {\lt}r\eta . \end{align}
</div>
<p> By letting \(m\rightarrow \infty \) in (2.20), we obtain (2.18). Finally, to show uniqueness, let \(y^\star \in D_1\) be a solution of equation \(F(x)=0\). Define the linear operator </p>
<div class="equation" id="eq2.21">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq2.21} M=\int ^1_0 F^\prime (x^\star +t(y^\star -x^\star ))dt. \end{equation}
  </div>
  <span class="equation_label">2.21</span>
</p>
</div>
<p> Using the center-Lipschitz condition of Remark 2.3, we have: </p>
<div class="displaymath" id="a0000000014">
  \begin{align}  \| F^\prime (x_0)^{-1}\| \| M-F^\prime (x_0)\| & \le \beta L_0\int ^1_0 \| x^\star +t(y^\star -x^\star )-x_0\|  dt \nonumber \\ & \le \beta L_0 \int ^1_0 [(1-t)\| x^\star -x_0\| +t\| y^\star -x_0\| ]dt \label{eq2.22}\\ & {\lt} \frac{\beta L_0}{2}(r\eta +\frac{1}{\beta L_0})\le \frac{1}{2}+\frac{1}{2}=1.\nonumber \end{align}
</div>
<p> It follows from (2.22), and the Banach lemma on invertible operators that \(M^{-1}\in L(Y,X)\). In view of (2.21), we get </p>
<div class="displaymath" id="a0000000015">
  \begin{equation*}  0=F(y^\star )-F(x^\star )=M(y^\star -x^\star ), \end{equation*}
</div>
<p> which implies \(x^\star =y^\star \). That completes the proof of the theorem. </p>
<p><div class="remark_thmwrapper " id="Remark 2.5">
  <div class="remark_thmheading">
    <span class="remark_thmcaption">
    Remark
    </span>
    <span class="remark_thmlabel">2.5</span>
  </div>
  <div class="remark_thmcontent">
  <p>If \(L_0=L\), Theorem 2.4 reduces to the (KT). Otherwise (i.e., if \(L_0{\lt}L\)), our theorem provides more precise information on the location of the solution. </p>

  </div>
</div> </p>
<h1 id="a0000000016">3 Applications</h1>
<p> In the first example we show that the Kantorovich hypothesis (see (3.2)) is satisfied, and that the uniqueness ball for the solution is larger than before [2], [7]. </p>
<p><div class="example_thmwrapper " id="Example 3.1">
  <div class="example_thmheading">
    <span class="example_thmcaption">
    Example
    </span>
    <span class="example_thmlabel">3.1</span>
  </div>
  <div class="example_thmcontent">
  <p>Let \(X=Y=\mathbb {R}\) be equipped with the max-norm. Let \(x_0=1\), \(D=U(x_0,1-q)\), \(q\in [0,1)\), and define the function \(F\) on \(D\) by </p>
<div class="equation" id="eq3.1">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq3.1} F(x)=x^3-q. \end{equation}
  </div>
  <span class="equation_label">3.1</span>
</p>
</div>
<p> Then, we obtain \(\beta =\frac{1}{3}\), \(L=6(2-q)\), \(L_0=3(3-q)\) and \(\eta =\frac{1}{3}(1-q)\). The Kantorovich hypothesis for solving equations using the (NKM) [1, 2, 7], famous for its simplicity and clarity, is then satisfied, say for \(q=0.6\), since </p>
<div class="equation" id="eq3.2">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq3.2} h=2L\beta \eta =\frac{4}{3}(2-q)(1-q)=0.746666\ldots <1. \end{equation}
  </div>
  <span class="equation_label">3.2</span>
</p>
</div>
<p> Hence, (NKM) converges starting at \(x_0=1\). We also have <br />\(r=1.330386707\), \(\eta =0.1333\ldots \), \(r\eta =0.177384894\), \(L_0=7.2{\lt}L=8.4\) and \(\frac{1}{\beta L_0}=0.41666\ldots \). That is, our Theorem 2.4 guarantees the convergence of (NKM) to \(x^\star =\sqrt[3]{0.6}\approx 0.843433\), and the uniqueness ball is better than the one given in the (KT). </p>

  </div>
</div> </p>
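<p>The quantities appearing in Example 3.1 can be reproduced with a few lines of floating-point arithmetic; the sketch below simply evaluates the constants of the example for \(q=0.6\).</p>

```python
# Constants of Example 3.1 for F(x) = x^3 - q on D = U(1, 1 - q), q = 0.6.
q = 0.6
beta = 1.0 / 3.0                  # ||F'(x0)^{-1}||, x0 = 1
L = 6.0 * (2.0 - q)               # Lipschitz constant of F' on D
L0 = 3.0 * (3.0 - q)              # center-Lipschitz constant
eta = (1.0 - q) / 3.0             # ||F'(x0)^{-1} F(x0)||

h = 2.0 * L * beta * eta          # Kantorovich hypothesis (3.2)
gamma = beta * L * eta            # (2.1)
r = 2.0 / (1.0 + (1.0 - 2.0 * gamma) ** 0.5)   # (2.12)

print(h)                  # 0.7466... < 1, so (NKM) converges
print(r, r * eta)         # convergence ball radius r * eta
print(1.0 / (beta * L0))  # uniqueness radius from Theorem 2.4
```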
<p>In the second example we apply Theorem 2.4 to a nonlinear integral equation of Chandrasekhar-type. </p>
<p><div class="example_thmwrapper " id="Example 3.2">
  <div class="example_thmheading">
    <span class="example_thmcaption">
    Example
    </span>
    <span class="example_thmlabel">3.2</span>
  </div>
  <div class="example_thmcontent">
  <p>Let us consider the equation </p>
<div class="equation" id="eq3.3">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq3.3} x(s)=1+\frac{s}{4}x(s)\int ^1_0 \frac{x(t)}{s+t}dt,\quad s\in [0,1]. \end{equation}
  </div>
  <span class="equation_label">3.3</span>
</p>
</div>
<p> Note that solving (3.3) is equivalent to solving \(F(x)=0\), where \(F:C[0,1]\rightarrow C[0,1]\) is defined by </p>
<div class="equation" id="eq3.4">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq3.4} [F(x)](s)=x(s)-1-\frac{s}{4}x(s)\int ^1_0 \frac{x(t)}{s+t}dt,\quad s\in [0,1]. \end{equation}
  </div>
  <span class="equation_label">3.4</span>
</p>
</div>
<p> Using (3.4), we obtain that the Fréchet derivative of \(F\) is given by </p>
<div class="equation" id="eq3.5">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq3.5} [F^\prime (x)y](s)=y(s)-\frac{s}{4}y(s)\int ^1_0 \frac{x(t)}{s+t}dt-\frac{s}{4}x(s)\int ^1_0 \frac{y(t)}{s+t}dt,\quad s\in [0,1]. \end{equation}
  </div>
  <span class="equation_label">3.5</span>
</p>
</div>
<p> Let us choose the initial point \(x_0(s)=1\) for each \(s\in [0,1]\). Then, we have \(\beta =1.534463572\), \(\eta =0.2659022747\), \(L_0=L=\ln 2=0.693147181\), \(h=2\gamma =0.392066334\) and \(r=1.123784269\) (see also [1, 2, 3, 7]). Hence, the hypotheses of Theorem 2.4 are satisfied. Consequently, equation \(F(x)=0\) has a solution \(x^\star \) in \(U(1,\rho )\), where \(\rho =r\eta =0.298816793\). </p>

  </div>
</div> </p>
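<p>Equation (3.3) can also be solved numerically; the sketch below discretizes the integral with the midpoint rule and applies successive substitution starting from \(x_0(s)=1\) (the grid size, iteration count, and the use of NumPy are illustrative choices, not from the paper).</p>

```python
import numpy as np

n = 200
t = (np.arange(n) + 0.5) / n                 # midpoint nodes on [0, 1]
w = 1.0 / n                                  # midpoint weight
K = t[:, None] / (4.0 * (t[:, None] + t[None, :]))   # kernel s / (4 (s + t))

x = np.ones(n)                               # initial guess x_0(s) = 1
for _ in range(200):
    x = 1.0 + x * (K @ (w * x))              # successive substitution on (3.3)

residual = np.max(np.abs(x - (1.0 + x * (K @ (w * x)))))
deviation = np.max(np.abs(x - 1.0))          # compare with rho = r * eta
print(residual, deviation)
```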
<p><div class="acknowledgement_thmwrapper " id="a0000000017">
  <div class="acknowledgement_thmheading">
    <span class="acknowledgement_thmcaption">
    Acknowledgements
    </span>
  </div>
  <div class="acknowledgement_thmcontent">
  <p>This work was supported by National Natural Science Foundation of China (Grant No. 10871178). </p>

  </div>
</div> </p>
<div class="bibliography">
<h1>Bibliography</h1>
<dl class="bibliography">
  <dt><a name="Arg1">1</a></dt>
  <dd><p><i class="sc">I.K. Argyros</i>, <i class="it">Polynomial equations in abstract spaces and applications</i>, St. Lucie/CRC/Lewis Publ. Mathematics Series, Boca Raton, Florida, 1998. </p>
</dd>
  <dt><a name="Arg2">2</a></dt>
  <dd><p><i class="sc">I.K. Argyros</i>, <i class="it">Computational theory of iterative methods</i>, Series: Studies in Computational Mathematics 15, Editors: C.K. Chui and L. Wuytack, Elsevier Publ. Co., New York, USA, 2007. </p>
</dd>
  <dt><a name="Cha">3</a></dt>
  <dd><p><i class="sc">S. Chandrasekhar</i>, <i class="it">Radiative Transfer</i>, Dover. Publ, 1960, New York. </p>
</dd>
  <dt><a name="Deu">4</a></dt>
  <dd><p><i class="sc">P. Deuflhard</i>, <i class="it">Newton Methods for Nonlinear Problems: Affine Invariance and Adaptive Algorithms</i>, Springer-Verlag, Berlin, Heidelberg, 2004. </p>
</dd>
  <dt><a name="Ezq1">5</a></dt>
  <dd><p><a href ="http://dx.doi.org/10.1002/anac.200410024"><i class="sc">J.A. Ezquerro, M.A. Hernández</i>, <i class="it">New Kantorovich-type conditions for Halley’s method</i>, Appl. Num. Anal. Comp. Math., 2 (2005), pp.&#160;70–77. <img src="img-0001.png" alt="\includegraphics[scale=0.1]{ext-link.png}" style="width:12.0px; height:10.700000000000001px" />
</a> </p>
</dd>
  <dt><a name="Ezq2">6</a></dt>
  <dd><p><a href ="http://dx.doi.org/10.1016/j.jco.2009.04.001"><i class="sc">J.A. Ezquerro, M.A. Hernández</i>, <i class="it">An optimization of Chebyshev-method</i>, J. Complexity, 25 (2009), pp.&#160;343–361. <img src="img-0001.png" alt="\includegraphics[scale=0.1]{ext-link.png}" style="width:12.0px; height:10.700000000000001px" />
</a> </p>
</dd>
  <dt><a name="Kan">7</a></dt>
  <dd><p><i class="sc">L.V. Kantorovich, G.P. Akilov</i>, <i class="it">Functional Analysis</i>, Pergamon Press, Oxford, 1982. </p>
</dd>
  <dt><a name="Mat">8</a></dt>
  <dd><p><a href ="http://mathfaculty.fullerton.edu/mathews/n2003/newtonsmethod/Newton’sMethodBib/Links/Newton’sMethodBib_lnk_3.html"> <i class="sc">J.H. Mathews</i>, <i class="it">Bibliography for Newton’s method</i>, Available from:<br /><span class="tt">http://mathfaculty.fullerton.edu/mathews/n2003/newtonsmethod/ <br />Newton’sMethodBib/Links/Newton’sMethodBib_lnk_3.html.</span> <img src="img-0001.png" alt="\includegraphics[scale=0.1]{ext-link.png}" style="width:12.0px; height:10.700000000000001px" />
</a> </p>
</dd>
  <dt><a name="Nes">9</a></dt>
  <dd><p><i class="sc">Yu. Nesterov</i>, <i class="it">Introductory Lectures on Convex Programming</i>, Kluwer, Boston, 2004. </p>
</dd>
  <dt><a name="Wri">10</a></dt>
  <dd><p><i class="sc">S.J. Wright</i>, <i class="it">Primal-Dual Interior Point Methods</i>, SIAM, Philadelphia, 1997. </p>
</dd>
  <dt><a name="Ypm">11</a></dt>
  <dd><p><i class="sc">T.J. Ypma</i>, <i class="it">Historical developments of the Newton-Raphson method</i>, SIAM Review, 37 (1995), pp.&#160;531–551. </p>
</dd>
</dl>


</div>
</div> <!--main-text -->
</div> <!-- content-wrapper -->
</div> <!-- content -->
</div> <!-- wrapper -->

<nav class="prev_up_next">
</nav>

<script type="text/javascript" src="/var/www/clients/client1/web1/web/files/jnaat-files/journals/1/articles/js/jquery.min.js"></script>
<script type="text/javascript" src="/var/www/clients/client1/web1/web/files/jnaat-files/journals/1/articles/js/plastex.js"></script>
<script type="text/javascript" src="/var/www/clients/client1/web1/web/files/jnaat-files/journals/1/articles/js/svgxuse.js"></script>
</body>
</html>