<!DOCTYPE html>
<html lang="en">
<head>
<script>
  MathJax = {
    tex: {
      inlineMath: [['\\(','\\)']]
    }
  };
</script>
<script type="text/javascript" src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-chtml.js">
</script>
<meta name="generator" content="plasTeX" />
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1" />
<title>Ball convergence of Potra-Ptak-type method with optimal fourth order of convergence</title>
<link rel="stylesheet" href="/var/www/clients/client1/web1/web/files/jnaat-files/journals/1/articles/styles/theme-white.css" />
</head>

<body>

<div class="wrapper">

<div class="content">
<div class="content-wrapper">


<div class="main-text">



<div class="titlepage">
<h1>Ball convergence of Potra-Ptak-type method with optimal fourth order of convergence</h1>
<p class="authors">
<span class="author">Ioannis K. Argyros\(^\ast \), Santhosh George\(^{\ast \ast }\)</span>
</p>
<p class="date">May 5, 2015; accepted: June 2, 2016; published online: November 8, 2021.</p>
</div>
<div class="abstract"><p> We present a local convergence analysis of a Potra-Ptak-type method with optimal fourth order of convergence for approximating a solution of a nonlinear equation. In earlier studies such as <span class="cite">
	[
	<a href="#1" >1</a>
	]
</span>, <span class="cite">
	[
	<a href="#5" >5</a>
	]
</span>–<span class="cite">
	[
	<a href="#28" >28</a>
	]
</span> hypotheses up to the fourth derivative are used. In this paper we use hypotheses only on the first derivative, so that the applicability of these methods is extended under weaker hypotheses. Moreover, the radius of convergence and computable error bounds on the distances involved are given. Numerical examples are also presented. </p>
<p><b class="bf">MSC.</b> 65D10, 65D99. </p>
<p><b class="bf">Keywords.</b> Potra-Ptak-type method, Newton’s method, order of convergence, local convergence. </p>
</div>
<p>\(^\ast \)Department of Mathematical Sciences, Cameron University, Lawton, OK 73505, USA, e-mail: <span class="tt">iargyros@cameron.edu</span>. </p>
<p>\(^{\ast \ast }\)Department of Mathematical and Computational Sciences, NIT Karnataka, India-575 025, e-mail: <span class="tt">sgeorge@nitk.ac.in</span>. </p>
<h1 id="a0000000002">1 Introduction</h1>
<p> Let \(F:D\subseteq S\rightarrow S\) be a nonlinear function, where \(D\) is a convex subset of \(S\) and \(S\) is \(\mathbb {R}\) or \(\mathbb {C}.\) Consider the problem of approximating a locally unique solution \(x^\ast \) of the equation </p>
<div class="equation" id="1.1">
<p>
  <div class="equation_content">
    \begin{equation} \label{1.1} F(x)=0.\end{equation}
  </div>
  <span class="equation_label">1</span>
</p>
</div>
<p>Newton-like methods are widely used for finding a solution of (<a href="#1.1">1</a>); these methods are usually studied in terms of semi-local and local convergence <span class="cite">
	[
	<a href="#3" >3</a>
	, 
	<a href="#4" >4</a>
	, 
	<a href="#20" >20</a>
	, 
	<a href="#21" >21</a>
	, 
	<a href="#22" >22</a>
	, 
	<a href="#24" >24</a>
	, 
	<a href="#26" >26</a>
	]
</span>. </p>
<p>Third order methods such as Euler’s, Halley’s, super Halley’s, Chebyshev’s <span class="cite">
	[
	<a href="#1" >1</a>
	]
</span>–<span class="cite">
	[
	<a href="#28" >28</a>
	]
</span> require the evaluation of the second derivative \(F''\) at each step, which in general is very expensive. That is why many authors have used higher order multipoint methods <span class="cite">
	[
	<a href="#1" >1</a>
	]
</span>–<span class="cite">
	[
	<a href="#28" >28</a>
	]
</span>. In this paper, we study the local convergence of the fourth-order method defined for each \(n=0,1,2,\ldots \) by </p>
<div class="displaymath" id="a0000000003">
  \begin{eqnarray} \nonumber y_n& =& x_n-F'(x_n)^{-1}F(x_n)\\ \nonumber z_n& =& x_n-F'(x_n)^{-1}(F(x_n)+F(y_n))\\ \label{1.2} x_{n+1}& =& z_n-F(x_n)^{-2}F'(x_n)^{-1}F(y_n)^2(2F(x_n)+F(y_n)), \end{eqnarray}
</div>
<p> where \(x_0\) is an initial point. Method (<a href="#1.2">2</a>) was studied by Cordero et al. in <span class="cite">
	[
	<a href="#13" >13</a>
	]
</span>. In particular, the fourth order of convergence was shown under hypotheses reaching up to the fourth derivative of the function \(F.\) Notice that method (<a href="#1.2">2</a>) involves three functional evaluations per step. Therefore the efficiency index \(EI=p^{\frac{1}{m}},\) where \(p\) is the order of convergence and \(m\) is the number of functional evaluations per step, gives \(EI=4^{\frac{1}{3}}=1.5874.\) Kung and Traub conjectured <span class="cite">
	[
	<a href="#28" >28</a>
	]
</span> that the order of convergence of any multipoint method without memory cannot exceed the bound \(2^{m-1}\) (called the optimal order). Thus, the optimal order for a method with three function evaluations per step should be four. </p>
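<p>For a scalar equation, method (<a href="#1.2">2</a>) can be sketched numerically as follows. This is a minimal illustration of ours (function names and stopping rule are not from the paper), with the third substep applied to the Potra-Ptak iterate \(z_n,\) consistent with the estimates used in the convergence proof below:</p>

```python
import math

def potra_ptak4(f, fprime, x0, tol=1e-14, max_iter=20):
    """Fourth-order Potra-Ptak-type iteration for a scalar equation f(x) = 0.

    Each step uses three functional evaluations: f(x_n), f(y_n), f'(x_n).
    """
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if fx == 0.0:          # already at a root
            return x
        dfx = fprime(x)
        y = x - fx / dfx                       # Newton substep
        fy = f(y)
        z = x - (fx + fy) / dfx                # Potra-Ptak substep (third order)
        x_new = z - fy**2 * (2.0 * fx + fy) / (fx**2 * dfx)  # fourth-order correction
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: f(x) = exp(x) - 1 with solution x* = 0
root = potra_ptak4(lambda t: math.exp(t) - 1.0, math.exp, 0.1)
```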
<p>Other single and multi-point methods can be found in <span class="cite">
	[
	<a href="#2" >2</a>
	, 
	<a href="#3" >3</a>
	, 
	<a href="#20" >20</a>
	, 
	<a href="#25" >25</a>
	]
</span> and the references therein. The local convergence of the preceding methods has been shown under hypotheses up to the fourth derivative (or even higher). These hypotheses restrict the applicability of these methods. As a motivational example, let us define function \(f\) on \(D=[-\frac{1}{2},\frac{5}{2}]\) by </p>
<div class="displaymath" id="a0000000004">
  \[ f(x)=\left\lbrace \begin{array}{ll} x^3\ln x^2+ x^5-x^4,\, \, \, x\neq 0\\ 0 ,\, \, \,  x=0 \end{array}\right. \]
</div>
<p> Choose \(x^\ast =1.\) We have that </p>
<div class="displaymath" id="a0000000005">
  \begin{eqnarray*}  f'(x)& =&  3x^2\ln x^2+5x^4-4x^3+2x^2,\, \,  f'(1)=3,\\ f''(x)& =& 6x\ln x^2+20x^3-12x^2+10x,\\ f'''(x)& =& 6\ln x^2+60x^2-24x+22. \end{eqnarray*}
</div>
<p> Then, obviously, function \(f'''\) is unbounded on \(D.\) In the present paper we only use hypotheses on the first Fréchet derivative. This way we expand the applicability of method (<a href="#1.2">2</a>). </p>
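<p>The unboundedness of \(f'''\) on \(D\) is easy to confirm numerically; the following small check (ours, not part of the paper) evaluates \(f'''(x)=6\ln x^2+60x^2-24x+22\) as \(x\rightarrow 0\):</p>

```python
import math

def fppp(x):
    # third derivative of f(x) = x^3 ln x^2 + x^5 - x^4 (valid for x != 0)
    return 6.0 * math.log(x**2) + 60.0 * x**2 - 24.0 * x + 22.0

# |f'''| grows without bound near 0 because of the 6 ln x^2 term
values = [abs(fppp(10.0**(-k))) for k in range(2, 10)]
```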
<p>The rest of the paper is organized as follows: Section 2 contains the local convergence analysis of method (<a href="#1.2">2</a>). The numerical examples are presented in the concluding Section 3. </p>
<h1 id="a0000000006">2 Local convergence analysis</h1>
<p> We present the local convergence analysis of method (<a href="#1.2">2</a>) in this section. Let \(L_0 {\gt}0, L {\gt}0\) and \(M\geq 1.\) It is convenient for the local convergence analysis of method (<a href="#1.2">2</a>) to introduce some functions and parameters. Define functions \(g_1,\,  g_2,\, h_2,\, g_3,\, h_3\) on the interval \([0, \tfrac {1}{L_0})\) by </p>
<div class="displaymath" id="a0000000007">
  \begin{eqnarray*} \nonumber g_1(t)& =& \tfrac {Lt}{2(1-L_0t)},\\ g_2(t)& =& \tfrac {1}{2(1-L_0t)}[Lt+2Mg_1(t)],\\ h_2(t)& =& g_2(t)-1, \\ g_3(t)& =& g_2(t)+\tfrac {M^3g_1^2(t)(2+g_1(t))}{(1-\tfrac {L_0}{2}t)^2(1-L_0t)},\\ h_3(t)& =& g_3(t)-1 \end{eqnarray*}
</div>
<p> and parameter </p>
<div class="displaymath" id="a0000000008">
  \[  r_A=\tfrac {2}{2L_0+L}.  \]
</div>
<p> We have that \(h_2(0)=-1{\lt}0\) and \(h_2(r_A)=\tfrac {M}{1-L_0r_A} {\gt} 0,\) since \(\tfrac {Lr_A}{2(1-L_0r_A)} =1\) and \(1-L_0r_A {\gt} 0.\) It then follows from the intermediate value theorem that function \(h_2\) has zeros in the interval \((0, r_A).\) Denote by \(r_2\) the smallest such zero. Similarly, we have that \(h_3(0)=-1 {\lt} 0\) and \(h_3(r_2) {\gt} 0.\) Denote by \(r\) the smallest zero of function \(h_3\) in the interval \((0, r_2).\) Then, for each \(t\in [0, r)\) we have </p>
<div class="equation" id="2.1">
<p>
  <div class="equation_content">
    \begin{equation} \label{2.1} 0\leq g_1(t) < 1, \end{equation}
  </div>
  <span class="equation_label">2</span>
</p>
</div>
<div class="equation" id="2.2">
<p>
  <div class="equation_content">
    \begin{equation} \label{2.2} 0\leq g_2(t) < 1, \end{equation}
  </div>
  <span class="equation_label">3</span>
</p>
</div>
<p>and </p>
<div class="equation" id="2.3">
<p>
  <div class="equation_content">
    \begin{equation} \label{2.3} 0\leq g_3(t) < 1. \end{equation}
  </div>
  <span class="equation_label">4</span>
</p>
</div>
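<p>The radii \(r_A, r_2\) and \(r\) can be computed numerically for given \(L_0, L, M;\) a short sketch of ours finds the smallest zeros of \(h_2\) and \(h_3\) by bisection (valid because these functions are continuous, negative at \(0\) and positive at the right endpoints):</p>

```python
def radii(L0, L, M, tol=1e-12):
    """Compute r_A and the smallest zeros r_2 of h_2 and r of h_3."""
    g1 = lambda t: L * t / (2.0 * (1.0 - L0 * t))
    g2 = lambda t: (L * t + 2.0 * M * g1(t)) / (2.0 * (1.0 - L0 * t))
    g3 = lambda t: g2(t) + M**3 * g1(t)**2 * (2.0 + g1(t)) / (
        (1.0 - 0.5 * L0 * t)**2 * (1.0 - L0 * t))
    rA = 2.0 / (2.0 * L0 + L)

    def smallest_zero(h, hi):
        # h(0) = -1 < 0 and h(hi) > 0; bisect for the sign change
        lo, up = 0.0, hi
        while up - lo > tol:
            mid = 0.5 * (lo + up)
            if h(mid) < 0.0:
                lo = mid
            else:
                up = mid
        return 0.5 * (lo + up)

    r2 = smallest_zero(lambda t: g2(t) - 1.0, rA)
    r = smallest_zero(lambda t: g3(t) - 1.0, r2)
    return rA, r2, r

rA, r2, r = radii(1.0, 1.0, 1.0)   # illustrative choice L0 = L = M = 1
```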
<p>Denote by \(U(v,\rho ), \bar{U}(v,\rho )\) the open and closed balls in \(S\), respectively, with center \(v\in S\) and of radius \(\rho {\gt}0.\) Next, we show the following local convergence result for method (<a href="#1.2">2</a>) using the preceding notation. <div class="theorem_thmwrapper " id="T2.1">
  <div class="theorem_thmheading">
    <span class="theorem_thmcaption">
    Theorem
    </span>
    <span class="theorem_thmlabel">1</span>
  </div>
  <div class="theorem_thmcontent">
  <p> Let \(F:D\subseteq S\rightarrow S\) be a differentiable function. Suppose that there exist \(x^\ast \in D,\) \( L_0 {\gt} 0, L {\gt} 0\) and \( M \geq 1\) such that for each \(x,\,  y\in D\) </p>
<div class="equation" id="2.4">
<p>
  <div class="equation_content">
    \begin{equation} \label{2.4} F(x^\ast )=0,\, \,  F'(x^\ast )\neq 0, \end{equation}
  </div>
  <span class="equation_label">5</span>
</p>
</div>
<div class="equation" id="2.5">
<p>
  <div class="equation_content">
    \begin{equation} \label{2.5} |F'(x^\ast )^{-1}(F'(x)-F'(x^\ast ))|\leq L_0|x-x^\ast |, \end{equation}
  </div>
  <span class="equation_label">6</span>
</p>
</div>
<div class="equation" id="2.6">
<p>
  <div class="equation_content">
    \begin{equation} \label{2.6} |F'(x^\ast )^{-1}(F'(x)-F'(y))|\leq L|x-y|, \end{equation}
  </div>
  <span class="equation_label">7</span>
</p>
</div>
<div class="equation" id="2.7">
<p>
  <div class="equation_content">
    \begin{equation} \label{2.7} |F'(x^\ast )^{-1}F'(x)|\leq M, \end{equation}
  </div>
  <span class="equation_label">8</span>
</p>
</div>
<p>and </p>
<div class="equation" id="2.8">
<p>
  <div class="equation_content">
    \begin{equation} \label{2.8} \bar{U}(x^\ast , (1+M_0)r)\subseteq D, \end{equation}
  </div>
  <span class="equation_label">9</span>
</p>
</div>
<p>where \(r \) is defined before <a href="#T2.1">theorem 1</a>. Then, the sequence \(\{ x_n\} \) generated by method <a href="#1.2" class="eqref">2</a> for \(x_0\in U(x^\ast , r)-\{ x^\ast \} \) is well defined, remains in \(U(x^\ast , r)\) for each \(n=0,1,2,\ldots \) and converges to \(x^\ast .\) Moreover, the following estimates hold </p>
<div class="equation" id="2.9">
<p>
  <div class="equation_content">
    \begin{equation} \label{2.9} |y_n-x^\ast |\leq g_1(|x_n-x^\ast |)|x_n-x^\ast | < |x_n-x^\ast | < r, \end{equation}
  </div>
  <span class="equation_label">10</span>
</p>
</div>
<div class="equation" id="2.10">
<p>
  <div class="equation_content">
    \begin{equation} \label{2.10} |z_n-x^\ast |\leq g_2(|x_n-x^\ast |)|x_n-x^\ast | < |x_n-x^\ast | \end{equation}
  </div>
  <span class="equation_label">11</span>
</p>
</div>
<p>and </p>
<div class="equation" id="2.11">
<p>
  <div class="equation_content">
    \begin{equation} \label{2.11} |x_{n+1}-x^\ast |\leq g_3(|x_n-x^\ast |)|x_n-x^\ast | < |x_n-x^\ast |, \end{equation}
  </div>
  <span class="equation_label">12</span>
</p>
</div>
<p>where the “\(g\)” functions are defined above in <a href="#T2.1">theorem 1</a>. Furthermore, if there exists \(T\in [r, \frac{2}{L_0})\) such that \(\bar{U}(x^\ast ,T)\subset D,\) then the limit point \(x^\ast \) is the only solution of equation \(F(x)=0\) in \(\bar{U}(x^\ast ,T).\) </p>

  </div>
</div><div class="proof_wrapper" id="a0000000009">
  <div class="proof_heading">
    <span class="proof_caption">
    Proof
    </span>
    <span class="expand-proof">▼</span>
  </div>
  <div class="proof_content">
  
  </div>
</div>We shall use induction to show estimates (<a href="#2.9">10</a>)–(<a href="#2.11">12</a>). Using the hypothesis \(x_0\in {U}(x^\ast , r)-\{ x^\ast \} ,\) (<a href="#2.5">6</a>) and the definition of \(r,\) we have that </p>
<div class="equation" id="2.12">
<p>
  <div class="equation_content">
    \begin{equation} \label{2.12} |F'(x^\ast )^{-1}(F'(x_0)-F'(x^\ast ))|\leq L_0|x_0-x^\ast |< L_0 r < 1. \end{equation}
  </div>
  <span class="equation_label">13</span>
</p>
</div>
<p>It follows from (<a href="#2.12">13</a>) and the Banach Lemma on invertible functions <span class="cite">
	[
	<a href="#3" >3</a>
	, 
	<a href="#4" >4</a>
	, 
	<a href="#19" >19</a>
	, 
	<a href="#20" >20</a>
	, 
	<a href="#22" >22</a>
	, 
	<a href="#23" >23</a>
	]
</span> that \(F'(x_0)\neq 0\) and </p>
<div class="equation" id="2.13">
<p>
  <div class="equation_content">
    \begin{equation} \label{2.13} |F'(x_0)^{-1}F'(x^\ast )|\leq \tfrac {1}{1-L_0|x_0-x^\ast |} < \tfrac {1}{1-L_0r}. \end{equation}
  </div>
  <span class="equation_label">14</span>
</p>
</div>
<p>Hence, \(y_0\) and \(z_0\) are well defined. We also get from (<a href="#1.2">2</a>), (<a href="#2.1">2</a>), (<a href="#2.6">7</a>) and (<a href="#2.13">14</a>) that </p>
<div class="displaymath" id="a0000000010">
  \begin{align}  \nonumber |y_0-x^\ast | \leq & |F'(x_0)^{-1}F'(x^\ast )|\left|\int _0^1F'(x^\ast )^{-1}(F'(x^\ast +\theta (x_0-x^\ast ))-F'(x_0))(x_0-x^\ast )d\theta \right|\\ \leq &  \tfrac {L|x_0-x^\ast |^2}{2(1-L_0|x_0-x^\ast |)}\\ \label{2.14} =& g_1(|x_0-x^\ast |)|x_0-x^\ast | {\lt} |x_0-x^\ast | {\lt} r, \end{align}
</div>
<p> which shows (<a href="#2.9">10</a>) for \(n=0\) and \(y_0\in U(x^\ast , r).\) We can write </p>
<div class="equation" id="2.15">
<p>
  <div class="equation_content">
    \begin{equation} \label{2.15} F(x_0)=F(x_0)-F(x^\ast )=\int _0^1F'(x^\ast +\theta (x_0-x^\ast ))(x_0-x^\ast )d\theta . \end{equation}
  </div>
  <span class="equation_label">17</span>
</p>
</div>
<p>Then, by (<a href="#2.7">8</a>), (<a href="#2.15">17</a>) we obtain that </p>
<div class="displaymath" id="a0000000011">
  \begin{eqnarray} \nonumber |F'(x^\ast )^{-1}F(x_0)|& \leq & \left|\int _0^1F'(x^\ast )^{-1}F'(x^\ast +\theta (x_0-x^\ast ))(x_0-x^\ast )d\theta \right|\\ \label{2.16} & \leq & M|x_0-x^\ast |, \end{eqnarray}
</div>
<p> where we also used \(|x^\ast +\theta (x_0-x^\ast )-x^\ast |=\theta |x_0-x^\ast | {\lt} r,\) that is, \(x^\ast +\theta (x_0-x^\ast )\in U(x^\ast , r).\) We also have that </p>
<div class="equation" id="2.17">
<p>
  <div class="equation_content">
    \begin{equation} \label{2.17} |F'(x^\ast )^{-1}F(y_0)|\leq M|y_0-x^\ast |\leq Mg_1(|x_0-x^\ast |)|x_0-x^\ast |. \end{equation}
  </div>
  <span class="equation_label">19</span>
</p>
</div>
<p>Then, by the second substep of method (<a href="#1.2">2</a>) for \(n=0,\) (<a href="#2.2">3</a>), (<a href="#2.4">5</a>) and (<a href="#2.17">19</a>) we get that </p>
<div class="displaymath" id="a0000000012">
  \begin{eqnarray} \nonumber |z_0-x^\ast |& \leq & |x_0-x^\ast -F'(x_0)^{-1}F(x_0)|\\ \nonumber & & +|F'(x_0)^{-1}F'(x^\ast )||F'(x^\ast )^{-1}F(y_0)|\\ \nonumber & \leq &  \tfrac {L|x_0-x^\ast |^2}{2(1-L_0|x_0-x^\ast |)}+\tfrac {M|y_0-x^\ast |}{1-L_0|x_0-x^\ast |}\\ \nonumber & \leq & \tfrac {(L|x_0-x^\ast |+2Mg_1(|x_0-x^\ast |))|x_0-x^\ast |}{2(1-L_0|x_0-x^\ast |)}\\ \label{2.18} & =& g_2(|x_0-x^\ast |)|x_0-x^\ast | {\lt} |x_0-x^\ast | {\lt} r, \end{eqnarray}
</div>
<p> which shows (<a href="#2.10">11</a>) for \(n=0\) and \(z_0\in U(x^\ast , r).\) Next, we show that \(F(x_0) \neq 0.\) Using (<a href="#2.5">6</a>), we get that </p>
<div class="displaymath" id="a0000000013">
  \begin{align} \nonumber & |(F'(x^\ast )(x_0-x^\ast ))^{-1}[F(x_0)-F(x^\ast )-F'(x^\ast )(x_0-x^\ast )]|\\ \nonumber & \leq |x_0-x^\ast |^{-1}\left|\int _0^1F'(x^\ast )^{-1}(F'(x^\ast +\theta (x_0-x^\ast ))-F'(x^\ast ))(x_0-x^\ast )d\theta \right|\\ \nonumber & \leq |x_0-x^\ast |^{-1}\tfrac {L_0|x_0-x^\ast |^2}{2}\\ \label{2.19} & \leq \tfrac {L_0|x_0-x^\ast |}{2} {\lt} \tfrac {L_0r}{2} {\lt} 1. \end{align}
</div>
<p> It follows from (<a href="#2.19">21</a>) that \(F(x_0)\neq 0\) and </p>
<div class="equation" id="2.20">
<p>
  <div class="equation_content">
    \begin{equation} \label{2.20} |F(x_0)^{-1}F'(x^\ast )|\leq \tfrac {1}{|x_0-x^\ast |(1-\tfrac {L_0}{2}|x_0-x^\ast |)}. \end{equation}
  </div>
  <span class="equation_label">22</span>
</p>
</div>
<p>Hence \(x_1\) is well defined. Then, using the last substep of method (<a href="#1.2">2</a>) for \(n=0,\) (<a href="#2.3">4</a>), (<a href="#2.13">14</a>), (<a href="#2.16">18</a>), (<a href="#2.17">19</a>), (<a href="#2.18">20</a>) and (<a href="#2.20">22</a>) we get in turn that </p>
<div class="displaymath" id="a0000000014">
  \begin{align*}  |x_1-x^\ast |& \leq |z_0-x^\ast |+|F'(x^\ast )^{-1}F(y_0)|^2|F'(x^\ast )^{-1}F(x_0)|^{-2}\\ & \quad \times |F'(x^\ast )^{-1}F'(x_0)|^{-1}(2|F'(x^\ast )^{-1}F(x_0)|\\ & \quad +|F'(x^\ast )^{-1}F(y_0)|)\\ & \leq g_2(|x_0-x^\ast |)|x_0-x^\ast |\\ & \quad +\tfrac { M^3|y_0-x^\ast |^2(2|x_0-x^\ast |+|y_0-x^\ast |)}{|x_0-x^\ast |^2(1-\tfrac {L_0}{2}|x_0-x^\ast |)^2(1-L_0|x_0-x^\ast |)}\\ & \leq [g_2(|x_0-x^\ast |)\\ & \quad +\tfrac { M^3g_1^2(|x_0-x^\ast |)(2+g_1(|x_0-x^\ast |))}{(1-\tfrac {L_0}{2}|x_0-x^\ast |)^2(1-L_0|x_0-x^\ast |)}]|x_0-x^\ast |\\ & =g_3(|x_0-x^\ast |)|x_0-x^\ast | {\lt} |x_0-x^\ast | {\lt} r, \end{align*}
</div>
<p> which shows (<a href="#2.11">12</a>) for \(n=0\) and \(x_1\in U(x^\ast , r).\) By simply replacing \(x_0, y_0, z_0,\,  x_1\) by \(x_k, y_k, z_k,\,  x_{k+1}\) in the preceding estimates we arrive at estimates (<a href="#2.9">10</a>)–(<a href="#2.11">12</a>). Using the estimate \(|x_{k+1}-x^\ast | {\lt} |x_k-x^\ast | {\lt} r,\) we deduce that \(x_{k+1}\in U(x^\ast , r)\) and \(\lim _{k\rightarrow \infty }x_k=x^\ast .\) To show the uniqueness part, let \(Q=\int _0^1F'(y^\ast +\theta (x^\ast -y^\ast ))d\theta \) for some \(y^\ast \in \bar{U}(x^\ast , T)\) with \(F(y^\ast )=0.\) Using (<a href="#2.5">6</a>) we get that </p>
<div class="displaymath" id="a0000000015">
  \begin{eqnarray} \nonumber |F'(x^\ast )^{-1}(Q-F'(x^\ast ))|& \leq & \int _0^1L_0|y^\ast +\theta (x^\ast -y^\ast )-x^\ast |d\theta \\ \label{2.30*} & \leq & L_0\int _0^1(1-\theta )|x^\ast -y^\ast |d\theta \leq \tfrac{L_0}{2}T {\lt} 1. \end{eqnarray}
</div>
<p> It follows from (<a href="#2.30*">23</a>) and the Banach Lemma on invertible functions that \(Q\) is invertible. Finally, from the identity \(0=F(x^\ast )-F(y^\ast )=Q(x^\ast -y^\ast ),\) we deduce that \(x^\ast =y^\ast .\) </p>
<p><div class="remark_thmwrapper " id="r3.3">
  <div class="remark_thmheading">
    <span class="remark_thmcaption">
    Remark
    </span>
    <span class="remark_thmlabel">2</span>
  </div>
  <div class="remark_thmcontent">
  
<ol class="enumerate">
  <li><p>In view of (<a href="#2.5">6</a>) and the estimate </p>
<div class="displaymath" id="a0000000016">
  \begin{eqnarray*}  \| F'(x^\ast )^{-1}F'(x)\| & =& \| F'(x^\ast )^{-1}(F'(x)-F'(x^\ast ))+I\| \\ & \leq &  1+\| F'(x^\ast )^{-1}(F'(x)-F'(x^\ast ))\|  \leq 1+L_0\| x-x^\ast \|  \end{eqnarray*}
</div>
<p> condition (<a href="#2.7">8</a>) can be dropped and \(M\) can be replaced by </p>
<div class="displaymath" id="a0000000017">
  \[ M(t)=1+L_0 t. \]
</div>
</li>
  <li><p>The results obtained here can be used for operators \(F\) satisfying autonomous differential equations <span class="cite">
	[
	<a href="#3" >3</a>
	]
</span> of the form </p>
<div class="displaymath" id="a0000000018">
  \[ F'(x)=P(F(x)) \]
</div>
<p> where \(P\) is a continuous operator. Then, since \(F'(x^\ast )=P(F(x^\ast ))=P(0),\) we can apply the results without actually knowing \(x^\ast .\) For example, let \(F(x)=e^x-1.\) Then, we can choose: \(P(x)=x+1.\) </p>
</li>
  <li><p>The radius \(r_A\) was shown by us to be the convergence radius of Newton’s method <span class="cite">
	[
	<a href="#2" >2</a>
	]
</span>–<span class="cite">
	[
	<a href="#4" >4</a>
	]
</span> </p>
<div class="equation" id="2.30">
<p>
  <div class="equation_content">
    \begin{equation} \label{2.30} x_{n+1}=x_n-F'(x_n)^{-1}F(x_n)\, \, \, \textnormal{for each}\, \, \, n=0,1,2,\cdots \end{equation}
  </div>
  <span class="equation_label">24</span>
</p>
</div>
<p>under the conditions (<a href="#2.5">6</a>) and (<a href="#2.6">7</a>). It follows from the definition of \(r\) that the convergence radius \(r\) of the method (<a href="#1.2">2</a>) cannot be larger than the convergence radius \(r_A\) of the second order Newton’s method (<a href="#2.30">24</a>). As already noted in <span class="cite">
	[
	<a href="#3" >3</a>
	, 
	<a href="#4" >4</a>
	]
</span> \(r_A\) is at least as large as the convergence ball given by Rheinboldt <span class="cite">
	[
	<a href="#27" >27</a>
	]
</span> </p>
<div class="equation" id="2.31">
<p>
  <div class="equation_content">
    \begin{equation} \label{2.31}r_R=\tfrac {2}{3L}.\end{equation}
  </div>
  <span class="equation_label">25</span>
</p>
</div>
<p>In particular, for \(L_0 {\lt} L\) we have that </p>
<div class="displaymath" id="a0000000019">
  \[ r_R {\lt} r_A \]
</div>
<p> and </p>
<div class="displaymath" id="a0000000020">
  \[ \tfrac {r_R}{r_A}\rightarrow \tfrac {1}{3}\, \, \, \textnormal{as}\, \, \, \tfrac {L_0}{L}\rightarrow 0. \]
</div>
<p> That is, our convergence ball \(r_A\) is at most three times larger than Rheinboldt’s. The same value for \(r_R\) was given by Traub <span class="cite">
	[
	<a href="#28" >28</a>
	]
</span>. </p>
</li>
<li><p>It is worth noticing that method (<a href="#1.2">2</a>) does not change when we use the conditions of <a href="#T2.1">theorem 1</a> instead of the stronger conditions used in <span class="cite">
	[
	<a href="#1" >1</a>
	, 
	<a href="#5" >5</a>
	, 
	<a href="#12" >12</a>
	]
</span>–<span class="cite">
	[
	<a href="#28" >28</a>
	]
</span>. Moreover, we can compute the computational order of convergence (COC) defined by </p>
<div class="displaymath" id="a0000000021">
  \[ \xi = \ln \left(\frac{\| x_{n+1}-x^\ast \| }{\| x_n-x^\ast \| }\right)/\ln \left(\frac{\| x_{n}-x^\ast \| }{\| x_{n-1}-x^\ast \| }\right)  \]
</div>
<p> or the approximate computational order of convergence </p>
<div class="displaymath" id="a0000000022">
  \[ \xi _1= \ln \left(\frac{\| x_{n+1}-x_n\| }{\| x_n-x_{n-1}\| }\right)/\ln \left(\frac{\| x_{n}-x_{n-1}\| }{\| x_{n-1}-x_{n-2}\| }\right).  \]
</div>
<p> This way we obtain in practice the order of convergence without resorting to estimates involving derivatives higher than the first Fréchet derivative of operator \(F.\) </p>
</li>
</ol>

  </div>
</div></p>
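<p>The approximate computational order of convergence \(\xi _1\) requires only four consecutive iterates. A small sketch of ours, applied to Newton's method (<a href="#2.30">24</a>) on the illustrative test function \(f(x)=x^2-2\) (our choice, not from the paper; the expected order is two):</p>

```python
import math

def acoc(xs):
    """Approximate computational order of convergence from the last four iterates."""
    d1 = abs(xs[-3] - xs[-4])
    d2 = abs(xs[-2] - xs[-3])
    d3 = abs(xs[-1] - xs[-2])
    return math.log(d3 / d2) / math.log(d2 / d1)

# Newton's method on f(x) = x^2 - 2, starting from x0 = 1
xs = [1.0]
for _ in range(4):
    x = xs[-1]
    xs.append(x - (x * x - 2.0) / (2.0 * x))

xi1 = acoc(xs)   # close to the theoretical order two
```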
<h1 id="a0000000023">3 Numerical Examples</h1>
<p> We present numerical examples in this section. <div class="example_thmwrapper " id="a0000000024">
  <div class="example_thmheading">
    <span class="example_thmcaption">
    Example
    </span>
    <span class="example_thmlabel">3</span>
  </div>
  <div class="example_thmcontent">
<p>Let \(D=(-\infty , +\infty ).\) Define function \(f\) on \(D\) by </p>
<div class="equation" id="4.2">
<p>
  <div class="equation_content">
    \begin{equation} \label{4.2}f(x)=\sin (x). \end{equation}
  </div>
  <span class="equation_label">26</span>
</p>
</div>
<p>Then we have for \(x^\ast =0\) that \(L_0=L=M=1, \alpha =1.\) The parameters are \(r_A=0.6667,\, r_2=0.6667,\, r=0.3991\) and \(\xi _1=5.1010.\) </p>

  </div>
</div></p>
<p><div class="example_thmwrapper " id="a0000000025">
  <div class="example_thmheading">
    <span class="example_thmcaption">
    Example
    </span>
    <span class="example_thmlabel">4</span>
  </div>
  <div class="example_thmcontent">
<p>Let \(D=[-1,1].\) Define function \(f\) on \(D\) by </p>
<div class="equation" id="4.1">
<p>
  <div class="equation_content">
    \begin{equation} \label{4.1}f(x)=e^x-1. \end{equation}
  </div>
  <span class="equation_label">27</span>
</p>
</div>
<p>Using (<a href="#4.1">27</a>) and \(x^\ast =0,\) we get that \(L_0=e-1 {\lt} L=M=e, \alpha =1.\) The parameters are \(r_A=0.3249,\,  r_2=0.1458,\,  r=0.0699\) and \(\xi _1=3.9088.\) </p>
<p><span class="qed">□</span></p>

  </div>
</div></p>
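<p>The values of \(r_A\) in the two examples follow directly from \(r_A=\tfrac {2}{2L_0+L};\) a small verification of ours:</p>

```python
import math

def r_A(L0, L):
    # radius of the convergence ball of Newton's method under (6) and (7)
    return 2.0 / (2.0 * L0 + L)

rA_sin = r_A(1.0, 1.0)              # example with f(x) = sin(x): L0 = L = 1
rA_exp = r_A(math.e - 1.0, math.e)  # example with f(x) = exp(x) - 1
```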
<div class="bibliography">
<h1>Bibliography</h1>
<dl class="bibliography">
  <dt><a name="1">1</a></dt>
  <dd><p><a href ="https://doi.org/10.1016/j.amc.2008.08.050"> <i class="sc">S. Amat, M.A. Hernández, N. Romero</i>, <i class="it">A modified Chebyshev’s iterative method with at least sixth order of convergence</i>, Appl. Math. Comput., <b class="bf">206</b> (2008), 164–174, <a href="https://doi.org/10.1016/j.amc.2008.08.050">https://doi.org/10.1016/j.amc.2008.08.050</a>. <img src="img-0001.png" alt="\includegraphics[scale=0.1]{ext-link.png}" style="width:12.0px; height:10.700000000000001px" />
</a> </p>
</dd>
  <dt><a name="2">2</a></dt>
  <dd><p><a href ="https://doi.org/10.1007/s00010-004-2733-y"> <i class="sc">S. Amat, S. Busquier, S. Plaza</i>, <i class="it">Dynamics of the King’s and Jarratt iterations</i>, Aeq. Math., <b class="bf">69</b> (2005), 212–213, <a href="https://doi.org/10.1007/s00010-004-2733-y">https://doi.org/10.1007/s00010-004-2733-y</a>. <img src="img-0001.png" alt="\includegraphics[scale=0.1]{ext-link.png}" style="width:12.0px; height:10.700000000000001px" />
</a> </p>
</dd>
  <dt><a name="3">3</a></dt>
  <dd><p><i class="sc">I.K. Argyros</i>, <i class="it">Convergence and Application of Newton-type Iterations,</i> Springer, 2008. </p>
</dd>
  <dt><a name="4">4</a></dt>
  <dd><p><i class="sc">I.K. Argyros, Said Hilout</i>, <i class="it">Computational Methods in Nonlinear Analysis</i>, World Scientific Publ. Co., New Jersey, USA, 2013. </p>
</dd>
  <dt><a name="5">5</a></dt>
  <dd><p><a href ="https://doi.org/10.1016/0377-0427(94)90093-0"> <i class="sc">I.K. Argyros, D. Chen, Q. Quian</i>, <i class="it">The Jarratt method in Banach space setting</i>, J. Comput. Appl. Math., <b class="bf">51</b> (1994), pp.&#160;103–106, <a href="https://doi.org/10.1016/0377-0427(94)90093-0">https://doi.org/10.1016/0377-0427(94)90093-0</a>. <img src="img-0001.png" alt="\includegraphics[scale=0.1]{ext-link.png}" style="width:12.0px; height:10.700000000000001px" />
</a> </p>
</dd>
  <dt><a name="6">6</a></dt>
  <dd><p><a href ="https://doi.org/10.1007/bf02241866"> <i class="sc">V. Candela, A. Marquina</i>, <i class="it">Recurrence relations for rational cubic methods I: The Halley method</i>, Computing, <b class="bf">44</b> (1990), 169–184, <a href="https://doi.org/10.1007/bf02241866">https://doi.org/10.1007/bf02241866</a>. <img src="img-0001.png" alt="\includegraphics[scale=0.1]{ext-link.png}" style="width:12.0px; height:10.700000000000001px" />
</a> </p>
</dd>
  <dt><a name="7">7</a></dt>
  <dd><p><a href ="https://doi.org/10.1016/j.amc.2006.02.037"> <i class="sc">J. Chen</i>, <i class="it">Some new iterative methods with three-order convergence</i>, Appl. Math. Comput., <b class="bf">181</b> (2006), 1519–1522, <a href="https://doi.org/10.1016/j.amc.2006.02.037">https://doi.org/10.1016/j.amc.2006.02.037</a>. <img src="img-0001.png" alt="\includegraphics[scale=0.1]{ext-link.png}" style="width:12.0px; height:10.700000000000001px" />
</a> </p>
</dd>
  <dt><a name="8">8</a></dt>
  <dd><p><a href ="https://doi.org/10.1016/j.amc.2013.11.017"> <i class="sc">C. Chun, B. Neta, M. Scott</i>, <i class="it">Basins of attraction for optimal eighth order methods to find simple roots of nonlinear equations</i>, Appl. Math. Comput., <b class="bf">227</b> (2014), pp.&#160;567–592, <a href="https://doi.org/10.1016/j.amc.2013.11.017">https://doi.org/10.1016/j.amc.2013.11.017</a>. <img src="img-0001.png" alt="\includegraphics[scale=0.1]{ext-link.png}" style="width:12.0px; height:10.700000000000001px" />
</a> </p>
</dd>
  <dt><a name="9">9</a></dt>
  <dd><p><a href ="https://doi.org/10.1016/j.amc.2007.01.062"><i class="sc">A. Cordero, J. Torregrosa</i>, <i class="it">Variants of Newton’s method using fifth order quadrature formulas</i>, Appl. Math. Comput., <b class="bf">190</b> (2007), pp.&#160;686–698, <a href="https://doi.org/10.1016/j.amc.2007.01.062">https://doi.org/10.1016/j.amc.2007.01.062</a>. <img src="img-0001.png" alt="\includegraphics[scale=0.1]{ext-link.png}" style="width:12.0px; height:10.700000000000001px" />
</a> </p>
</dd>
  <dt><a name="10">10</a></dt>
  <dd><p><a href ="https://doi.org/10.1016/j.aml.2013.03.012"><i class="sc">A. Cordero, J. Maimo, J. Torregrosa, M.P. Vassileva, P. Vindel</i>, <i class="it">Chaos in King’s iterative family</i>, Appl. Math. Lett., <b class="bf">26</b> (2013), pp.&#160;842–848, <a href="https://doi.org/10.1016/j.aml.2013.03.012">https://doi.org/10.1016/j.aml.2013.03.012</a>. <img src="img-0001.png" alt="\includegraphics[scale=0.1]{ext-link.png}" style="width:12.0px; height:10.700000000000001px" />
</a> </p>
</dd>
  <dt><a name="11">11</a></dt>
  <dd><p><i class="sc">A. Cordero, A. Magrenan, C. Quemada, J.R. Torregrosa</i>, <i class="it">Stability study of eight-order iterative methods for solving nonlinear equations</i>, J. Comput. Appl. Math., (to appear). </p>
</dd>
  <dt><a name="12">12</a></dt>
  <dd><p><a href ="https://doi.org/10.1016/j.cam.2010.08.043"><i class="sc">A. Cordero, J.L. Hueso, E. Martinez, J.R. Torregrossa</i>, <i class="it">Steffensen type methods for solving non-linear equations</i>, J. Comput. Appl. Math., <b class="bf">236</b> (2012), pp.&#160;3058–3064, <a href="https://doi.org/10.1016/j.cam.2010.08.043">https://doi.org/10.1016/j.cam.2010.08.043</a>. <img src="img-0001.png" alt="\includegraphics[scale=0.1]{ext-link.png}" style="width:12.0px; height:10.700000000000001px" />
</a> </p>
</dd>
  <dt><a name="13">13</a></dt>
  <dd><p><a href ="https://doi.org/10.1016/j.cam.2010.04.009"><i class="sc">A. Cordero, J.L. Hueso, E. Martinez, J.R. Torregrossa</i>, <i class="it">New modifications of Potra-Ptak’s method with optimal fourth and eight orders of convergence</i>, J. Comput. Appl. Math., <b class="bf">234</b> (2010), pp.&#160;2969–2976, <a href="https://doi.org/10.1016/j.cam.2010.04.009">https://doi.org/10.1016/j.cam.2010.04.009</a>. <img src="img-0001.png" alt="\includegraphics[scale=0.1]{ext-link.png}" style="width:12.0px; height:10.700000000000001px" />
</a> </p>
</dd>
  <dt><a name="14">14</a></dt>
  <dd><p><i class="sc">J.A. Ezquerro, M.A. Hernández</i>, <i class="it">A uniparametric Halley-type iteration with free second derivative</i>, Int. J. Pure Appl. Math., <b class="bf">6</b> (2003), pp.&#160;99–110. </p>
</dd>
  <dt><a name="15">15</a></dt>
  <dd><p><a href ="https://doi.org/10.1007/s10543-009-0226-z"> <i class="sc">J.A. Ezquerro, M.A. Hernández</i>, <i class="itshape">New iterations of R-order four with reduced computational cost</i>, BIT Numer. Math., <b class="bf">49</b> (2009), pp.&#160;325–342, <a href="https://doi.org/10.1007/s10543-009-0226-z">https://doi.org/10.1007/s10543-009-0226-z</a>. <img src="img-0001.png" alt="\includegraphics[scale=0.1]{ext-link.png}" style="width:12.0px; height:10.700000000000001px" />
</a> </p>
</dd>
  <dt><a name="16">16</a></dt>
  <dd><p><a href ="https://doi.org/10.1016/s0096-3003(02)00238-2"> <i class="sc">M. Frontini, E. Sormani</i>, <i class="it">Some variants of Newton’s method with third order convergence</i>, Appl. Math. Comput., <b class="bf">140</b> (2003), pp.&#160;419–426, <a href="https://doi.org/10.1016/s0096-3003(02)00238-2">https://doi.org/10.1016/s0096-3003(02)00238-2</a>. <img src="img-0001.png" alt="\includegraphics[scale=0.1]{ext-link.png}" style="width:12.0px; height:10.700000000000001px" />
</a> </p>
</dd>
  <dt><a name="17">17</a></dt>
  <dd><p><a href ="https://doi.org/10.1016/s0898-1221(98)00168-0"> <i class="sc">J.M. Gutiérrez, M.A. Hernández</i>, <i class="it">Recurrence relations for the super-Halley method</i>, Comput. Math. Appl., <b class="bf">36</b> (1998) 7, pp.&#160;1–8, <a href="https://doi.org/10.1016/s0898-1221(98)00168-0">https://doi.org/10.1016/s0898-1221(98)00168-0</a>. <img src="img-0001.png" alt="\includegraphics[scale=0.1]{ext-link.png}" style="width:12.0px; height:10.700000000000001px" />
</a> </p>
</dd>
  <dt><a name="18">18</a></dt>
  <dd><p><i class="sc">M.A. Hernández, M.A. Salanova</i>, <i class="it">Sufficient conditions for semilocal convergence of a fourth order multipoint iterative method for solving equations in Banach spaces</i>, Southwest J. Pure Appl. Math., <b class="bf">1</b> (1999), pp.&#160;29–40. </p>
</dd>
  <dt><a name="19">19</a></dt>
  <dd><p><a href ="https://doi.org/10.13189/ujam.2013.010215"> <i class="sc">J.P. Jaiswal</i>, <i class="it">A new third-order derivative free method for solving nonlinear equations</i>, Universal J. Appl. Math., <b class="bf">1</b> (2013), pp.&#160;131–135, <a href="https://doi.org/10.13189/ujam.2013.010215">https://doi.org/10.13189/ujam.2013.010215</a>. <img src="img-0001.png" alt="\includegraphics[scale=0.1]{ext-link.png}" style="width:12.0px; height:10.700000000000001px" />
</a> </p>
</dd>
  <dt><a name="20">20</a></dt>
  <dd><p><a href ="https://doi.org/10.1137/0710072"> <i class="sc">R.F. King</i>, <i class="it">A family of fourth-order methods for nonlinear equations</i>, SIAM J. Numer. Anal., <b class="bf">10</b> (1973), pp.&#160;876–879, <a href="https://doi.org/10.1137/0710072">https://doi.org/10.1137/0710072</a>. <img src="img-0001.png" alt="\includegraphics[scale=0.1]{ext-link.png}" style="width:12.0px; height:10.700000000000001px" />
</a> </p>
</dd>
  <dt><a name="21">21</a></dt>
  <dd><p><a href ="https://doi.org/10.1016/j.amc.2009.01.047"> <i class="sc">A.K. Maheshwari</i>, <i class="it">A fourth order iterative method for solving nonlinear equations</i>, Appl. Math. Comput., <b class="bf">211</b> (2009), pp.&#160;383–391, <a href="https://doi.org/10.1016/j.amc.2009.01.047">https://doi.org/10.1016/j.amc.2009.01.047</a>. <img src="img-0001.png" alt="\includegraphics[scale=0.1]{ext-link.png}" style="width:12.0px; height:10.700000000000001px" />
</a> </p>
</dd>
  <dt><a name="22">22</a></dt>
  <dd><p><a href ="https://doi.org/10.1142/s0219876210002210"> <i class="sc">S.K. Parhi, D.K. Gupta</i>, <i class="it">Semi-local convergence of a Stirling-like method in Banach spaces</i>, Int. J. Comput. Methods, <b class="bf">7</b> (2010), pp.&#160;215–228, <a href="https://doi.org/10.1142/s0219876210002210">https://doi.org/10.1142/s0219876210002210</a>. <img src="img-0001.png" alt="\includegraphics[scale=0.1]{ext-link.png}" style="width:12.0px; height:10.700000000000001px" />
</a> </p>
</dd>
  <dt><a name="23">23</a></dt>
  <dd><p><i class="sc">M.S. Petković, B. Neta, L. Petković, J. Džunić</i>, <i class="it">Multipoint Methods for Solving Nonlinear Equations</i>, Elsevier, Amsterdam, 2013. </p>
</dd>
  <dt><a name="24">24</a></dt>
  <dd><p><i class="sc">F.A. Potra, V. Pták</i>, <i class="it">Nondiscrete Induction and Iterative Processes</i>, Research Notes in Mathematics, vol. 103, Pitman Publ., Boston, MA, 1984. </p>
</dd>
  <dt><a name="25">25</a></dt>
  <dd><p><i class="sc">L.B. Rall</i>, <i class="it">Computational Solution of Nonlinear Operator Equations</i>, Robert E. Krieger, New York, 1979. </p>
</dd>
  <dt><a name="26">26</a></dt>
  <dd><p><a href ="https://doi.org/10.1007/s11075-009-9302-3"><i class="sc">H. Ren, Q. Wu, W. Bi</i>, <i class="it">New variants of Jarratt method with sixth-order convergence</i>, Numer. Algor., <b class="bf">52</b> (2009), pp.&#160;585–603, <a href="https://doi.org/10.1007/s11075-009-9302-3">https://doi.org/10.1007/s11075-009-9302-3</a>. <img src="img-0001.png" alt="\includegraphics[scale=0.1]{ext-link.png}" style="width:12.0px; height:10.700000000000001px" />
</a> </p>
</dd>
  <dt><a name="27">27</a></dt>
  <dd><p><i class="sc">W.C. Rheinboldt</i>, <i class="it">An adaptive continuation process for solving systems of nonlinear equations</i>, in: Mathematical Models and Numerical Methods (A.N. Tikhonov <i class="it">et al.</i>, eds.), Banach Center Publications, vol. 3, Banach Center, Warsaw, Poland, 1978, pp.&#160;129–142. </p>
</dd>
  <dt><a name="28">28</a></dt>
  <dd><p><i class="sc">J.F. Traub</i>, <i class="it">Iterative Methods for the Solution of Equations</i>, Prentice Hall, Englewood Cliffs, New Jersey, USA, 1964. </p>
</dd>
  <dt><a name="30">29</a></dt>
  <dd><p><a href ="https://doi.org/10.1080/10236198.2012.761979"><i class="sc">X. Wang, J. Kou</i>, <i class="it">Convergence for modified Halley-like methods with less computation of inversion</i>, J. Difference Equ. Appl., <b class="bf">19</b> (2013) 9, pp.&#160;1483–1500, <a href="https://doi.org/10.1080/10236198.2012.761979">https://doi.org/10.1080/10236198.2012.761979</a>. <img src="img-0001.png" alt="\includegraphics[scale=0.1]{ext-link.png}" style="width:12.0px; height:10.700000000000001px" />
</a> </p>
</dd>
</dl>


</div>
</div> <!--main-text -->
</div> <!-- content-wrapper -->
</div> <!-- content -->
</div> <!-- wrapper -->

<nav class="prev_up_next">
</nav>

<script type="text/javascript" src="/var/www/clients/client1/web1/web/files/jnaat-files/journals/1/articles/js/jquery.min.js"></script>
<script type="text/javascript" src="/var/www/clients/client1/web1/web/files/jnaat-files/journals/1/articles/js/plastex.js"></script>
<script type="text/javascript" src="/var/www/clients/client1/web1/web/files/jnaat-files/journals/1/articles/js/svgxuse.js"></script>
</body>
</html>