<!DOCTYPE html>
<html lang="en">
<head>
<script>
  MathJax = { 
    tex: {
		    inlineMath: [['\\(','\\)']]
	} }
</script>
<script type="text/javascript" src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-chtml.js">
</script>
<meta name="generator" content="plasTeX" />
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1" />
<title>Expanding the applicability of Newton-Tikhonov method for ill-posed equations</title>
<link rel="stylesheet" href="/var/www/clients/client1/web1/web/files/jnaat-files/journals/1/articles/styles/theme-white.css" />
</head>

<body>

<div class="wrapper">

<div class="content">
<div class="content-wrapper">


<div class="main-text">


<div class="titlepage">
<h1>Expanding the applicability of Newton-Tikhonov method for ill-posed equations</h1>
<p class="authors">
<span class="author">Ioannis K. Argyros\(^\ast \), Santhosh George\(^\S \)</span>
</p>
<p class="date">February 16, 2013.<br />Published online: January 23, 2015.</p>
</div>
<p>\(^\ast \)Department of Mathematical Sciences, Cameron University, Lawton, OK 73505, USA, e-mail: <span class="tt">ioannisa@cameron.edu</span>. </p>
<p>\(^\S \)Department of Mathematical and Computational Sciences, National Institute of Technology Karnataka, India-575 025, e-mail: <span class="tt">sgeorge@nitk.ac.in</span>. </p>

<div class="abstract"><p> We present a new semilocal convergence analysis of Newton-Tikhonov methods for solving ill-posed operator equations in a Hilbert space setting. Using more precise majorizing sequences and under the same computational cost as in earlier studies such as <span class="cite">
	[
	<a href="#kn:SG" >13</a>
	]
</span>–<span class="cite">
	[
	<a href="#kn:GN-NLR" >20</a>
	]
</span>, we provide: weaker sufficient convergence criteria; tighter error estimates on the distances involved; and at least as precise information on the location of the solution. Applications include Hammerstein nonlinear integral equations. </p>
<p><b class="bf">MSC.</b> 65J20, 65J15, 65H10, 65G99, 47J35, 47H99, 49M15. </p>
<p><b class="bf">Keywords.</b> Newton-Tikhonov method, Hilbert space, semilocal convergence, majorizing sequence, Hammerstein operator, ill-posed problem. </p>
</div>
<h1 id="a0000000002">1 Introduction</h1>
<p>In this study we are concerned with the problem of approximating a solution of the nonlinear ill-posed equation </p>
<div class="equation" id="eq:1.1">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq:1.1}A(x) =y \end{equation}
  </div>
  <span class="equation_label">1.1</span>
</p>
</div>
<p>where \(A\) is a nonlinear operator defined on a subset \(D=D(A)\) of a Hilbert space \(X\), and with range \(R(A)\) in a Hilbert space \(Y.\) Equation (<a href="#eq:1.1">1.1</a>) is ill-posed in the sense that the solution of (<a href="#eq:1.1">1.1</a>) does not depend continuously on the data \(y.\) Many regularization methods such as Tikhonov regularization <span class="cite">
	[
	<a href="#kn:Engkun" >9</a>
	, 
	<a href="#kn:Tik" >10</a>
	, 
	<a href="#kn:Jin" >23</a>
	, 
	<a href="#kn:Mair" >25</a>
	, 
	<a href="#kn:Sch" >27</a>
	, 
	<a href="#kn:sc" >32</a>
	]
</span>, Gauss-Newton method <span class="cite">
	[
	<a href="#kn:Basm" >6</a>
	]
</span> and other methods <span class="cite">
	[
	<a href="#kn:Hns" >21</a>
	]
</span>, <span class="cite">
	[
	<a href="#kn:Jin2" >22</a>
	]
</span> have been used to approximate a solution of equation (<a href="#eq:1.1">1.1</a>). </p>
<p>Many problems from computational sciences and other disciplines can be brought in a form similar to equation (<a href="#eq:1.1">1.1</a>) using mathematical modelling <span class="cite">
	[
	<a href="#kn:Arg" >1</a>
	]
</span>, <span class="cite">
	[
	<a href="#kn:Arg3" >5</a>
	]
</span>, <span class="cite">
	[
	<a href="#kn:Bin" >7</a>
	]
</span>, <span class="cite">
	[
	<a href="#kn:Engl" >8</a>
	]
</span>, <span class="cite">
	[
	<a href="#kn:Sch1" >29</a>
	]
</span>, <span class="cite">
	[
	<a href="#kn:SchEngl" >30</a>
	]
</span>. The solutions of these equations can rarely be found in closed form. That is why most solution methods for these equations are iterative. The study of the convergence of iterative procedures is usually of two types: semilocal and local convergence analysis. The semilocal analysis uses information around an initial point to give conditions ensuring the convergence of the iterative procedure, while the local analysis uses information around a solution to find estimates of the radii of convergence balls. </p>
<p>George and collaborators (see <span class="cite">
	[
	<a href="#kn:SG" >13</a>
	]
</span>–<span class="cite">
	[
	<a href="#kn:GN-NLR" >20</a>
	]
</span>) solved equation (<a href="#eq:1.1">1.1</a>) for the special case when \(A\) is a Hammerstein-type operator. A Hammerstein-type operator has the form \(A=MF,\) where \(F:D(F)\subset X\rightarrow Z\) is a nonlinear operator and \(M:Z\rightarrow Y\) is a bounded linear operator, with \(X,Y\) and \(Z\) Hilbert spaces. Hence, (<a href="#eq:1.1">1.1</a>) becomes </p>
<div class="equation" id="eq:1.2">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq:1.2} MF(x) = y.\end{equation}
  </div>
  <span class="equation_label">1.2</span>
</p>
</div>
<p>In particular George and Kunhanandan <span class="cite">
	[
	<a href="#kn:sgku" >16</a>
	]
</span> assumed that a solution \(\hat{x}\in D(F)\) of (<a href="#eq:1.2">1.2</a>) satisfies </p>
<div class="equation" id="eq:1.3">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq:1.3}\| F(\hat{x})-F(x_0)\|  = \min \{ \| F(x)-F(x_0)\| :MF(x)=y, x\in D(F) \} ,\end{equation}
  </div>
  <span class="equation_label">1.3</span>
</p>
</div>
<p>and \(y^\delta \in Y\) are the available noisy data such that </p>
<div class="equation" id="eq:1.4">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq:1.4} \| y-y^\delta \|  \leq \delta . \end{equation}
  </div>
  <span class="equation_label">1.4</span>
</p>
</div>
<p>Then, for fixed \(\alpha {\gt} 0, \delta {\gt}0\), the Newton-Tikhonov (NT) method defined by </p>
<div class="equation" id="eq:1.6">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq:1.6} x_{n+1,\alpha }^\delta = x_{n,\alpha }^\delta - F'(x_{n,\alpha }^\delta )^{-1}(F(x_{n,\alpha }^\delta )-z_\alpha ^\delta ),\hskip14.226377952755904ptx_{0,\alpha }^\delta = x_0, \end{equation}
  </div>
  <span class="equation_label">1.5</span>
</p>
</div>
<p>where \(z_\alpha ^\delta \) is an approximation of the solution of the equation \(M(z)=y^\delta \) (see Section 5) was used to generate a sequence \(\{ x_{n,\alpha }^\delta \} \) converging quadratically to a solution \(x_\alpha ^\delta \) of the equation </p>
<div class="equation" id="eq:01">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq:01} F(x)=z_\alpha ^\delta \end{equation}
  </div>
  <span class="equation_label">1.6</span>
</p>
</div>
<p>provided that certain Kantorovich-type criteria are satisfied. </p>
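<p>For intuition, the (NT) step can be tried on a scalar model problem. The following sketch is purely illustrative: the choices \(F(x)=x^3\), the data value playing the role of \(z_\alpha ^\delta \) and the starting point are hypothetical and not taken from the paper or the cited works.</p>

```python
# Hedged sketch of the Newton-Tikhonov step (1.5) for a scalar model.
# F, z and x0 below are illustrative choices, not from the paper.

def newton_tikhonov(F, dF, z, x0, tol=1e-12, max_iter=50):
    """Iterate x_{n+1} = x_n - F'(x_n)^{-1} (F(x_n) - z)."""
    x = x0
    for _ in range(max_iter):
        step = (F(x) - z) / dF(x)   # in the scalar case F'(x)^{-1} is division
        x -= step
        if abs(step) < tol:
            break
    return x

# Model problem: F(x) = x**3, with z playing the role of z_alpha^delta;
# the equation F(x) = 8 has the solution x = 2.
root = newton_tikhonov(lambda x: x**3, lambda x: 3 * x**2, z=8.0, x0=1.5)
```

<p>The quadratic convergence claimed for (NT) is visible here as a doubling of correct digits per step.</p>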
<p>In the present paper we expand the applicability of (NT) by using more precise majorizing sequences for \(\{ x_{n,\alpha }^\delta \} \) than the ones given in <span class="cite">
	[
	<a href="#kn:sgku1" >17</a>
	]
</span>. This way we provide a semilocal convergence analysis for (NT) with the following advantages over the work in <span class="cite">
	[
	<a href="#kn:sgku" >16</a>
	]
</span> under the same computational cost: </p>
<ol class="enumerate">
  <li><p>Weaker sufficient convergence criteria; </p>
</li>
  <li><p>Tighter error estimates on the distances \(\| x_{n+1,\alpha }^\delta -x_{n,\alpha }^\delta \| \) and </p>
</li>
  <li><p>At least as precise information on the location of the solution. </p>
</li>
</ol>
<p> These advantages are obtained, since we use the more precise center-Lipschitz condition instead of the Lipschitz condition for the computation of the upper bounds on the norms \(\| F'(x_{n,\alpha }^\delta )^{-1}\| \) (see (C3) and (C4) in Section 3). We also study the semilocal convergence of the simplified Newton-Tikhonov method (SNT) defined by </p>
<div class="equation" id="eq:1.7">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq:1.7} x_{n+1,\alpha }^\delta = x_{n,\alpha }^\delta - F'(x_{0,\alpha }^\delta )^{-1}(F(x_{n,\alpha }^\delta )-z_\alpha ^\delta ),\hskip14.226377952755904ptx_{0,\alpha }^\delta = x_0. \end{equation}
  </div>
  <span class="equation_label">1.7</span>
</p>
</div>
<p>The (SNT) method is used as a predictor for the (NT) method, since the former converges under weaker sufficient convergence criteria than the latter. </p>
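<p>A matching sketch of the (SNT) step, which freezes the derivative at \(x_0\); the same illustrative scalar model as above is assumed, and only linear (rather than quadratic) convergence is expected:</p>

```python
# Hedged sketch of the simplified Newton-Tikhonov step (1.7):
# the derivative is evaluated once, at x0 (illustrative scalar model).

def simplified_newton_tikhonov(F, dF, z, x0, tol=1e-12, max_iter=300):
    x, d0 = x0, dF(x0)              # F'(x0)^{-1} is reused at every step
    for n in range(max_iter):
        step = (F(x) - z) / d0
        x -= step
        if abs(step) < tol:
            return x, n + 1
    return x, max_iter

# Same model problem as for (NT): F(x) = x**3, z = 8, solution x = 2.
root, iters = simplified_newton_tikhonov(lambda x: x**3, lambda x: 3 * x**2, 8.0, 1.5)
```

<p>Comparing iteration counts against the (NT) sketch illustrates the predictor role: (SNT) needs many cheap linear steps where (NT) needs a few quadratic ones, but (SNT) tolerates weaker convergence criteria.</p>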
<p>The paper is organized as follows. Section 2 contains results on scalar sequences that are majorizing for (NT). Sections 3 and 4 contain respectively, the semilocal convergence of (NT) and (SNT). The applications are given in the concluding Section 5. </p>
<h1 id="a0000000003">2 Majorizing Sequences</h1>
<p> We present auxiliary results on scalar sequences which shall be shown to be majorizing for \(\{ x_{n,\alpha }^\delta \} \) in Section 3. <div class="definition_thmwrapper " id="a0000000004">
  <div class="definition_thmheading">
    <span class="definition_thmcaption">
    Definition
    </span>
    <span class="definition_thmlabel">2.1</span>
  </div>
  <div class="definition_thmcontent">
<p>Let \(L_0 {\gt} 0, L{\gt}0, b{\gt}0\) and \(r {\gt}0.\) Define scalar sequences \(\{ r_{n,\alpha }^\delta \} , \{ s_{n,\alpha }^\delta \} , \{ t_{n,\alpha }^\delta \} \) by </p>
<div class="displaymath" id="a0000000005">
  \[ r_{0,\alpha }^\delta =0, \quad r_{1,\alpha }^\delta =r, \quad r_{2,\alpha }^\delta =r_{1,\alpha }^\delta +\frac{3bL_0(r_{1,\alpha }^\delta -r_{0,\alpha }^\delta )^2}{2(1-bL_0r_{1,\alpha }^\delta )}, \]
</div>
<div class="equation" id="eq:2.1">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq:2.1} r_{n+2,\alpha }^\delta =r_{n+1,\alpha }^\delta +\frac{3bL_0(r_{n+1,\alpha }^\delta -r_{n,\alpha }^\delta )^2}{2(1-bL_0r_{n+1,\alpha }^\delta )},\quad \forall n=1,2,\ldots , \end{equation}
  </div>
  <span class="equation_label">2.8</span>
</p>
</div>
<div class="equation" id="eq:2.2">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq:2.2} s_{0,\alpha }^\delta =0, \enskip s_{1,\alpha }^\delta =r, \enskip s_{n+2,\alpha }^\delta =s_{n+1,\alpha }^\delta +\frac{3bL(s_{n+1,\alpha }^\delta -s_{n,\alpha }^\delta )^2}{2(1-bL_0r_{n+1,\alpha }^\delta )},\quad \forall n=0,1,2,\ldots , \end{equation}
  </div>
  <span class="equation_label">2.9</span>
</p>
</div>
<p>and </p>
<div class="equation" id="eq:2.3">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq:2.3} t_{0,\alpha }^\delta =0,\enskip t_{1,\alpha }^\delta =r, \enskip t_{n+2,\alpha }^\delta =t_{n+1,\alpha }^\delta +\frac{3bL(t_{n+1,\alpha }^\delta -t_{n,\alpha }^\delta )^2}{2(1-bLt_{n+1,\alpha }^\delta )},\quad \forall n=0,1,2,\ldots . \end{equation}
  </div>
  <span class="equation_label">2.10</span>
</p>
</div>

  </div>
</div> Then, using a simple inductive argument we obtain the following result where we compare the three scalar sequences. <div class="proposition_thmwrapper " id="a0000000006">
  <div class="proposition_thmheading">
    <span class="proposition_thmcaption">
    Proposition
    </span>
    <span class="proposition_thmlabel">2.2</span>
  </div>
  <div class="proposition_thmcontent">
  <p><span class="rm"><span class="cite">
	[
	<a href="#kn:Arg" >1</a>
	]
</span>, <span class="cite">
	[
	<a href="#kn:Arg1" >3</a>
	]
</span>–<span class="cite">
	[
	<a href="#kn:Arg3" >5</a>
	]
</span></span> Suppose that \(L_0 \leq L\) and </p>
<div class="equation" id="eq:2.4">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq:2.4} t_{n+1,\alpha }^\delta < \tfrac {1}{bL}, \quad \forall n=0,1,2,\ldots . \end{equation}
  </div>
  <span class="equation_label">2.11</span>
</p>
</div>
<p>Then, the sequences \(\{ r_{n,\alpha }^\delta \} , \{ s_{n,\alpha }^\delta \} \) and \( \{ t_{n,\alpha }^\delta \} \) are well defined, increasing and converge to their unique least upper bounds \(r^\ast , s^\ast , t^\ast ,\) which, for \(\gamma =\frac{6L}{3L+\sqrt{9L^2+24L_0L}},\) satisfy </p>
<div class="equation" id="eq:2.5">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq:2.5}r\leq r^\ast \leq s^\ast \leq t^\ast \leq \tfrac {1}{bL}\end{equation}
  </div>
  <span class="equation_label">2.12</span>
</p>
</div>
<p>\(r^\ast \leq \bar{r}^\ast =r+\frac{bL_0r^2}{2(1-\gamma )(1-bL_0r)}, s^\ast \leq \bar{s}^\ast =\tfrac {r}{1-\gamma }\) and \(t^\ast \leq \bar{t}^\ast \leq 2r.\) Moreover, the following estimates hold for each \(n=1,2,\ldots \) </p>
<div class="equation" id="eq:2.6">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq:2.6} r_{n,\alpha }^\delta \leq s_{n,\alpha }^\delta \leq t_{n,\alpha }^\delta ,\end{equation}
  </div>
  <span class="equation_label">2.13</span>
</p>
</div>
<p>and </p>
<div class="equation" id="eq:2.7">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq:2.7} r_{n+1,\alpha }^\delta -r_{n,\alpha }^\delta \leq s_{n+1,\alpha }^\delta -s_{n,\alpha }^\delta \leq t_{n+1,\alpha }^\delta -t_{n,\alpha }^\delta .\end{equation}
  </div>
  <span class="equation_label">2.14</span>
</p>
</div>
<p>Furthermore, strict inequality holds in <span class="rm">(<a href="#eq:2.6">2.13</a>)</span> and <span class="rm">(<a href="#eq:2.7">2.14</a>)</span> if \(L_0 {\lt} L.\) </p>
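<p>The recurrences (2.8)–(2.10) and the comparison in Proposition 2.2 are easy to check numerically. In the sketch below the values of \(b, L_0, L, r\) are illustrative only, chosen so that \(L_0{\lt}L\) and \(h=4Lbr{\lt}1\):</p>

```python
# Hedged numerical check of Definition 2.1 and Proposition 2.2.
# b, L0, L, r are illustrative values, not taken from the paper.
b, L0, L, r = 1.0, 0.5, 1.0, 0.2
N = 25

rs = [0.0, r]                      # sequence (2.8): constant L0 throughout
for n in range(N):
    rs.append(rs[-1] + 3*b*L0*(rs[-1] - rs[-2])**2 / (2*(1 - b*L0*rs[-1])))

ss = [0.0, r]                      # sequence (2.9): L in the numerator,
for n in range(N):                 # the r-sequence in the denominator
    ss.append(ss[-1] + 3*b*L*(ss[-1] - ss[-2])**2 / (2*(1 - b*L0*rs[n+1])))

ts = [0.0, r]                      # sequence (2.10): constant L throughout
for n in range(N):
    ts.append(ts[-1] + 3*b*L*(ts[-1] - ts[-2])**2 / (2*(1 - b*L*ts[-1])))

# ordering (2.13) and the bound t* <= 2r from Proposition 2.2
assert all(rn <= sn <= tn for rn, sn, tn in zip(rs, ss, ts))
assert ts[-1] <= 2 * r
```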

  </div>
</div><div class="remark_thmwrapper " id="a0000000007">
  <div class="remark_thmheading">
    <span class="remark_thmcaption">
    Remark
    </span>
    <span class="remark_thmlabel">2.3</span>
  </div>
  <div class="remark_thmcontent">
<p>It follows from (<a href="#eq:2.5">2.12</a>)–(<a href="#eq:2.7">2.14</a>) that if \(L_0 {\lt}L\), the sequence \(\{ t_{n,\alpha }^\delta \} \) is the least tight. This sequence is majorizing for \(\{ x_{n,\alpha }^\delta \} \) (cf. <span class="cite">
	[
	<a href="#kn:sgku1" >17</a>
	, 
	Theorem 3.3
	]
</span>). </p>
<p>Next we present results respectively for \(\{ t_{n,\alpha }^\delta \} ,\{ s_{n,\alpha }^\delta \} ,\{ r_{n,\alpha }^\delta \} .\) </p>

  </div>
</div><div class="lemma_thmwrapper " id="L2.4">
  <div class="lemma_thmheading">
    <span class="lemma_thmcaption">
    Lemma
    </span>
    <span class="lemma_thmlabel">2.4</span>
  </div>
  <div class="lemma_thmcontent">
  <p><span class="rm"><span class="cite">
	[
	<a href="#kn:Arg" >1</a>
	, 
	<a href="#kn:Arg2" >4</a>
	]
</span></span>  Suppose that </p>
<div class="equation" id="eq:2.8">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq:2.8} h=4Lbr\leq 1.\end{equation}
  </div>
  <span class="equation_label">2.15</span>
</p>
</div>
<p>Then, the sequence \(\{ t_{n,\alpha }^\delta \} \) is increasing and converges to \(t^\ast .\) The convergence is linear if \(h=1\) and quadratic if \(h{\lt} 1.\) </p>

  </div>
</div><div class="lemma_thmwrapper " id="L2.5">
  <div class="lemma_thmheading">
    <span class="lemma_thmcaption">
    Lemma
    </span>
    <span class="lemma_thmlabel">2.5</span>
  </div>
  <div class="lemma_thmcontent">
  <p><span class="rm"><span class="cite">
	[
	<a href="#kn:Arg1" >3</a>
	, 
	Lemma 2.1
	]
</span></span>  Suppose that </p>
<div class="equation" id="eq:2.9">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq:2.9} h_1=\tfrac {b}{4}\Big(3L+4L_0+\sqrt{9L^2+24L_0L}\Big)r\leq 1.\end{equation}
  </div>
  <span class="equation_label">2.16</span>
</p>
</div>
<p>Then, \(\{ s_{n,\alpha }^\delta \} \) is increasing and converges to \(s^\ast .\) The convergence is linear if \(h_1=1\) and quadratic if \(h_1 {\lt} 1.\) </p>

  </div>
</div><div class="lemma_thmwrapper " id="L2.6">
  <div class="lemma_thmheading">
    <span class="lemma_thmcaption">
    Lemma
    </span>
    <span class="lemma_thmlabel">2.6</span>
  </div>
  <div class="lemma_thmcontent">
  <p><span class="rm"><span class="cite">
	[
	<a href="#kn:Arg2" >4</a>
	]
</span></span>  Suppose that </p>
<div class="equation" id="eq:2.10">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq:2.10} h_2=\tfrac {b}{4}\Big(4L_0+\sqrt{3L_0L+8L_0^2}+\sqrt{3L_0L}\Big)r\leq 1.\end{equation}
  </div>
  <span class="equation_label">2.17</span>
</p>
</div>
<p>Then, \(\{ r_{n,\alpha }^\delta \} \) is increasing and converges to \(r^\ast .\) The convergence is linear if \(h_2=1\) and quadratic if \(h_2 {\lt} 1.\) </p>

  </div>
</div>We also have the following generalization of Lemma <a href="#L2.6">2.6</a>. <div class="lemma_thmwrapper " id="L2.7">
  <div class="lemma_thmheading">
    <span class="lemma_thmcaption">
    Lemma
    </span>
    <span class="lemma_thmlabel">2.7</span>
  </div>
  <div class="lemma_thmcontent">
  <p><span class="rm"><span class="cite">
	[
	<a href="#kn:Arg2" >4</a>
	]
</span></span> Suppose that there exists a minimum integer \(N {\gt}1\) such that \(r_{i,\alpha }^\delta \  (i=0,1,\ldots , N-1)\) given by <span class="rm">(<a href="#eq:2.1">2.8</a>)</span> are well defined, </p>
<div class="equation" id="eq:2.11">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq:2.11} r_{i,\alpha }^\delta < r_{i+1,\alpha }^\delta < \tfrac {1}{bL_0}, \hskip14.226377952755904pti=0,1,\ldots , N-2 .\end{equation}
  </div>
  <span class="equation_label">2.18</span>
</p>
</div>
<p>Then, the following assertions hold </p>
<div class="equation" id="eq:2.12">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq:2.12}bL_0r_{N,\alpha }^\delta < 1,\end{equation}
  </div>
  <span class="equation_label">2.19</span>
</p>
</div>
<div class="equation" id="eq:2.13">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq:2.13} r_{N,\alpha }^\delta \leq \tfrac {1}{bL_0}\big(1-(1-bL_0r_{N-1,\alpha }^\delta )\gamma \big)\end{equation}
  </div>
  <span class="equation_label">2.20</span>
</p>
</div>
<p>and </p>
<div class="equation" id="eq:2.14">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq:2.14} \gamma _{N-1}=\frac{3bL(r_{N+1,\alpha }^\delta -r_{N,\alpha }^\delta )}{2(1-bL_0r_{N+1,\alpha }^\delta )}\leq \gamma \leq 1-\frac{bL_0(r_{N+1,\alpha }^\delta -r_{N,\alpha }^\delta )}{1-bL_0r_{N,\alpha }^\delta }.\end{equation}
  </div>
  <span class="equation_label">2.21</span>
</p>
</div>

  </div>
</div><div class="remark_thmwrapper " id="a0000000008">
  <div class="remark_thmheading">
    <span class="remark_thmcaption">
    Remark
    </span>
    <span class="remark_thmlabel">2.8</span>
  </div>
  <div class="remark_thmcontent">
  <p>(a) Lemma <a href="#L2.7">2.7</a> reduces to Lemma <a href="#L2.6">2.6</a> if \(N=2\) (see also Remark 2.5 in <span class="cite">
	[
	<a href="#kn:Arg2" >4</a>
	]
</span>). </p>
<p>(b) It follows from (<a href="#eq:2.8">2.15</a>), (<a href="#eq:2.9">2.16</a>), (<a href="#eq:2.10">2.17</a>) that </p>
<div class="equation" id="eq:2.16">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq:2.16}h\leq 1\Longrightarrow h_1\leq 1\Longrightarrow h_2\leq 1 \end{equation}
  </div>
  <span class="equation_label">2.22</span>
</p>
</div>
<p>but not necessarily vice versa, unless \(L_0=L.\) Moreover, we have that </p>
<div class="equation" id="eq:2.17">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq:2.17} \tfrac {h_1}{h}\rightarrow \tfrac {3}{8}, \quad \tfrac {h_2}{h}\rightarrow 0 \quad \text{and}\quad \tfrac {h_2}{h_1}\rightarrow 0\quad \text{as} \quad \tfrac {L_0}{L}\rightarrow 0. \end{equation}
  </div>
  <span class="equation_label">2.23</span>
</p>
</div>
<p>Estimates (<a href="#eq:2.17">2.23</a>) show by how many times the applicability of the method is expanded when the weaker criteria \(h_1\leq 1\) or \(h_2\leq 1\) are used instead of \(h\leq 1.\) </p>
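<p>The ordering (2.22) of the three criteria can also be observed numerically; the values of \(b, L, r\) and the ratios \(L_0/L\) below are illustrative only:</p>

```python
import math

# Hedged check of (2.22): h2 <= h1 <= h, so h <= 1 is the most restrictive
# criterion. The numerical values are illustrative, not from the paper.
b, L, r = 1.0, 1.0, 0.2

def criteria(L0):
    h  = 4 * L * b * r                                                         # (2.15)
    h1 = b / 4 * (3*L + 4*L0 + math.sqrt(9*L**2 + 24*L0*L)) * r                # (2.16)
    h2 = b / 4 * (4*L0 + math.sqrt(3*L0*L + 8*L0**2) + math.sqrt(3*L0*L)) * r  # (2.17)
    return h, h1, h2

for q in (1.0, 0.5, 0.1, 0.01):
    h, h1, h2 = criteria(q * L)
    assert h2 <= h1 <= h   # the weaker criteria are easier to satisfy
```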
<p>(c) Numerical examples where \(L_0 {\lt} L\) can be found in <span class="cite">
	[
	<a href="#kn:Arg" >1</a>
	]
</span>–<span class="cite">
	[
	<a href="#kn:Arg3" >5</a>
	]
</span>. </p>

  </div>
</div></p>
<h1 id="a0000000009">3 Semilocal Convergence of (NT)</h1>
<p> We present the semi-local convergence of \(\{ x_{n,\alpha }^\delta \} \) in this section. Let \(U(x,r)\) and \(\overline{U(x,r)}\) stand, respectively, for the open and closed ball in \(X\) with center \(x\) and radius \(r{\gt}0.\) Let \(L(X,Y)\) stand for the space of bounded linear operators from \(X\) into \(Y.\) We assume throughout this section that the following \((C)\) conditions hold: </p>
<ol class="enumerate">
  <li><p>\(F:D(F)\subseteq X\rightarrow Y\) is Fréchet differentiable </p>
</li>
  <li><p>There exists \(x_0\in D(F)\) such that \(F'(x_0)^{-1}\in L(Y,X)\) and </p>
<div class="displaymath" id="a0000000010">
  \[ \| F'(x_0)^{-1}\| \leq b \]
</div>
</li>
  <li><p>There exists \(L_0 {\gt}0\) such that \(F'\) satisfies the center-Lipschitz condition</p>
<div class="displaymath" id="a0000000011">
  \[ \| F'(x)-F'(x_0)\| \leq L_0\| x-x_0\|  \]
</div>
<p> for all \(x\in D(F).\) </p>
</li>
  <li><p>There exists \(L{\gt} 0\) such that \(F'\) satisfies the Lipschitz condition </p>
<div class="displaymath" id="a0000000012">
  \[ \| F'(x)-F'(y)\| \leq L\| x-y\|  \]
</div>
<p> for all \(x\) and \(y\) in \(D(F).\) </p>
</li>
  <li><p>\(\| F'(x_0)^{-1}(F(x_0)-z_\alpha ^\delta )\| \leq r\) </p>
</li>
  <li><p>\(h_2=bL_2r \leq 1,\) where \(L_2=\tfrac {1}{4}(4L_0+\sqrt{3L_0L+8L_0^2}+\sqrt{3L_0L})\) and </p>
</li>
  <li><p>\(\overline{U(x_0,r^\ast )}\subseteq D(F),\) where \(r^\ast \) is defined in (<a href="#eq:2.5">2.12</a>). </p>
</li>
</ol>
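<p>To see how the \((C)\) conditions can be verified in practice, the constants can be estimated for a scalar model. The choices below (\(F(x)=e^x\), \(x_0=0\), domain \([-R,R]\), data value \(z\)) are hypothetical and serve only to make the conditions concrete:</p>

```python
import math

# Hedged sketch: estimating the constants in (C1)-(C6) for the scalar model
# F(x) = exp(x) with x0 = 0 on D = [-R, R]; all choices are illustrative.
R, x0, z = 0.5, 0.0, 1.1                      # z plays the role of z_alpha^delta

F  = math.exp
dF = math.exp
b  = 1.0 / dF(x0)                             # (C2): |F'(x0)^{-1}| <= b
L  = dF(R)                                    # (C4): sup of |F''| on [-R, R]
L0 = (dF(R) - dF(x0)) / R                     # (C3): sup of |F'(x)-F'(x0)|/|x-x0|
r  = abs((F(x0) - z) / dF(x0))                # (C5)

L2 = (4*L0 + math.sqrt(3*L0*L + 8*L0**2) + math.sqrt(3*L0*L)) / 4
h2 = b * L2 * r                               # (C6): require h2 <= 1
assert L0 < L and h2 <= 1
```

<p>Note that the center-Lipschitz constant \(L_0\) is genuinely smaller than \(L\) for this model, which is the situation in which the present analysis improves on the earlier one.</p>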
<p>We present the following semi-local convergence result for \(\{ x_{n,\alpha }^\delta \} .\) </p>
<p><div class="theorem_thmwrapper " id="Th3.1">
  <div class="theorem_thmheading">
    <span class="theorem_thmcaption">
    Theorem
    </span>
    <span class="theorem_thmlabel">3.1</span>
  </div>
  <div class="theorem_thmcontent">
  <p>Suppose that the conditions <span class="rm">(C1)–(C7)</span> hold. Then, the sequence \(\{ x_{n,\alpha }^\delta \} \) generated by <span class="rm">(NT)</span> is well defined, remains in \(\overline{U(x_0,r^\ast )}\) for all \(n \geq 0\) and converges to some \(x_\alpha ^\delta \in \overline{U(x_0,r^\ast )}\) such that \(F(x_\alpha ^\delta )=z_\alpha ^\delta .\) Moreover, the following estimates hold for each \(n=0,1,2,\ldots \) </p>
<div class="equation" id="eq:3.1">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq:3.1} \| x_{n+1,\alpha }^\delta -x_{n,\alpha }^\delta \| \leq r_{n+1,\alpha }^\delta -r_{n,\alpha }^\delta \end{equation}
  </div>
  <span class="equation_label">3.24</span>
</p>
</div>
<p>and </p>
<div class="equation" id="eq:3.2">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq:3.2} \| x_{n+1,\alpha }^\delta -x_{\alpha }^\delta \| \leq r^\ast -r_{n,\alpha }^\delta \end{equation}
  </div>
  <span class="equation_label">3.25</span>
</p>
</div>

  </div>
</div></p>
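<p>On a scalar model, the majorization (3.24) can be observed directly. The sketch below (with the hypothetical choices \(F(x)=e^x\), \(x_0=0\), \(z=1.1\) and constants estimated on \([-0.5,0.5]\)) is an illustration under those assumptions, not a proof:</p>

```python
import math

# Hedged check of the majorization (3.24) on the scalar model F(x) = exp(x),
# x0 = 0, z = 1.1 (illustrative choices; constants estimated on [-0.5, 0.5]).
b, R, z = 1.0, 0.5, 1.1
L0 = (math.exp(R) - 1.0) / R                       # center-Lipschitz constant
r  = abs(math.exp(0.0) - z)

xs = [0.0]                                         # (NT) iterates, step (1.5)
rs = [0.0, r]                                      # majorizing sequence (2.8)
for n in range(6):
    x = xs[-1]
    xs.append(x - (math.exp(x) - z) / math.exp(x))
    rs.append(rs[-1] + 3*b*L0*(rs[-1] - rs[-2])**2 / (2*(1 - b*L0*rs[-1])))

# each Newton step is dominated by the corresponding majorizing step
steps_x = [abs(x1 - x0) for x0, x1 in zip(xs, xs[1:])]
steps_r = [r1 - r0 for r0, r1 in zip(rs, rs[1:])]
assert all(sx <= sr + 1e-15 for sx, sr in zip(steps_x, steps_r))
```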
<p><div class="proof_wrapper" id="a0000000013">
  <div class="proof_heading">
    <span class="proof_caption">
    Proof
    </span>
    <span class="expand-proof">▼</span>
  </div>
  <div class="proof_content">
  
  </div>
</div>We use mathematical induction to prove that </p>
<div class="equation" id="eq:3.3">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq:3.3} \| x_{k+1,\alpha }^\delta -x_{k,\alpha }^\delta \| \leq r_{k+1,\alpha }^\delta -r_{k,\alpha }^\delta \end{equation}
  </div>
  <span class="equation_label">3.26</span>
</p>
</div>
<p>and </p>
<div class="equation" id="eq:3.4">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq:3.4}\overline{U(x_{k+1,\alpha }^\delta , r^\ast -r_{k+1,\alpha }^\delta )}\subseteq \overline{U(x_{k,\alpha }^\delta , r^\ast -r_{k,\alpha }^\delta )} \end{equation}
  </div>
  <span class="equation_label">3.27</span>
</p>
</div>
<p>for each \(k=0,1,2,\ldots .\) Let \(v\in \overline{U(x_{1,\alpha }^\delta , r^\ast -r_{1,\alpha }^\delta )}.\) Then, we obtain </p>
<div class="displaymath" id="a0000000014">
  \begin{eqnarray*} \| v-x_{0,\alpha }^\delta \| & \leq &  \| v-x_{1,\alpha }^\delta \| +\| x_{1,\alpha }^\delta -x_{0,\alpha }^\delta \| \\ & \leq & r^\ast -r_{1,\alpha }^\delta +r_{1,\alpha }^\delta -r_{0,\alpha }^\delta \\ & =& r^\ast -r_{0,\alpha }^\delta \end{eqnarray*}
</div>
<p>which implies \(v\in \overline{U(x_{0,\alpha }^\delta , r^\ast -r_{0,\alpha }^\delta )}.\) Note also that </p>
<div class="displaymath" id="a0000000015">
  \begin{eqnarray*} \| x_{1,\alpha }^\delta -x_{0,\alpha }^\delta \| & =&  \| F'(x_{0,\alpha }^\delta )^{-1}(F(x_{0,\alpha }^\delta )-z_\alpha ^\delta )\| \\ & \leq & r=r_{1,\alpha }^\delta -r_{0,\alpha }^\delta . \end{eqnarray*}
</div>
<p>Hence, estimates (<a href="#eq:3.3">3.26</a>) and (<a href="#eq:3.4">3.27</a>) hold for \(k =0.\) Suppose that these estimates hold for \(n\leq k.\) Then, we have that </p>
<div class="displaymath" id="a0000000016">
  \begin{eqnarray*} \| x_{k+1,\alpha }^\delta -x_{0,\alpha }^\delta \| & \leq & \sum _{i=1}^{k+1}\| x_{i,\alpha }^\delta -x_{i-1,\alpha }^\delta \| \\ & \leq & \sum _{i=1}^{k+1}(r_{i,\alpha }^\delta -r_{i-1,\alpha }^\delta )\\ & =& r_{k+1,\alpha }^\delta \leq r^\ast \end{eqnarray*}
</div>
<p>and </p>
<div class="displaymath" id="a0000000017">
  \[ \| x_{k,\alpha }^\delta +\theta (x_{k+1,\alpha }^\delta -x_{k,\alpha }^\delta )-x_{0,\alpha }^\delta \| \leq r_{k,\alpha }^\delta +\theta (r_{k+1,\alpha }^\delta -r_{k,\alpha }^\delta )-r_{0,\alpha }^\delta \leq r^\ast  \]
</div>
<p> for each \(\theta \in [0,1].\) </p>
<p>Using (C3), Lemma <a href="#L2.5">2.5</a> (see also Lemma 2.1 in <span class="cite">
	[
	<a href="#kn:Arg1" >3</a>
	]
</span>) and the induction hypotheses we get </p>
<div class="displaymath" id="a0000000018">
  \begin{eqnarray} \nonumber \| F'(x_{k+1,\alpha }^\delta )-F'(x_{0,\alpha }^\delta )\| & \leq &  L_0\| x_{k+1,\alpha }^\delta -x_{0,\alpha }^\delta \| \\ \nonumber & \leq & L_0(r_{k+1,\alpha }^\delta -r_{0,\alpha }^\delta )\\ \label{eq:3.5} & \leq & L_0r_{k+1,\alpha }^\delta {\lt} \tfrac {1}{b}. \end{eqnarray}
</div>
<p> It follows from (<a href="#eq:3.5">3.28</a>) and the Banach Lemma on invertible operators <span class="cite">
	[
	<a href="#kn:Arg" >1</a>
	]
</span>, <span class="cite">
	[
	<a href="#kn:Arg3" >5</a>
	]
</span> that \(F'(x_{k+1,\alpha }^\delta )^{-1} \in L(Y,X)\) and </p>
<div class="displaymath" id="a0000000019">
  \begin{eqnarray} \nonumber \| F'(x_{k+1,\alpha }^\delta )^{-1}\| & \leq & \frac{b}{1-bL_0\| x_{k+1,\alpha }^\delta -x_{0,\alpha }^\delta \| }\\ \label{eq:3.6} & \leq & \frac{b}{1-bL_0(r_{k+1,\alpha }^\delta -r_{0,\alpha }^\delta )}. \end{eqnarray}
</div>
<p> Using (NT) we obtain the approximation </p>
<div class="displaymath" id="a0000000020">
  \begin{eqnarray} \nonumber F(x_{k+1,\alpha }^\delta )-z_\alpha ^\delta & =& F(x_{k+1,\alpha }^\delta )-F(x_{k,\alpha }^\delta )-F'(x_{k+1,\alpha }^\delta )(x_{k+1,\alpha }^\delta -x_{k,\alpha }^\delta )\\ \label{eq:3.7} & & +[F'(x_{k+1,\alpha }^\delta )-F'(x_{k,\alpha }^\delta )](x_{k+1,\alpha }^\delta -x_{k,\alpha }^\delta ). \end{eqnarray}
</div>
<p> In view of (C4), (<a href="#eq:2.1">2.8</a>), (<a href="#eq:3.7">3.30</a>), (C3) for \(k=0\) and the induction hypotheses we get </p>
<div class="displaymath" id="a0000000021">
  \begin{eqnarray} \nonumber \| F(x_{k+1,\alpha }^\delta )-z_\alpha ^\delta \| & \leq &  \tfrac {L}{2}\| x_{k+1,\alpha }^\delta -x_{k,\alpha }^\delta \| ^2 +L\| x_{k+1,\alpha }^\delta -x_{k,\alpha }^\delta \| ^2\\ \nonumber & =&  \tfrac {3L}{2}\| x_{k+1,\alpha }^\delta -x_{k,\alpha }^\delta \| ^2\\ \label{eq:3.8} & \leq & \tfrac {3L}{2}(r_{k+1,\alpha }^\delta -r_{k,\alpha }^\delta )^2. \end{eqnarray}
</div>
<p> Moreover, by (NT), (<a href="#eq:2.1">2.8</a>), (<a href="#eq:3.6">3.29</a>) and (<a href="#eq:3.8">3.31</a>) we get that </p>
<div class="displaymath" id="a0000000022">
  \begin{eqnarray*} \| x_{k+2,\alpha }^\delta -x_{k+1,\alpha }^\delta \| & \leq &  \| F'(x_{k+1,\alpha }^\delta )^{-1}\| \| F(x_{k+1,\alpha }^\delta )-z_\alpha ^\delta \| \\ & \leq & \tfrac {b}{1-bL_0(r_{k+1,\alpha }^\delta -r_{0,\alpha }^\delta )}\tfrac {3L}{2}(r_{k+1,\alpha }^\delta -r_{k,\alpha }^\delta )^2\\ & =& r_{k+2,\alpha }^\delta -r_{k+1,\alpha }^\delta \end{eqnarray*}
</div>
<p>which completes the induction for (<a href="#eq:3.3">3.26</a>). Furthermore, let \(w\in \overline{U(x_{k+2,\alpha }^\delta , r^\ast -r_{k+2,\alpha }^\delta )}.\) Then, we have that </p>
<div class="displaymath" id="a0000000023">
  \begin{eqnarray*} \| w-x_{k+1,\alpha }^\delta \| & \leq & \| w-x_{k+2,\alpha }^\delta \| +\| x_{k+2,\alpha }^\delta -x_{k+1,\alpha }^\delta \| \\ & \leq & r^\ast -r_{k+2,\alpha }^\delta +r_{k+2,\alpha }^\delta -r_{k+1,\alpha }^\delta \\ & =& r^\ast -r_{k+1,\alpha }^\delta . \end{eqnarray*}
</div>
<p>That is, \(w\in \overline{U(x_{k+1,\alpha }^\delta , r^\ast -r_{k+1,\alpha }^\delta )}.\) Lemma <a href="#L2.6">2.6</a> implies that \(\{ r_{k,\alpha }^\delta \} \) is a Cauchy sequence. It then follows from (<a href="#eq:3.3">3.26</a>) and (<a href="#eq:3.4">3.27</a>) that \(\{ x_{k,\alpha }^\delta \} \) is also a Cauchy sequence in the Hilbert space \(X\) and as such it converges to some \(x_\alpha ^\delta \in \overline{U(x_0, r^\ast )}\) (since \(\overline{U(x_0, r^\ast )}\) is a closed set). By letting \(k\rightarrow \infty \) in (<a href="#eq:3.8">3.31</a>) we obtain \(F(x_\alpha ^\delta )=z_\alpha ^\delta .\) Estimate (<a href="#eq:3.2">3.25</a>) is obtained from (<a href="#eq:3.1">3.24</a>) by using standard majorization techniques <span class="cite">
	[
	<a href="#kn:Arg" >1</a>
	]
</span>, <span class="cite">
	[
	<a href="#kn:Arg3" >5</a>
	]
</span>, <span class="cite">
	[
	<a href="#kn:sgatef" >15</a>
	]
</span>. The proof of the Theorem is complete. <div class="remark_thmwrapper " id="a0000000024">
  <div class="remark_thmheading">
    <span class="remark_thmcaption">
    Remark
    </span>
    <span class="remark_thmlabel">3.2</span>
  </div>
  <div class="remark_thmcontent">
  <p>(a) If \(L_0=L\), then \(r_{n,\alpha }^\delta =t_{n,\alpha }^\delta \) and Theorem <a href="#Th3.1">3.1</a> reduces to Theorem 3.3 in <span class="cite">
	[
	<a href="#kn:sgku1" >17</a>
	]
</span> (with corresponding changes). Otherwise, i.e., if \(L_0 {\lt} L,\) according to Section 2 it constitutes an improvement with advantages as stated in the introduction of this study. </p>
<p>(b) Upper bounds on \(r^\ast \) and \(s^\ast \) in terms of \(L_0, L, b\) and \(n\) have been given in <span class="cite">
	[
	<a href="#kn:Arg" >1</a>
	]
</span>, <span class="cite">
	[
	<a href="#kn:Arg1" >3</a>
	]
</span>–<span class="cite">
	[
	<a href="#kn:Arg3" >5</a>
	]
</span> (see also (<a href="#eq:2.5">2.12</a>)). These bounds are given in closed form and can certainly replace \(r^\ast \) in (C7). </p>
<p>(c) Note that \(L_0 \leq L\) holds in general and \(\tfrac {L}{L_0}\) can be arbitrarily large <span class="cite">
	[
	<a href="#kn:Arg" >1</a>
	]
</span>, <span class="cite">
	[
	<a href="#kn:Arg1" >3</a>
	]
</span>–<span class="cite">
	[
	<a href="#kn:Arg3" >5</a>
	]
</span>. </p>
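<p>Point (c) can be illustrated with a one-line computation. The model \(F(x)=e^{qx}\) on \([-1,1]\) with \(x_0=0\) is a hypothetical example (not taken from the cited references), for which \(L=q^2e^{q}\) and \(L_0=q(e^{q}-1)\), so \(L/L_0\approx q\) for large \(q\):</p>

```python
import math

# Hedged illustration of Remark 3.2(c): for F(x) = exp(q*x) on [-1, 1] with
# x0 = 0, the ratio L/L0 grows without bound as q grows (illustrative model).
def ratio(q):
    L  = q**2 * math.exp(q)          # Lipschitz constant of F' on [-1, 1]
    L0 = q * (math.exp(q) - 1.0)     # center-Lipschitz constant at x0 = 0
    return L / L0                    # equals q*exp(q)/(exp(q) - 1), roughly q

assert ratio(2.0) < ratio(5.0) < ratio(10.0)
```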

  </div>
</div>The rest of the results in <span class="cite">
	[
	<a href="#kn:sgku" >16</a>
	]
</span> can be improved along the same lines by simply using, respectively, \(\bar{r}^\ast , L_0\) instead of \(\bar{t}^\ast , L.\) In order to make the paper as self-contained as possible, we present the proof of one of them; for the proofs of the rest we refer the reader to <span class="cite">
	[
	<a href="#kn:sgku" >16</a>
	]
</span>. <div class="proposition_thmwrapper " id="p3.3">
  <div class="proposition_thmheading">
    <span class="proposition_thmcaption">
    Proposition
    </span>
    <span class="proposition_thmlabel">3.3</span>
  </div>
  <div class="proposition_thmcontent">
  <p>Suppose that <span class="rm">(<a href="#eq:2.10">2.17</a>), (C3)</span> and <span class="rm">(C4)</span> hold. Moreover, suppose that </p>
<div class="equation" id="eq:3.10">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq:3.10} \| x_0-x^\ast \| \leq \bar{r}^\ast < r< \tfrac {1}{bL_0}=\lambda _0 \end{equation}
  </div>
  <span class="equation_label">3.32</span>
</p>
</div>
<p>and </p>
<div class="equation" id="3.11">
<p>
  <div class="equation_content">
    \begin{equation} \label{3.11} \overline{U(x_0,\lambda _0)} \subseteq D(F). \end{equation}
  </div>
  <span class="equation_label">3.33</span>
</p>
</div>
<p>Then, the following assertion holds: </p>
<div class="equation" id="eq:3.12">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq:3.12} \| x^\ast -x_\alpha ^\delta \| \leq \tfrac {b}{1-bL_0r}\| F(x^\ast )-z_\alpha ^\delta \| . \end{equation}
  </div>
  <span class="equation_label">3.34</span>
</p>
</div>

  </div>
</div><div class="proof_wrapper" id="a0000000025">
  <div class="proof_heading">
    <span class="proof_caption">
    Proof
    </span>
    <span class="expand-proof">â–¼</span>
  </div>
  <div class="proof_content">
  
  </div>
</div>Using (C3) (instead of (C4), which was used in <span class="cite">
	[
	<a href="#kn:sgku" >16</a>
	]
</span>), we get that </p>
<div class="displaymath" id="a0000000026">
  \begin{eqnarray*} \| x^\ast -x_\alpha ^\delta \|  & =&  \| x^\ast - x_\alpha ^\delta + F'(x_0)^{-1}[ F(x_\alpha ^\delta )-F(x^\ast )+F(x^\ast )-z_\alpha ^\delta ]\| \\ &  \leq &  \| F'(x_0)^{-1}[F'(x_0)(x^\ast -x_\alpha ^\delta )-(F(x^\ast )-F(x_\alpha ^\delta ))]\| \\ & & + \| F'(x_0)^{-1}[F(x^\ast )-z_\alpha ^\delta ]\| \\ & \leq &  b L_0r\| x^\ast - x_\alpha ^\delta \|  + b\| F(x^\ast ) - z_\alpha ^\delta \| . \end{eqnarray*}
</div>
<p>which, since \(bL_0r {\lt} 1,\) shows (<a href="#eq:3.12">3.34</a>). The proof of the proposition is complete. </p>
<p>The following is a consequence of Theorem <a href="#Th3.1">3.1</a> and Proposition <a href="#p3.3">3.3</a>. <div class="corollary_thmwrapper " id="c3.4">
  <div class="corollary_thmheading">
    <span class="corollary_thmcaption">
    Corollary
    </span>
    <span class="corollary_thmlabel">3.4</span>
  </div>
  <div class="corollary_thmcontent">
  <p> Suppose that the hypotheses of Theorem <span class="rm"><a href="#Th3.1">3.1</a></span> hold, and that \(\bar{r}^\ast {\lt} r\) and \(bL_0r {\lt}1.\) Then, the following assertion holds </p>
<div class="displaymath" id="a0000000027">
  \[ \| x^\ast -x_{n,\alpha }^\delta \| \leq \tfrac {b}{1-bL_0r}\| F(x^\ast )-z_\alpha ^\delta \| +r^\ast -r_{n,\alpha }^\delta  \]
</div>
<p> for each \(n=0, 1,2, \ldots .\) </p>

  </div>
</div><div class="remark_thmwrapper " id="a0000000028">
  <div class="remark_thmheading">
    <span class="remark_thmcaption">
    Remark
    </span>
    <span class="remark_thmlabel">3.5</span>
  </div>
  <div class="remark_thmcontent">
  <p>If \(L_0=L,\) Proposition <a href="#p3.3">3.3</a> and Corollary <a href="#c3.4">3.4</a> reduce to the corresponding results in <span class="cite">
	[
	<a href="#kn:sgku" >16</a>
	]
</span>. Otherwise, i.e., if \(L_0 {\lt} L,\) our results constitute an improvement. </p>

  </div>
</div></p>
<h1 id="a0000000029">4 Semilocal convergence of (SNT)</h1>
<p> We present the semilocal convergence of (SNT). <div class="theorem_thmwrapper " id="Th4.1">
  <div class="theorem_thmheading">
    <span class="theorem_thmcaption">
    Theorem
    </span>
    <span class="theorem_thmlabel">4.1</span>
  </div>
  <div class="theorem_thmcontent">
  <p> Suppose that <span class="rm">(C1)–(C3), (C5)</span> hold. If, in addition, </p>
<ol class="enumerate">
  <li><p>\(h_0=\tau bL_0r {\lt}1\)<br />and </p>
</li>
  <li><p>\( \overline{U(x_0, \rho )}\subseteq D(F),\) where \(\rho =\tfrac {1-\sqrt{1-h_0}}{bL_0},\) </p>
</li>
</ol>
<p> then the sequence \(\{ x_{n,\alpha }^\delta \} \) generated by <span class="rm">(SNT)</span> is well defined, remains in \(\overline{U(x_0, \rho )}\) for all \(n \geq 0\) and converges to some \(x_\alpha ^\delta \in \overline{U(x_0, \rho )}\) such that \(F(x_\alpha ^\delta )=z_\alpha ^\delta .\) Moreover, the following estimates hold for each \(n =0, 1,2, \ldots \) </p>
<div class="displaymath" id="a0000000030">
  \[  \| x_{n+2, \alpha }^\delta -x_{n+1,\alpha }^\delta \| \leq q\| x_{n+1,\alpha }^\delta -x_{n,\alpha }^\delta \|  \]
</div>
<p> and </p>
<div class="displaymath" id="a0000000031">
  \[  \| x_{n,\alpha }^\delta -x_\alpha ^\delta \| \leq \tfrac {q^nr}{1-q}, \]
</div>
<p> where \(q=1-\sqrt{1-h_0}.\) </p>

  </div>
</div><div class="proof_wrapper" id="a0000000032">
  <div class="proof_heading">
    <span class="proof_caption">
    Proof
    </span>
    <span class="expand-proof">â–¼</span>
  </div>
  <div class="proof_content">
  
  </div>
</div>Let us define the operator \(T\) on \( \overline{U(x_0, \rho )}\) by </p>
<div class="displaymath" id="a0000000033">
  \[ T(x)=x-F'(x_0)^{-1}(F(x)-z_\alpha ^\delta ). \]
</div>
<p> We shall show that \(T\) is a contraction on \( \overline{U(x_0, \rho )}\) and maps \( \overline{U(x_0, \rho )}\) into itself. Indeed, for \( x,y\in \overline{U(x_0, \rho )}\) we have </p>
<div class="displaymath" id="a0000000034">
  \begin{eqnarray*} T(x)-T(y)& =& x-y-F'(x_0)^{-1}(F(x)-z_\alpha ^\delta )+F'(x_0)^{-1}(F(y)-z_\alpha ^\delta )\\ & =&  -F'(x_0)^{-1}(F(x)-F(y)-F'(x_0)(x-y)) \end{eqnarray*}
</div>
<p>and </p>
<div class="displaymath" id="a0000000035">
  \begin{eqnarray*} \| T(x)-T(y)\| & =& \| F'(x_0)^{-1}(F(x)-F(y)-F'(x_0)(x-y))\| \\ & \leq & bL_0\rho \| x-y\| . \end{eqnarray*}
</div>
<p>But, we have \(bL_0\rho {\lt}1.\) Hence, \( T\) is a contraction operator. Let \( x\in \overline{U(x_0, \rho )}.\) Then, we have that </p>
<div class="displaymath" id="a0000000036">
  \[ T(x)-x_0=T(x)-T(x_0)+T(x_0)-x_0 \]
</div>
<p> and </p>
<div class="displaymath" id="a0000000037">
  \begin{eqnarray*} \| T(x)-x_0\| & \leq &  \| T(x)-T(x_0)\| +\| T(x_0)-x_0\| \\ & \leq &  \tfrac {bL_0\rho ^2}{2}+\tfrac {h_0}{2bL_0} = \rho \end{eqnarray*}
</div>
<p>by the choice of \(\rho .\) The proof of the Theorem is complete. <div class="remark_thmwrapper " id="r4.1">
  <div class="remark_thmheading">
    <span class="remark_thmcaption">
    Remark
    </span>
    <span class="remark_thmlabel">4.2</span>
  </div>
  <div class="remark_thmcontent">
  
<p>(a) More subtle arguments show that \(h_0 {\lt} 1\) can be replaced by \(h_0\leq 1\) (see <span class="cite">
	[
	<a href="#kn:Arg" >1</a>
	]
</span>, <span class="cite">
	[
	<a href="#kn:Arg3" >5</a>
	]
</span>). </p>
<p>(b) We have that \(h^0=2bLr\leq 1\Rightarrow h_0 \leq 1\) but not vice versa even if \(L_0=L.\) Moreover, we have that \(\tfrac {h_0}{h^0}\rightarrow 0\) as \(\tfrac {L_0}{L}\rightarrow 0.\) Note that the \(h^0\) condition was given in <span class="cite">
	[
	<a href="#kn:sgku1" >17</a>
	]
</span>. Therefore, method (SNT) can be used as a predictor up to a certain finite iterate \(N\) such that \(h\leq 1\) holds for \(x_{N,\alpha }^\delta ,\) which then serves as the initial point of method (NT). Such an approach has been used by us in <span class="cite">
	[
	<a href="#kn:Arg4" >2</a>
	]
</span> for modified Newton and Newton’s method. </p>

  </div>
</div></p>
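<p>For illustration only (this sketch, including the scalar model \(F(x)=x^3\) and all numerical choices, is our own and is not taken from the analysis above), the frozen-derivative step \(x_{n+1}=x_n-F'(x_0)^{-1}(F(x_n)-z_\alpha ^\delta )\) underlying (SNT) can be run as a simple fixed-point iteration: </p>

```python
# Minimal sketch (illustrative assumptions throughout): the simplified
# Newton-Tikhonov step x_{n+1} = x_n - F'(x_0)^{-1} (F(x_n) - z) of (SNT),
# applied to the scalar model F(x) = x**3 with data z = 1, solution x* = 1.

def snt(F, dF0, z, x0, n_iter=50):
    """Fixed-point iteration T(x) = x - (F(x) - z)/dF0 with frozen slope dF0."""
    x = x0
    for _ in range(n_iter):
        x = x - (F(x) - z) / dF0
    return x

F = lambda x: x ** 3
x0 = 1.1              # initial guess close to the solution
dF0 = 3 * x0 ** 2     # F'(x0), kept fixed during the iteration
x = snt(F, dF0, z=1.0, x0=x0)
print(abs(x - 1.0))   # the error; the map is a contraction near x* = 1
```

<p>Here the contraction factor of the map \(T\) near the solution is \(|1-F'(1)/F'(x_0)|\approx 0.17,\) consistent with the linear rate \(q\) in Theorem 4.1. </p>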
<h1 id="a0000000038">5 Applications</h1>
<p> Let us consider the nonlinear Hammerstein operator equation (cf. <span class="cite">
	[
	<a href="#kn:Sek" >28</a>
	]
</span>) </p>
<div class="displaymath" id="a0000000039">
  \[ (MFx)(t) = \int _0^1m(s,t)p(s,x(s))x(s)ds \]
</div>
<p> where \(m\) is continuous and \(p\) is differentiable with respect to the second variable. Define \(F:D(F)=H^1(]0,1[)\rightarrow L^2(]0,1[)\) by </p>
<div class="displaymath" id="a0000000040">
  \[ F(x)(s) = p(s, x(s)), \hskip14.226377952755904pts\in [0, 1] \]
</div>
<p> and \(M:L^2(]0,1[) \rightarrow L^2(]0,1[)\) by </p>
<div class="displaymath" id="a0000000041">
  \[ Mu(t) = \int _0^1m(s,t)u(s)ds, \hskip14.226377952755904ptt\in [0,1]. \]
</div>
<p> Then, \(F\) is Fréchet differentiable and we have that </p>
<div class="displaymath" id="a0000000042">
  \[  [F'(x)]u(t) = \partial _2p(t,x(t))u(t), \hskip28.452755905511808ptt\in [0,1].  \]
</div>
<p> If, in addition, the operator \(M_1:H^1(]0,1[)\mapsto H^1(]0,1[)\) defined by \((M_1x)(t):=\partial _2p(t,x(t))\) is locally Lipschitz continuous, then one can compute the required constants \(L_0\) and \(L.\) If we further assume the existence of a constant \(\kappa _1 {\gt}0\) such that \(\partial _2p(t,x(t)) \geq \kappa _1\) for all \(t\in [0,1]\) and \(x(t) \in U(x_0, r^\ast ),\) then \(F'(x)^{-1}\) exists and is a bounded operator. </p>
<p>Equation (<a href="#eq:1.2">1.2</a>) is equivalent to </p>
<div class="equation" id="eq:5.1">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq:5.1} M[F(x)-F(x_0)] = y-MF(x_0) \end{equation}
  </div>
  <span class="equation_label">5.35</span>
</p>
</div>
<p>where \(x_0\) is an initial guess. Therefore, the solution \(x\) of (<a href="#eq:1.2">1.2</a>) is obtained by first solving </p>
<div class="equation" id="eq:5.2">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq:5.2} Mz = y- MF(x_0) \end{equation}
  </div>
  <span class="equation_label">5.36</span>
</p>
</div>
<p>for \(z\) and then solving </p>
<div class="equation" id="eq:5.3">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq:5.3} F(x) = z + F(x_0)\end{equation}
  </div>
  <span class="equation_label">5.37</span>
</p>
</div>
<p>for \(x\in D(F).\) Let \(\alpha {\gt} 0,\) \(\delta {\gt}0\) be fixed. Then, we consider the regularized solution of (<a href="#eq:5.2">5.36</a>), with \(y^\delta \) in place of \(y,\) as </p>
<div class="equation" id="eq:5.4">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq:5.4} z_\alpha ^\delta = (M + \alpha I)^{-1}(y^\delta -MF(x_0)) + F(x_0)\end{equation}
  </div>
  <span class="equation_label">5.38</span>
</p>
</div>
<p>in case \(M\) in (<a href="#eq:5.2">5.36</a>) is positive self-adjoint and \(Z=Y.\) Otherwise, we set </p>
<div class="equation" id="eq:5.5">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq:5.5} z_\alpha ^\delta = (M^*M + \alpha I)^{-1}M^*(y^\delta -MF(x_0)) + F(x_0).\end{equation}
  </div>
  <span class="equation_label">5.39</span>
</p>
</div>
<p>Note that (<a href="#eq:5.4">5.38</a>) is the simplified or Lavrentiev regularization of equation (<a href="#eq:5.2">5.36</a>) and (<a href="#eq:5.5">5.39</a>) is the Tikhonov regularization of (<a href="#eq:5.2">5.36</a>). </p>
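<p>For a discretized problem, (5.39) requires only the solution of one linear system per value of \(\alpha .\) The following minimal sketch (our own illustration; the quadrature matrix \(M,\) the grid vectors standing in for \(F(x_0)\) and \(y^\delta ,\) and all sizes are assumptions, not taken from the paper) computes \(z_\alpha ^\delta \) and shows the regularized datum approaching the exact one as \(\alpha \rightarrow 0\) for noise-free data: </p>

```python
# Minimal sketch (assumed discretization, not the paper's computation):
# the Tikhonov-regularized datum of (5.39),
#   z = (M^T M + alpha I)^{-1} M^T (y_delta - M Fx0) + Fx0,
# with M a quadrature matrix built from a Green's-function kernel.
import numpy as np

def z_tikhonov(M, alpha, y_delta, Fx0):
    n = M.shape[1]
    A = M.T @ M + alpha * np.eye(n)
    return np.linalg.solve(A, M.T @ (y_delta - M @ Fx0)) + Fx0

n = 50
t = np.arange(1, n + 1) / (n + 1)                 # interior grid points
M = np.minimum.outer(t, t) * (1 - np.maximum.outer(t, t)) / n
w = t ** 2                                        # stands in for F(x-hat)
Fx0 = np.zeros(n)                                 # stands in for F(x0)
y = M @ w                                         # exact (noise-free) data

for alpha in (1e-2, 1e-4, 1e-6):
    print(alpha, np.linalg.norm(z_tikhonov(M, alpha, y, Fx0) - w))
```

<p>The printed errors decrease as \(\alpha \) decreases, in accordance with Theorem <a href="#Th5.3">5.4</a> below for noise-free data. </p>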
<p>With these choices of operators the rest of the results in <span class="cite">
	[
	<a href="#kn:sgku" >16</a>
	]
</span> involving (SNT) type methods (i.e., using \( L_0\) instead of \(L\)) can be improved. <div class="proposition_thmwrapper " id="p5.1">
  <div class="proposition_thmheading">
    <span class="proposition_thmcaption">
    Proposition
    </span>
    <span class="proposition_thmlabel">5.1</span>
  </div>
  <div class="proposition_thmcontent">
  <p> Suppose \(z_\alpha ^\delta \) is given by <span class="rm">(<a href="#eq:5.5">5.39</a>)</span> and </p>
<div class="displaymath" id="a0000000043">
  \[ \| F(x_0)-F(x^\ast )\| +\tfrac {\delta }{\sqrt{\alpha }} {\lt} \tfrac {r}{2b} {\lt} \tfrac {1}{2b^2L_0}. \]
</div>
<p> Then, the following assertion holds </p>
<div class="displaymath" id="a0000000044">
  \[ \| x_0-x^\ast \| \leq \bar{r}^\ast {\lt} r{\lt} \tfrac {1}{bL_0}. \]
</div>

  </div>
</div><div class="remark_thmwrapper " id="R5.2">
  <div class="remark_thmheading">
    <span class="remark_thmcaption">
    Remark
    </span>
    <span class="remark_thmlabel">5.2</span>
  </div>
  <div class="remark_thmcontent">
  
<p>(a) If \(L_0=L,\) Proposition <a href="#p5.1">5.1</a> reduces to Remark 3.4 in <span class="cite">
	[
	<a href="#kn:sgku" >16</a>
	]
</span>. Otherwise, Proposition <a href="#p5.1">5.1</a> improves Remark 3.4 in <span class="cite">
	[
	<a href="#kn:sgku" >16</a>
	]
</span>. </p>
<p>(b) The rest of the results in the literature (see, e.g. <span class="cite">
	[
	<a href="#kn:SG" >13</a>
	]
</span>–<span class="cite">
	[
	<a href="#kn:GN-NLR" >20</a>
	]
</span>) can be extended by simply using (C3) instead of (C4). Note also that there are examples where (C3) holds but not (C4). </p>

  </div>
</div></p>
<p><div class="remark_thmwrapper " id="R5.3">
  <div class="remark_thmheading">
    <span class="remark_thmcaption">
    Remark
    </span>
    <span class="remark_thmlabel">5.3</span>
  </div>
  <div class="remark_thmcontent">
  <p> Hereafter we consider \(z_\alpha ^\delta \) as the Tikhonov regularization of (<a href="#eq:5.2">5.36</a>) given in (<a href="#eq:5.5">5.39</a>). All results in the forthcoming sections are valid for the simplified regularization of (<a href="#eq:5.2">5.36</a>). </p>

  </div>
</div> </p>
<p>In view of the estimate in Corollary <a href="#c3.4">3.4</a>, the next task is to find an estimate for \(\| F(x^\ast )-z_\alpha ^\delta \| .\) For this, let us introduce the notation </p>
<div class="displaymath" id="a0000000045">
  \[ z_\alpha := F(x_0) + (M^*M+ \alpha I)^{-1}M^*(y-MF(x_0)). \]
</div>
<p> We may observe that </p>
<div class="displaymath" id="a0000000046">
  \begin{eqnarray} \nonumber \| F(x^\ast )-z_\alpha ^\delta \|  & \leq & \| F(x^\ast ) - z_\alpha \|  + \| z_\alpha - z_\alpha ^\delta \| \\ \label{eq:3.2a}&  \leq &  \| F(x^\ast ) - z_\alpha \|  + \tfrac {\delta }{\sqrt{\alpha }}, \end{eqnarray}
</div>
<p> and </p>
<div class="displaymath" id="a0000000047">
  \begin{eqnarray}  \nonumber F(x^\ast ) - z_\alpha & =&  F(x^\ast )-F(x_0)-(M^\ast M + \alpha I)^{-1}M^\ast M[F(x^\ast )-F(x_0)] \\ \nonumber & =& [I-(M^*M+ \alpha I)^{-1}M^*M][F(x^\ast )-F(x_0)]\\ \label{eq:3.1a}&  = & \alpha (M^*M + \alpha I)^{-1}[F(x^\ast )-F(x_0)]. \end{eqnarray}
</div>
<p> Note that for \(u \in R(M^*M)\) with \(u = M^*Mz\) for some \(z \in Z,\) </p>
<div class="displaymath" id="a0000000048">
  \[ \| \alpha (M^*M + \alpha I)^{-1}u\|  = \| \alpha (M^*M + \alpha I)^{-1}M^*Mz\|  \leq \alpha \| z\|  \rightarrow 0 \]
</div>
<p> as \(\alpha \rightarrow 0.\) Now since \( \| \alpha (M^*M + \alpha I)^{-1}\|  \leq 1\) for all \(\alpha {\gt} 0,\) it follows that for every \(u \in \overline{R(M^\ast M)},\) \(\| \alpha (M^*M + \alpha I)^{-1}u\|  \rightarrow 0\) as \(\alpha \rightarrow 0.\) Thus we have the following theorem. </p>
<p><div class="theorem_thmwrapper " id="Th5.3">
  <div class="theorem_thmheading">
    <span class="theorem_thmcaption">
    Theorem
    </span>
    <span class="theorem_thmlabel">5.4</span>
  </div>
  <div class="theorem_thmcontent">
  <p> If \(F(x^\ast )-F(x_0) \in \overline{R(M^\ast M)},\) then \(\| F(x^\ast )-z_\alpha \|  \rightarrow 0\) as \( \alpha \rightarrow 0.\) </p>

  </div>
</div></p>
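<p>The decay asserted in Theorem <a href="#Th5.3">5.4</a> is easy to confirm numerically. The toy check below (an assumed random matrix \(M\) in place of the operator; our own illustration) verifies the bound \(\| \alpha (M^\ast M+\alpha I)^{-1}M^\ast Mz\| \leq \alpha \| z\| :\) </p>

```python
# Toy check (illustrative data, not from the paper):
# || alpha (M^T M + alpha I)^{-1} u || -> 0 as alpha -> 0 for u = M^T M z,
# i.e. for u in the range of M^T M; the bound alpha * ||z|| holds as well.
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((30, 30)) / 30
z = rng.standard_normal(30)
u = M.T @ M @ z                                   # u lies in R(M^T M)
norms = [np.linalg.norm(alpha * np.linalg.solve(M.T @ M + alpha * np.eye(30), u))
         for alpha in (1e-1, 1e-3, 1e-5)]
print(norms)                                      # decreasing with alpha
```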
<h2 id="a0000000049">5.1 Error bounds under source conditions</h2>
<p> In view of the above theorem, we assume that </p>
<div class="equation" id="eq:4.1">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq:4.1} \| F(x^\ast )-z_\alpha \|  \leq \varphi (\alpha )\end{equation}
  </div>
  <span class="equation_label">5.42</span>
</p>
</div>
<p>for some positive, monotonically increasing function \(\varphi \) defined on \((0, \| M\| ^2]\) such that </p>
<div class="displaymath" id="a0000000050">
  \[ {\lim _{\lambda \rightarrow 0}}\varphi (\lambda ) = 0. \]
</div>
<p>Suppose \(\varphi \) is a source function in the sense that \(x^\ast \) satisfies a source condition of the form </p>
<div class="displaymath" id="a0000000051">
  \[ F(x^\ast )-F(x_0) = \varphi (M^\ast M)w, \quad \| w\| \leq 1, \]
</div>
<p> such that </p>
<div class="equation" id="eq:4.2">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq:4.2} {\sup _{0 < \lambda < \| M\| ^2}}\tfrac {\alpha \varphi (\lambda )}{\lambda + \alpha } \leq \varphi (\alpha ), \end{equation}
  </div>
  <span class="equation_label">5.43</span>
</p>
</div>
<p>then the assumption (<a href="#eq:4.1">5.42</a>) is satisfied. Note that if \(F(x^\ast )-F(x_0) \in {R((M^\ast  M)^\nu )},\) for some \(\nu \) with \(0 {\lt} \nu \leq 1,\) then by (<a href="#eq:3.1a">5.41</a>) </p>
<div class="displaymath" id="a0000000052">
  \begin{eqnarray*} \| F(x^\ast )-z_\alpha \|  & \leq &  \| \alpha (M^*M + \alpha I)^{-1}(M^\ast M)^\nu \omega \| \\ & \leq &  {\sup _{0 {\lt} \lambda \leq \| M\| ^2}}\frac{\alpha \lambda ^\nu }{\lambda + \alpha }\| \omega \|  \leq \alpha ^\nu \| \omega \| . \end{eqnarray*}
</div>
<p>Thus, in this case \(\varphi (\lambda ) = \lambda ^\nu \| \omega \| \) satisfies the assumption (<a href="#eq:4.1">5.42</a>). Therefore, by (<a href="#eq:3.2a">5.40</a>) and the assumption (<a href="#eq:4.1">5.42</a>), we have </p>
<div class="equation" id="eq:4.3">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq:4.3}\| F(x^\ast )-z_\alpha ^\delta \|  \leq \varphi (\alpha ) + \tfrac {\delta }{\sqrt{\alpha }}.\end{equation}
  </div>
  <span class="equation_label">5.44</span>
</p>
</div>
<p>So, we have the following theorem. <div class="theorem_thmwrapper " id="Th4.1">
  <div class="theorem_thmheading">
    <span class="theorem_thmcaption">
    Theorem
    </span>
    <span class="theorem_thmlabel">5.5</span>
  </div>
  <div class="theorem_thmcontent">
  <p> Under the assumptions of Corollary <span class="rm"><a href="#c3.4">3.4</a></span> and <span class="rm">(<a href="#eq:4.3">5.44</a>)</span>, </p>
<div class="displaymath" id="a0000000053">
  \[  \| x^\ast -x_{n,\alpha }^\delta \|  \leq \tfrac {b}{1-bL_0r}(\varphi (\alpha )+\tfrac {\delta }{\sqrt{\alpha }}) + r^\ast -r_{n,\alpha }^\delta .  \]
</div>

  </div>
</div></p>
<h2 id="a0000000054">5.2 A priori choice of the parameter</h2>
<p> Note that the estimate \(\varphi (\alpha ) + \tfrac {\delta }{\sqrt{\alpha }}\) in (<a href="#eq:4.3">5.44</a>) attains its minimum for the choice \(\alpha :=\alpha _\delta \) which satisfies \(\varphi (\alpha _\delta ) = \tfrac {\delta }{\sqrt{\alpha _\delta }}.\) Let \(\psi (\lambda ) := \lambda \sqrt{\varphi ^{-1}(\lambda )},\  0 {\lt} \lambda \leq \| M\| ^2.\) Then we have \(\delta = \sqrt{\alpha _\delta }\varphi (\alpha _\delta ) = \psi (\varphi (\alpha _\delta )), \) and </p>
<div class="equation" id="eq:p1">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq:p1}\alpha _\delta = \varphi ^{-1}(\psi ^{-1}(\delta )).\end{equation}
  </div>
  <span class="equation_label">5.45</span>
</p>
</div>
<p>So the relation (<a href="#eq:4.3">5.44</a>) leads to </p>
<div class="displaymath" id="a0000000055">
  \[ \| F(x^\ast )-z_\alpha ^\delta \|  \leq 2\psi ^{-1}(\delta ). \]
</div>
<p> Theorem <a href="#Th4.1">5.5</a> and the above observation lead to the following. <div class="theorem_thmwrapper " id="a0000000056">
  <div class="theorem_thmheading">
    <span class="theorem_thmcaption">
    Theorem
    </span>
    <span class="theorem_thmlabel">5.6</span>
  </div>
  <div class="theorem_thmcontent">
  <p>Let \(\psi (\lambda ) := \lambda \sqrt{\varphi ^{-1}(\lambda )},\  0 {\lt} \lambda \leq \| M\| ^2,\) and suppose that the assumptions of Corollary <span class="rm"><a href="#c3.4">3.4</a></span> and <span class="rm">(<a href="#eq:4.1">5.42</a>)</span> are satisfied. For \(\delta {\gt} 0,\) let \(\alpha _\delta = \varphi ^{-1}(\psi ^{-1}(\delta )).\) If </p>
<div class="displaymath" id="a0000000057">
  \[ n_\delta : = \min \big\{ n: (r^\ast -r_{n,\alpha }^\delta ) {\lt} \tfrac {\delta }{\sqrt{\alpha _\delta }}\big\} , \]
</div>
<p> then </p>
<div class="displaymath" id="a0000000058">
  \[ \| x^\ast -x_{\alpha _\delta , n_\delta }^\delta \|  = O(\psi ^{-1}(\delta )), \  {as} \  \delta \rightarrow 0. \]
</div>

  </div>
</div></p>
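<p>A standard special case (our own illustration, assuming the Hölder-type source function \(\varphi (\lambda )=\lambda ^\nu \) with \(0{\lt}\nu \leq 1\) and \(\| \omega \| =1\)) makes the order in Theorem 5.6 explicit: </p>

```latex
% Worked special case (an illustration; assumes \varphi(\lambda)=\lambda^\nu,
% 0 < \nu \le 1, \|\omega\| = 1):
\psi(\lambda)=\lambda\sqrt{\varphi^{-1}(\lambda)}=\lambda^{\frac{2\nu+1}{2\nu}},
\qquad
\psi^{-1}(\delta)=\delta^{\frac{2\nu}{2\nu+1}},
\qquad
\alpha_\delta=\varphi^{-1}\bigl(\psi^{-1}(\delta)\bigr)=\delta^{\frac{2}{2\nu+1}},
% so the conclusion of Theorem 5.6 reads
\|x^\ast-x_{\alpha_\delta, n_\delta}^{\delta}\|
  =O\bigl(\delta^{\frac{2\nu}{2\nu+1}}\bigr),
% the familiar optimal-order rate under a Hoelder source condition.
```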
<h2 id="a0000000059">5.3 An adaptive choice of the parameter</h2>
<p> The error estimate in the above Theorem has optimal order with respect to \(\delta .\) Unfortunately, an a priori parameter choice (<a href="#eq:p1">5.45</a>) cannot be used in practice since the smoothness properties of the unknown solution \(\hat{x}\) reflected in the function \(\varphi \) are generally unknown. There exist many parameter choice strategies in the literature, for example see <span class="cite">
	[
	<a href="#kn:Basm" >6</a>
	]
</span>, <span class="cite">
	[
	<a href="#kn:grogua" >11</a>
	]
</span>, <span class="cite">
	[
	<a href="#kn:gua" >12</a>
	]
</span>, <span class="cite">
	[
	<a href="#kn:SGMT" >18</a>
	]
</span>, <span class="cite">
	[
	<a href="#kn:GN-scales" >19</a>
	]
</span>, <span class="cite">
	[
	<a href="#kn:Ras" >31</a>
	]
</span> and <span class="cite">
	[
	<a href="#kn:Tau1" >33</a>
	]
</span>. </p>
<p>In <span class="cite">
	[
	<a href="#kn:PerSch" >26</a>
	]
</span>, Pereverzev and Schock considered an adaptive selection of the parameter which does not involve the regularization method in an explicit manner. In this method the regularization parameters \(\alpha _i\) are selected from some finite set \(\{ \alpha _i: 0 {\lt} \alpha _0 {\lt} \alpha _1 {\lt} \ldots {\lt} \alpha _N\} \) and the corresponding regularized solutions, say \(u_{\alpha _i}^\delta ,\) are studied on-line. Later, George and Nair <span class="cite">
	[
	<a href="#kn:GN-NLR" >20</a>
	]
</span> considered this adaptive selection strategy for choosing the regularization parameter in the Newton-Lavrentiev regularization method for solving Hammerstein-type operator equations. In this paper too, we consider the adaptive method for selecting the parameter \(\alpha \) in \(x_{\alpha , n}^\delta .\) The rest of this section is essentially a reformulation of the adaptive method considered in <span class="cite">
	[
	<a href="#kn:PerSch" >26</a>
	]
</span> in a special context. </p>
<p>Let \(i\in \{ 0, 1, 2, \ldots , N\} \) and \(\alpha _i = \mu ^{2i}\alpha _0\) where \(\mu {\gt} 1\) and \(\alpha _0 = \delta ^2.\) Let </p>
<div class="displaymath" id="eq:p2">
  \begin{align}  \label{eq:p2}l: = \max \big\{ i: \varphi (\alpha _i) & \leq \tfrac {\delta }{\sqrt{\alpha _i}}\big\}  {\rm \  and} \\ \label{eq:p3}k: = \max \big\{ i: \| z_{\alpha _i}^\delta - z_{\alpha _j}^\delta \|  & \leq \tfrac {4\delta }{\sqrt{\alpha _j}}, j = 0, 1, 2,\ldots , i\big\} . \end{align}
</div>
<p>The proof of the next theorem is analogous to the proof of Theorem 1.2 in <span class="cite">
	[
	<a href="#kn:PerSch" >26</a>
	]
</span>, but for the sake of completeness, we supply its proof as well. </p>
<p><div class="theorem_thmwrapper " id="a0000000060">
  <div class="theorem_thmheading">
    <span class="theorem_thmcaption">
    Theorem
    </span>
    <span class="theorem_thmlabel">5.7</span>
  </div>
  <div class="theorem_thmcontent">
  <p>Let \(l\) be as in <span class="rm">(<a href="#eq:p2">5.46</a>)</span>, \(k\) be as in <span class="rm">(<a href="#eq:p3">5.47</a>)</span> and \(z_{\alpha _k}^\delta \) be as in <span class="rm">(<a href="#eq:5.5">5.39</a>)</span> with \(\alpha = \alpha _k.\) Then \(l\leq k\) and </p>
<div class="displaymath" id="a0000000061">
  \[ \| F(x^\ast ) - z_{\alpha _k}^\delta \|  \leq (2+\tfrac {4\mu }{\mu -1})\mu \psi ^{-1}(\delta ). \]
</div>

  </div>
</div><div class="proof_wrapper" id="a0000000062">
  <div class="proof_heading">
    <span class="proof_caption">
    Proof
    </span>
    <span class="expand-proof">â–¼</span>
  </div>
  <div class="proof_content">
  
  </div>
</div>Note that, to prove \(l\leq k\), it is enough to prove that, for \(i=1,\ldots ,N\) </p>
<div class="displaymath" id="a0000000063">
  \[ \varphi (\alpha _i) \leq \tfrac {\delta }{\sqrt{\alpha _i}}\Longrightarrow \| z_{\alpha _i}^\delta -z_{\alpha _j}^\delta \|  \leq \tfrac {4\delta }{\sqrt{\alpha _j}}, \quad \forall j = 0, 1, 2, \ldots ,i. \]
</div>
<p> For \(j \leq i,\) </p>
<div class="displaymath" id="a0000000064">
  \begin{eqnarray*} \| z_{\alpha _i}^\delta -z_{\alpha _j}^\delta \|  & \leq &  \| z_{\alpha _i}^\delta -F(x^\ast )\|  + \| F(x^\ast )-z_{\alpha _j}^\delta \| \\ & \leq & \varphi (\alpha _i) + \tfrac {\delta }{\sqrt{\alpha _i}}+\varphi (\alpha _j) + \tfrac {\delta }{\sqrt{\alpha _j}}\\ & \leq &  \tfrac {2\delta }{\sqrt{\alpha _i}}+\tfrac {2\delta }{\sqrt{\alpha _j}} \leq \tfrac {4\delta }{\sqrt{\alpha _j}}. \end{eqnarray*}
</div>
<p>This proves the relation \(l \leq k.\) Now, since \(\sqrt{\alpha _{l+m}} = \mu ^m\sqrt{\alpha _{l}},\) by using the triangle inequality successively, we obtain </p>
<div class="displaymath" id="a0000000065">
  \begin{eqnarray*} \| F(x^\ast )-z_{\alpha _k}^\delta \|  & \leq &  \| F(x^\ast )-z_{\alpha _l}^\delta \|  + \sum _{j=l+1}^k\tfrac {4\delta }{\sqrt{\alpha _{j-1}}}\\ & \leq &  \| F(x^\ast )-z_{\alpha _l}^\delta \|  + \sum _{m=0}^{k-l-1}\tfrac {4\delta }{\sqrt{\alpha _{l}}\mu ^m}\\ & \leq &  \| F(x^\ast )-z_{\alpha _l}^\delta \|  + (\tfrac {\mu }{\mu -1})\tfrac {4\delta }{\sqrt{\alpha _l}}. \end{eqnarray*}
</div>
<p>Therefore by (<a href="#eq:4.2">5.43</a>) and (<a href="#eq:p2">5.46</a>) we have </p>
<div class="displaymath" id="a0000000066">
  \begin{eqnarray*} \| F(x^\ast )-z_{\alpha _k}^\delta \|  & \leq &  \varphi (\alpha _l) + \tfrac {\delta }{\sqrt{\alpha _l}} + (\tfrac {\mu }{\mu -1})\tfrac {4\delta }{\sqrt{\alpha _l}}\leq (2+ \tfrac {4\mu }{\mu -1})\mu \psi ^{-1}(\delta ). \end{eqnarray*}
</div>
<p>The last step follows from the inequality \(\sqrt{\alpha _\delta }\leq \sqrt{\alpha _{l+1}}\leq \mu \sqrt{\alpha _l}\) and \(\tfrac {\delta }{\sqrt{\alpha _\delta }} = \psi ^{-1}(\delta ).\) This completes the proof. </p>
<h2 id="a0000000067">5.4 Stopping Rule</h2>
<p>Note that </p>
<div class="displaymath" id="a0000000068">
  \begin{eqnarray} \nonumber e_0 = \| x_{1,\alpha }^\delta -x_0\|  &  = & \| F'(x_0)^{-1}(M^\ast M+\alpha I)^{-1}M^\ast (y^\delta -MF(x_0))\| \\ \nonumber & =&  \| F'(x_0)^{-1}(M^\ast M+\alpha I)^{-1}M^\ast (y^\delta - y + y-MF(x_0))\| \\ \nonumber & \leq &  b( \| (M^\ast M+\alpha I)^{-1}M^\ast (y^\delta -y)\|  \\ \nonumber & & +\| (M^\ast M+\alpha I)^{-1}M^\ast M(F(\hat{x})-F(x_0))\| )\\ \nonumber & \leq & b(\omega + \tfrac {\delta }{\sqrt{\alpha }} ), \end{eqnarray}
</div>
<p> so if </p>
<div class="equation" id="eq:st1">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq:st1}b(\omega + \tfrac {\delta }{\sqrt{\alpha }}) < \tfrac {1}{bL_2},\end{equation}
  </div>
  <span class="equation_label">5.48</span>
</p>
</div>
<p>and \(r^\ast {\lt} r,\) then the hypotheses of Theorem <a href="#Th3.1">3.1</a> hold. Again, since \(\alpha _j = \mu ^{2j}\delta ^2,\) we have \(\tfrac {\delta }{\sqrt{\alpha _k}} = \mu ^{-k};\) hence the condition (<a href="#eq:st1">5.48</a>) with \(\alpha = \alpha _k\) takes the form </p>
<div class="equation" id="eq:s1">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq:s1}b(\omega + \tfrac {1}{{\mu ^k}}) < \tfrac {1}{bL_2}.\end{equation}
  </div>
  <span class="equation_label">5.49</span>
</p>
</div>
<p>and \(r^\ast {\lt} r\) hold. Then, we have arrived at an algorithm which guarantees </p>
<div class="displaymath" id="a0000000070">
  \[ \| x^\ast -x_{n_k,\alpha _k}^\delta \| =O(\psi ^{-1}(\delta )), \  {as} \  \delta \rightarrow 0 \]
</div>
<p> with \(n_k=\min \{ n:r^\ast -r_{n,\alpha }^\delta \leq \mu ^{-j}, j=1,2,\ldots , i-1\} ,\) and which improves the corresponding one in <span class="cite">
	[
	<a href="#kn:sgku" >16</a>
	]
</span>. </p>
<h2 id="a0000000071">5.5 Algorithm</h2>
<p> Note that for \(i,j \in \{ 0,1,2,\ldots , n\} \) </p>
<div class="displaymath" id="a0000000072">
  \[ z_{\alpha _i}^\delta -z_{\alpha _j}^\delta  = (\alpha _j-\alpha _i)(M^\ast M + \alpha _jI)^{-1}(M^\ast M + \alpha _iI)^{-1}M^\ast (y^\delta -MF(x_0)). \]
</div>
<p> Therefore the adaptive algorithm associated with the choice of the parameter specified in the above theorem is as follows. </p>
<p><span class="tt"><p>begin </p>
<p>&#8195;&#8195;i=0 </p>
<p>&#8195;&#8195;repeat </p>
<p>&#8195;&#8195;&#8195;&#8195;i=i+1 </p>
<p>&#8195;&#8195;&#8195;&#8195;Solve for \( w_i: (M^\ast M + \alpha _iI)w_i = M^\ast (y^\delta -MF(x_0))\) </p>
<p>&#8195;&#8195;&#8195;&#8195;j=-1 </p>
<p>&#8195;&#8195;&#8195;&#8195;repeat </p>
<p>&#8195;&#8195;&#8195;&#8195;&#8195;&#8195;j=j+1 </p>
<p>&#8195;&#8195;&#8195;&#8195;&#8195;&#8195;Solve for \(z_{i,j}: (M^\ast M + \alpha _jI)z_{i,j} =(\alpha _j-\alpha _i)w_i\) </p>
<p>&#8195;&#8195;&#8195;&#8195;until (\(\| z_{i,j}\|  \leq 4\mu ^{-j}\) AND \(j{\lt}i\)) </p>
<p>&#8195;&#8195;until (\(\| z_{i,j}\|  \leq 4\mu ^{-j}\)) </p>
<p>&#8195;&#8195;k=i-1. </p>
<p>&#8195;&#8195;m=0 </p>
<p>&#8195;&#8195;repeat </p>
<p>&#8195;&#8195;&#8195;&#8195;m=m+1 </p>
<p>&#8195;&#8195;until (\((r^\ast -r_{m,\alpha }^\delta ) {\gt} \tfrac {1}{\mu ^k}\)) </p>
<p>&#8195;&#8195;\(n_k=m\) </p>
<p>&#8195;&#8195;for l=1 to \(n_k\) </p>
<p>&#8195;&#8195;&#8195;&#8195;Solve for \(u_{l-1}: F'(x_{l-1, \alpha _k}^\delta )u_{l-1} = F(x_{l-1, \alpha _k}^\delta )-z_{\alpha _k}^\delta \) </p>
<p>&#8195;&#8195;&#8195;&#8195;\( x_{l, \alpha _k}^\delta : = x_{l-1, \alpha _k}^\delta -u_{l-1}\) </p>
<p>end </p>
</span> </p>
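<p>The parameter-selection loops above can be sketched as follows (our own illustration with assumed discrete operators and hypothetical names, not the computation of this paper; for simplicity the sketch forms each regularized datum \(z_{\alpha _i}^\delta \) directly and compares their differences, which by the identity above is equivalent to testing the vectors \(z_{i,j}\)): </p>

```python
# Minimal sketch (illustrative; assumed discrete data) of the adaptive choice
# (5.47): k is the largest i with ||z_i - z_j|| <= 4 delta/sqrt(alpha_j) for
# all j <= i, with alpha_i = mu**(2i) * delta**2.
import numpy as np

def adaptive_k(M, y_res, delta, mu=1.5, N=20):
    """y_res stands for y^delta - M F(x0); returns (k, alpha_k)."""
    n = M.shape[1]
    alphas = [mu ** (2 * i) * delta ** 2 for i in range(N + 1)]
    def z(alpha):  # regularized datum, without the F(x0) shift
        return np.linalg.solve(M.T @ M + alpha * np.eye(n), M.T @ y_res)
    zs = [z(alphas[0])]
    for i in range(1, N + 1):
        zs.append(z(alphas[i]))
        if any(np.linalg.norm(zs[i] - zs[j]) > 4 * delta / np.sqrt(alphas[j])
               for j in range(i)):
            return i - 1, alphas[i - 1]   # previous index was the last good one
    return N, alphas[N]

# Toy problem: Green's-function quadrature matrix and noisy data.
n = 50
t = np.arange(1, n + 1) / (n + 1)
M = np.minimum.outer(t, t) * (1 - np.maximum.outer(t, t)) / n
delta = 1e-4
rng = np.random.default_rng(0)
noise = rng.standard_normal(n)
y_res = M @ t ** 2 + delta * noise / np.linalg.norm(noise)
k, alpha_k = adaptive_k(M, y_res, delta)
print(k, alpha_k)
```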
<h1 id="a0000000073">6 Numerical Example</h1>
<p> In this section we consider an example illustrating the algorithm mentioned in the above section. </p>
<p><div class="example_thmwrapper " id="ex.3.5.2">
  <div class="example_thmheading">
    <span class="example_thmcaption">
    Example
    </span>
    <span class="example_thmlabel">6.1</span>
  </div>
  <div class="example_thmcontent">
  <p>In this example we consider the operator \(KF:L^2(0,1)\longrightarrow L^2(0,1),\) where \(K:L^2(0,1)\longrightarrow L^2(0,1)\) is defined by </p>
<div class="displaymath" id="a0000000074">
  \[ K(x)(t)= \int _0^1k(t,s) x(s)ds \]
</div>
<p> and \(F:D(F)\subseteq H^1(0,1)\longrightarrow L^2(0,1)\) is defined by </p>
<div class="displaymath" id="a0000000075">
  \[ F(u):= \int _0^1k(t,s)u^3(s)ds, \]
</div>
<p> where </p>
<div class="displaymath" id="a0000000076">
  \[  k(t,s)=\left\{  \begin{array}{ll} (1-t)s, &  0\leq s\leq t \leq 1\\ (1-s)t, &  0\leq t\leq s \leq 1.\end{array}\right. \]
</div>
<p> The Fréchet derivative of \(F\) is given by </p>
<div class="displaymath" id="a0000000077">
  \[  F'(u)w=3\int _0^1k(t,s)(u(s))^2w(s)ds.  \]
</div>
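<p>As a small numerical check of this setup (our own illustration, not the authors' computation), note that \(\int _0^1k(t,s)s^9\, ds=\tfrac {t-t^{11}}{110},\) which a trapezoidal discretization of \(K\) applied to \(u(s)=s^9=\hat{x}(s)^3\) reproduces to quadrature accuracy: </p>

```python
# Illustrative check: trapezoidal discretization of the kernel k(t,s) of
# Example 6.1, applied to u(s) = s**9; the closed form is (t - t**11)/110.
import numpy as np

n = 201
s = np.linspace(0.0, 1.0, n)
h = s[1] - s[0]
K = np.minimum.outer(s, s) * (1 - np.maximum.outer(s, s))   # k(t_i, s_j)
wts = np.full(n, h); wts[0] = wts[-1] = h / 2               # trapezoid weights
Ku = (K * wts) @ (s ** 9)                                   # quadrature for (K u)(t_i)
exact = (s - s ** 11) / 110
print(np.max(np.abs(Ku - exact)))                           # O(h^2) quadrature error
```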
<p>In our computation, we take \(f(t)=\frac{1}{110}(\frac{t^{13}}{156}-\frac{t^3}{6}+\frac{25t}{156})\) and \(f^\delta =f+\delta \) with \(\delta =0.01.\) Then the exact solution is </p>
<div class="displaymath" id="a0000000078">
  \[ \hat{x}(t)=t^3. \]
</div>
<p> We use </p>
<div class="displaymath" id="a0000000079">
  \[ x_0(t)=t^3+\tfrac {3}{56}(t-t^8) \]
</div>
<p> as our initial guess. The results of the computation are presented in Table <a href="#table2">1</a>. </p>
<div class="table"  id="table2">
   <div class="centered"> <table class="tabular">
  <tr>
    <td  style="text-align:center; border-right:1px solid black" 
        rowspan=""
        colspan="">
      <p> \(n\)  </p>

    </td>
    <td  style="text-align:center; border-right:1px solid black" 
        rowspan=""
        colspan="">
      <p>\(k\)</p>

    </td>
    <td  style="text-align:center; border-right:1px solid black" 
        rowspan=""
        colspan="">
      <p> \(\alpha _k\) </p>

    </td>
    <td  style="text-align:center; border-right:1px solid black" 
        rowspan=""
        colspan="">
      <p> \(e_n\)</p>

    </td>
    <td  style="text-align:center" 
        rowspan=""
        colspan="">
      <p> \(\frac{e_n}{\psi ^{-1}(\delta )}\) </p>

    </td>
  </tr>
  <tr>
    <td  style="border-top-style:solid; border-top-color:black; border-top-width:1px; text-align:center; border-right:1px solid black" 
        rowspan=""
        colspan="">
      <p><small class="small">64</small> </p>

    </td>
    <td  style="border-top-style:solid; border-top-color:black; border-top-width:1px; text-align:center; border-right:1px solid black" 
        rowspan=""
        colspan="">
      <p><small class="small">4</small></p>

    </td>
    <td  style="border-top-style:solid; border-top-color:black; border-top-width:1px; text-align:center; border-right:1px solid black" 
        rowspan=""
        colspan="">
      <p> <small class="small">0.0011</small> </p>

    </td>
    <td  style="border-top-style:solid; border-top-color:black; border-top-width:1px; text-align:center; border-right:1px solid black" 
        rowspan=""
        colspan="">
      <p> <small class="small">0.5257</small> </p>

    </td>
    <td  style="border-top-style:solid; border-top-color:black; border-top-width:1px; text-align:center" 
        rowspan=""
        colspan="">
      <p> <small class="small">5.2541</small> </p>

    </td>
  </tr>
  <tr>
    <td  style="text-align:center; border-right:1px solid black" 
        rowspan=""
        colspan="">
      <p><small class="small">128</small></p>

    </td>
    <td  style="text-align:center; border-right:1px solid black" 
        rowspan=""
        colspan="">
      <p><small class="small">4</small></p>

    </td>
    <td  style="text-align:center; border-right:1px solid black" 
        rowspan=""
        colspan="">
      <p> <small class="small">0.0011</small> </p>

    </td>
    <td  style="text-align:center; border-right:1px solid black" 
        rowspan=""
        colspan="">
      <p> <small class="small">0.5234</small> </p>

    </td>
    <td  style="text-align:center" 
        rowspan=""
        colspan="">
      <p> <small class="small">5.2331</small> </p>

    </td>
  </tr>
  <tr>
    <td  style="text-align:center; border-right:1px solid black" 
        rowspan=""
        colspan="">
      <p><small class="small">256</small></p>

    </td>
    <td  style="text-align:center; border-right:1px solid black" 
        rowspan=""
        colspan="">
      <p><small class="small">4</small></p>

    </td>
    <td  style="text-align:center; border-right:1px solid black" 
        rowspan=""
        colspan="">
      <p> <small class="small">0.0011</small> </p>

    </td>
    <td  style="text-align:center; border-right:1px solid black" 
        rowspan=""
        colspan="">
      <p> <small class="small">0.5222</small> </p>

    </td>
    <td  style="text-align:center" 
        rowspan=""
        colspan="">
      <p> <small class="small">5.2216</small> </p>

    </td>
  </tr>
  <tr>
    <td  style="text-align:center; border-right:1px solid black" 
        rowspan=""
        colspan="">
      <p><small class="small">512</small></p>

    </td>
    <td  style="text-align:center; border-right:1px solid black" 
        rowspan=""
        colspan="">
      <p><small class="small">4</small></p>

    </td>
    <td  style="text-align:center; border-right:1px solid black" 
        rowspan=""
        colspan="">
      <p> <small class="small">0.0011</small> </p>

    </td>
    <td  style="text-align:center; border-right:1px solid black" 
        rowspan=""
        colspan="">
      <p> <small class="small">0.5216</small> </p>

    </td>
    <td  style="text-align:center" 
        rowspan=""
        colspan="">
      <p> <small class="small">5.2156</small></p>

    </td>
  </tr>
  <tr>
    <td  style="text-align:center; border-right:1px solid black" 
        rowspan=""
        colspan="">
      <p><small class="small">1024</small></p>

    </td>
    <td  style="text-align:center; border-right:1px solid black" 
        rowspan=""
        colspan="">
      <p><small class="small">4</small></p>

    </td>
    <td  style="text-align:center; border-right:1px solid black" 
        rowspan=""
        colspan="">
      <p><small class="small">0.0011</small> </p>

    </td>
    <td  style="text-align:center; border-right:1px solid black" 
        rowspan=""
        colspan="">
      <p> <small class="small">0.5211</small> </p>

    </td>
    <td  style="text-align:center" 
        rowspan=""
        colspan="">
      <p> <small class="small">5.2126</small> </p>

    </td>
  </tr>
  <tr>
    <td  style="text-align:center; border-right:1px solid black" 
        rowspan=""
        colspan="">
      <p><small class="small">2048</small> </p>

    </td>
    <td  style="text-align:center; border-right:1px solid black" 
        rowspan=""
        colspan="">
      <p><small class="small">4</small></p>

    </td>
    <td  style="text-align:center; border-right:1px solid black" 
        rowspan=""
        colspan="">
      <p> <small class="small">0.0011</small> </p>

    </td>
    <td  style="text-align:center; border-right:1px solid black" 
        rowspan=""
        colspan="">
      <p> <small class="small">0.5211</small> </p>

    </td>
    <td  style="text-align:center" 
        rowspan=""
        colspan="">
      <p> <small class="small">5.2110</small> </p>

    </td>
  </tr>
  <tr>
    <td  style="text-align:center; border-right:1px solid black; border-bottom-style:solid; border-bottom-color:black; border-bottom-width:1px" 
        rowspan=""
        colspan="">
      <p><small class="small">4096</small> </p>

    </td>
    <td  style="text-align:center; border-right:1px solid black; border-bottom-style:solid; border-bottom-color:black; border-bottom-width:1px" 
        rowspan=""
        colspan="">
      <p><small class="small">4</small></p>

    </td>
    <td  style="text-align:center; border-right:1px solid black; border-bottom-style:solid; border-bottom-color:black; border-bottom-width:1px" 
        rowspan=""
        colspan="">
      <p> <small class="small">0.0011</small> </p>

    </td>
    <td  style="text-align:center; border-right:1px solid black; border-bottom-style:solid; border-bottom-color:black; border-bottom-width:1px" 
        rowspan=""
        colspan="">
      <p> <small class="small">0.5210</small> </p>

    </td>
    <td  style="text-align:center; border-bottom-style:solid; border-bottom-color:black; border-bottom-width:1px" 
        rowspan=""
        colspan="">
      <p> <small class="small">5.2102</small> </p>

    </td>
  </tr>
</table> <figcaption>
  <span class="caption_title">Table</span> 
  <span class="caption_ref">1</span> 
  <span class="caption_text">Iterations and corresponding Error Estimates of Example <a href="#ex.3.5.2">6.1</a></span> 
</figcaption>  </div> 
</div>
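<p>As an illustration (our own sketch, not the computation reported in Table 1), the operators above can be discretized on a uniform grid with the trapezoidal rule and the stated data checked numerically: \(\hat{x}(t)=t^3\) should satisfy \(KF(\hat{x})=f\) up to quadrature error. The grid size <code>n</code> and the tolerance are our assumptions.</p>

```python
import numpy as np

# Uniform grid on [0,1] and trapezoidal quadrature weights
n = 200
t = np.linspace(0.0, 1.0, n + 1)
h = t[1] - t[0]
w = np.full(n + 1, h)
w[0] = w[-1] = h / 2

# Kernel k(t,s): the Green's function of -u'' = g, u(0) = u(1) = 0
S, T = np.meshgrid(t, t)                 # T[i,j] = t_i, S[i,j] = s_j
k = np.where(S <= T, (1 - T) * S, (1 - S) * T)

def K(x):
    # K(x)(t) = int_0^1 k(t,s) x(s) ds   (trapezoidal rule)
    return k @ (w * x)

def F(u):
    # F(u)(t) = int_0^1 k(t,s) u(s)^3 ds
    return k @ (w * u**3)

xhat = t**3                              # exact solution
f = (t**13 / 156 - t**3 / 6 + 25 * t / 156) / 110

# KF(xhat) should reproduce f up to O(h^2) discretization error
resid = np.max(np.abs(K(F(xhat)) - f))
```

<p>The residual <code>resid</code> is of the order of the trapezoidal-rule error, consistent with \(f\) being the exact image \(KF(\hat{x})\).</p>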

  </div>
</div> </p>
<p><div class="remark_thmwrapper " id="a0000000080">
  <div class="remark_thmheading">
    <span class="remark_thmcaption">
    Remark
    </span>
    <span class="remark_thmlabel">6.2</span>
  </div>
  <div class="remark_thmcontent">
  <p>We have considered an iterative regularization method, a combination of the Newton iteration with Tikhonov regularization, for obtaining an approximate solution of a nonlinear Hammerstein-type operator equation \(MF(x) = y\) when only the noisy data \(y^\delta \) is available in place of the exact data \(y.\) If the operator \(M\) is a positive self-adjoint bounded linear operator on a Hilbert space, then one may instead consider a Newton-Lavrentiev regularization method for obtaining an approximate solution of \(MF(x) = y.\) It is assumed that the Fréchet derivative \(F'(x)\) of the nonlinear operator \(F\) has a continuous inverse in a neighborhood of some initial guess \(x_0\) of the actual solution \(x^\ast .\) The procedure involves solving the equation </p>
<div class="displaymath" id="a0000000081">
  \[ (M^\ast M + \alpha I)u_\alpha ^\delta = M^\ast (y^\delta -MF(x_0)) \]
</div>
<p> and finding the fixed point of the function </p>
<div class="displaymath" id="a0000000082">
  \[ G(x) = x-F'(x)^{-1}(F(x)-F(x_0)-u_\alpha ^\delta ) \]
</div>
<p> in an iterative manner. For choosing the regularization parameter \(\alpha \) and the stopping index for the iteration, we made use of the adaptive method suggested in <span class="cite">
	[
	<a href="#kn:PerSch" >26</a>
	]
</span>. <span class="qed">□</span></p>

  </div>
</div> </p>
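<p>The two steps of the remark can be sketched, as a hedged illustration (not the authors' code), on the discretized Example 6.1 with \(M=K\): first the linear Tikhonov step for \(u_\alpha ^\delta ,\) then Newton iterations for the fixed point of \(G.\) The noise-free data, the grid size, \(\alpha =10^{-8},\) the iteration count, and the least-squares solve for the Newton step are our assumptions.</p>

```python
import numpy as np

# Discretize Example 6.1 on a uniform grid (trapezoidal rule)
n = 100
t = np.linspace(0.0, 1.0, n + 1)
h = t[1] - t[0]
w = np.full(n + 1, h)
w[0] = w[-1] = h / 2
S, T = np.meshgrid(t, t)
k = np.where(S <= T, (1 - T) * S, (1 - S) * T)
Kmat = k * w                       # K(x) ~ Kmat @ x

def F(u):
    # F(u)(t) = int_0^1 k(t,s) u(s)^3 ds
    return Kmat @ u**3

def Fprime(u):
    # F'(u)w = 3 int_0^1 k(t,s) u(s)^2 w(s) ds, as a matrix
    return 3 * Kmat * u**2

xhat = t**3
x0 = t**3 + (3 / 56) * (t - t**8)  # initial guess from Example 6.1
M = Kmat                           # here M = K
y = M @ F(xhat)                    # noise-free data (an assumption)

# Step 1: Tikhonov step  (M*M + alpha I) u = M*(y - M F(x0))
alpha = 1e-8                       # assumed small regularization parameter
u = np.linalg.solve(M.T @ M + alpha * np.eye(n + 1),
                    M.T @ (y - M @ F(x0)))

# Step 2: Newton iteration for the fixed point of
#         G(x) = x - F'(x)^{-1} (F(x) - F(x0) - u)
x = x0.copy()
r0 = np.max(np.abs(F(x) - F(x0) - u))
for _ in range(8):
    r = F(x) - F(x0) - u
    # least-squares solve, since F'(x) is singular at the boundary nodes
    d = np.linalg.lstsq(Fprime(x), r, rcond=None)[0]
    x = x - d
r_final = np.max(np.abs(F(x) - F(x0) - u))
err0 = np.max(np.abs(x0 - xhat))
err = np.max(np.abs(x - xhat))
```

<p>In this sketch the Newton residual \(\| F(x)-F(x_0)-u_\alpha ^\delta \| \) decreases across the iterations and the final iterate is closer to \(\hat{x}\) than the initial guess, as expected for the fixed point of \(G.\)</p>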
<div class="bibliography">
<h1>Bibliography</h1>
<dl class="bibliography">
  <dt><a name="kn:Arg">1</a></dt>
  <dd><p><i class="sc">I.K. Argyros</i>, <i class="it">Convergence and Application of Newton-type Iterations</i>, Springer, 2008. </p>
</dd>
  <dt><a name="kn:Arg4">2</a></dt>
  <dd><p><a href ="http://ictp.acad.ro/jnaat/journal/article/view/2007-vol36-no2-art2"> <i class="sc">I.K. Argyros</i>, <i class="it">Approximating solutions of equations using Newton’s method with a modified Newton’s method iterate as a starting point</i>, Rev. Anal. Numér. Théor. Approx., <b class="bf">36</b> (2007), pp.&#160;123–138. <img src="img-0001.png" alt="\includegraphics[scale=0.1]{ext-link.png}" style="width:12.0px; height:10.700000000000001px" />
</a> </p>
</dd>
  <dt><a name="kn:Arg1">3</a></dt>
  <dd><p><a href ="http://dx.doi.org/10.1090/S0025-5718-2010-02398-1"> <i class="sc">I.K. Argyros</i>, <i class="it">A semilocal convergence for directional Newton methods</i>, Math. Comp., <b class="bf">80</b> (2011), pp.&#160;327–343. <img src="img-0001.png" alt="\includegraphics[scale=0.1]{ext-link.png}" style="width:12.0px; height:10.700000000000001px" />
</a> </p>
</dd>
  <dt><a name="kn:Arg2">4</a></dt>
  <dd><p><a href ="http://dx.doi.org/10.1016/j.jco.2011.12.003"> <i class="sc">I.K. Argyros</i> and <i class="sc">S. Hilout</i>, <i class="it">Weaker conditions for the convergence of Newton’s method</i>, J. Complexity, <b class="bf">28</b> (2012), pp.&#160;364–387. <img src="img-0001.png" alt="\includegraphics[scale=0.1]{ext-link.png}" style="width:12.0px; height:10.700000000000001px" />
</a> </p>
</dd>
  <dt><a name="kn:Arg3">5</a></dt>
  <dd><p><i class="sc">I.K. Argyros, Y.J. Cho</i> and <i class="sc">S. Hilout</i>, <i class="it">Numerical Methods for Equations and its Applications</i>, CRC Press, Taylor and Francis, New York, 2012. </p>
</dd>
  <dt><a name="kn:Basm">6</a></dt>
  <dd><p><a href ="http://dx.doi.org/10.1081/NFA-200051631"> <i class="sc">A. Bakushinsky</i> and <i class="sc">A. Smirnova</i>, <i class="it">On application of generalized discrepancy principle to iterative methods for nonlinear ill-posed problems</i>, Numer. Funct. Anal. Optim. <b class="bfseries">26</b> (2005), pp.&#160;35–48. <img src="img-0001.png" alt="\includegraphics[scale=0.1]{ext-link.png}" style="width:12.0px; height:10.700000000000001px" />
</a> </p>
</dd>
  <dt><a name="kn:Bin">7</a></dt>
  <dd><p><a href ="http://dx.doi.org/10.1080/01630569008816395"> <i class="sc">A. Binder, H.W. Engl,</i> and <i class="sc">S. Vessela</i>, <i class="it">Some inverse problems for a nonlinear parabolic equation connected with continuous casting of steel: stability estimate and regularization</i>, Numer. Funct. Anal. Optim., <b class="bfseries">11</b> (1990), pp.&#160;643–671. <img src="img-0001.png" alt="\includegraphics[scale=0.1]{ext-link.png}" style="width:12.0px; height:10.700000000000001px" />
</a> </p>
</dd>
  <dt><a name="kn:Engl">8</a></dt>
  <dd><p><i class="sc">H.W. Engl, M. Hanke</i> and <i class="sc">A. Neubauer</i>, <i class="it">Tikhonov regularization of nonlinear differential equations</i>, Inverse Methods in Action, P.C. Sabatier, ed., Springer-Verlag, New York 1990, pp.&#160;92–105. </p>
</dd>
  <dt><a name="kn:Engkun">9</a></dt>
  <dd><p><a href ="http://dx.doi.org/10.1088/0266-5611/5/4/007"> <i class="sc">H.W. Engl, K. Kunisch</i> and <i class="sc">A. Neubauer</i>, <i class="it">Convergence rates for Tikhonov regularization of nonlinear ill-posed problems</i>, Inverse Problems, <b class="bfseries">5</b> (1989), pp.&#160;523–540. <img src="img-0001.png" alt="\includegraphics[scale=0.1]{ext-link.png}" style="width:12.0px; height:10.700000000000001px" />
</a> </p>
</dd>
  <dt><a name="kn:Tik">10</a></dt>
  <dd><p><i class="sc">C.W. Groetsch</i>, <i class="it">Theory of Tikhonov Regularization for Fredholm Equation of the First Kind</i>, Pitmann Books, London, 1984. </p>
</dd>
  <dt><a name="kn:grogua">11</a></dt>
  <dd><p><a href ="http://dx.doi.org/10.1090/S0002-9939-1987-0870781-5"> <i class="sc">C.W. Groetsch</i> and <i class="sc">J.E. Guacaneme</i>, <i class="it">Arcangeli’s method for Fredholm equations of the first kind</i>, Proc. Amer. Math. Soc., <b class="bfseries">99</b> (1987), pp.&#160;256–260. <img src="img-0001.png" alt="\includegraphics[scale=0.1]{ext-link.png}" style="width:12.0px; height:10.700000000000001px" />
</a> </p>
</dd>
  <dt><a name="kn:gua">12</a></dt>
  <dd><p><i class="sc">J.E. Guacaneme</i>, <i class="it">A parameter choice for simplified regularization</i>, Rostock. Math. Kolloq., <b class="bfseries">42</b> (1990), pp.&#160;59–68. </p>
</dd>
  <dt><a name="kn:SG">13</a></dt>
  <dd><p><a href ="http://dx.doi.org/10.1515/156939406777571076"> <i class="sc">S. George</i>, <i class="it">Newton-Tikhonov regularization of ill-posed Hammerstein operator equation</i>, J. Inverse Ill-Posed Probl., <b class="bfseries">14</b> (2006), no. 2, pp.&#160;135–146. <img src="img-0001.png" alt="\includegraphics[scale=0.1]{ext-link.png}" style="width:12.0px; height:10.700000000000001px" />
</a> </p>
</dd>
  <dt><a name="kn:SG1">14</a></dt>
  <dd><p><a href ="http://dx.doi.org/10.1515/156939406778474550"> <i class="sc">S. George</i>, <i class="it">Newton-Lavrentiev regularization of ill-posed Hammerstein type operator equation</i>, J. Inverse Ill-Posed Probl., <b class="bfseries">14</b> (2006), no. 6, pp.&#160;573–582. <img src="img-0001.png" alt="\includegraphics[scale=0.1]{ext-link.png}" style="width:12.0px; height:10.700000000000001px" />
</a> </p>
</dd>
  <dt><a name="kn:sgatef">15</a></dt>
  <dd><p><a href ="http://dx.doi.org/10.2478/cmam-2012-0003"> <i class="sc">S. George</i> and <i class="sc">A.I. Elmahdy</i>, <i class="it">A quadratic convergence yielding iterative method for nonlinear ill-posed operator equations</i>, Comput. Methods Appl. Math. <b class="bfseries">12</b>, no. 1 (2012), pp.&#160;32–45. <img src="img-0001.png" alt="\includegraphics[scale=0.1]{ext-link.png}" style="width:12.0px; height:10.700000000000001px" />
</a> </p>
</dd>
  <dt><a name="kn:sgku">16</a></dt>
  <dd><p><a href ="http://dx.doi.org/10.1515/JIIP.2009.049"> <i class="sc">S. George</i> and <i class="sc">M. Kunhanandan</i>, <i class="it">An iterative regularization method for Ill-posed Hammerstein type operator equation</i>, J. Inverse Ill-Posed Probl., <b class="bfseries">17</b> (2009), pp.&#160;831–844. <img src="img-0001.png" alt="\includegraphics[scale=0.1]{ext-link.png}" style="width:12.0px; height:10.700000000000001px" />
</a> </p>
</dd>
  <dt><a name="kn:sgku1">17</a></dt>
  <dd><p><i class="sc">S. George</i> and <i class="sc">M. Kunhanandan</i>, <i class="it">Iterative regularization methods for ill-posed Hammerstein type operator equation with monotone nonlinear part</i>, Int. J. Math. Anal., <b class="bfseries">4</b> (2010), no. 34, pp.&#160;1673–1685. </p>
</dd>
  <dt><a name="kn:SGMT">18</a></dt>
  <dd><p><a href ="http://dx.doi.org/10.1007/BF01204229"> <i class="sc">S. George</i> and <i class="sc">M.T. Nair</i>, <i class="it">An a posteriori parameter choice for simplified regularization of ill-posed problems</i>, Integral Equations Operator Theory, <b class="bfseries">16</b> (1993), pp.&#160;392–399. <img src="img-0001.png" alt="\includegraphics[scale=0.1]{ext-link.png}" style="width:12.0px; height:10.700000000000001px" />
</a> </p>
</dd>
  <dt><a name="kn:GN-scales">19</a></dt>
  <dd><p><a href ="http://dx.doi.org/10.1080/01630569808816858"> <i class="sc">S. George</i> and <i class="sc">M.T. Nair</i>, <i class="it">On a generalized Arcangeli’s method for Tikhonov regularization with inexact data</i>, Numer. Funct. Anal. Optim., <b class="bfseries">19</b> (1998), (nos. 7–8), pp.&#160;773–787. <img src="img-0001.png" alt="\includegraphics[scale=0.1]{ext-link.png}" style="width:12.0px; height:10.700000000000001px" />
</a> </p>
</dd>
  <dt><a name="kn:GN-NLR">20</a></dt>
  <dd><p><a href ="http://dx.doi.org/10.1016/j.jco.2007.08.001"> <i class="sc">S. George</i> and <i class="sc">M.T. Nair</i>, <i class="it">A modified Newton-Lavrentiev regularization for nonlinear ill-posed Hammerstein-Type operator equation</i>, J. Complexity, <b class="bfseries">24</b> (2008), pp.&#160;228–240. <img src="img-0001.png" alt="\includegraphics[scale=0.1]{ext-link.png}" style="width:12.0px; height:10.700000000000001px" />
</a> </p>
</dd>
  <dt><a name="kn:Hns">21</a></dt>
  <dd><p><a href ="http://dx.doi.org/10.1007/s002110050158"> <i class="sc">M. Hanke, A. Neubauer</i> and <i class="sc">O. Scherzer</i>, <i class="it">A convergence analysis of Landweber iteration of nonlinear ill-posed problems</i>, Numer. Math., <b class="bfseries">72</b> (1995), pp.&#160;21–37. <img src="img-0001.png" alt="\includegraphics[scale=0.1]{ext-link.png}" style="width:12.0px; height:10.700000000000001px" />
</a> </p>
</dd>
  <dt><a name="kn:Jin2">22</a></dt>
  <dd><p><a href ="http://dx.doi.org/10.1080/00036819608840482"> <i class="sc">Jin Qi-nian</i> and <i class="sc">Hou Zong-yi</i>, <i class="it">Finite-dimensional approximations to the solutions of nonlinear ill-posed problems</i>, Appl. Anal., <b class="bfseries">62</b> (1996), pp.&#160;253–261. <img src="img-0001.png" alt="\includegraphics[scale=0.1]{ext-link.png}" style="width:12.0px; height:10.700000000000001px" />
</a> </p>
</dd>
  <dt><a name="kn:Jin">23</a></dt>
  <dd><p><a href ="http://dx.doi.org/10.1007/s002110050442"> <i class="sc">Jin Qi-nian</i> and <i class="sc">Hou Zong-yi</i>, <i class="it">On an a posteriori parameter choice strategy for Tikhonov regularization of nonlinear ill-posed problems</i>, Numer. Math., <b class="bfseries">83</b> (1999), pp.&#160;139–159. <img src="img-0001.png" alt="\includegraphics[scale=0.1]{ext-link.png}" style="width:12.0px; height:10.700000000000001px" />
</a> </p>
</dd>
  <dt><a name="kn:Kan">24</a></dt>
  <dd><p><i class="sc">L.V. Kantorovich</i> and <i class="sc">G.P. Akilov</i>, <i class="it">Functional Analysis in Normed Spaces</i>, Pergamon Press, New York, 1964. </p>
</dd>
  <dt><a name="kn:Mair">25</a></dt>
  <dd><p><a href ="http://dx.doi.org/10.1137/S0036141092238060"> <i class="sc">B.A. Mair</i>, <i class="it">Tikhonov regularization for finitely and infinitely smoothing operators</i>, SIAM J. Math. Anal., <b class="bfseries">25</b> (1994), pp.&#160;135–147. <img src="img-0001.png" alt="\includegraphics[scale=0.1]{ext-link.png}" style="width:12.0px; height:10.700000000000001px" />
</a> </p>
</dd>
  <dt><a name="kn:PerSch">26</a></dt>
  <dd><p><a href ="http://dx.doi.org/10.1137/S0036142903433819"> <i class="sc">S. Pereverzev</i> and <i class="sc">E. Schock</i>, <i class="it">On the adaptive selection of the parameter in regularization of ill-posed problems</i>, SIAM J. Numer. Anal., <b class="bfseries">43</b> (2005), no. 5, pp.&#160;2060–2076. <img src="img-0001.png" alt="\includegraphics[scale=0.1]{ext-link.png}" style="width:12.0px; height:10.700000000000001px" />
</a> </p>
</dd>
  <dt><a name="kn:Sch">27</a></dt>
  <dd><p><i class="sc">O. Scherzer</i>, <i class="it">A parameter choice for Tikhonov regularization for solving nonlinear inverse problems leading to optimal rates</i>, Appl. Math., <b class="bfseries">38</b> (1993), pp.&#160;479–487. </p>
</dd>
  <dt><a name="kn:Sek">28</a></dt>
  <dd><p><a href ="http://dx.doi.org/10.1137/0730091"> <i class="sc">O. Scherzer, H.W. Engl</i> and <i class="sc">K. Kunisch</i>, <i class="it">Optimal a posteriori parameter choice for Tikhonov regularization for solving nonlinear ill-posed problems</i>, SIAM J. Numer. Anal., <b class="bfseries">30</b> (1993), no.6, pp.&#160;1796–1838. <img src="img-0001.png" alt="\includegraphics[scale=0.1]{ext-link.png}" style="width:12.0px; height:10.700000000000001px" />
</a> </p>
</dd>
  <dt><a name="kn:Sch1">29</a></dt>
  <dd><p><i class="sc">O. Scherzer</i>, <i class="it">The use of Tikhonov regularization in the identification of electrical conductivities from overdetermined boundary data</i>, Results Math., <b class="bfseries">22</b> (1992), pp.&#160;599–618. </p>
</dd>
  <dt><a name="kn:SchEngl">30</a></dt>
  <dd><p><a href ="http://dx.doi.org/10.1016/0362-546X(93)90013-I"> <i class="sc">O. Scherzer, H.W. Engl</i> and <i class="sc">R.S. Anderssen</i>, <i class="it">Parameter identification from boundary measurements in parabolic equation arising from geophysics</i>, Nonlinear Anal., <b class="bfseries">20</b> (1993), pp.&#160;127–156. <img src="img-0001.png" alt="\includegraphics[scale=0.1]{ext-link.png}" style="width:12.0px; height:10.700000000000001px" />
</a> </p>
</dd>
  <dt><a name="kn:Ras">31</a></dt>
  <dd><p><i class="sc">T. Raus</i>, <i class="it">On the discrepancy principle for the solution of ill-posed problems</i>, Uch. Zap. Tartu. Gos. Univ., <b class="bfseries">672</b> (1984), pp.&#160;16–26 (In Russian). </p>
</dd>
  <dt><a name="kn:sc">32</a></dt>
  <dd><p><a href ="http://dx.doi.org/10.1007/BF00934896"> <i class="sc">E. Schock</i>, <i class="it">On the asymptotic order of accuracy of Tikhonov regularization</i>, J. Optim. Theory Appl., <b class="bfseries">44</b> (1984), pp.&#160;95–104. <img src="img-0001.png" alt="\includegraphics[scale=0.1]{ext-link.png}" style="width:12.0px; height:10.700000000000001px" />
</a> </p>
</dd>
  <dt><a name="kn:Tau1">33</a></dt>
  <dd><p><a href ="http://dx.doi.org/10.1088/0266-5611/18/1/313"> <i class="sc">U. Tautenhahn</i>, <i class="it">On the method of Lavrentiev regularization for nonlinear ill-posed problems</i>, Inverse Problems, <b class="bfseries">18</b> (2002), pp.&#160;191–207. <img src="img-0001.png" alt="\includegraphics[scale=0.1]{ext-link.png}" style="width:12.0px; height:10.700000000000001px" />
</a> </p>
</dd>
</dl>


</div>
</div> <!--main-text -->
</div> <!-- content-wrapper -->
</div> <!-- content -->
</div> <!-- wrapper -->

<nav class="prev_up_next">
</nav>

<script type="text/javascript" src="/var/www/clients/client1/web1/web/files/jnaat-files/journals/1/articles/js/jquery.min.js"></script>
<script type="text/javascript" src="/var/www/clients/client1/web1/web/files/jnaat-files/journals/1/articles/js/plastex.js"></script>
<script type="text/javascript" src="/var/www/clients/client1/web1/web/files/jnaat-files/journals/1/articles/js/svgxuse.js"></script>
</body>
</html>