Abstract

We introduce an Aitken–Newton iterative method for nonlinear equations, which is obtained by using the Hermite inverse interpolation polynomial of degree 2, with two nodes given by the Newton method. The local convergence order of these iterates is shown to be 8, and the efficiency index is \(\sqrt[5]{8}\approx 1.51\), which is not optimal in the sense of Kung and Traub. However, we show that under supplementary conditions (sometimes easy to verify) the inner and outer iterates converge monotonically to the solution. This aspect allows improved control of the iteration stopping (avoiding divisions by zero) and offers an alternative to estimating the radius of attraction balls in ensuring the convergence of the iterates. Numerical examples show that this method may become competitive and, in certain circumstances, even more robust than certain optimal methods of the same convergence order.

Authors

Ion Păvăloiu
(Tiberiu Popoviciu Institute of Numerical Analysis, Romanian Academy)

Emil Cătinaş
(Tiberiu Popoviciu Institute of Numerical Analysis, Romanian Academy)

Keywords

Nonlinear equations in \(\mathbb{R}\); Newton-type iterative methods; inverse interpolation; (sided) convergence domains; monotone convergence; divided differences.

Paper coordinates

I. Păvăloiu, E. Cătinaş, On a robust Aitken-Newton method based on the Hermite polynomial, Appl. Math. Comput., 287-288 (2016), pp. 224-231.
doi: 10.1016/j.amc.2016.03.036

PDF

About this paper

Print ISSN

0096-3003



Paper (preprint) in HTML form

On a robust Aitken-Newton method based on the Hermite polynomial

Ion Păvăloiu, Emil Cătinaş*
"T. Popoviciu" Institute of Numerical Analysis (Romanian Academy), P.O. Box 68-1, Cluj-Napoca, Romania
Abstract

We introduce an Aitken-Newton iterative method for nonlinear equations, which is obtained by using the Hermite inverse interpolation polynomial of degree 2, with two nodes given by the Newton method.
The local convergence order of these iterates is shown to be 8, and the efficiency index is \(\sqrt[5]{8}\approx 1.51\), which is not optimal in the sense of Kung and Traub. However, we show that under supplementary conditions (sometimes easy to verify) the inner and outer iterates converge monotonically to the solution. This aspect allows improved control of the iteration stopping (avoiding divisions by zero) and offers an alternative to estimating the radius of attraction balls in ensuring the convergence of the iterates. Numerical examples show that this method may become competitive and, in certain circumstances, even more robust than certain optimal methods of the same convergence order.

1. Introduction

In this paper, we study certain iterative methods for solving nonlinear equations

f(x)=0, (1)

where \(f:[a,b]\rightarrow\mathbb{R}\), \(a,b\in\mathbb{R}\), \(a<b\). The Aitken and Steffensen-type methods consider two more equations, equivalent to the above one:

x-g_{i}(x)=0,\quad i=1,2, (2)

where \(g_{i}:[a,b]\rightarrow[a,b]\).
We admit that
(A) Eq. (1) has a solution \(x^{*}\in\,]a,b[\).

The best-known methods of Steffensen, Aitken, or Aitken-Steffensen type are obtained from inverse interpolation polynomials of degree 1, having the nodes determined with the aid of different functions \(g_{1},g_{2}\) (see, e.g., [1-5,13]).

More general methods of this type have been obtained by using inverse interpolation polynomials of degree 2 [11]; they have led to iterative methods of \(q\)-order at least 3, and with efficiency indexes usually higher than for the case of interpolation polynomials of degree 1 [15]. Obviously, the convergence order and the efficiency index of the resulting methods depend on the functions \(g_{i}\).

Here, we study a method based on the Hermite inverse interpolation polynomial of degree 2, with the nodes generated with the aid of the Newton iteration function, \(x-f(x)/f^{\prime}(x)\), in (2).

We need the following assumption on \(f\):
(B) \(f\) is three times differentiable on \([a,b]\) and \(f^{\prime}(x)\neq 0\), \(\forall x\in[a,b]\).

This assumption implies that the function \(f\) is monotone and continuous on \([a,b]\); therefore there exists \(f^{-1}:J\rightarrow[a,b]\), \(J:=f([a,b])\), and by (B) we have

x^{*}=f^{-1}(0)

In order to approximate the solution \(x^{*}\) we shall consider the Hermite inverse interpolation polynomial of degree 2.
Let \(a_{1},a_{2}\in[a,b]\), and denote \(b_{1}=f(a_{1})\), \(b_{2}=f(a_{2})\), so that

f^{-1}\left(b_{1}\right)=a_{1},\quad f^{-1}\left(b_{2}\right)=a_{2}

and

\left.\left(f^{-1}\right)^{\prime}\right|_{y=b_{2}}=\frac{1}{f^{\prime}\left(a_{2}\right)}

The polynomial of degree 2 which satisfies

P\left(b_{1}\right) =a_{1}
P\left(b_{2}\right) =a_{2}
P^{\prime}\left(b_{2}\right) =\frac{1}{f^{\prime}\left(a_{2}\right)}

is the Hermite polynomial, given by

P(y)=a1+[b1,b2;f1](yb1)+[b1,b2,b2;f1](yb1)(yb2)P(y)=a_{1}+\left[b_{1},b_{2};f^{-1}\right]\left(y-b_{1}\right)+\left[b_{1},b_{2},b_{2};f^{-1}\right]\left(y-b_{1}\right)\left(y-b_{2}\right) (3)

having the remainder

f1(y)P(y)=[y,b1,b2,b2;f1](yb1)(yb2)2,yJf^{-1}(y)-P(y)=\left[y,b_{1},b_{2},b_{2};f^{-1}\right]\left(y-b_{1}\right)\left(y-b_{2}\right)^{2},\quad\forall y\in J

where \([u,v;h]\), \([u,v,w;h]\) denote the first, respectively the second order divided difference of the function \(h\) at the nodes \(u,v\), resp. \(u,v,w\).

Setting \(y=0\) in (3) we are led to another approximation for \(x^{*}\):

xP(0)=a1[b1,b2;f1]b1+[b1,b2,b2;f1]b1b2x^{*}\approx P(0)=a_{1}-\left[b_{1},b_{2};f^{-1}\right]b_{1}+\left[b_{1},b_{2},b_{2};f^{-1}\right]b_{1}b_{2} (4)

having the error

xP(0)=[0,b1,b2,b2;f1]b1b22x^{*}-P(0)=-\left[0,b_{1},b_{2},b_{2};f^{-1}\right]b_{1}b_{2}^{2} (5)

Formula (4) will be used in an iterative fashion, by setting \(a_{3}=P(0)\), and so on. However, instead of (4) and (5) we need formulas which do not require the evaluation of the inverse function.

It is known that the divided differences satisfy the following relations (see, e.g., [12]):

\left[b_{1},b_{2};f^{-1}\right] =\frac{1}{\left[a_{1},a_{2};f\right]}
\left[b_{1},b_{2},b_{2};f^{-1}\right] =-\frac{\left[a_{1},a_{2},a_{2};f\right]}{\left[a_{1},a_{2};f\right]^{2}f^{\prime}\left(a_{2}\right)}

which lead us to a formula for \(a_{3}\) based only on the values of \(f\) and its derivatives:

x^{*}\approx a_{3}=a_{1}-\frac{f\left(a_{1}\right)}{\left[a_{1},a_{2};f\right]}-\frac{\left[a_{1},a_{2},a_{2};f\right]f\left(a_{1}\right)f\left(a_{2}\right)}{\left[a_{1},a_{2};f\right]^{2}f^{\prime}\left(a_{2}\right)} (6)
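The inverse-function divided-difference identities above can be spot-checked numerically. A minimal sketch in Python, with our own choice of test function \(f=\exp\) (so that \(f^{-1}=\log\) and all quantities have closed forms); the variable names are ours:

```python
import math

# Spot-check the identities
#   [b1,b2; f^{-1}] = 1/[a1,a2; f]
#   [b1,b2,b2; f^{-1}] = -[a1,a2,a2; f] / ([a1,a2; f]^2 f'(a2))
# for f = exp (f^{-1} = log, f' = exp) at sample nodes a1, a2.
a1, a2 = 0.5, 1.0
b1, b2 = math.exp(a1), math.exp(a2)

# Divided differences of f = exp (the second one confluent in a2).
dd1_f = (math.exp(a2) - math.exp(a1)) / (a2 - a1)     # [a1,a2; f]
dd2_f = (dd1_f - math.exp(a2)) / (a1 - a2)            # [a1,a2,a2; f]

# Divided differences of f^{-1} = log (confluent in b2, where
# [b2,b2; f^{-1}] = (f^{-1})'(b2) = 1/f'(a2)).
dd1_inv = (math.log(b2) - math.log(b1)) / (b2 - b1)   # [b1,b2; f^{-1}]
dd2_inv = (dd1_inv - 1.0 / math.exp(a2)) / (b1 - b2)  # [b1,b2,b2; f^{-1}]

assert abs(dd1_inv - 1.0 / dd1_f) < 1e-12
assert abs(dd2_inv + dd2_f / (dd1_f ** 2 * math.exp(a2))) < 1e-12
```

Both assertions hold to machine precision, confirming the identities for this sample choice of nodes.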

Taking into account (B) and the Mean Value Theorem on the divided differences, the error in (5) becomes

\left[0,b_{1},b_{2},b_{2};f^{-1}\right]=\frac{\left(f^{-1}\right)^{\prime\prime\prime}(\eta)}{6},\quad\eta\in\operatorname{int}J.

Since (see, e.g., [12])

\left(f^{-1}\right)^{\prime\prime\prime}(y)=\frac{3\left(f^{\prime\prime}(x)\right)^{2}-f^{\prime}(x)f^{\prime\prime\prime}(x)}{f^{\prime}(x)^{5}},\quad\text{with }y=f(x)

and \(f\) is one-to-one, it follows that there exists \(\xi\in\,]a,b[\) with \(\eta=f(\xi)\) such that

\left[0,b_{1},b_{2},b_{2};f^{-1}\right]=\frac{3\left(f^{\prime\prime}(\xi)\right)^{2}-f^{\prime}(\xi)f^{\prime\prime\prime}(\xi)}{6\left(f^{\prime}(\xi)\right)^{5}} (7)
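The third-derivative formula for the inverse function used above can be checked against a closed form; a minimal sketch, again with our own choice \(f=\exp\), for which \((f^{-1})^{\prime\prime\prime}(y)=(\log y)^{\prime\prime\prime}=2/y^{3}\) is known exactly:

```python
import math

# Check (f^{-1})'''(y) = (3 f''(x)^2 - f'(x) f'''(x)) / f'(x)^5, y = f(x),
# for f = exp: all derivatives of exp coincide, and f^{-1} = log has
# third derivative 2 / y^3 in closed form.
x = 0.7
y = math.exp(x)
fp = fpp = fppp = math.exp(x)  # f' = f'' = f''' = e^x
formula = (3 * fpp**2 - fp * fppp) / fp**5
assert abs(formula - 2 / y**3) < 1e-12
```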

Denoting

E_{f}(x)=3\left(f^{\prime\prime}(x)\right)^{2}-f^{\prime}(x)f^{\prime\prime\prime}(x), (8)

the error in (5) is written as

x^{*}-a_{3}=-\frac{E_{f}(\xi)}{6\left(f^{\prime}(\xi)\right)^{5}}b_{1}b_{2}^{2} (9)

As we have mentioned, the functions \(g_{1}\) and \(g_{2}\) in (2) are taken, with the aid of the Newton iteration function, as:

g_{1}(x)=x-\frac{f(x)}{f^{\prime}(x)}
g_{2}(x)=g_{1}\left(g_{1}(x)\right)

so that, setting \(a_{1}=z_{n}\) and \(a_{2}=y_{n}\) in (6), we are led to the following iterations:

y_{n} =x_{n}-\frac{f\left(x_{n}\right)}{f^{\prime}\left(x_{n}\right)}
z_{n} =y_{n}-\frac{f\left(y_{n}\right)}{f^{\prime}\left(y_{n}\right)} (10)
x_{n+1} =z_{n}-\frac{f\left(z_{n}\right)}{\left[z_{n},y_{n};f\right]}-\frac{\left[z_{n},y_{n},y_{n};f\right]f\left(z_{n}\right)f\left(y_{n}\right)}{\left[y_{n},z_{n};f\right]^{2}f^{\prime}\left(y_{n}\right)},\quad n=0,1,\ldots

We refer to this as the Aitken-Newton method.
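The steps in (10) can be sketched in code; a minimal Python implementation (the function name, tolerances and stopping guards are ours, not part of the paper):

```python
import math

def aitken_newton(f, fprime, x0, tol=1e-14, maxit=50):
    """Aitken-Newton iterations (10): two Newton steps supply the nodes
    y_n, z_n; the Hermite inverse interpolation correction gives x_{n+1}."""
    x = x0
    for _ in range(maxit):
        if abs(f(x)) < tol:       # residual-based stopping
            return x
        y = x - f(x) / fprime(x)  # first Newton step
        z = y - f(y) / fprime(y)  # second Newton step
        if z == y:                # nodes collided: already converged
            return z
        dd1 = (f(z) - f(y)) / (z - y)      # [z_n, y_n; f]
        dd2 = (dd1 - fprime(y)) / (z - y)  # [z_n, y_n, y_n; f]
        x = z - f(z) / dd1 - dd2 * f(z) * f(y) / (dd1**2 * fprime(y))
    return x

# Example 1 of the paper: f(x) = e^{2x} + sin x - 2 on [0, 1], x0 = 1.
root = aitken_newton(lambda x: math.exp(2*x) + math.sin(x) - 2,
                     lambda x: 2*math.exp(2*x) + math.cos(x), 1.0)
assert abs(math.exp(2*root) + math.sin(root) - 2) < 1e-12
```

Note the guard on colliding nodes \(z_{n}=y_{n}\): once the residual vanishes, the divided differences in (10) would otherwise divide by zero.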
We shall show in Section 2 that under some very simple conditions, this method converges locally with \(q\)-order 8. Since at each iteration step we need 5 function evaluations (i.e., \(f(x_{n})\), \(f^{\prime}(x_{n})\), \(f(y_{n})\), \(f^{\prime}(y_{n})\) and \(f(z_{n})\)), the efficiency index is \(\sqrt[5]{8}\approx 1.51\), which is not optimal in the sense of Kung and Traub.

There are a lot of optimal methods (see, e.g., [7,8,10,16]) but, as we shall see in the numerical examples section, the optimal efficiency index is not paramount by itself, since there exist situations when some optimal methods may have very small attraction sets for certain solutions.

In Section 3 we show that if there exists a set \(D\subseteq[a,b]\) containing a solution \(x^{*}\) such that on this set \(f\) has monotone and convex properties, \(E_{f}\) is positive, and the Fourier condition is verified, then for any initial approximation in \(D\) the method converges monotonically to \(x^{*}\):

x^{*}<x_{n+1}<z_{n}<y_{n}<x_{n}\quad\left(\text{or}\quad x_{n}<y_{n}<z_{n}<x_{n+1}<x^{*}\right),\quad n=0,1,\ldots

It is important to note that the convergence holds on the whole domain \(D\), however large it is, and this domain may extend to the left or to the right of the solution much further than ensured by (even sharp) estimates for the radius of an attraction ball.

The above inequalities show another advantage of this method, since the stopping criteria (such as \(|x_{n-1}-x_{n}|<tol_{1}\) or \(|f(x_{n})|<tol_{2}\)) may be considered for the extended (inner) iterates \(x_{n},y_{n},z_{n},x_{n+1}\) (resp. for the values of \(f\) at them), avoiding divisions by zero.

These aspects, as well as other ones, are treated in Section 4, regarding numerical examples.

2. Local convergence

The following result holds.
Theorem 1. Under assumptions (A) and (B), the Aitken-Newton method (10) converges locally, with \(q\)-convergence order 8:

x^{*}-x_{n+1}=A_{n}\left(x^{*}-x_{n}\right)^{8} (11)

where

A_{n}=-\frac{E_{f}\left(\xi_{n}\right)f^{\prime\prime}\left(\theta_{n}\right)\left(f^{\prime\prime}\left(\delta_{n}\right)\right)^{4}f^{\prime}\left(\mu_{n}\right)\left(f^{\prime}\left(\nu_{n}\right)\right)^{2}}{192\left(f^{\prime}\left(\xi_{n}\right)\right)^{5}\left(f^{\prime}\left(x_{n}\right)\right)^{4}f^{\prime}\left(y_{n}\right)},

for certain \(\xi_{n},\theta_{n},\delta_{n},\mu_{n},\nu_{n}\in\,]a,b[\). The asymptotic constant is given by

A=\lim_{n\rightarrow\infty}A_{n}=-\frac{E_{f}\left(x^{*}\right)\left(f^{\prime\prime}\left(x^{*}\right)\right)^{5}}{192\left(f^{\prime}\left(x^{*}\right)\right)^{7}}

Proof. We assume in the beginning that the elements of the sequences \((x_{n})_{n\geq 0}\), \((y_{n})_{n\geq 0}\) and \((z_{n})_{n\geq 0}\) remain in \([a,b]\).
Assumption (B) and the first relation in (10) imply the existence of \(\delta_{n}\in\,]a,b[\) such that

x^{*}-y_{n}=-\frac{f^{\prime\prime}\left(\delta_{n}\right)}{2f^{\prime}\left(x_{n}\right)}\left(x^{*}-x_{n}\right)^{2},\quad n=0,1,\ldots (12)

Analogously, the second relation in (10) implies

x^{*}-z_{n}=-\frac{f^{\prime\prime}\left(\theta_{n}\right)}{2f^{\prime}\left(y_{n}\right)}\left(x^{*}-y_{n}\right)^{2},\quad n=0,1,\ldots, (13)

with \(\theta_{n}\in\,]a,b[\).

Considering (9), with \(a_{3}=x_{n+1}\), \(b_{1}=f(z_{n})\) and \(b_{2}=f(y_{n})\), we get

x^{*}-x_{n+1}=-\frac{E_{f}\left(\xi_{n}\right)}{6\left(f^{\prime}\left(\xi_{n}\right)\right)^{5}}f\left(z_{n}\right)\left(f\left(y_{n}\right)\right)^{2},\quad n=0,1,\ldots

The values \(f(z_{n})\) and \(f(y_{n})\) are easily expressed by

f\left(z_{n}\right)=f^{\prime}\left(\mu_{n}\right)\left(z_{n}-x^{*}\right)
f\left(y_{n}\right)=f^{\prime}\left(\nu_{n}\right)\left(y_{n}-x^{*}\right) (14)

with \(\mu_{n},\nu_{n}\in\,]a,b[\).
Relations (12)-(14) imply (11).
The induction holds if \(x_{0}\) is chosen sufficiently close to \(x^{*}\).

3. Monotone convergence

We consider the following supplementary conditions on \(f\):
(C) Function \(E_{f}\) given by (8) is strictly positive on \([a,b]\):

E_{f}(x)=3\left(f^{\prime\prime}(x)\right)^{2}-f^{\prime}(x)f^{\prime\prime\prime}(x)>0,\quad\forall x\in\,]a,b[

(D) The initial approximation \(x_{0}\in[a,b]\) satisfies the Fourier condition:

f\left(x_{0}\right)f^{\prime\prime}\left(x_{0}\right)>0.

Remark 1. As we shall see in the numerical examples, condition (C) may sometimes be easily verified (e.g., when \(f^{\prime}\) and \(f^{\prime\prime\prime}\) have different signs, etc.).

We obtain the following results:
Theorem 2. If \(f\) and \(x_{0}\) satisfy assumptions (A)-(D) and, moreover:
(i\(_1\)) \(f^{\prime}(x)>0\), \(\forall x\in[a,b]\);
(ii\(_1\)) \(f^{\prime\prime}(x)>0\), \(\forall x\in[a,b]\);
then the iterates of the Aitken-Newton method (10) satisfy:
(j\(_1\)) \(x^{*}<x_{n+1}<z_{n}<y_{n}<x_{n}\), \(n=0,1,\ldots\);
(jj\(_1\)) \(\lim x_{n}=\lim y_{n}=\lim z_{n}=x^{*}\).
Proof. We proceed by induction.
Let \(x_{n}\in[a,b]\) satisfy the Fourier condition. By (ii\(_1\)) we have that \(f(x_{n})>0\), therefore \(x_{n}>x^{*}\), by (i\(_1\)). Relations (12) and (13) imply \(y_{n}>x^{*}\) and \(z_{n}>x^{*}\). The first relation in (10), together with \(f(x_{n})>0\), implies that \(y_{n}<x_{n}\). Analogously, \(z_{n}<y_{n}\). The third relation in (10) and the hypotheses imply that \(z_{n}>x_{n+1}>x^{*}\). Statement (j\(_1\)) is proved.

It follows that the sequences \((x_{n}),(y_{n}),(z_{n})\) are convergent to the same limit \(\ell\), and (10) shows that \(f(\ell)=0\), i.e., \(\ell=x^{*}\).
Theorem 3. If \(f\) and \(x_{0}\) satisfy (A)-(D) and, moreover,
(i\(_2\)) \(f^{\prime}(x)<0\), \(\forall x\in[a,b]\);
(ii\(_2\)) \(f^{\prime\prime}(x)<0\), \(\forall x\in[a,b]\),
then the same conclusions as those in Theorem 2 hold.
The proof is easily obtained applying Theorem 2 to the function \(h=-f\).
The following two theorems have similar proofs.
Theorem 4. If \(f\) and \(x_{0}\) satisfy conditions (A)-(D) and, moreover,
(i\(_3\)) \(f^{\prime}(x)<0\), \(\forall x\in[a,b]\);
(ii\(_3\)) \(f^{\prime\prime}(x)>0\), \(\forall x\in[a,b]\);
then the iterates of the Aitken-Newton method (10) satisfy
(j\(_3\)) \(x_{n}<y_{n}<z_{n}<x_{n+1}<x^{*}\), \(n=0,1,\ldots\);
(jj\(_3\)) \(\lim x_{n}=\lim y_{n}=\lim z_{n}=x^{*}\).
Theorem 5. If \(f\) and \(x_{0}\) satisfy conditions (A)-(D) and, moreover,

Table 1. Aitken-Newton iterates (10), \(f(x)=e^{2x}+\sin x-2\).

n  x_n                    y_n                    z_n                    f(x_n)
0  1                      5.932655378778493e-1   3.446691220304792e-1   6.2e+0
1  2.781136458347832e-1   2.739285803512798e-1   2.739153432766920e-1   1.8e-2
2  2.739153431449791e-1                                                 0
Table 2. Aitken-Newton iterates (10), \(f(x)=e^{x}-4x^{2}\).

n  x_n                    y_n                    z_n                    f(x_n)
0  1                      7.573293140767846e-1   7.161639906789638e-1   -1.2e+00
1  7.148090008114115e-1   7.148059123705082e-1   7.148059123627778e-1   -1.1e-05
2  7.148059123627779e-1                                                 -4.4e-16

(i\(_4\)) \(f^{\prime}(x)>0\), \(\forall x\in[a,b]\);
(ii\(_4\)) \(f^{\prime\prime}(x)<0\), \(\forall x\in[a,b]\),
then the iterates of (10) satisfy the conclusion of Theorem 4.
Remark 2. As we shall see in the numerical examples, even if the hypotheses of the above monotone convergence results are not fulfilled, the iterates may still converge (due to the local convergence Theorem 1), and sometimes even monotonically (Theorems 2-5 offer sufficient but not necessary conditions).

4. Numerical examples

In this section we shall consider some examples for which we run programs in Matlab, in double precision floating point arithmetic (this is the setting most encountered in practice). As we shall see, the high convergence orders and the monotone properties are properly reflected in this setting, and we do not need higher precision.

There are many optimal iterative methods having convergence order 8. We shall present only a few of them, for which we have found better convergence of the Aitken-Newton iterates in certain situations.

We begin with two examples, which show the rapid convergence of the Aitken-Newton iterates (10).
Example 1. Let

f(x)=e^{2x}+\sin x-2,\quad x\in[0,1].

We have: \(f(0)=-1\), \(f(1)=e^{2}+\sin 1-2>0\), \(f^{\prime}(x)=2e^{2x}+\cos x>0\), \(x\in[0,1]\), \(f^{\prime\prime}(x)=4e^{2x}-\sin x>0\), \(f^{\prime\prime\prime}(x)=8e^{2x}-\cos x\) and

E_{f}(x)>32e^{4x}-30e^{2x}+1>0,\quad\forall x>0

Taking \(x_{0}=1\), Theorem 2 applies and we obtain decreasing sequences \(x_{n},y_{n},z_{n}\), as can be seen in Table 1.
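The hypotheses of Theorem 2 for this example can also be verified numerically on a grid; a small sketch (the grid check is our illustration, not a substitute for the analytic bounds above):

```python
import math

# Verify, on a grid over [0, 1], the hypotheses used for Example 1:
# f' > 0, f'' > 0, E_f = 3(f'')^2 - f' f''' > 0, and the Fourier
# condition f(x0) f''(x0) > 0 at x0 = 1.
f    = lambda x: math.exp(2*x) + math.sin(x) - 2
fp   = lambda x: 2*math.exp(2*x) + math.cos(x)
fpp  = lambda x: 4*math.exp(2*x) - math.sin(x)
fppp = lambda x: 8*math.exp(2*x) - math.cos(x)

grid = [i / 1000 for i in range(1001)]
assert all(fp(x) > 0 for x in grid)                      # (i_1)
assert all(fpp(x) > 0 for x in grid)                     # (ii_1)
assert all(3*fpp(x)**2 - fp(x)*fppp(x) > 0 for x in grid)  # (C)
assert f(1.0) * fpp(1.0) > 0                             # (D) at x0 = 1
```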
Example 2. Consider

f(x)=e^{x}-4x^{2},\quad x\in\left[\tfrac{1}{2},1\right].

We have \(f(\frac{1}{2})=\sqrt{e}-1>0\), \(f(1)=e-4<0\), \(f^{\prime}(x)=e^{x}-8x<0\), \(f^{\prime\prime}(x)=e^{x}-8<0\), \(f^{\prime\prime\prime}(x)=e^{x}\) and \(E_{f}(x)=2e^{2x}-48e^{x}+8xe^{x}+192>0\), \(x\in[\frac{1}{2},1]\).

Taking \(x_{0}=1\), Theorem 3 applies and we obtain the results presented in Table 2.
Next, we present two examples and compare the studied method to other methods. In order to obtain smaller tables, we used the format short command in Matlab. It is worth mentioning that such a choice may lead to results that appear to be integers while they are not (e.g., the value of \(y_{4}\) in Table 6 is shown as 2, although then \(f(y_{4})\) should be 0); the explanation resides in the rounding made in the conversion process.

Example 3. Consider the following equation (see, e.g., [6])

f(x)=e^{x}\sin x+\ln\left(x^{2}+1\right),\quad x^{*}=0.

The largest interval on which to study the monotone convergence of our method by Theorems 2-5 is \([a,b]:=[x^{*},1.54\ldots]\), since \(f^{\prime\prime}\) vanishes at \(b\) (being positive on \([a,b]\)). The Fourier condition (D) holds on \([a,b]\) (and does not hold for \(x<a\)), \(E_{f}(x)>0\) on \([a,b]\), while the derivatives \(f^{\prime},f^{\prime\prime}\) are positive on this interval. The conclusions of Theorem 2 apply.

The Aitken-Newton method leads to the following results, presented in Table 3.

Table 3. Aitken-Newton iterates, \(f(x)=e^{x}\sin x+\ln(x^{2}+1)\).

n  x_n         f(x_n)      y_n         f(y_n)      z_n         f(z_n)
0  1.54        5.8778      0.51233     1.0513      0.17152     0.2316
1  0.048016    0.052662    0.0039166   0.0039473   3.0245e-05  3.0246e-05
2  3.4821e-09  3.4821e-09  3.6375e-17  3.6375e-17  0           0
Table 4. Cordero-Torregrosa-Vassileva iterates, \(f(x)=e^{x}\sin x+\ln(x^{2}+1)\).

n  x_n          f(x_n)       y_n          f(y_n)      z_n          f(z_n)
0  1.49         5.592        0.51         1.0442      0.21795      0.31529
1  -0.36451     -0.12284     -0.87167     0.24508     -0.66891     0.052125
2  -0.57963     -0.017122    -0.60388     0.00048485  -0.60323     5.9483e-07
3  -0.60323     -1.9295e-12  -0.60323     0           -0.60323     0

0  1.48         5.535        0.50911      1.0414      0.21622      0.31202
1  -0.32703     -0.13002     -1.258       0.67839     -0.83324     0.20558
2  0.10514      0.12758      0.015884     0.01639     4.5216e-4    4.5257e-4
3  -6.6097e-07  -6.6097e-07  8.7366e-13   8.7366e-13  -2.1176e-22  -2.1176e-22
Table 5. Kou-Wang iterates, \(f(x)=e^{x}\sin x+\ln(x^{2}+1)\).

n  x_n         f(x_n)      y_n         f(y_n)      z_n          f(z_n)
0  1.49        5.592       0.51        1.0442      0.21795      0.31529
1  -0.26375    -0.13301    2.4987      9.2742      1.1273       3.6088
2  -189.6439   10.4903     805.0965    Inf         NaN          NaN

0  1.48        5.535       0.50911     1.0414      0.21622      0.31202
1  -0.23421    -0.13021    0.68334     1.6336      0.24215      0.36247
2  1.1187      3.5648      0.41752     0.7763      0.14701      0.19107
3  -0.015069   -0.014616   4.8068e-4   4.8114e-4   4.2097e-07   4.2097e-07
4  7.7382e-13  7.7382e-13  1.7964e-24  1.7964e-24  -2.7804e-36  -2.7804e-36

The convergence may not be very fast for initial approximations away from the solution.
It is worth noting that the method converges for \(-0.3\leq x_{0}<x_{1}^{*}\) too (local convergence near \(x^{*}=0\) assured by Theorem 1), even though the Fourier condition does not hold. For \(x_{0}=-0.3\) one obtains \(x_{1}\approx-0.25<0\), then \(y_{1}\approx 1.7>0\) and the rest of the iterates remain positive, converging monotonically to \(x_{1}^{*}\). For \(x_{0}=-0.4\), the method converges to another solution of the equation, \(x_{2}^{*}=-0.603\ldots\)

The optimal method introduced by Cordero et al. [8] has a smaller convergence domain for \(x_{0}>x_{1}^{*}\): the iterates converge for \(x_{0}=1.48\) (\(x_{4}=1.3741\)e-32), while for \(x_{0}=1.49\) the iterates jump over \(x_{1}^{*}\) and converge to \(x_{2}^{*}\), as can be seen in Table 4. By checking all the initial approximations between 0 and 1.48 with step 0.001, we found that the convergence domain is even smaller, since taking 1.442 as initial approximation does not lead to convergence.

The Kou-Wang method (formula (25) in [9]) converges for \(x_{0}=1.48\) (\(x_{4}=0\)) and diverges for \(x_{0}=1.49\), as can be seen in Table 5.

Example 4. Consider the following equation (see, e.g., [14])

f(x)=(x-2)\left(x^{10}+x+1\right)e^{-x-1},\quad x^{*}=2.

The largest interval on which to study the monotone convergence of our method by Theorems 2-5 is \([a,b]:=[x^{*},7.9\ldots]\), since \(f^{\prime\prime}\) vanishes at \(b\) (being positive on \([a,b]\)).

The Fourier condition (D) holds on \([a,b]\) (and does not hold for \(x<a\)), \(E_{f}(x)>0\) on \([a,b]\), while both derivatives \(f^{\prime}\), \(f^{\prime\prime}\) are positive on this interval. The conclusions of Theorem 2 apply.

It is interesting to note that in [14, Remark 6] Petković observed that the methods studied there have a small convergence domain to the left of the solution: the choice of \(x_{0}=1.8\) caused a bad convergence behavior of those iterates. We believe that this behavior may be explained by the fact that the derivative of \(f\) vanishes at \(x=1.78\ldots\)

The Aitken-Newton method leads to the results presented in Table 6. The iterates converge even for \(x_{0}>7.9\) (and to the left of the solution as well, but only for \(x_{0}\) higher than 1.72). Of course, the convergence may not be very fast when the initial approximations are away from the solution.

The optimal method introduced by Cordero et al. [8] converges to \(x^{*}\) for \(x_{0}=6.46\) but not for \(x_{0}=6.47\), as shown in Table 7.

The optimal method introduced by Liu and Wang (formula (18) in [10]) converges to \(x^{*}=2\) for \(x_{0}=2.359\) (it needs 5 iterates), but for \(x_{0}=2.36\) it converges to another solution, \(x_{1}^{*}=1512.626\ldots\) The results are presented in Table 8.

Table 6. Aitken-Newton iterates, \(f(x)=(x-2)(x^{10}+x+1)e^{-x-1}\).

n  x_n     f(x_n)       y_n     f(y_n)      z_n     f(z_n)
0  7.9     761907.1334  5.6028  148982.786  4.6615  44837.6641
1  4.0818  16594.4155   3.5637  5385.3696   3.1548  1769.5473
2  2.8568  655.665      2.5841  215.3342    2.3658  69.4249
3  2.2125  24.0727      2.0909  6.6087      2.0232  1.3004
4  2.0026  0.13254      2       0.0013264   2       1.3712e-07
5  2       -1.1353e-14  2       0
Table 7. Cordero, Torregrosa and Vassileva iterates, \(f(x)=(x-2)(x^{10}+x+1)e^{-x-1}\).

n  x_n       f(x_n)       y_n     f(y_n)      z_n     f(z_n)
0  6.46      324955.0041  5.165   89878.025   4.3634  27700.9456
1  1.8472    -4.1246      2.3095  48.8543     2.0877  6.3062
2  1.9112    -3.155       2.053   3.3344      2.0049  0.25362
3  1.9998    -0.0091403   2       6.5281e-06  2       1.9074e-12
4  2         0

0  6.47      327469.2878  5.1701  90456.4068  4.3678  27912.268
1  1.8136    -4.336       2.9391  879.3346    2.3777  74.4975
7  -56.4878  -2.424e+43
Table 8. Liu-Wang iterates, \(f(x)=(x-2)(x^{10}+x+1)e^{-x-1}\).

n  x_n         f(x_n)   y_n     f(y_n)      z_n     f(z_n)
0  2.359       66.6580  2.1929  20.3973     2.0620  4.0392
1  1.8237      -4.2901  2.6406  277.0720    2.2353  28.8601
2  2.7206      387.8421 2.4745  126.5702    2.2432  30.6641
3  2.1785      17.9263  2.0697  4.6753      2.0103  0.5496
4  2.0009      0.0449   2.0000  1.5555e-04  2.0000  1.0885e-09
5  2           0

0  2.36        67.0603  2.1937  20.5289     2.0624  4.0706
1  1.7919      -4.3898  5.5436  1.3980e+05  3.6678  6.9018e+03
2  1.5126e+03  0
Table 9. Petković-Euler-like iterates, \(f(x)=(x-2)(x^{10}+x+1)e^{-x-1}\).

n  x_n                f(x_n)
0  2.15               13.5861
1  2.0016             0.082608
2  2                  0

0  2.16               15.0282
1  2.0053-0.0052421i  0.27199-0.2795i
2  2+1.7364e-15i      1.0672e-12+8.8785e-14i
3  2                  0

Among the optimal methods in [14] (the methods with convergence orders higher than 8 were corrected in a subsequent paper), the modified Ostrowski and Maheshwari methods behave very well for this example (we have studied the convergence only to the right of the solution). The modified Euler-like method has a small domain of convergence (in \(\mathbb{R}\)), since it converges to \(x^{*}\) for \(x_{0}=2.15\), while for \(x_{0}=2.16\) it generates square roots of negative numbers. Matlab has the feature of implicitly dealing with complex numbers, and the iterates finally converge (in \(\mathbb{C}\)) to the solution (see Table 9).

Conclusions

The Aitken-Newton method studied in this paper presents, under certain circumstances, some advantages over the optimal methods: the obtained sufficient conditions for guaranteed convergence may theoretically lead to larger convergence domains (especially sided convergence intervals) than estimates of attraction balls do, while a few examples showed that the attraction domain of the method is larger than for some optimal methods.

References

[1] S. Amat, J. Blanda, S. Busquier, A Steffensen type method with modified functions, Riv. Mat. Univ. Parma 7 (2007) 125-133.
[2] S. Amat, S. Busquier, J.A. Ezquerro, M.A. Hernández-Verón, A Steffensen type method of two steps in Banach spaces with applications, J. Comput. Appl. Math. 291 (2016) 317-331.
[3] I.K. Argyros, A new convergence theorem for the Steffensen method in Banach space and applications, Rev. Anal. Numér. Théor. Approx. 29 (1) (2000) 119-127.
[4] I.K. Argyros, S. Hilout, An improved local convergence analysis for Newton-Steffensen-type method, J. Appl. Math. Comp. 32 (1) (2010) 111-118.
[5] E. Cătinaş, On some Steffensen-type iterative method for a class of nonlinear equations, Rev. Anal. Numér. Théor. Approx. 24 (1-2) (1995) 37-43.
[6] C. Chun, Certain improvements of Chebyshev-Halley methods with accelerated fourth-order convergence, Appl. Math. Comput. 189 (1) (2007) 597-601.
[7] A. Cordero, J.L. Hueso, E. Martinez, J.R. Torregrosa, Generating optimal derivative free iterative methods for nonlinear equations by using polynomial interpolations, Math. Comput. Model. 57 (7-8) (2013) 1950-1956.
[8] A. Cordero, J.R. Torregrosa, M.P. Vassileva, A family of modified Ostrowski’s methods with optimal eighth order of convergence, Appl. Math. Lett. 24 (12) (2011) 2082-2086.
[9] J. Kou, X. Wang, Some improvements of Ostrowski’s method, Appl. Math. Lett. 23 (1) (2010) 92-96.
[10] L. Liu, X. Wang, Eighth-order methods with high efficiency index for solving nonlinear equations, Appl. Math. Comput. 215 (9) (2010) 3449-3454.
[11] I. Păvăloiu, Approximation of the root of equations by Aitken-Steffensen-type monotonic sequences, Calcolo 32 (1-2) (1995) 69-82.
[12] I. Păvăloiu, E. Cătinaş, Bilateral approximation for some Aitken-Steffensen-Hermite type method of order three, Appl. Math. Comput. 217 (12) (2011) 5838-5846.
[13] I. Păvăloiu, E. Cătinaş, On an Aitken-Newton type method, Numer. Algorithms 62 (2) (2013) 253-260.
[14] M.S. Petković, On a general class of multipoint root-finding methods of high computational efficiency, SIAM J. Numer. Anal. 47 (6) (2010) 4402-4414.
[15] J.R. Sharma, A composite third order Newton-Steffensen method for solving nonlinear equations, Appl. Math. Comput. 169 (1) (2005) 242-246.
[16] F. Soleymani, S.K. Vanani, Optimal Steffensen-type method with eighth order of convergence, Comput. Math. Appl. 62 (12) (2011) 4619-4626.

2016
