
Accelerating the convergence of Newton-type iterations

T. Zhanlav§, O. Chuluunbaatar§,¶, V. Ulziibayar§,‖

January 2, 2017.

§ Institute of Mathematics, National University of Mongolia, Mongolia, e-mail: tzhanlav@yahoo.com

¶ Joint Institute for Nuclear Research, Dubna, 141980 Moscow region, Russia, e-mail: chuka@jinr.ru

‖ School of Applied Sciences, Mongolian University of Science and Technology, Mongolia, e-mail: ulzii@jinr.ru

In this paper, we present a new accelerating procedure to speed up the convergence of Newton-type methods. In particular, we derive iterations with high and optimal orders of convergence. The technique can be applied to any iteration with a low order of convergence. As expected, the convergence of the proposed methods is remarkably fast. The effectiveness of the technique is illustrated by numerical experiments.

MSC. 65H05

Keywords. Newton-type iterations, accelerating procedure, convergence order, efficiency index

1 Introduction

As is known, monotone approximations to the solutions of nonlinear equations in $\mathbb{R}$ are interesting not only from the theoretical, but also from the practical point of view. In particular, two-sided approximations can be efficiently used as a posteriori estimates for the error in approximating the desired solution. This means that one can control the error at each iteration step. In the last decade, many authors have developed new monotone iterative methods [9, 19, 18]. The main advantage of monotone iterations is that they do not require good initial approximations, contrary to what occurs in other iterative methods, such as secant-like methods, Newton's method and others [4]. On the other hand, accelerating the convergence of iterative methods is also of interest from both the theoretical and computational points of view [1, 2, 3, 10, 12, 14, 5, 11, 13, 16]. For example, in [4] a family of predictor-corrector iterative methods was constructed from the simplified secant method and a family of secant-like methods; the authors analyzed the initial conditions on the starting point in order to improve the semilocal convergence of the method. In general, it is desirable to choose the starting point from the convergence domain [15, 16, 19].

In recent years, many iterative methods for solving nonlinear equations have been developed [1, 2, 3, 10, 12, 14, 5, 11, 13, 16] to improve the local order of convergence of methods such as Newton's, Ostrowski's and Potra–Pták's. The most efficient methods studied in the literature are the optimal eighth-order methods with four function evaluations per iteration; see [1, 2, 3, 10, 12] and the references therein. The methods developed in [1, 12, 7] are based on the optimal Ostrowski or King method and use arbitrary real parameters and weight functions. The methods proposed in [2, 3, 10] are obtained by composing an iterative method proposed by Chun, or the Potra–Pták method, with Newton's method.

In this paper we propose a new accelerating procedure for Newton-type methods. By virtue of this procedure, we obtain higher order, and in particular optimal order, methods. The optimal choice of the parameter allows us to improve the convergence speed. This may also be helpful in extending the domain of convergence.

The paper is organized as follows. Section 2 describes monotone and two-sided approximations. In Section 3 we present the accelerating procedure and establish the convergence order of the newly proposed methods. Section 4 is devoted to finding the optimal parameter in the proposed iterations. Finally, Section 5 presents various numerical examples which confirm the theoretical results, together with a numerical comparison with other existing optimal order methods.

2 Statement of the problems

Let $a, b \in \mathbb{R}$, $a < b$, and $f : [a,b] \to \mathbb{R}$, and consider the following nonlinear equation:

$f(x) = 0. \qquad (2.1)$

Assume that $f(x) \in C^4[a,b]$, $f'(x) \neq 0$ for $x \in (a,b)$, and that Eq. (2.1) has a unique root $x^* \in (a,b)$. In [19, 18] the following iterations were proposed:

$x_{2n+1} = x_{2n} - \tau_n\dfrac{f(x_{2n})}{f'(x_{2n})}, \qquad (2.2a)$

$x_{2n+2} = x_{2n+1} - \dfrac{f(x_{2n+1})}{f'(x_{2n+1})}, \qquad n = 0, 1, \ldots \qquad (2.2b)$

In [19] it is shown that the iterations (2.2a) and (2.2b) are monotonically convergent under the conditions

$0 < \tau_n \leq 1, \qquad a_n = \frac{M_2\,|f(x_n)|}{(f'(x_n))^2} < \frac{1}{2}, \qquad M_2 = \sup_{x \in U_r(x^*)}|f''(x)|, \qquad (2.4)$

and under the assumption H: $f'(x) \neq 0$ and $f''(x)$ preserves its sign in the small neighborhood $U_r(x^*) = \{x : |f(x)| < r\}$. However, the iterations (2.2a) and (2.2b) are not equipped with a suitable choice of the parameter $\tau_n$. In [18] it was shown that the iterations (2.2a) and (2.2b) have a two-sided approximation behavior under the conditions

$\tau_n \in I_{2n} = \Big[\frac{1 - \sqrt{1 - 2a_{2n}}}{a_{2n}},\ \frac{-1 + \sqrt{1 + 4a_{2n}}}{a_{2n}}\Big) \subset (1, 2), \qquad a_{2n} < \frac{4}{9}.$

It is also proved that the convergence rate of these iterations is at least 2, and that the order of convergence increases up to 4 as $\tau_n \to 1$. From this it is clear that accelerating the convergence of the iterations (2.2a) and (2.2b) is important, especially at the early stage of the iterations.

3 Monotone and two-sided convergence of iterations and their acceleration

If $\tau_n \equiv 1$, then the iterations (2.2a) and (2.2b) reduce to Newton's method:

$x_{n+1} = x_n - \dfrac{f(x_n)}{f'(x_n)}, \qquad n = 0, 1, \ldots \qquad (3.6)$

According to [19], the iteration (3.6) is monotonically convergent under condition (2.4) and assumption H.

Let $\theta_n = \frac{f(x_{n+1})}{f(x_n)}$. Then the Taylor expansion of $f(x_{n+1})$ gives

$0 < \theta_n \leq \frac{a_n}{2} < \frac{1}{4}. \qquad (3.7)$

Now we proceed to accelerate the convergence of the monotone iteration (3.6). To this end, we use two known approximations $x_n$, $x_{n+m}$ satisfying either $x_n < x_{n+m} < x^*$ or $x^* < x_{n+m} < x_n$, and consider

$y_n = x_n + t(x_{n+m} - x_n), \qquad t > 1. \qquad (3.8)$

From (3.8) it is clear that $y_n$ belongs to the interval connecting $x_n$ and $x_{n+m}$ under the condition $0 \leq t \leq 1$. Hence, the extrapolation approach corresponds to the case $t > 1$. Our aim is to find the optimal value $t = t_{\mathrm{opt}}$ in (3.8) such that the new approximation $y_n$ given by (3.8) is situated closer to $x^*$ than $x_n$ and $x_{n+m}$. We use the Taylor expansion of the smooth function $f(x) \in C^{k+1}[a,b]$:

$f(y_n) = f(x_n) + f'(x_n)t(x_{n+m} - x_n) + \cdots + \frac{f^{(k)}(x_n)}{k!}\,t^k(x_{n+m} - x_n)^k + O\big((x_{n+m} - x_n)^{k+1}\big). \qquad (3.9)$

Neglecting the small term $O\big((x_{n+m} - x_n)^{k+1}\big)$ in (3.9), we have

$f(y_n) \approx f(x_n) + f'(x_n)t(x_{n+m} - x_n) + \cdots + \frac{f^{(k)}(x_n)}{k!}\,t^k(x_{n+m} - x_n)^k \equiv P_k(t). \qquad (3.10)$

From (3.10) it is clear that

$f(x_n) = P_k(0). \qquad (3.11)$

We also require that

$f(x_{n+m}) = P_k(1). \qquad (3.12)$

From (3.12) we find $\frac{f^{(k)}(x_n)}{k!}(x_{n+m} - x_n)^k$, and substituting it into (3.10) we get $P_k(t)$. From this we find $t > 1$ such that

$f(y_n) \approx P_k(t) = 0. \qquad (3.13)$

Geometrically, (3.10) means that the graph of $f(x)$ in the vicinity of the root $x^*$ is replaced by the curve $P_k(t)$ passing through the given points $(x_n, f(x_n))$ and $(x_{n+m}, f(x_{n+m}))$. Thus, Eq. (3.10) for $k = 0, 1, 2$ and $3$ gives us

$P_0(t) = f(x_n) \approx f(y_n), \qquad y_n = x_n, \qquad (3.14)$

$P_1(t) = f(x_n) + \big(f(x_{n+m}) - f(x_n)\big)t, \qquad (3.15)$

$P_2(t) = f(x_n) + f'(x_n)(x_{n+m} - x_n)t + \big(f(x_{n+m}) - f(x_n) - f'(x_n)(x_{n+m} - x_n)\big)t^2, \qquad (3.16)$

$P_3(t) = f(x_n) + f'(x_n)(x_{n+m} - x_n)t + \frac{f''(x_n)}{2}(x_{n+m} - x_n)^2t^2 + \Big(f(x_{n+m}) - f(x_n) - f'(x_n)(x_{n+m} - x_n) - \frac{f''(x_n)}{2}(x_{n+m} - x_n)^2\Big)t^3, \qquad (3.17)$

respectively. Thus, the parameter $t$ in (3.8) is calculated as a root greater than 1 of Eq. (3.13). In particular, for $k = 1$ we have

$t_{\mathrm{opt}} = \frac{f(x_n)}{f(x_n) - f(x_{n+m})} > 1. \qquad (3.18)$
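The $k = 1$ rule is a one-line computation. The following minimal sketch (in Python; the routine and its name are ours, not part of the paper) assumes a one-sided pair, i.e. $f(x_n)$ and $f(x_{n+m})$ of the same sign with $|f(x_{n+m})| < |f(x_n)|$, so that $t_{\mathrm{opt}} > 1$:

    # k = 1 acceleration: t_opt is the root of the linear interpolant P_1(t)
    # through (0, f(x_n)) and (1, f(x_{n+m})); see (3.15) and (3.18).
    def extrapolate_k1(xn, fn, xnm, fnm):
        t_opt = fn / (fn - fnm)          # (3.18); t_opt > 1 for a one-sided pair
        return xn + t_opt * (xnm - xn)   # y_n from (3.8)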

Since $P_k(0)P_k(1) = f(x_n)f(x_{n+m}) > 0$ for $k \geq 1$, Eq. (3.13) may have at least one root satisfying the condition $t > 1$. From (3.7) it follows that

$|f(x_{n+1})| < \tfrac{1}{4}\,|f(x_n)|. \qquad (3.19)$

Therefore, it is desirable to choose $n$ and $m$ such that

$|f(x_{n+m})| < \big(\tfrac{1}{4}\big)^m|f(x_n)| \leq 0.1.$

This inequality is written in terms of $P_k(t)$ as

$|P_k(1)| \approx \big(\tfrac{1}{4}\big)^m|P_k(0)| < 0.1. \qquad (3.20)$

On the other hand, from (3.10) we see that $P_k(1)$ is not equal to $0$ under assumption H, i.e., $t = 1$ is not a root of $P_k(t)$. Moreover, $P_k(t)$ is decreasing around $t = 1$. Therefore, there exists $t_{\mathrm{opt}} > 1$ such that $P_k(t_{\mathrm{opt}}) = 0$.

Lemma 3.1

Assume that $f \in C^4[a,b]$, assumption H is satisfied and

$|x^* - x_n| = \varepsilon_n < 1. \qquad (3.22)$

Then the following holds:

$t_{\mathrm{opt}} - 1 = O(\varepsilon_n). \qquad (3.23)$
Proof.
First of all, let us note that inequality (3.22) is equivalent to

$|f(x_n)| = O(\varepsilon_n), \qquad (3.24)$

which follows from the expansion

$0 = f(x^*) = f(x_n) + f'(\xi)(x^* - x_n).$

Of course, $|x^* - x_{n+m}| < \varepsilon_n$ and $|x_{n+m} - x_n| < \varepsilon_n$ under (3.22).

We also use an analogous expansion for $P_k(t)$:

$0 = P_k(t_{\mathrm{opt}}) = P_k(1) + P_k'(\eta)(t_{\mathrm{opt}} - 1), \qquad \eta \in (1, t_{\mathrm{opt}}). \qquad (3.25)$

Since $P_k(t)$ is decreasing around $t = 1$, we have $P_k'(\eta) \neq 0$.

Hence, from (3.24), (3.25) and (3.20) we conclude that

$t_{\mathrm{opt}} - 1 = -\frac{f(x_{n+m})}{P_k'(\eta)} = O(\varepsilon_n). \qquad (3.26)$

The Lemma is proved.


Note that in [16] the following iteration was proposed:

$x_{2n+1} = x_{2n} - \tau_n\frac{f(x_{2n})}{f'(x_{2n})},$

$x_{2n+2} = x_{2n+1} - \frac{x_{2n+1} - x_{2n}}{f(x_{2n+1}) - f(x_{2n})}\,f(x_{2n+1}), \qquad n = 0, 1, \ldots,$

which has a third order of convergence when $\tau_n = 1$ or $\tau_n$ tends to 1. It is easy to show that this iteration coincides fully with our acceleration procedure (3.8), (3.18) with $m = 1$ and $k = 1$. Therefore, one can expect a higher acceleration with $k = 2, 3$ for Newton's method.

How can the convergence rate of iteration (3.6) be accelerated? The answer to this question is given by the following theorem.

Theorem 3.2

Assume $f(x) \in C^{k+2}$ and that condition (3.22) is satisfied. Then for $y_n$ with $t_{\mathrm{opt}}$ we have

$|x^* - y_n| \leq |x^* - x_n|^{k+2}\,O(1), \qquad (3.29)$

where O is the Landau symbol.

Proof. Let

$x^* = x_n + t(x_{n+m} - x_n), \quad t \geq 1, \qquad y_n = x_n + t_{\mathrm{opt}}(x_{n+m} - x_n). \qquad (3.30)$

We use the Taylor expansions of $f(x) \in C^{k+2}$:

$0 = f(x^*) = \sum_{p=0}^{k}\frac{f^{(p)}(x_n)}{p!}\,t^p(x_{n+m} - x_n)^p + \frac{f^{(k+1)}(\eta_n)}{(k+1)!}\,t^{k+1}(x_{n+m} - x_n)^{k+1}, \qquad (3.31)$

$f(x_{n+m}) - \sum_{p=0}^{k-1}\frac{f^{(p)}(x_n)}{p!}(x_{n+m} - x_n)^p = \frac{f^{(k)}(x_n)}{k!}(x_{n+m} - x_n)^k + \frac{f^{(k+1)}(\xi_n)}{(k+1)!}(x_{n+m} - x_n)^{k+1},$

and

$0 = P_k(t_{\mathrm{opt}}) = \sum_{p=0}^{k-1}\frac{f^{(p)}(x_n)}{p!}\,t_{\mathrm{opt}}^p(x_{n+m} - x_n)^p + \Big(f(x_{n+m}) - \sum_{p=0}^{k-1}\frac{f^{(p)}(x_n)}{p!}(x_{n+m} - x_n)^p\Big)t_{\mathrm{opt}}^k, \qquad (3.32)$

where $\eta_n \in (x_n, x^*)$ and $\xi_n \in (x_n, x_{n+m})$. Using (3.31) in (3.32) and subtracting (3.32) from (3.31), we get

$\Big[f'(x_n) + \frac{f''(x_n)}{2}(x_{n+m} - x_n)(t + t_{\mathrm{opt}}) + \frac{f'''(x_n)}{6}(x_{n+m} - x_n)^2\big(t^2 + t\,t_{\mathrm{opt}} + t_{\mathrm{opt}}^2\big) + \cdots + \frac{f^{(k)}(x_n)}{k!}\big(t^{k-1} + t^{k-2}t_{\mathrm{opt}} + \cdots + t_{\mathrm{opt}}^{k-1}\big)(x_{n+m} - x_n)^{k-1}\Big](t - t_{\mathrm{opt}}) = -\frac{(x_{n+m} - x_n)^k}{(k+1)!}\Big(f^{(k+1)}(\eta_n)t^{k+1} - f^{(k+1)}(\xi_n)t_{\mathrm{opt}}^k\Big). \qquad (3.33)$

Since $f'(x_n) \neq 0$, from the last expression we deduce that

$t - t_{\mathrm{opt}} = O(\varepsilon_n^k). \qquad (3.34)$

It is possible to derive a more precise estimate than (3.34). Indeed, using (3.34) and $f \in C^{k+2}$, we evaluate

$A_n = f^{(k+1)}(\eta_n)t^{k+1} - f^{(k+1)}(\xi_n)t_{\mathrm{opt}}^k = f^{(k+1)}(\xi_n)\big(t^{k+1} - t_{\mathrm{opt}}^k\big) + f^{(k+2)}(\omega_n)(\eta_n - \xi_n)t^{k+1}. \qquad (3.35)$

By definition we have

$|\eta_n - \xi_n| \leq |x^* - x_n| = \varepsilon_n.$

Using (3.23) and (3.34) we have

$t^{k+1} - t_{\mathrm{opt}}^k = \big(t_{\mathrm{opt}} + O(\varepsilon_n^k)\big)^{k+1} - t_{\mathrm{opt}}^k = t_{\mathrm{opt}}^k\Big(t_{\mathrm{opt}}\big(1 + O(\varepsilon_n^k)\big)^{k+1} - 1\Big) = t_{\mathrm{opt}}^k\big(t_{\mathrm{opt}} + O(\varepsilon_n^k) - 1\big) = O(\varepsilon_n).$

Then $A_n = O(\varepsilon_n)$, and thereby from (3.33) we get

$t - t_{\mathrm{opt}} = O(\varepsilon_n^{k+1}). \qquad (3.37)$

Hence, from (3.30) and (3.31) we find that

$x^* - y_n = (t - t_{\mathrm{opt}})(x_{n+m} - x_n) = O(\varepsilon_n^{k+2}), \qquad (3.38)$

which proves (3.29).

The sequence $\{y_n\}$ given by formula (3.8) can be considered as a new iteration, for which we have the following:

Theorem 3.3

Assume $f(x) \in C^{k+1}$ and that the convergence order of iteration (3.6) equals 2, i.e., the following holds:

$|x^* - x_n| \leq Mq^{2^n}|x^* - x_0|, \qquad 0 < q < 1, \quad M = \mathrm{const}. \qquad (3.39)$

If equation (3.13) has at least one root $t_{\mathrm{opt}}$ greater than 1, then the convergence order of the new iteration (3.8) is the same as that of (3.6), and we have

$|x^* - y_n| \leq M_1q_1^{2^n}|x^* - y_0|, \qquad 0 < q_1 < q < 1. \qquad (3.40)$

Proof.

By virtue of (3.39), condition (3.22) is satisfied for large $n$. Then, by Theorem 3.2, relation (3.29) holds. Using (3.39) in (3.29), we get

$|x^* - y_n| \leq C\big(q^{2^n}\big)^{k+2}|x^* - x_0|^{k+2} = C\big(q^{k+2}\big)^{2^n}|x^* - x_0|^{k+2} = Cq_1^{2^n}|x^* - x_0|^{k+2} \leq M_1q_1^{2^n}|x^* - y_0|, \qquad q_1 = q^{k+2} < q < 1.$

The proof is completed.

Theorem 3.3 shows that the convergence order of iteration (3.8) is the same as that of iteration (3.6).

However, the speed of convergence of these iterations depends on the factors $q$ and $q_1$ in (3.39) and (3.40), respectively. Since $q_1 = q^{k+2} < q$ for $k = 1, 2, 3$, one can expect more rapid convergence of iteration (3.8). Of course, the highest acceleration is attained at $k = 3$.

From (3.39) and (3.40) it is clear that iteration (3.8) converges to $x^*$ more rapidly than iteration (3.6), by virtue of $q_1 = q^{k+2} < q$. This accelerating procedure is useful especially at the beginning of the iterations, but under condition (3.22). From Theorem 3.3 it is clear that the sequence $\{y_n\}$ given by (3.8), together with (3.6), can be considered as a new iteration process with a smaller factor compared to (3.6). The acceleration is achieved without additional calculations, so the iteration (3.8) possesses a high computational efficiency. However, although the sequence $x_n$ is monotone, the new iteration (3.8) may not be monotone. For instance, when $k = 1$ it is easy to show that

$f(y_n) = \frac{f''(\xi_n)}{2}\,t_{\mathrm{opt}}(t_{\mathrm{opt}} - 1)(x_{n+m} - x_n)^2.$

From this it is clear that

$f(y_n) > 0 \quad \text{if} \quad f''(x) > 0,$

and

$f(y_n) < 0 \quad \text{if} \quad f''(x) < 0.$

Now suppose that we know two successive approximations $x_n$ and $x_{n+1}$ for which

$f(x_n)f(x_{n+1}) < 0 \qquad (3.44)$

holds. We consider

$y_n = x_n + t(x_{n+1} - x_n), \qquad 0 \leq t \leq 1. \qquad (3.45)$

The acceleration technique is the same as in the previous case, with $m = 1$. In this case, according to (3.45), we have $P_k(0)P_k(1) = f(x_n)f(x_{n+1}) < 0$ for $k = 1, 2, 3$. Hence, Eq. (3.13) has a root $t_{\mathrm{opt}} \in (0, 1)$. Obviously, the new approximation

$y_n = x_n + t_{\mathrm{opt}}(x_{n+1} - x_n), \qquad 0 < t_{\mathrm{opt}} < 1,$

will be situated closer to $x^*$ than $x_n$ and $x_{n+1}$, and Theorem 3.2 holds true for this case, too. This indicates that two-sided approximations are useful not only for estimating roots, but also for finding them approximately with higher accuracy. Of course, the acceleration procedure (3.8) can be continued further with $x_{n+m} := y_n$, $x_n := x_{n+m}$, taking $t > 1$ if $y_n$ and $x_{n+m}$ are located on one side of $x^*$ and $t \in (0, 1)$ if they are located on opposite sides of the root. Note that the accelerating procedure (3.8) is applicable not only to iteration (3.6), but to any iteration, in particular to the iterations (A), (B), (C) and (D) below; a generic sketch follows.
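As an illustration of this remark, the same $k = 1$ formula covers both regimes; the short Python sketch below (our own naming, with hypothetical sample values) extrapolates when the pair is one-sided and interpolates when it is two-sided:

    # k = 1 acceleration of an arbitrary pair of approximations:
    # f(x_n) f(x_{n+m}) > 0 gives t_opt > 1 (extrapolation, (3.18)),
    # f(x_n) f(x_{n+1}) < 0 gives 0 < t_opt < 1 (interpolation, (3.44)-(3.45)).
    def accelerate(xn, fn, xnm, fnm):
        t_opt = fn / (fn - fnm)
        return xn + t_opt * (xnm - xn)

    # a two-sided pair bracketing x* = 1 for f(x) = x**2 - 1:
    y = accelerate(0.8, -0.36, 1.1, 0.21)   # t_opt ≈ 0.63, y ≈ 0.9895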

Now we consider the accelerated iteration

$(\mathrm{A})\qquad y_n = x_n - \frac{f(x_n)}{f'(x_n)}, \qquad x_{n+1} = x_n + t_{\mathrm{opt}}(y_n - x_n), \qquad n = 0, 1, \ldots \qquad (3.48)$

The iteration (A) is a damped Newton method [20, 17] with the optimal parameter $\tau_n = t_{\mathrm{opt}}$. The first substep $y_n$ is used for finding the optimal parameter.
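For concreteness, here is a minimal runnable sketch of iteration (A) with $k = 2$ (in Python; the code and its names are ours), using the optimal parameter from Eq. (4.94) of Section 4 and assuming $0 < \theta_n < \tfrac{1}{4}$ as in (3.7):

    import math

    # Iteration (A): Newton predictor y_n, then x_{n+1} = x_n + t_opt (y_n - x_n),
    # where t_opt solves theta t^2 - t + 1 = 0 with theta = f(y_n)/f(x_n).
    def iterate_A(f, df, x, steps=6):
        for _ in range(steps):
            fx = f(x)
            if fx == 0.0:
                break
            y = x - fx / df(x)
            theta = f(y) / fx
            t_opt = 2.0 / (1.0 + math.sqrt(max(1.0 - 4.0 * theta, 0.0)))  # (4.94)
            x = x + t_opt * (y - x)
        return x

    # Example 5.1(a): f(x) = exp(x) - 4 x^2, root x* ≈ 4.306584
    print(iterate_A(lambda x: math.exp(x) - 4.0 * x * x,
                    lambda x: math.exp(x) - 8.0 * x, 4.5))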

Theorem 3.4

Assume that the assumptions of Theorem 3.2 are fulfilled. Then the convergence order of iteration (A) with optimal $t_{\mathrm{opt}}$ is $d = k + 2$, $k = 1, 2, 3$, depending on the smoothness of $f$.

Proof. If we compare (A) with (3.6) and (3.8), then $x_{n+m} := y_n$ and $y_n := x_{n+1}$. Therefore, expression (3.29) of Theorem 3.2 takes the form

$\frac{|x^* - x_{n+1}|}{|x^* - x_n|^{k+2}} = O(1) \;\Longrightarrow\; |x^* - x_{n+1}| \leq M|x^* - x_n|^{k+2},$

which completes the proof of Theorem 3.4.

Now let us consider another three-step iteration:

$(\mathrm{B})\qquad y_n = x_n - \frac{f(x_n)}{f'(x_n)}, \qquad z_n = y_n - \frac{f(y_n)}{f'(x_n)}, \qquad x_{n+1} = y_n + t(z_n - y_n), \qquad n = 0, 1, \ldots$

Note that if $t \equiv 1$ in (B), then it reduces to

$(\mathrm{B}')\qquad y_n = x_n - \frac{f(x_n)}{f'(x_n)}, \qquad x_{n+1} = y_n - \frac{f(y_n)}{f'(x_n)}, \qquad n = 0, 1, \ldots \qquad (3.51)$

The iteration (B$'$) is a particular case of scheme (40) given in [16], with $\sigma = 0$ and $\tau = 1$, and has a third order of convergence. Therefore, the iteration (B) can be considered as an improvement of iteration (B$'$).
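Below is a minimal sketch of one (B) step (Python; ours), with the simplified $k = 2$ parameter of Section 4, i.e. Eq. (4.92) taken with $\theta_n = f(z_n)/f(y_n)$; a sketch for (C) is obtained by replacing $f'(x_n)$ with $f'(y_n)$ in the $z$-substep:

    import math

    # One step of iteration (B): two substeps with the frozen derivative f'(x_n),
    # then the accelerated step x_{n+1} = y_n + t_opt (z_n - y_n).
    def step_B(f, df, x):
        fx, dfx = f(x), df(x)
        y = x - fx / dfx
        fy = f(y)
        z = y - fy / dfx
        theta = f(z) / fy        # theta_n = f(z_n)/f(y_n), cf. (4.92) in Section 4
        t_opt = 2.0 / (1.0 + math.sqrt(max(1.0 - 4.0 * theta, 0.0)))
        return y + t_opt * (z - y)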

Theorem 3.5

Assume that the assumptions of Theorem 3.2 are fulfilled. Then the convergence order of iteration (B) with optimal $t_{\mathrm{opt}}$ equals $d = 2k + 3$.

Proof. If we compare (B) with (3.8), then $x_n := y_n$, $x_{n+m} := z_n$, $y_n := x_{n+1}$. Then from (3.33) and (3.24) we get

$t - t_{\mathrm{opt}} = MA_n(z_n - y_n)^k = O(\varepsilon_n^{2k+1}), \qquad (3.52)$

where

$x^* = y_n + t(z_n - y_n), \qquad x_{n+1} = y_n + t_{\mathrm{opt}}(z_n - y_n). \qquad (3.53)$

From this and from (3.52) we obtain

$x^* - x_{n+1} = (t - t_{\mathrm{opt}})(z_n - y_n) = O(\varepsilon_n^{2k+1}) \cdot O(\varepsilon_n^2) = O(\varepsilon_n^{2k+3}), \qquad (3.54)$

i.e., we have

$|x^* - x_{n+1}| \leq M_1|x^* - x_n|^{2k+3},$

which means that the convergence order of iteration (B) is equal to $d = 2k + 3$, $k = 1, 2, 3$.

From Theorem 3.5 we see that the convergence order of iteration (B) can be increased by two or four units relative to (B$'$) at the expense of only one or two additional evaluations of the function. Thus the order of convergence and the computational efficiency of the method are greatly improved.

In [5], Algorithm 2 was constructed:

$z_n = x_n - \frac{f(x_n)}{f'(x_n)}, \qquad x_{n+1} = z_n - H(x_n, y_n)\frac{f(z_n)}{f'(x_n)},$

and it was proved that its order of convergence equals 5, 6 or 7, depending on a suitable choice of the two-variable function $H(x_n, y_n)$. For comparison purposes we can rewrite iteration (B) as

$y_n = x_n - \frac{f(x_n)}{f'(x_n)}, \qquad x_{n+1} = y_n - t\,\frac{f(y_n)}{f'(x_n)}.$

We see that these two methods differ from one another only in the chosen factors $t$ and $H(x_n, y_n)$.

Now we consider the following iteration:

$(\mathrm{C})\qquad y_n = x_n - \frac{f(x_n)}{f'(x_n)}, \qquad z_n = y_n - \frac{f(y_n)}{f'(y_n)}, \qquad x_{n+1} = y_n + t(z_n - y_n), \qquad n = 0, 1, \ldots \qquad (3.55)$

The iteration (C) can be considered as an improvement of the iteration

$(\mathrm{C}')\qquad y_n = x_n - \frac{f(x_n)}{f'(x_n)}, \qquad x_{n+1} = y_n - \frac{f(y_n)}{f'(y_n)}, \qquad n = 0, 1, \ldots, \qquad (3.56)$

since if $t \equiv 1$ in (C), then it reduces to (C$'$).

In [16] it was proven that the convergence order of (C$'$) is four.

Theorem 3.6

Assume that the assumptions of Theorem 3.2 are fulfilled. Then the convergence order of iteration (C) with optimal $t_{\mathrm{opt}}$ equals $d = 2(k + 2)$, $k = 1, 2, 3$.

Proof. If we compare (C) with (3.8), then $x_n := y_n$, $x_{n+m} := z_n$, $y_n := x_{n+1}$. Therefore, expression (3.29) reads

$\frac{|x^* - x_{n+1}|}{|x^* - y_n|^{k+2}} = O(1) \;\Longrightarrow\; |x^* - x_{n+1}| \leq M|x^* - y_n|^{k+2}. \qquad (3.57)$

From (C) we find that

$x^* - y_n = x^* - x_n + \frac{f(x_n) - f(x^*)}{f'(x_n)}. \qquad (3.58)$

Substituting here the expansion of $f(x^*)$,

$f(x^*) = f(x_n) + f'(x_n)(x^* - x_n) + \frac{f''(\xi_n)}{2}(x^* - x_n)^2,$

we have

$|x^* - y_n| \leq \frac{|f''(\xi_n)|}{2|f'(x_n)|}\,|x^* - x_n|^2.$

Using the last estimate in (3.57), we obtain

$|x^* - x_{n+1}| \leq M_1|x^* - x_n|^{2(k+2)},$

which means that the convergence order of iteration (C) equals $d = 2(k + 2)$, $k = 1, 2, 3$.

Note that the iterations (A), (B) and (C) can be rewritten as a damped Newton method [20]:

$x_{n+1} = x_n - \tau_n\frac{f(x_n)}{f'(x_n)}, \qquad \tau_n = t_{\mathrm{opt}}, \qquad \tau_n = 1 + t_{\mathrm{opt}}\frac{f(y_n)}{f(x_n)}, \qquad \tau_n = 1 + t_{\mathrm{opt}}\frac{f(y_n)}{f(x_n)}\,\frac{f'(x_n)}{f'(y_n)}, \qquad (3.61)$

respectively. The unified representation (3.61) of the different iterations shows that the choice of the damping parameter $\tau_n$ in (3.61) essentially affects the convergence order. Of course, the parameter $\tau_n$ in (3.61) is defined in different ways, but in all cases $\tau_n \to 1$ as $n \to \infty$.

The speed of convergence of the sequence $\{\tau_n\}$ to unity differs for each iteration method. In [17] the following conjecture was proposed:

$|1 - \tau_n| \leq Mq^{\rho^n}, \qquad 0 < q < 1. \qquad (3.62)$

Now we consider the following three-point iterative method:

$(\mathrm{D})\qquad y_n = x_n - \frac{f(x_n)}{f'(x_n)}, \qquad z_n = x_n + \bar t(y_n - x_n), \qquad x_{n+1} = y_n + t(z_n - y_n), \qquad n = 0, 1, \ldots,$

where $\bar t$ and $t$ in (D) are parameters to be determined. We can formulate the following theorem.

Theorem 3.7

Assume that $f(x) \in C^4(a,b)$, that the initial approximation $x_0$ is sufficiently close to the zero $x^* \in (a,b)$, that the parameter $\bar t$ is chosen as a root of the equation

$\theta_n\bar t^{\,2} - \bar t + 1 = 0, \qquad \theta_n = \frac{f(y_n)}{f(x_n)}, \qquad (3.67)$

and that $t$ is a root of the equation

$\Psi(t, \alpha) = \alpha\Psi_1(t) + (1 - \alpha)\Psi_2(t) = 0, \qquad (3.68)$

where

$\Psi_1(t) = at^2 - \Big(a + \frac{f(x_n)}{f(y_n)}\big(f(z_n) - f(y_n)\big)\Big)t - f(x_n), \qquad a = \frac{2f(z_n)}{f(x_n)(1 - \bar t)^2},$

and

$\Psi_2(t) = \big((1 - \bar t)(2 - \bar t)f(x_n) - (2 - 3\bar t)f(z_n)\big)t + (1 - \bar t)\big(2f(z_n) - (2 - \bar t)f(x_n)\big).$

Then the three-point method (D) is of eighth order of convergence.

Proof. Using $z_n - y_n = (1 - \bar t)\frac{f(x_n)}{f'(x_n)}$ in the Taylor expansion

$f(x_{n+1}) = f(y_n) + f'(y_n)t(z_n - y_n) + \frac{f''(y_n)}{2}\,t^2(z_n - y_n)^2 + O\big((z_n - y_n)^3\big),$

we get

$f(x_{n+1}) = f(y_n) + t(1 - \bar t)\frac{f'(y_n)f(x_n)}{f'(x_n)} + \frac{f''(y_n)}{2}\,t^2(1 - \bar t)^2\frac{f^2(x_n)}{(f'(x_n))^2} + O(f^6(x_n)). \qquad (3.71)$

Analogously, the Taylor expansion of $f(x_{n+1})$ at the point $x = z_n$ gives

$f(x_{n+1}) = f(z_n) - (1 - t)(1 - \bar t)\frac{f'(z_n)f(x_n)}{f'(x_n)} + \frac{f''(z_n)}{2}(1 - t)^2(1 - \bar t)^2\frac{f^2(x_n)}{(f'(x_n))^2} + O\big((1 - t)^3f^6(x_n)\big). \qquad (3.72)$


Using $f'(z_n) = f'(y_n) + f''(y_n)(z_n - y_n) + O\big((z_n - y_n)^2\big)$ in the last expansion, we have

$f(x_{n+1}) = f(z_n) - (1 - t)(1 - \bar t)\frac{f'(y_n)f(x_n)}{f'(x_n)} - \frac{f''(y_n)f^2(x_n)}{2(f'(x_n))^2}(1 - t^2)(1 - \bar t)^2 + O(f^8(x_n)). \qquad (3.74)$

From (3.71) and (3.74) one can eliminate the term containing $\frac{f'(y_n)f(x_n)}{f'(x_n)}$. As a result, we have

$f(x_{n+1}) = tf(z_n) + (1 - t)f(y_n) - \frac{f''(y_n)f^2(x_n)}{2(f'(x_n))^2}(1 - \bar t)^2t(1 - t) + O(f^8(x_n)). \qquad (3.75)$

Note that in deriving (3.75) we keep in mind that

$1 - t = O(f^2(x_n)). \qquad (3.76)$

Further, using the Taylor expansion of $f(x) \in C^4$ at the point $y_n$, we obtain

$f''(y_n) = \frac{2(f'(x_n))^2}{f^2(x_n)\,\bar t(1 - \bar t)}\Big({-f(z_n)} + \bar t f(y_n) + (1 - \bar t)f(x_n)\Big) - \frac{f'''(y_n)}{3}\frac{f(x_n)}{f'(x_n)}(2 - \bar t) + O(f^2(x_n)). \qquad (3.77)$

The same technique gives us

$f''(y_n) = \frac{2\big(f(z_n) - (1 - \bar t)f(x_n)\big)(f'(x_n))^2}{\bar t^{\,2}f^2(x_n)} - \frac{f'''(y_n)}{3}\frac{f(x_n)}{f'(x_n)}(3 - \bar t) + O(f^2(x_n)). \qquad (3.78)$

From (3.77) and (3.78) one can eliminate the term containing $f'''(y_n)$. As a result, we obtain

$\frac{f''(y_n)f^2(x_n)}{2(f'(x_n))^2} = \frac{1}{\bar t^{\,2}(1 - \bar t)}\Big((3 - \bar t)\bar t\big({-f(z_n)} + \bar t f(y_n) + (1 - \bar t)f(x_n)\big) - (2 - \bar t)(1 - \bar t)\big(f(z_n) - (1 - \bar t)f(x_n)\big)\Big) + O(f^4(x_n)). \qquad (3.79)$

Substituting (3.79) into (3.75), we obtain

$f(x_{n+1}) = \Psi_1(t) + O(f^8(x_n)), \qquad (3.80)$

where

$\Psi_1(t) = at^2 - \Big(a + \frac{f(x_n)}{f(y_n)}\big(f(z_n) - f(y_n)\big)\Big)t - f(x_n), \qquad a = \frac{2f(z_n)}{f(x_n)(1 - \bar t)^2}.$

On the other hand, by virtue of (D) we have

$x_{n+1} - z_n = -(1 - \bar t)(1 - t)\frac{f(x_n)}{f'(x_n)}. \qquad (3.82)$

If we take (3.67) and (3.76) into account in (3.82), we deduce from it that

$x_{n+1} - z_n = O(f^4(x_n)).$

Then, from (3.72) we get

$f(x_{n+1}) = f(z_n) - (1 - \bar t)(1 - t)\frac{f'(z_n)f(x_n)}{f'(x_n)} + O(f^8(x_n)). \qquad (3.83)$

Now we approximate $f'(z_n)$ by the method of undetermined coefficients:

$f'(z_n) = a_nf(x_n) + b_nf(y_n) + c_nf(z_n) + d_nf'(x_n) + O(f^4(x_n)). \qquad (3.84)$

This can be done by means of the Taylor expansion of $f(x) \in C^4[a,b]$ at the point $z_n$, and we obtain the following linear system of equations:

$a_n + b_n + c_n = 0,$
$a_n(x_n - z_n) + b_n(y_n - z_n) + d_n = 1,$
$a_n(x_n - z_n)^2 + b_n(y_n - z_n)^2 + 2d_n(x_n - z_n) = 0,$
$a_n(x_n - z_n)^3 + b_n(y_n - z_n)^3 + 3d_n(x_n - z_n)^2 = 0,$

which has a unique solution:

$a_n = \frac{\beta_n(2\beta_n - 3\omega_n)}{\omega_n(\beta_n - \omega_n)^2}, \qquad b_n = \frac{\omega_n^2}{\beta_n(\beta_n - \omega_n)^2}, \qquad c_n = -\frac{2\beta_n + \omega_n}{\beta_n\omega_n}, \qquad d_n = -\frac{\beta_n}{\beta_n - \omega_n},$

where

$\omega_n = x_n - z_n = \bar t\,\frac{f(x_n)}{f'(x_n)}, \qquad \beta_n = y_n - z_n = -(1 - \bar t)\frac{f(x_n)}{f'(x_n)}.$

Substituting (3.84), with the coefficients defined above, into (3.83), we get

$f(x_{n+1}) = \Psi_2(t) + O(f^8(x_n)), \qquad (3.86)$

where

$\Psi_2(t) = \big((1 - \bar t)(2 - \bar t)f(x_n) - (2 - 3\bar t)f(z_n)\big)t + (1 - \bar t)\big(2f(z_n) - (2 - \bar t)f(x_n)\big).$

A linear combination of (3.80) and (3.86) gives

$f(x_{n+1}) = \alpha\Psi_1(t) + (1 - \alpha)\Psi_2(t) + O(f^8(x_n)).$

Clearly, if we choose $t$ as a root of the quadratic equation

$\Psi(t, \alpha) = \alpha\Psi_1(t) + (1 - \alpha)\Psi_2(t) = 0, \qquad (3.88)$

then we have

$f(x_{n+1}) = O(f^8(x_n)),$

which completes the proof.


Remark 3.8

Since $t$ is a root of Eq. (3.88), it depends on the parameter $\alpha$, i.e., $t = t(\alpha)$. Therefore, (D) is a one-parameter family of iterations. $\square$

It is easy to show that

$\Psi_2(\hat t\,) = 0, \quad \hat t \approx 1, \qquad \Psi_1(\breve t\,) = 0, \quad \breve t \approx 1.$

Taking this into account and passing to the limit $t \to 1$ in Eq. (3.88), we get

$\Psi(t, \alpha)\big|_{t \to 1} \to \alpha\Psi_1(1) + (1 - \alpha)\Psi_2(1) \approx 0.$

This means that Eq. (3.88), or (3.68), has a root tending to unity for any $\alpha \in [0, 1]$.
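To make the construction concrete, the following minimal sketch (Python; ours) implements one (D) step with $\alpha = 0$, so that $t$ is the root of the linear equation $\Psi_2(t) = 0$ with $\Psi_2$ as transcribed in Theorem 3.7; it assumes $\theta_n < \tfrac{1}{4}$, so that (3.67) has a real root near 1:

    import math

    # One step of the three-point method (D) with alpha = 0:
    # tbar solves (3.67) (the root tending to 1), t solves Psi_2(t) = 0.
    def step_D(f, df, x):
        fx = f(x)
        y = x - fx / df(x)
        theta = f(y) / fx
        tbar = 2.0 / (1.0 + math.sqrt(1.0 - 4.0 * theta))
        z = x + tbar * (y - x)
        fz = f(z)
        num = -(1.0 - tbar) * (2.0 * fz - (2.0 - tbar) * fx)
        den = (1.0 - tbar) * (2.0 - tbar) * fx - (2.0 - 3.0 * tbar) * fz
        return y + (num / den) * (z - y)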

We recall that, according to the Kung–Traub hypothesis, the order of convergence of any multipoint method without memory cannot exceed the bound $2^{n-1}$ (called the optimal order), where $n$ is the number of function evaluations per iteration. As is known, the efficiency index of an iteration is defined by the formula $E = d^{1/m}$, where $d$ is the convergence order and $m$ is the number of evaluations of the function and its derivatives per iteration. Therefore, the optimal efficiency index would be $2^{(n-1)/n}$.
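A quick arithmetic check of these indices (Python; the method/order pairs are those quoted in Table 3 below):

    # Efficiency index E = d**(1/m) for convergence order d and m evaluations.
    for name, d, m in [("(A), k = 2", 4, 3), ("(D)", 8, 4), ("Newton", 2, 2)]:
        print(name, d ** (1.0 / m))   # 1.5874..., 1.6817..., 1.4142...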

According to Theorem 3.4, the iteration (A) has convergence order four for $k = 2$, requiring only three function evaluations ($f(x_n)$, $f'(x_n)$ and $f(y_n)$), whereas Theorem 3.7 shows that the iteration (D) has convergence order eight, requiring four function evaluations ($f(x_n)$, $f(y_n)$, $f(z_n)$ and $f'(x_n)$).

Method   k    d    E = d^{1/m}
(A)      1    3    3^{1/3} ≈ 1.442249
(A)      2    4    4^{1/3} ≈ 1.587401
(A)      3    5    5^{1/4} ≈ 1.495348
(B)      1    5    5^{1/4} ≈ 1.495348
(B)      2    7    7^{1/5} ≈ 1.475773
(B)      3    9    9^{1/5} ≈ 1.551845
(C)      1    6    6^{1/5} ≈ 1.430969
(C)      2    8    8^{1/5} ≈ 1.515716
(C)      3   10   10^{1/6} ≈ 1.467799
(D)      –    8    8^{1/4} ≈ 1.681792

Table 3. The efficiency indices of the methods (A), (B), (C) and (D).

Hence, these orders of convergence are optimal in the above-mentioned sense of the Kung–Traub conjecture. The corresponding efficiency indices are $4^{1/3} \approx 1.587$ and $8^{1/4} \approx 1.681$, respectively.

Thus, we obtain the iterations (A) and (D) with the optimal orders of convergence 4 and 8, accelerating Newton's method. Our acceleration procedure gives a genuine improvement of Newton's method. One of the advantages of the iterations (A) and (D) is that these methods work well for systems of nonlinear equations, whereas the optimal order methods in [1, 2, 3, 10, 12] do not extend to systems of equations.

For convenience, we present the efficiency indices of the methods (A), (B), (C) and (D) proposed above in Table 3. From Table 3 one can see that the efficiency index of the iterations (A), (B), (C) and (D) is better or much better than that of Newton's method, $2^{1/2} \approx 1.414$.

4 Finding the optimal parameter

Let $m = 1$ in (3.8). Then the root (3.18) can be written as

$t_{\mathrm{opt}}^{(1)} = \frac{1}{1 - \theta_n}, \qquad \theta_n = \frac{f(y_n)}{f(x_n)}. \qquad (4.90)$

For $k = 2$, from (3.16) we obtain

$P_2(t) \equiv f(y_n)t^2 - f(x_n)t + f(x_n) = 0, \qquad (4.91)$

or

$P_2(t) \equiv \theta_nt^2 - t + 1 = 0. \qquad (4.92)$

By the well-known assertion (Vieta's formulas) and (3.19), we have

$t_1 + t_2 = \frac{f(x_n)}{f(y_n)} > 1, \qquad t_1t_2 = \frac{f(x_n)}{f(y_n)}.$

Hence

$t_1 + t_2 = t_1t_2.$

From this we obtain

$t_1^2 - \frac{f(x_n)}{f(y_n)}\,t_1 + \frac{f(x_n)}{f(y_n)} = 0. \qquad (4.93)$

The root of (4.93) greater than 1 (and tending to 1) is

$t_{\mathrm{opt}}^{(2)} = \frac{1 - \sqrt{1 - 4\theta_n}}{2\theta_n} = \frac{2}{1 + \sqrt{1 - 4\theta_n}}. \qquad (4.94)$
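Both closed-form parameters are cheap to evaluate; a minimal sketch (Python; ours), valid under $0 < \theta_n \leq \tfrac{1}{4}$ from (3.7):

    import math

    def t_opt_k1(theta):
        return 1.0 / (1.0 - theta)                          # (4.90)

    def t_opt_k2(theta):
        return 2.0 / (1.0 + math.sqrt(1.0 - 4.0 * theta))   # (4.94)

    print(t_opt_k1(0.1), t_opt_k2(0.1))   # 1.1111..., 1.1270...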

In a similar way, from (3.6) and (3.17) we obtain

$P_3(t) = (\theta_n - \omega_n)t^3 + \omega_nt^2 - t + 1 = 0, \qquad (4.95)$

where

$\omega_n = \frac{f''(x_n)f(x_n)}{2(f'(x_n))^2}. \qquad (4.96)$

Since in all the iterations (A), (B), (C) we have

$f(y_n) = \frac{f''(x_n)}{2}\frac{f^2(x_n)}{(f'(x_n))^2} + O(f^3(x_n)), \qquad (4.97)$

then

$\omega_n = \frac{f(y_n)}{f(x_n)} + O(f^2(x_n)). \qquad (4.98)$

Using (3.26) and (4.98) in (4.95), we obtain the approximate equation

$\Big(\frac{f(z_n)}{f(y_n)} - \frac{f(y_n)}{f(x_n)}\Big)t^3 + \frac{f(y_n)}{f(x_n)}\,t^2 - t + 1 = 0. \qquad (4.99)$

Eq. (4.99) approximates (4.95) with accuracy $O(f^4(x_n))$ in the case of (A) and with accuracy $O(f^5(x_n))$ in the case of (B) and (C), since $t_{\mathrm{opt}} - 1 = O(\varepsilon_n^2)$ for (A) and $t_{\mathrm{opt}} - 1 = O(\varepsilon_n^3)$ for (B) and (C). Therefore, Eq. (4.99) may be useful especially for (A).

Above, we obtained formula (4.90) for finding the optimal value $t_{\mathrm{opt}}$ for iteration (A). However, it must be changed for the iterations (B) and (C). Since $x_{n+m} := z_n$, $x_n := y_n$ and $y_n := x_{n+1}$ for iteration (B), according to (3.16) we have

$P_2(t) \equiv f(y_n) + f'(y_n)(z_n - y_n)t + \big(f(z_n) - f(y_n) - f'(y_n)(z_n - y_n)\big)t^2 = 0, \qquad (4.100)$

or

$P_2(t) \equiv \Big(\frac{f(z_n)}{f(y_n)} - 1 + \frac{f'(y_n)}{f'(x_n)}\Big)t^2 - \frac{f'(y_n)}{f'(x_n)}\,t + 1 = 0. \qquad (4.101)$

We rewrite (4.101) as

$P_2(t) = \frac{f(z_n)}{f(y_n)}\,t^2 - t + 1 + \Big(1 - \frac{f'(y_n)}{f'(x_n)}\Big)t(1 - t) = 0.$

From the last equation it is clear that, if we take into account the estimate

$1 - \frac{f'(y_n)}{f'(x_n)} = \frac{2f(y_n)}{f(x_n)} + O(f^2(x_n)) = O(f(x_n)) \qquad (4.102)$

and

$1 - t = O(f(x_{n+m})) = O(f(z_n)) = O(f^3(x_n)), \qquad (4.103)$

which follows from (3.26), then equation (4.92) with $\theta_n = \frac{f(z_n)}{f(y_n)}$ holds within the accuracy $O(f^4(x_n))$.

If we wish to include a more precise correction to (4.100), one can replace $1 - \frac{f'(y_n)}{f'(x_n)}$ by $\frac{2f(y_n)}{f(x_n)}$; then we arrive at

$\Big(\theta_n - \frac{2f(y_n)}{f(x_n)}\Big)t^2 + \Big(\frac{2f(y_n)}{f(x_n)} - 1\Big)t + 1 = 0. \qquad (4.104)$

By virtue of (4.102) and (4.103), Eq. (4.104) approximates Eq. (4.100) with accuracy $O(f^5(x_n))$.

With respect to the iteration (C), Eq. (4.92) remains true with $\theta_n = \frac{f(z_n)}{f(y_n)}$.

Note that in most cases the value of the iteration parameter of the damped Newton method varies from zero to unity, whereas in our case the value of the optimal parameter may be greater than unity.

5 Numerical experiments

We consider the following four examples [ 18 , 2 , 12 , 8 ] .

Example 5.1

Let $f(x) = e^x - 4x^2 = 0$. This equation has three roots. It is easy to show that

(a) $f'(x) > 0$, $f''(x) > 0$ for $x \in [4, \tfrac{9}{2}]$, and $x^* \in (4, \tfrac{9}{2})$;

(b) $f'(x) > 0$, $f''(x) < 0$ for $x \in [-\tfrac{1}{2}, 0]$, and $x^* \in (-\tfrac{1}{2}, 0)$.

We considered only the first and third roots.

Method  k   |x*-x_0|    |x*-x_1|    |x*-x_2|    |x*-x_3|      d_{x_2}   d_{x_3}   rho_{tau_2}   rho_{tau_3}
(A)     1   1.93(-01)   3.87(-03)   4.00(-08)   4.45(-023)    2.93      3.00      2.93          3.00
(A)     2   1.93(-01)   3.48(-04)   3.80(-15)   5.40(-059)    3.99      4.00      3.99          4.00
(A)     3   1.93(-01)   1.68(-05)   8.74(-26)   3.31(-127)    5.00      5.00      5.00          5.00
(B)     1   1.93(-01)   1.43(-04)   5.70(-20)   5.78(-097)    4.92      5.00      4.92          5.00
(B)     2   1.93(-01)   1.46(-06)   4.15(-42)   6.35(-291)    6.94      7.00      6.94          7.00
(B)     3   1.93(-01)   9.66(-09)   4.56(-74)   5.31(-662)    8.94      9.00      8.94          9.00
(C)     1   1.93(-01)   1.24(-05)   1.47(-30)   4.13(-180)    5.95      6.00      5.95          6.00
(C)     2   1.93(-01)   1.26(-07)   8.02(-57)   2.14(-450)    7.95      8.00      7.95          8.00
(C)     3   1.93(-01)   8.38(-10)   4.41(-93)   7.23(-926)    9.96     10.00      9.96         10.00

Table 5.1. Example 1a: x* = 4.306584.

Method  k   |x*-x_0|    |x*-x_1|    |x*-x_2|     |x*-x_3|       d_{x_2}   d_{x_3}   rho_{tau_2}   rho_{tau_3}
(A)     1   9.22(-02)   5.38(-04)   1.36(-010)   2.18(-0030)    2.95      3.00      2.95          3.00
(A)     2   9.22(-02)   1.56(-06)   1.56(-025)   1.55(-0101)    3.98      4.00      3.98          4.00
(A)     3   9.22(-02)   3.56(-08)   3.77(-040)   5.04(-0200)    4.99      5.00      4.99          5.00
(B)     1   9.22(-02)   6.10(-06)   1.29(-026)   5.39(-0130)    4.95      5.00      4.95          5.00
(B)     2   9.22(-02)   1.26(-09)   2.17(-064)   9.62(-0448)    6.96      7.00      6.96          7.00
(B)     3   9.22(-02)   2.14(-12)   9.57(-108)   6.74(-0966)    8.96      9.00      8.96          9.00
(C)     1   9.22(-02)   2.70(-07)   2.76(-040)   3.13(-0238)    5.96      6.00      5.96          6.00
(C)     2   9.22(-02)   5.57(-11)   1.87(-084)   2.96(-0672)    7.97      8.00      7.97          8.00
(C)     3   9.22(-02)   9.48(-14)   2.74(-133)   1.12(-1328)    9.97     10.00      9.97         10.00

Table 5.2. Example 1b: x* = -0.4077767.

Example 5.2

Let $f(x) = x^2 - 2\cos x = 0$. This equation has two roots. It is also easy to show that

$f'(x) > 0, \quad f''(x) > 0 \quad \text{for } x \in \big[\tfrac{\pi}{6}, \tfrac{\pi}{2}\big], \quad \text{and} \quad x^* \in \big(\tfrac{\pi}{6}, \tfrac{\pi}{2}\big).$

We considered only the positive root, because $f(x)$ is an even function of $x$.

Method  k   |x*-x_0|    |x*-x_1|    |x*-x_2|    |x*-x_3|      d_{x_2}   d_{x_3}   rho_{tau_2}   rho_{tau_3}
(A)     1   5.49(-01)   1.11(-02)   2.18(-07)   1.71(-021)    2.77      3.00      2.77          3.00
(A)     2   5.49(-01)   1.73(-03)   2.73(-13)   1.71(-052)    3.92      4.00      3.92          4.00
(A)     3   5.49(-01)   5.18(-05)   1.76(-24)   7.93(-122)    4.84      5.00      4.84          5.00
(B)     1   5.49(-01)   4.63(-04)   1.16(-18)   1.12(-091)    4.75      5.00      4.75          5.00
(B)     2   5.49(-01)   6.44(-06)   1.90(-39)   3.62(-274)    6.80      7.00      6.80          7.00
(B)     3   5.49(-01)   6.17(-08)   3.33(-69)   1.29(-620)    8.81      9.00      8.81          9.00
(C)     1   5.49(-01)   4.84(-05)   1.41(-28)   8.72(-170)    5.80      6.00      5.80          6.00
(C)     2   5.49(-01)   6.65(-07)   3.21(-53)   9.36(-424)    7.83      8.00      7.83          8.00
(C)     3   5.49(-01)   6.42(-09)   6.22(-87)   4.48(-867)    9.84     10.00      9.84         10.00

Table 5.3. Example 2: x* = 1.021689.

Example 5.3

Let $f(x) = (x - 2)(x^{10} + x + 1)e^{-x-1} = 0$. We chose the initial approximation $x_0 = 2.1$ for the root $x^* = 2$.

Method                                                          |x*-x_1|   |x*-x_2|    |x*-x_3|     d_{x_3}
$h(t) = 1 + \frac{4t}{2-5t}$, $\beta = 3$, in [1, (14)]         1.83(-5)   3.15(-34)   2.45(-264)   7.99986
$h(t) = \frac{1}{1-2t-t^2+t^3}$, $\beta = 3$, in [1, (14)]      6.02(-6)   7.91(-38)   6.99(-293)   8.00007
$\psi(t) = \frac{5-2t+t^2}{5-12t}$, in [12, (12)]               6.12(-5)   1.11(-29)   1.34(-224)   7.99947
$\psi(t) = \frac{1}{1-2t-t^2}$, in [12, (12)]                   6.01(-5)   9.29(-30)   3.02(-228)   8.00050
(D), $\alpha = 0$                                               2.18(-5)   1.12(-34)   5.40(-269)   7.99999
(D), $\alpha = 0.5$                                             2.14(-5)   2.25(-34)   3.39(-266)   8.00003
(D), $\alpha = 1$                                               2.89(-5)   2.45(-33)   6.63(-258)   7.99999

Table 5.4. Example 3: x* = 2.

All numerical calculations were performed using the Maple 16 system. To study the convergence of the iterations (3.6), (A), (B), (C) and (D), we computed the computational order of convergence $d_{x_n}$ by the formula [14]

$d_{x_n} = \frac{\ln\big(|x_{n+1} - x^*|\,/\,|x_n - x^*|\big)}{\ln\big(|x_n - x^*|\,/\,|x_{n-1} - x^*|\big)},$

where $x_{n+1}$, $x_n$, $x_{n-1}$ are three consecutive approximations. In the numerical examples we also checked the computational order of convergence (COC) of $\tau_n$ by the formula [14]

$\rho_{\tau_n} = \frac{\ln\big|(\tau_{n+1} - 1)/(\tau_n - 1)\big|}{\ln\big|(\tau_n - 1)/(\tau_{n-1} - 1)\big|},$

which is included in the presented tables (Tables 5.1–5.3) and agrees with the conjecture (3.62).
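Both formulas are straightforward to evaluate from the tabulated errors; a minimal sketch (Python; ours), checked against one row of Table 5.1:

    import math

    # COC from three consecutive errors e_{n-1}, e_n, e_{n+1}; the same routine
    # gives rho_tau when applied to |tau_k - 1| instead of |x_k - x*|.
    def coc(e_prev, e_curr, e_next):
        return math.log(e_next / e_curr) / math.log(e_curr / e_prev)

    print(coc(1.93e-1, 3.48e-4, 3.80e-15))   # ≈ 3.99, the (A), k = 2 value of d_{x_2}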

Comparisons of the convergence of the iterations (A), (B) and (C) are given in Tables 5.1–5.3. The third, fourth, fifth and sixth columns show the absolute errors $|x^* - x_n|$ for the first four iterations. The last four columns display the computational orders of convergence $d_{x_2}$, $d_{x_3}$, $\rho_{\tau_2}$ and $\rho_{\tau_3}$, respectively. A number $l$ in parentheses denotes the factor $10^{l}$. As expected, the convergence of the proposed methods is remarkably fast. A comparison of the convergence of iteration (D) with other optimal eighth-order iterations [1, 12] is given in Table 5.4. From the tables we see that the COC coincides perfectly with the theoretical order.

Conclusions

We have proposed a new acceleration procedure for Newton-type methods. The effect of the acceleration becomes more perceptible as $k$ increases. The proposed accelerating procedure allows us to derive iterative methods of high and optimal order. The numerical results clearly confirm the theoretical analysis (speed of convergence and effect of acceleration). Moreover, our acceleration procedure can also be applied to any iteration and to systems of nonlinear equations, to which a forthcoming paper will be devoted.

Acknowledgement

The authors would like to thank the anonymous referee for the valuable comments and suggestions which substantially improved the quality of this article. The work was supported partially by the Foundation of Science and Technology of Mongolia under grant SST_007/2015. O. Ch. acknowledges support within the Hulubei-Meshcheryakov programme JINR-Romania.

Bibliography

[1] W. Bi, H. Ren, Q. Wu, Three-step iterative methods with eighth-order convergence for solving nonlinear equations, J. Comput. Appl. Math., 225 (2009), pp. 105–112.

[2] A. Cordero, J.L. Hueso, E. Martinez, J.R. Torregrosa, New modifications of Potra–Pták's method with optimal fourth and eighth orders of convergence, J. Comput. Appl. Math., 234 (2010), pp. 2969–2976.

[3] A. Cordero, J.R. Torregrosa, M.P. Vassileva, Three-step iterative methods with optimal eighth-order convergence, J. Comput. Appl. Math., 235 (2011), pp. 3189–3194.

[4] J.A. Ezquerro, M.A. Hernandez, N. Romero, A.I. Velasco, Improving the domain of starting points for secant-like methods, Appl. Math. Comput., 219 (2012), pp. 3677–3692.

[5] L. Fang, G. He, Some modifications of Newton's method with higher-order convergence for solving nonlinear equations, J. Comput. Appl. Math., 228 (2009), pp. 296–303.

[6] M.A. Hernandez, M.A. Salanova, Modification of the Kantorovich assumptions for semilocal convergence of the Chebyshev method, J. Comput. Appl. Math., 126 (2000), pp. 131–143.

[7] L. Liu, X. Wang, Eighth-order methods with high efficiency index for solving nonlinear equations, Appl. Math. Comput., 215 (2010), pp. 3449–3454.

[8] I. Pavaloiu, E. Catinas, Bilateral approximations for some Aitken–Steffensen–Hermite type methods of order three, Appl. Math. Comput., 217 (2011), pp. 5838–5846.

[9] B.M. Podlevskii, On certain two-sided analogues of Newton's method for solving nonlinear eigenvalue problems, Comput. Math. Math. Phys., 47 (2007), pp. 1745–1755.

[10] H.I. Siyyam, M.T. Shatnawi, I.A. Al-Subaihi, A new one parameter family of iterative methods with eighth-order of convergence for solving nonlinear equations, Inter. J. Pure Appl. Math., 84 (2013), pp. 451–461.

[11] J.R. Sharma, H. Arora, On efficient weighted-Newton methods for solving systems of nonlinear equations, Appl. Math. Comput., 222 (2013), pp. 497–506.

[12] R. Thukral, M.S. Petkovic, A family of three-point methods of optimal order for solving nonlinear equations, J. Comput. Appl. Math., 233 (2010), pp. 2278–2284.

[13] X. Wang, T. Zhang, A new family of Newton-type iterative methods with and without memory for solving nonlinear equations, Calcolo, 51 (2014), pp. 1–15.

[14] S. Weerakoon, T.G.I. Fernando, A variant of Newton's method with accelerated third-order convergence, Appl. Math. Lett., 13 (2000) no. 8, pp. 87–93.

[15] T. Zhanlav, Note on the cubic decreasing region of the Chebyshev method, J. Comput. Appl. Math., 235 (2010), pp. 341–344.

[16] T. Zhanlav, O. Chuluunbaatar, Some iteration methods with high order convergence for nonlinear equations, Bulletin of PFUR, Series Mathematics, Information Sciences, Physics, 4 (2009), pp. 47–55.

[17] T. Zhanlav, O. Chuluunbaatar, Convergence of the continuous analogy of Newton's method for solving nonlinear equations, Numerical Methods and Programming, Moscow State University, 10 (2009), pp. 402–407.

[18] T. Zhanlav, O. Chuluunbaatar, V. Ulziibayar, Two-sided approximation for some Newton's type methods, Appl. Math. Comput., 236 (2014), pp. 239–246.

[19] T. Zhanlav, D. Khongorzul, On convergence behavior of combined iterative method for solving nonlinear equations, Comput. Math. Math. Phys., 52 (2012), pp. 790–800.

[20] T. Zhanlav, I.V. Puzynin, The convergence of iterations based on a continuous analogue of Newton's method, Comput. Math. Math. Phys., 32 (1992), pp. 729–737.