Accelerating the convergence of Newton-type iterations\(^\ast \)
January 2, 2017.
\(^\S \)Institute of Mathematics, National University of Mongolia, Mongolia, e-mail: tzhanlav@yahoo.com
\(^\bullet \)Joint Institute for Nuclear Research, Dubna, 141980 Moscow region, Russia, e-mail: chuka@jinr.ru
\(^\ast \)School of Applied Sciences, Mongolian University of Science and Technology, Mongolia, e-mail: ulzii@jinr.ru
In this paper, we present a new accelerating procedure to speed up the convergence of Newton-type methods. In particular, we derive iterations with high and optimal orders of convergence. The technique can be applied to any iteration with a low order of convergence. As expected, the convergence of the proposed methods is remarkably fast. The effectiveness of the technique is illustrated by numerical experiments.
MSC. 65H05
Keywords. Newton-type iterations, accelerating procedure, convergence order, efficiency index
1 Introduction
As is known, monotone approximation of the solutions of nonlinear equations in \(\mathbb R\) is interesting not only from the theoretical, but also from the practical viewpoint. In particular, two-sided approximations can be efficiently used as a posteriori estimates for the error in approximating the desired solution, which means that one can control the error at each iteration step. In the last decade, many authors have developed new monotone iterative methods [9, 19, 18]. The main advantage of monotone iterations is that they do not require good initial approximations, contrary to what occurs in other iterative methods, such as secant-like methods, Newton's method and others [4]. On the other hand, accelerating the convergence of iterative methods is also of interest from both the theoretical and the computational viewpoint [1, 2, 3, 10, 12, 14, 5, 11, 13, 16]. For example, in [4] a family of predictor-corrector iterative methods was constructed from the simplified secant method and a family of secant-like methods; the authors analyzed the conditions on the starting point in order to improve the semilocal convergence of the method. In general, it is desirable to choose the starting point from the convergence domain [15, 16, 19].
In recent years, many iterative methods for solving nonlinear equations have been developed [1, 2, 3, 10, 12, 14, 5, 11, 13, 16] to improve the local order of convergence of methods such as Newton's, Ostrowski's and Potra-Ptak's. The most efficient methods studied in the literature are the optimal eighth-order methods with four function evaluations per iteration; see [1, 2, 3, 10, 12] and the references therein. The methods developed in [1, 12, 7] are based on the optimal Ostrowski or King method and use arbitrary real parameters and weight functions. The methods proposed in [2, 3, 10] are obtained by composing an iterative method proposed by Chun, and the Potra-Ptak method, with Newton's method.
In this paper we propose a new accelerating procedure for Newton-type methods. By virtue of this procedure, we obtain higher-order, in particular optimal-order, methods. The optimal choice of the parameter allows us to improve the convergence speed, and may also help to extend the domain of convergence.
The paper is organized as follows. Section 2 describes monotone and two-sided approximations. In Section 3, we present the accelerating procedure and establish the convergence order of the newly proposed methods. Section 4 is devoted to finding the optimal parameter in the proposed iterations. Finally, Section 5 presents various numerical examples which confirm the theoretical results, together with a numerical comparison with other existing optimal order methods.
2 Statement of the problems
Let \(a, b\in \mathbb {R},\, a{\lt}b,\, f:[a, b]\rightarrow \mathbb {R}\) and consider the following nonlinear equation
\[ f(x)=0. \tag{2.1} \]
Assume that \(f(x)\in \mathcal{C}^4[a,b],\, f'(x)\neq 0,\, x\in (a,b)\) and Eq. (2.1) has a unique root \(x^*\in (a,b)\). In [ 19 , 18 ] the following iterations were proposed:
In [19] it is shown that these iterations are monotonically convergent under the conditions
and under the assumption H: \(f'(x)\neq 0\) and \(f''(x)\) preserves its sign in the small neighborhood \(U_r(x^*)=\{ x:|f(x)|{\lt}r\} .\) However, these iterations are not equipped with a suitable choice of the parameter \(\tau _n\). In [18] it was shown that the iterations have a two-sided approximation behavior under the conditions
It is also proved that the convergence rate of these iterations is at least 2, and that the order of convergence increases up to 4 when \(\tau _n\to 1\). From this it is clear that accelerating the convergence of these iterations is important, especially at the early stage of the iteration.
3 Monotone and two-sided convergence of iterations and their acceleration
If \(\tau _n\equiv 1\), then both iterations become Newton's method
\[ x_{n+1}=x_n-\tfrac {f(x_n)}{f'(x_n)},\qquad n=0,1,\ldots \tag{3.6} \]
According to [19], the iteration (3.6) is monotonically convergent under condition (2.4) and assumption H.
Let \(\theta _n=\displaystyle \tfrac {f(x_{n+1})}{f(x_n)}\). Then the Taylor expansion of \(f(x_{n+1})\) around \(x_n\), together with (3.6), gives
\[ f(x_{n+1})=\tfrac {f''(\xi _n)}{2}\Big(\tfrac {f(x_n)}{f'(x_n)}\Big)^2,\qquad \xi _n\in (x_n,x_{n+1}), \]
so that
\[ \theta _n=\tfrac {f''(\xi _n)}{2\big(f'(x_n)\big)^2}\, f(x_n)=\mathcal{O}\big(f(x_n)\big). \tag{3.7} \]
Now we proceed to accelerate the convergence of the monotone iteration (3.6). To this end, we use two known approximations \(x_n\), \(x_{n+m}\) satisfying either \(x_{n}{\lt}x_{n+m}{\lt}x^*\) or \(x^*{\lt}x_{n+m}{\lt}x_n\) and consider
\[ y_n=x_n+t\,(x_{n+m}-x_n). \tag{3.8} \]
From (3.8) it is clear that \(y_n\) belongs to the interval connecting \(x_n\) and \(x_{n+m}\) under the condition \(0\leq t\leq 1\). Hence, the extrapolating approach corresponds to the case \(t{\gt}1\). Our aim is to find the optimal value \(t=t_{\textnormal{opt}}\) in (3.8) such that the new approximation \(y_n\) given by (3.8) is situated closer to \(x^*\) than both \(x_n\) and \(x_{n+m}\). We use the Taylor expansion of the smooth function \(f(x)\in \mathcal{C}^{k+1}[a,b]\):
\[ f\big(x_n+t(x_{n+m}-x_n)\big)=\sum _{j=0}^{k}\tfrac {f^{(j)}(x_n)}{j!}\, t^j(x_{n+m}-x_n)^j+\mathcal{O}\big((x_{n+m}-x_n)^{k+1}\big). \tag{3.9} \]
Neglecting the small term \(\mathcal{O}\big((x_{n+m}-x_n)^{k+1}\big)\) in (3.9), we have
\[ P_k(t)=\sum _{j=0}^{k}\tfrac {f^{(j)}(x_n)}{j!}\, t^j(x_{n+m}-x_n)^j. \tag{3.10} \]
From (3.10) it is clear that
\[ P_k(0)=f(x_n). \tag{3.11} \]
We also require that
\[ P_k(1)=f(x_{n+m}). \tag{3.12} \]
From (3.12) we find \(\displaystyle \tfrac {f^{(k)}(x_n)}{k!}(x_{n+m}-x_n)^k\) and, substituting it into (3.10), we get \(P_k(t)\). From this we find \(t{\gt}1\) such that
\[ P_k(t)=0. \tag{3.13} \]
Geometrically, (3.10) means that the graph of \(f(x)\) in the vicinity of the root \(x^*\) is replaced by the curve \(P_k(t)\) passing through the given points \((x_n,f(x_n))\) and \((x_{n+m},f(x_{n+m}))\). Thus, Eq. (3.10) for \(k=1,2\) and \(k=3\) gives us
\[ P_1(t)=f(x_n)+\big(f(x_{n+m})-f(x_n)\big)t, \]
\[ P_2(t)=f(x_n)+f'(x_n)(x_{n+m}-x_n)t+\big(f(x_{n+m})-f(x_n)-f'(x_n)(x_{n+m}-x_n)\big)t^2, \]
\[ P_3(t)=f(x_n)+f'(x_n)(x_{n+m}-x_n)t+\tfrac {f''(x_n)}{2}(x_{n+m}-x_n)^2t^2+\big(f(x_{n+m})-f(x_n)-f'(x_n)(x_{n+m}-x_n)-\tfrac {f''(x_n)}{2}(x_{n+m}-x_n)^2\big)t^3, \]
respectively. Thus, the parameter \(t\) in (3.8) is calculated as a root, greater than 1, of Eq. (3.13). In particular, for \(k=1\), we have
\[ t_{\textnormal{opt}}=\tfrac {f(x_n)}{f(x_n)-f(x_{n+m})}. \tag{3.18} \]
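To make the procedure concrete, the following minimal Python sketch (our illustration, not part of the original paper; the helper name `t_opt` and the use of `numpy.roots` are our choices) solves \(P_k(t)=0\) for a root greater than 1: for \(k=1\) via the closed form (3.18), and for \(k=2,3\) by building the coefficients of \(P_k(t)\) with the leading coefficient eliminated through \(P_k(1)=f(x_{n+m})\), as in (3.12).

```python
import numpy as np

def t_opt(k, fxn, fxm, d1=None, d2=None):
    """Root t > 1 of P_k(t) = 0 (Eq. (3.13)).

    fxn = f(x_n), fxm = f(x_{n+m});
    d1  = f'(x_n) * (x_{n+m} - x_n)           (needed for k = 2, 3),
    d2  = f''(x_n) * (x_{n+m} - x_n)**2 / 2   (needed for k = 3)."""
    if k == 1:
        return fxn / (fxn - fxm)                 # closed form (3.18)
    if k == 2:
        c = [fxn, d1, fxm - fxn - d1]            # c0 + c1*t + c2*t^2
    else:                                        # k == 3
        c = [fxn, d1, d2, fxm - fxn - d1 - d2]   # ... + c3*t^3
    roots = np.roots(c[::-1])                    # numpy wants highest degree first
    real = roots[np.abs(roots.imag) < 1e-12].real
    cand = real[real > 1.0]                      # keep only admissible roots t > 1
    return cand.min() if cand.size else None     # None signals: no admissible root
```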
Since \(P_k(0)P_k(1)=f(x_n)f(x_{n+m}){\gt}0\) for \(k\geq 1\), Eq. (3.13) may have at least one root satisfying the condition \(t^*{\gt}1\). From (3.7) it follows that
Therefore, it is desirable to choose \(n\) and \(m\) such that
This inequality is written in terms of \(P_k (t)\) as
On the other hand, from (3.10) we see that \(P'_k(1)\) is not equal to 0 under the assumption H, i.e. \(t=1\) is not a critical point of \(P_k(t)\). Thus, \(P_k(t)\) is decreasing around \(t=1\). Therefore, there exists \(t_{\textnormal{opt}}{\gt}1\) such that \(P_k(t_{\textnormal{opt}})=0\).
Lemma 3.1. Assume that \(f\in \mathcal{C}^4[a,b]\), assumption H is satisfied and
Then the following holds
which follows from the expansion
Of course, \(|x^*-x_{n+m}|{\lt}\varepsilon _n\) and \(|x_{n+m}-x_n|{\lt}\varepsilon _n\) under (3.22).
We also use an analogous expansion for \(P_k(t)\)
Since \(P_k(t)\) is decreasing around \(t=1\), we have \(P'_k(\eta )\neq 0\).
Hence, from (3.24), (3.25) and (3.20) we conclude that
The Lemma is proved.
Note that in [16] the following iteration was proposed:
which has third order of convergence when \(\tau _n=1\) or \(\tau _n\) tends to 1. It is easy to show that this iteration coincides fully with our acceleration procedure (3.8) and (3.18) with \(m=1\) and \(k=1\). Therefore, one can expect a higher acceleration for Newton's method when \(k=2,3\).
How can the convergence rate of iteration (3.6) be accelerated? The following theorem answers this question.
Theorem 3.2. Assume \(f(x)\in \mathcal{C}^{k+2}\) and that condition (3.22) is satisfied. Then for \(y_n\) with \(t_{\textnormal{opt}}\) we have
where \(\mathcal{O}\) is the Landau symbol.
Proof. We use Taylor expansions of \(f(x)\in \mathcal{C}^{k+2}\)
and
where \(\eta _n\in (x_n,x^*)\) and \(\xi _n\in (x_n,x_{n+m})\). Using (3.31) in (3.32) and subtracting (3.32) from (3.31) we get
Since \(f'(x_n)\neq 0\), from the last expression we deduce that
It is possible to derive a more precise estimate than (3.34). Indeed, using (3.34) and \(f\in \mathcal{C}^{k+2}\), we evaluate
By definition we have
Using (3.23) and (3.34) we have
Then \(A_n=\mathcal{O}(\varepsilon _n)\) and thereby from (3.33) we get
Hence, from (3.30) and (3.31) we find that
which proves (3.29).
Theorem 3.3. Assume \(f(x)\in \mathcal{C}^{k+1}\) and that the convergence order of iteration (3.6) equals 2, i.e., the following holds
If Eq. (3.13) has at least one root \(t_{\textnormal{opt}}\) greater than 1, then the convergence order of the new iteration (3.8) is the same as that of (3.6) and we have
Proof. By virtue of (3.39), condition (3.22) is satisfied for large \(n\). Then, by Theorem 3.2, relation (3.29) holds. Using (3.39) in (3.29), we get
The proof is completed.
However, the speed of convergence of these iterations depends on the factors \(q_1\) and \(q\) in (3.39) and (3.40), respectively. Since \(q_1=q^{k+2}{\lt}q\) for \(k=1,2,3\), one can expect a more rapid convergence of iteration (3.8). Of course, the highest acceleration of the iteration is attained at \(k=3\).
From (3.39) and (3.40) it is clear that the iteration (3.8) converges to \(x^*\) more rapidly than iteration (3.6), by virtue of \(q_1=q^{k+1}{\lt}q\). This accelerating procedure is useful especially at the beginning of the iterations, but under condition (3.22). From Theorem 3.3 it is clear that the sequence \(\{ y_n\} \) given by (3.8), together with (3.6), can be considered as a new iterative process with a smaller factor compared to (3.6). The acceleration procedure is achieved without additional calculations, so the iteration (3.8) possesses a high computational efficiency. However, although the sequence \(x_n\) is monotone, the new iteration (3.8) may not be monotone. For instance, when \(k=1\) it is easy to show that
From this it is clear that
and
Suppose now that we know two successive approximations \(x_{n}\) and \(x_{n+1}\), for which
holds. We consider
The acceleration technique is the same as in the previous case, with \(m=1\). In this case, according to (3.45), we have \(P_k(0)P_k(1)=f(x_{n})f(x_{n+1}){\lt}0\) for \(k=1,2,3.\) Hence, Eq. (3.13) has a root \(t_{\textnormal{opt}}\in (0,1)\). Obviously, the new approximation
will be situated closer to \(x^*\) than \(x_{n}\) and \(x_{n+1}\), and Theorem 3.2 holds true for this case, too. This indicates that two-sided approximations are useful not only for estimating roots, but also for finding them approximately with higher accuracy. Of course, the acceleration procedure (3.8) can be continued further with \(x_{n+m}:=y_n\), \(x_n:=x_{n+m}\), taking \(t{\gt}1\) if \(y_n\) and \(x_{n+m}\) are located on one side of \(x^*\) and \(t\in (0,1)\) if they are located on opposite sides of the root. Note that the accelerating procedure (3.8) is applicable not only to iteration (3.6), but also to any iteration, in particular to the following iterations (A), (B), (C) and (D).
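In fact, for \(k=1\) the closed form (3.18) handles both regimes at once: if \(f(x_n)\) and \(f(x_{n+m})\) have the same sign and \(|f(x_{n+m})|{\lt}|f(x_n)|\), then \(t_{\textnormal{opt}}=f(x_n)/(f(x_n)-f(x_{n+m})){\gt}1\) (extrapolation), while if the signs differ, the denominator exceeds the numerator in absolute value and \(t_{\textnormal{opt}}\in (0,1)\) (interpolation between the two-sided approximations).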
Now we consider the accelerated iteration
\[ y_n=x_n-\tfrac {f(x_n)}{f'(x_n)},\qquad x_{n+1}=x_n-t_{\textnormal{opt}}\,\tfrac {f(x_n)}{f'(x_n)}. \tag{A} \]
The iteration (A) is a damped Newton’s method [ 20 , 17 ] with optimal parameter \(\tau _n=t_{\textnormal{opt}}\). The first step \(y_n\) is used for finding the optimal parameter.
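As an illustration, here is a minimal Python sketch of iteration (A) for the simplest case \(k=1\) (our code, not the authors'): the auxiliary Newton point \(y_n\) supplies \(f(y_n)\), formula (3.18) with \(x_{n+m}:=y_n\) gives \(t_{\textnormal{opt}}\), and the accepted step is the damped Newton step.

```python
def iterate_A(f, fprime, x0, tol=1e-15, max_iter=50):
    """Sketch of iteration (A) with k = 1: damped Newton with tau_n = t_opt."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        step = fx / fprime(x)       # Newton correction f(x_n)/f'(x_n)
        y = x - step                # auxiliary Newton point y_n
        t = fx / (fx - f(y))        # t_opt from (3.18) with x_{n+m} := y_n
        x = x - t * step            # damped Newton step x_{n+1}
    return x

# For instance, on the test equation f(x) = exp(x) - 4x^2 of Section 5
# (one of its roots lies near x = 0.7):
#   import math
#   iterate_A(lambda x: math.exp(x) - 4*x*x, lambda x: math.exp(x) - 8*x, 1.0)
```

For \(k=2,3\) one would additionally feed \(f''(x_n)\) (and \(f'''(x_n)\)) into the coefficients of \(P_k(t)\), as in the sketch after (3.18).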
Theorem 3.4. Assume that the assumptions of Theorem 3.2 are fulfilled. Then the convergence order of iteration (A) with optimal \(t_{\textnormal{opt}}\) is \(d=k+2,\, k=1,2,3\), depending on the smoothness of \(f\).
which completes the proof of Theorem 3.4.
Now let us consider another three-step iteration
Note that if \(t\equiv 1\) in (B), then it leads to
The iteration (\(B'\)) is a particular case of scheme (40) given in [16] with \(\sigma =0\) and \(\tau =1\), and has third order of convergence. Therefore, the iteration (B) can be considered as an improvement of iteration (B\('\)).
Theorem 3.5. Assume that the assumptions of Theorem 3.2 are fulfilled. Then the convergence order of iteration (B) with optimal \(t_{\textnormal{opt}}\) equals \(d=2k+3\).
where
From this and from (3.52) we obtain
i.e., we have
which means that the convergence order of iteration (B) is equal to \(d=2k+3\), \(k=1,2,3\).
From Theorem 3.5 we see that the convergence order of iteration (B\('\)) can be increased by two or four units at the expense of only two additional evaluations of the function. Thus, the order of convergence and the computational efficiency of the method are greatly improved.
In [ 5 ] Algorithm 2 was constructed:
and it was proved that its order of convergence equals 5, 6 or 7, depending on a suitable choice of the two-variable function \(H(x_n,y_n)\). For comparison purposes we can rewrite iteration (B) as
We see that these two methods differ from one another only in the chosen factors \(t\) and \(H(x_n,y_n)\).
Now we consider the following iteration:
The iteration (C) can be considered as an improvement of the iteration
since if \(t\equiv 1\) in (C), then it leads to (C\('\)).
In [ 16 ] , it was proven that the convergence order of (C\('\)) is four.
Theorem 3.6. Assume that the assumptions of Theorem 3.2 are fulfilled. Then the convergence order of iteration (C) with optimal \(t_{\textnormal{opt}}\) equals \(d=2(k+2)\), \(k=1,2,3\).
Proof. From (C), we find that
Substituting here the expansion of \(f(x^*)\)
we have
Using the last estimate in (3.52), we obtain
which means that the convergence order of iteration (C) equals \(d=2(k+2)\), \(k=1,2,3\).
respectively. The unified representation (3.61) of the different iterations shows that the choice of the damping parameter \(\tau _n\) in (3.61) essentially affects the convergence order. Of course, the parameter \(\tau _n\) in (3.61) is defined in different ways, but in all cases \(\tau _n\rightarrow 1\) as \(n\rightarrow \infty \).
The speed of convergence of the sequence \(\{ \tau _n\} \) to unity is different for each iterative method. In [17] the following conjecture was proposed:
Now we consider the following three-point iterative method:
where \(\bar{t}\) and \(t\) in (D) are some parameters to be determined. We can formulate the following theorem.
Theorem 3.7. Assume that \(f(x)\in \mathcal{C}^4(a,b)\), an initial approximation \(x_0\) is sufficiently close to the zero \(x^*\in (a,b)\), and the parameter \(\bar{t}\) is chosen as a root of the equation
and \(t\) is a root of equation
where
and
Then the three-point method (D) is of eighth order of convergence.
we get
Analogously, the Taylor expansion of \(f(x_{n+1})\) at point \(x=z_n\) gives
Using \(f'(z_n)=f'(y_n)+f''(y_n)(z_n-y_n)+\mathcal{O}\big((z_n-y_n)^2\big)\) in the last expansion, we have
From (3.71) and (3.74) one can eliminate the term with \(\displaystyle \tfrac {f'(y_n)}{f'(x_n)}f(x_n)\). As a result, we have
Note that in deriving (3.75) we keep in mind that
Further, using the Taylor expansion of \(f(x)\in \mathcal{C}^4(\mathfrak {D})\) at the point \(y_n\), we obtain
The same technique gives us
From (3.77) and (3.78) one can eliminate the term with \(f'''(y_n)\). As a result, we obtain
Substituting (3.79) into (3.74), we obtain
where
On the other hand, by virtue of (D) we have
Taking (3.67) and (3.76) into account in (3.82), we deduce
Then, from (3.72) we get
Now we approximate \(f'(z_n)\) by the method of undetermined coefficients such that
This can be done by means of the Taylor expansion of \(f(x)\in \mathcal{C}^4(a,b)\) at the point \(z_n\), and we obtain the following linear system of equations
which has a unique solution
where
Substituting (3.84), with the coefficients defined above, into (3.83), we get
where
The linear combination of (3.80) and (3.86) gives
Clearly, if we choose \(t\) as a root of the quadratic equation
then we have
which completes the proof.
Since \(t\) is a root of Eq. (3.88), it depends on the parameter \(\alpha \), i.e. \(t=t(\alpha )\). Therefore, (D) is a one-parameter family of iterations.
It is easy to show that
Then taking this into account and passing to the limit \(t\rightarrow 1\) in Eq. (3.88), we get
This means that Eq. (3.88) or (3.68) has a root tending to unity for any \(\alpha \in [0,1]\).
We recall that, according to the Kung-Traub conjecture, the order of convergence of any multipoint method without memory cannot exceed the bound \(2^{n-1}\) (called the optimal order), where \(n\) is the number of function evaluations per iteration. As is known, the efficiency index of an iteration is defined by the formula \(E=d^{\frac1m}\), where \(d\) is the convergence order and \(m\) is the number of function and derivative evaluations per iteration. Therefore, the optimal efficiency index would be \(2^{\frac{n-1}{n}}\).
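For instance, Newton's method evaluates \(f\) and \(f'\) once each per step, so \(m=2\), \(d=2\) and \(E=2^{1/2}\approx 1.414\); an optimal three-point method with \(n=4\) evaluations attains \(d=2^{4-1}=8\) and hence \(E=8^{1/4}\approx 1.682\).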
According to Theorem 3.4, the iteration (A) has convergence order four for \(k=2\), requiring only three function evaluations (\(f(x_n)\), \(f(y_n)\) and \(f'(x_n)\)), whereas Theorem 3.7 shows that the iteration (D) has convergence order eight, requiring four function evaluations (\(f(x_n)\), \(f(y_n)\), \(f(z_n)\), \(f'(x_n)\)).

Table 3.1. Efficiency index of the methods (A), (B), (C) and (D).

Method | \(k\) | \(d\) | \(E\)
(A) | 1 | 3 | \(3^{\frac13}\approx 1.442249\)
(A) | 2 | 4 | \(4^{\frac13}\approx 1.587401\)
(A) | 3 | 5 | \(5^{\frac14}\approx 1.495348\)
(B) | 1 | 5 | \(5^{\frac14}\approx 1.495348\)
(B) | 2 | 7 | \(7^{\frac15}\approx 1.475773\)
(B) | 3 | 9 | \(9^{\frac15}\approx 1.551845\)
(C) | 1 | 6 | \(6^{\frac15}\approx 1.430969\)
(C) | 2 | 8 | \(8^{\frac15}\approx 1.515716\)
(C) | 3 | 10 | \(10^{\frac16}\approx 1.467799\)
(D) | - | 8 | \(8^{\frac14}\approx 1.681792\)
Hence, these orders of convergence are optimal in the above-mentioned sense of the Kung-Traub conjecture. The efficiency indices are \(4^{\frac13}\approx 1.587\) and \(8^{\frac14}\approx 1.681\), respectively.
Thus, we obtain the iterations (A) and (D) with the optimal orders of convergence 4 and 8, accelerating Newton's method. Our acceleration procedure gives a genuine improvement of Newton's method. One advantage of the iterations (A) and (D) is that these methods work well for systems of nonlinear equations, whereas the optimal order methods in [1, 2, 3, 10, 12] do not extend to systems of equations.
For convenience, we present the efficiency indices of the methods (A), (B), (C) and (D) proposed above in Table 3.1. From Table 3.1 one can see that the efficiency index of the iterations (A), (B), (C) and (D) is better or much better than that of Newton's method, \(\sqrt2\approx 1.414.\)
4 Finding the optimal parameter
Let \(m=1\) in (3.8). Then the root (3.18) can be written as
For \(k=2\), from the quadratic equation \(P_2(t)=0\) we obtain
or
By the well-known assertion and (3.19) we have
Hence
From this we obtain
The root of (4.93) greater than 1 is
In a similar way, from (3.6) and (3.7) we obtain
where
Since for all iterations (A), (B) and (C) we have
then
Using (3.26) and (4.98) in (4.95), we obtain the approximate equation
Eq. (4.99) approximates (4.95) with accuracy \(\mathcal{O}(f_n^4)\) in the case of (A), and with accuracy \(\mathcal{O}(f_n^5)\) in the cases of (B) and (C), since \(t_{\textnormal{opt}}-1=\mathcal{O}(\varepsilon _n^2)\) for (A) and \(t_{\textnormal{opt}}-1=\mathcal{O}(\varepsilon _n^3)\) for (B) and (C). Therefore, Eq. (4.99) may be useful especially for (A).
Above, we obtained formula (4.90) for finding the optimal value \(t_{\textnormal{opt}}\) for iteration (A). However, it must be modified for iterations (B) and (C). Since \(x_{n+m}:=z_n\), \(x_n:=y_n\) and \(y_n:=x_{n+1}\) for iteration (B), we accordingly have
or
We rewrite (4.100) as
From the last equation, it is clear that if we take into account the following estimate
and
which follows from (3.26), then Eq. (4.99) with \(\theta _n=\frac{f(z_n)}{f(y_n)}\) holds within the accuracy \(\mathcal{O}\big(f^4(x_n)\big)\).
If we wish to include a more precise correction to (4.100), one can replace \(1-\displaystyle \tfrac {f'(y_n)}{f'(x_n)}\) by \(2\displaystyle \tfrac {f(y_n)}{f(x_n)}\); then we arrive at
By virtue of (4.102) and (4.103), Eq. (4.104) approximates Eq. (4.100) with accuracy \(\mathcal{O}\big(f^5(x_n)\big)\).
With respect to the iteration (C), Eq. (4.99) remains true with \(\theta _n=\frac{f(z_n)}{f(y_n)}\).
Note that in most cases the value of the iteration parameter of the damped Newton method varies from zero to unity, whereas in our case the value of the optimal parameter may be greater than unity.
5 Numerical experiments
We consider the following four examples [ 18 , 2 , 12 , 8 ] .
Let \(f(x)=\exp (x)-4x^2=0\). This equation has three roots. It is easy to show that
We considered only the first and third roots.
Table 5.1. Example \(f(x)=\exp (x)-4x^2\) (first root): errors and computational orders.

Method | \(k\) | \(|x^*-x_0|\) | \(|x^*-x_1|\) | \(|x^*-x_2|\) | \(|x^*-x_3|\) | \(d_{x_2}\) | \(d_{x_3}\) | \(\rho _{\tau _2}\) | \(\rho _{\tau _3}\)
(A) | 1 | 1.93(-01) | 3.87(-03) | 4.00(-08) | 4.45(-023) | 2.93 | 3.00 | 2.93 | 3.00
(A) | 2 | 1.93(-01) | 3.48(-04) | 3.80(-15) | 5.40(-059) | 3.99 | 4.00 | 3.99 | 4.00
(A) | 3 | 1.93(-01) | 1.68(-05) | 8.74(-26) | 3.31(-127) | 5.00 | 5.00 | 5.00 | 5.00
(B) | 1 | 1.93(-01) | 1.43(-04) | 5.70(-20) | 5.78(-097) | 4.92 | 5.00 | 4.92 | 5.00
(B) | 2 | 1.93(-01) | 1.46(-06) | 4.15(-42) | 6.35(-291) | 6.94 | 7.00 | 6.94 | 7.00
(B) | 3 | 1.93(-01) | 9.66(-09) | 4.56(-74) | 5.31(-662) | 8.94 | 9.00 | 8.94 | 9.00
(C) | 1 | 1.93(-01) | 1.24(-05) | 1.47(-30) | 4.13(-180) | 5.95 | 6.00 | 5.95 | 6.00
(C) | 2 | 1.93(-01) | 1.26(-07) | 8.02(-57) | 2.14(-450) | 7.95 | 8.00 | 7.95 | 8.00
(C) | 3 | 1.93(-01) | 8.38(-10) | 4.41(-93) | 7.23(-926) | 9.96 | 10.00 | 9.96 | 10.00
Table 5.2. Example \(f(x)=\exp (x)-4x^2\) (third root): errors and computational orders.

Method | \(k\) | \(|x^*-x_0|\) | \(|x^*-x_1|\) | \(|x^*-x_2|\) | \(|x^*-x_3|\) | \(d_{x_2}\) | \(d_{x_3}\) | \(\rho _{\tau _2}\) | \(\rho _{\tau _3}\)
(A) | 1 | 9.22(-02) | 5.38(-04) | 1.36(-010) | 2.18(-0030) | 2.95 | 3.00 | 2.95 | 3.00
(A) | 2 | 9.22(-02) | 1.56(-06) | 1.56(-025) | 1.55(-0101) | 3.98 | 4.00 | 3.98 | 4.00
(A) | 3 | 9.22(-02) | 3.56(-08) | 3.77(-040) | 5.04(-0200) | 4.99 | 5.00 | 4.99 | 5.00
(B) | 1 | 9.22(-02) | 6.10(-06) | 1.29(-026) | 5.39(-0130) | 4.95 | 5.00 | 4.95 | 5.00
(B) | 2 | 9.22(-02) | 1.26(-09) | 2.17(-064) | 9.62(-0448) | 6.96 | 7.00 | 6.96 | 7.00
(B) | 3 | 9.22(-02) | 2.14(-12) | 9.57(-108) | 6.74(-0966) | 8.96 | 9.00 | 8.96 | 9.00
(C) | 1 | 9.22(-02) | 2.70(-07) | 2.76(-040) | 3.13(-0238) | 5.96 | 6.00 | 5.96 | 6.00
(C) | 2 | 9.22(-02) | 5.57(-11) | 1.87(-084) | 2.96(-0672) | 7.97 | 8.00 | 7.97 | 8.00
(C) | 3 | 9.22(-02) | 9.48(-14) | 2.74(-133) | 1.12(-1328) | 9.97 | 10.00 | 9.97 | 10.00
\(f(x)=x^2-2\cos (x)=0\). This equation has two roots. It is also easy to show that
We considered only the first root, because \(f(x)\) is an even function of \(x\).
Table 5.3. Example \(f(x)=x^2-2\cos (x)\) (first root): errors and computational orders.

Method | \(k\) | \(|x^*-x_0|\) | \(|x^*-x_1|\) | \(|x^*-x_2|\) | \(|x^*-x_3|\) | \(d_{x_2}\) | \(d_{x_3}\) | \(\rho _{\tau _2}\) | \(\rho _{\tau _3}\)
(A) | 1 | 5.49(-01) | 1.11(-02) | 2.18(-07) | 1.71(-021) | 2.77 | 3.00 | 2.77 | 3.00
(A) | 2 | 5.49(-01) | 1.73(-03) | 2.73(-13) | 1.71(-052) | 3.92 | 4.00 | 3.92 | 4.00
(A) | 3 | 5.49(-01) | 5.18(-05) | 1.76(-24) | 7.93(-122) | 4.84 | 5.00 | 4.84 | 5.00
(B) | 1 | 5.49(-01) | 4.63(-04) | 1.16(-18) | 1.12(-091) | 4.75 | 5.00 | 4.75 | 5.00
(B) | 2 | 5.49(-01) | 6.44(-06) | 1.90(-39) | 3.62(-274) | 6.80 | 7.00 | 6.80 | 7.00
(B) | 3 | 5.49(-01) | 6.17(-08) | 3.33(-69) | 1.29(-620) | 8.81 | 9.00 | 8.81 | 9.00
(C) | 1 | 5.49(-01) | 4.84(-05) | 1.41(-28) | 8.72(-170) | 5.80 | 6.00 | 5.80 | 6.00
(C) | 2 | 5.49(-01) | 6.65(-07) | 3.21(-53) | 9.36(-424) | 7.83 | 8.00 | 7.83 | 8.00
(C) | 3 | 5.49(-01) | 6.42(-09) | 6.22(-87) | 4.48(-867) | 9.84 | 10.00 | 9.84 | 10.00
Let \(f(x)=(x-2)(x^{10}+x+1)\exp (-x-1)=0\). We chose the initial approximation \(x_0=2.1\) for \(x^*=2\).
Table 5.4. Comparison of (D) with optimal eighth-order methods from [1] and [12] for \(f(x)=(x-2)(x^{10}+x+1)\exp (-x-1)\).

Method | \(|x^*-x_1|\) | \(|x^*-x_2|\) | \(|x^*-x_3|\) | \(d_{x_3}\)
\(h(t)=1+\frac{4t}{2-5t},\beta =3\) in [1, (14)] | 1.83(-5) | 3.15(-34) | 2.45(-264) | 7.99986
\(h(t)=\frac{1}{1-2t-t^2+t^3},\beta =3\) in [1, (14)] | 6.02(-6) | 7.91(-38) | 6.99(-293) | 8.00007
\(\psi (t)=\frac{5-2t+t^2}{5-12t}\) in [12, (12)] | 6.12(-5) | 1.11(-29) | 1.34(-224) | 7.99947
\(\psi (t)=\frac{1}{1-2t-t^2}\) in [12, (12)] | 6.01(-5) | 9.29(-30) | 3.02(-228) | 8.00050
(D), \(\alpha =0\) | 2.18(-5) | 1.12(-34) | 5.40(-269) | 7.99999
(D), \(\alpha =0.5\) | 2.14(-5) | 2.25(-34) | 3.39(-266) | 8.00003
(D), \(\alpha =1\) | 2.89(-5) | 2.45(-33) | 6.63(-258) | 7.99999
All numerical calculations were performed using the Maple 16 system. Also, to study the convergence of the iterations (3.6), (A), (B), (C) and (D), we compute the computational order of convergence \(d_{x_n}\) using the formula [14]
\[ d_{x_n}\approx \frac{\ln \big(|x^*-x_{n+1}|/|x^*-x_n|\big)}{\ln \big(|x^*-x_n|/|x^*-x_{n-1}|\big)}, \]
where \(x_{n+1}\), \(x_n\), \(x_{n-1}\) are three consecutive approximations. In the numerical examples we also check the computational order of convergence (COC) of \(\tau _n\) by a formula from [14],
which is included in the presented tables (see Tables 5.1–5.4) and agrees with the conjecture.
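For reference, the COC formula above is a one-liner; the following Python sketch (ours, not from the paper) reproduces, in double precision, the first entries of the tables, while the later entries require the multiple-precision arithmetic of Maple.

```python
import math

def coc(e_prev, e_curr, e_next):
    """Computational order of convergence from three consecutive
    absolute errors e_j = |x* - x_j|."""
    return math.log(e_next / e_curr) / math.log(e_curr / e_prev)

# Errors of method (A), k = 2 in Table 5.1 give d_{x_2} close to 4:
print(coc(1.93e-1, 3.48e-4, 3.80e-15))   # ~ 3.99
```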
Comparisons of the convergence of the iterations (A), (B) and (C) are given in Tables 5.1–5.3. The third, fourth, fifth and sixth columns show the absolute errors \(|x^*-x_n|\) for the first four iterations. The last four columns display the computational orders of convergence \(d_{x_2}\), \(d_{x_3}\), \(\rho _{\tau _2}\) and \(\rho _{\tau _3}\), respectively. The factor \(l\) in brackets denotes \(10^l\). As expected, the convergence of the proposed methods was remarkably fast. A comparison of the convergence of the (D) iteration with other optimal eighth-order iterations [1, 12] is given in Table 5.4. From the tables we see that the COC coincides perfectly with the theoretical order.
Conclusions
We have proposed a new acceleration procedure for Newton-type methods. The effect of the acceleration is more perceptible as \(k\) increases. The proposed accelerating procedure allows us to derive high and optimal order iterative methods. The numerical results clearly confirm the theoretical analysis (speed of convergence and effect of acceleration). Moreover, our acceleration procedure can also be applied to any iteration and to systems of nonlinear equations, to which a forthcoming paper will be devoted.
The authors would like to thank the anonymous referee for the valuable comments and suggestions which substantially improved the quality of this article. The work was supported partially by the Foundation of Science and Technology of Mongolia under grant SST_007/2015. O. Ch. acknowledges support within the Hulubei-Meshcheryakov programme JINR-Romania.
Bibliography
- 1
- 2
- 3
- 4
- 5
- 6
- 7
- 8
- 9
- 10
- 11
- 12
- 13
- 14
- 15
- 16
T. Zhanlav, O. Chuluunbaatar, Some iteration methods with high order convergence for nonlinear equations, Bulletin of PFUR, Series Mathematics. Information Sciences. Physics, 4 (2009), pp. 47–55.
- 17
T. Zhanlav, O. Chuluunbaatar, Convergence of the continuous analogy of Newton's method for solving nonlinear equations, Numerical Methods and Programming, Moscow State University, 10 (2009), pp. 402–407.
- 18
- 19
T. Zhanlav, D. Khongorzul, On convergence behavior of combined iterative method for solving nonlinear equations, Comput. Math. Math. Phys., 52 (2012), pp. 790–800.
- 20
T. Zhanlav, I.V. Puzynin, The convergence of iterations based on a continuous analogue of Newton's method, Comput. Math. Math. Phys., 32 (1992), pp. 729–737.