We consider an Aitken-Steffensen type method in which the nodes are controlled by Newton and two-step Newton iterations.
We prove a local convergence result showing the $q$-convergence order 7 of the iterations. Under certain supplementary conditions, we obtain monotone convergence of the iterations, providing an alternative to the usual ball attraction theorems.
Numerical examples show that this method may, in some cases, have larger (possibly sided) convergence domains than other methods with similar convergence orders.
Authors
Ion Păvăloiu
(Tiberiu Popoviciu Institute of Numerical Analysis, Romanian Academy)
Keywords
Cite this paper as
I. Păvăloiu, On an Aitken-Steffensen-Newton type method, Carpathian J. Math. 34 (2018) no. 1, pp. 85-92.
1. Introduction
We are interested in solving the nonlinear equation
$$f(x)=0, \tag{1.1}$$
where $f:[a, b] \rightarrow \mathbb{R}$ is given. We consider two auxiliary functions $g_{1}, g_{2}:[a, b] \rightarrow[a, b]$ such that the above equation is equivalent to the following ones
where $[x, y; f]$ denotes the first order divided difference of $f$ at the nodes $x, y$.
If one of the nodes in the above formula is controlled by $g_{1}$, one obtains the Steffensen method, given by (see [1], [2], [4], [5], [8], [10], [11], [18], [22], [24], [25])
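The display defining this iteration is not reproduced above; for the reader's orientation, the usual writing of the Steffensen-type method with one node controlled by $g_{1}$ (a standard formula, recalled here as a sketch rather than as a quotation of the paper's numbered equation) is
$$x_{n+1}=x_{n}-\frac{f(x_{n})}{[x_{n}, g_{1}(x_{n}); f]}, \qquad n=0,1,\ldots$$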
Considering the inverse interpolatory polynomial of degree 2, one can obtain methods similar to the above ones, studied in [19]-[20]. In this paper we consider again this polynomial. We assume that $J=f([a, b])$ and that $f:[a, b] \rightarrow J$ is bijective, so there exists the inverse of $f$, $f^{-1}: J \rightarrow[a, b]$. Obviously, $x^{*}=f^{-1}(0)$. Let $a_{i} \in[a, b]$ and $b_{i}=f(a_{i})$, $i=1,2,3$.
Denote by $P$ the interpolation polynomial for $f^{-1}$, determined by the nodes $b_{1}, b_{2}, b_{3}$ and values $a_{1}, a_{2}, a_{3}$ (see, e.g., [19]-[20]):
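The display (1.3) itself is not reproduced above; the standard Newton divided-difference form of this inverse interpolation polynomial (our reconstruction, assuming the paper uses this classical writing) reads
$$P(y)=a_{1}+[b_{1}, b_{2}; f^{-1}]\,(y-b_{1})+[b_{1}, b_{2}, b_{3}; f^{-1}]\,(y-b_{1})(y-b_{2}).$$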
Given an approximation $x_{n} \in[a, b]$ for $x^{*}$, we take $a_{1}=x_{n}$, $a_{2}=g_{1}(x_{n})$, $a_{3}=g_{2}(x_{n})$, and we obtain the next approximation by
In [19] we have studied the case when in (1.5) one takes $g_{1}(x)=x-\lambda f(x)$, with $\lambda \in \mathbb{R}$ a fixed parameter, and $g_{2}(x)=g_{1}(g_{1}(x))$. We have obtained conditions assuring the monotone convergence of the sequences $(x_{n})_{n \geq 0}$ and $(g_{1}(x_{n}))_{n \geq 0}$.
In this paper we consider the case when $g_{1}$ is given by the Newton iteration
$$g_{1}(x)=x-\frac{f(x)}{f^{\prime}(x)},$$
while $g_{2}$ is given by the two-step Newton iteration: $g_{2}(x)=g_{1}(g_{1}(x))$.
From (1.5) we therefore obtain the following iterations:
which we call Aitken-Steffensen-Newton iterations.
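The display (1.6) is likewise not reproduced above. Since $x^{*}=f^{-1}(0)$ and $P$ interpolates $f^{-1}$ at the nodes $f(x_{n}), f(y_{n}), f(z_{n})$ with values $x_{n}, y_{n}, z_{n}$, the new iterate is presumably obtained by evaluating $P$ at $0$. A minimal Python sketch of one step under this assumption (the names `newton_step` and `asn_step` are ours, not the paper's):

```python
def newton_step(f, df, x):
    # g1: one Newton step, g1(x) = x - f(x)/f'(x)
    return x - f(x) / df(x)

def asn_step(f, df, x):
    """One Aitken-Steffensen-Newton step (sketch): the nodes are x,
    y = g1(x), z = g1(g1(x)); the new iterate is P(0), with P the
    polynomial interpolating f^{-1} at f(x), f(y), f(z)."""
    y = newton_step(f, df, x)          # y_n = g1(x_n)
    z = newton_step(f, df, y)          # z_n = g2(x_n) = g1(g1(x_n))
    bx, by, bz = f(x), f(y), f(z)      # interpolation nodes for f^{-1}
    d1 = (y - x) / (by - bx)           # [f(x), f(y); f^{-1}]
    d2 = (z - y) / (bz - by)           # [f(y), f(z); f^{-1}]
    d12 = (d2 - d1) / (bz - bx)        # [f(x), f(y), f(z); f^{-1}]
    # P(0) = x + d1*(0 - bx) + d12*(0 - bx)*(0 - by)
    return x + d1 * (-bx) + d12 * bx * by

# Illustrative use on f(x) = x^3 - 2 (our test function, not from the paper):
f, df = lambda x: x**3 - 2, lambda x: 3 * x**2
x = 2.0
for _ in range(3):
    x = asn_step(f, df, x)
print(x)   # close to 2**(1/3) = 1.2599...
```

Note that, for the operation count discussed in Section 2, $f(x_{n})$ and $f(y_{n})$ are each evaluated only once in practice; the sketch above recomputes them for brevity.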
At this point we notice that the above formula has six equivalent writings, since the interpolation polynomial $P$ is the same, no matter the order of the three interpolation nodes. We shall use this fact in the proof of Theorem 3.2.
In Section 2 we provide a local convergence result for these iterates, while in Section 3 we obtain monotone convergence of the iterates under certain supplementary conditions
(Fourier type, convexity) on $f$. The last section contains some numerical examples, which confirm the theory.
2. Local convergence of the iterations
We shall denote by $I_{a_{1}, a_{2}, a_{3}}$ the smallest open interval determined by the numbers $a_{1}, a_{2}, a_{3} \in[a, b]$, and by $E_{f}$ the following expression
We obtain the following local convergence result.
Theorem 2.1. Assume that hypotheses A) and
B) $f \in C^{3}[a, b]$, and $f^{\prime}(x) \neq 0$, $\forall x \in[a, b]$,
hold. Then the Aitken-Steffensen-Newton method converges locally to $x^{*}$, i.e., for any $x_{0}$ sufficiently close to $x^{*}$, the iterations $(x_{n})_{n \geq 0}$ given by (1.6) are well defined and converge to $x^{*}$. Moreover, the following estimates hold:
with some $\xi_{n} \in I_{x^{*}, x_{n}, y_{n}, z_{n}}$, $\theta_{n}, \alpha_{n} \in I_{x^{*}, x_{n}}$, $\mu_{n}, \beta_{n} \in I_{x^{*}, y_{n}}$, $\gamma_{n} \in I_{x^{*}, z_{n}}$; this shows that the method attains convergence with $q$-order at least 7, with asymptotic constant given by
Proof. We suppose for the moment that the elements of the sequences $(x_{n})_{n \geq 0}$, $(y_{n})_{n \geq 0}$ and $(z_{n})_{n \geq 0}$ remain in $[a, b]$.
The remainder of the interpolation polynomial $P$ in (1.3) is known to satisfy
where $\eta_{n} \in I_{0, f(x_{n}), f(y_{n}), f(z_{n})}$. Since $f$ has derivatives up to order 3 on $[a, b]$ and $f^{\prime}(x) \neq 0$, $\forall x \in[a, b]$, the inverse $f^{-1}$ is three times differentiable and (see, e.g., [19])
where $\xi_{n}=f^{-1}(\eta_{n}) \in[a, b]$. One can show that in fact $\xi_{n} \in I_{x^{*}, x_{n}, y_{n}, z_{n}}$.
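The display referred to in the previous sentence is not reproduced above; for convenience, the standard expression of the third derivative of the inverse function, on which the remainder estimate relies (our reconstruction of the omitted formula, not a quotation of the paper), is
$$\left(f^{-1}\right)'''(y)=\frac{3\,f''(x)^{2}-f'(x)\,f'''(x)}{f'(x)^{5}}, \qquad x=f^{-1}(y).$$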
Combining the above relations, (2.8) leads to
The Lagrange theorem ensures the existence of $\alpha_{n} \in I_{x^{*}, x_{n}}$, $\beta_{n} \in I_{x^{*}, y_{n}}$, $\gamma_{n} \in I_{x^{*}, z_{n}}$ such that
From the first and second relations in (1.6) it follows that there exist $\theta_{n} \in I_{x^{*}, x_{n}}$ and $\mu_{n} \in I_{x^{*}, y_{n}}$ such that
These relations imply (2.7), as well as the rest of the statements.
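To make the order count explicit, here is a sketch of the bookkeeping behind the exponent 7, using the standard Newton error relations (the displays (2.7)-(2.9) themselves are not reproduced above):
$$|y_{n}-x^{*}|=O\!\left(|x_{n}-x^{*}|^{2}\right),\qquad |z_{n}-x^{*}|=O\!\left(|y_{n}-x^{*}|^{2}\right)=O\!\left(|x_{n}-x^{*}|^{4}\right),$$
$$|x_{n+1}-x^{*}|\le C\,|x_{n}-x^{*}|\,|y_{n}-x^{*}|\,|z_{n}-x^{*}|=O\!\left(|x_{n}-x^{*}|^{1+2+4}\right)=O\!\left(|x_{n}-x^{*}|^{7}\right).$$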
It can be easily proved that the elements of all sequences $(x_{n})_{n \geq 0}$, $(y_{n})_{n \geq 0}$, $(z_{n})_{n \geq 0}$, $(\mu_{n})_{n \geq 0}$, $(\xi_{n})_{n \geq 0}$, $(\theta_{n})_{n \geq 0}$ are well defined if $x_{0}$ is chosen sufficiently close to $x^{*}$.
The $q$-convergence order $p$ of the method (see, e.g., [14] for definitions and properties) is at least 7, and since at each iteration step one computes $d=5$ function evaluations ($f(x_{n})$, $f^{\prime}(x_{n})$, $f(y_{n})$, $f^{\prime}(y_{n})$, $f(z_{n})$), the efficiency index of the method (see, e.g., [15] for definitions) is $E=p^{1/d}=7^{1/5} \approx 1.47$. This value is greater than $2^{1/2} \approx 1.41$ and $3^{1/3} \approx 1.44$, which correspond to the Newton method, resp. the generalized Steffensen method (see [19]).
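A quick arithmetic check of the efficiency indices compared above (a plain Python computation, included only to verify the stated values):

```python
# efficiency index E = p**(1/d): q-order p, d function evaluations per step
methods = {
    "Aitken-Steffensen-Newton (p=7, d=5)": 7 ** (1 / 5),
    "Newton (p=2, d=2)":                   2 ** (1 / 2),
    "generalized Steffensen (p=3, d=3)":   3 ** (1 / 3),
}
for name, E in methods.items():
    print(f"{name}: E = {E:.3f}")
# approximately 1.476, 1.414 and 1.442, respectively
```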
3. Monotone convergence
In order to study the monotone convergence of the method, we consider the Fourier condition:
C) the initial approximation $x_{0} \in[a, b]$ verifies $f(x_{0}) \cdot f^{\prime \prime}(x_{0})>0$.
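As a simple illustration (our example, not taken from the paper): for $f(x)=x^{2}-2$ on $[1,2]$ one has $f''\equiv 2>0$, so C) requires $f(x_{0})>0$, i.e. a starting point to the right of the root $x^{*}=\sqrt{2}$; thus $x_{0}=2$ satisfies the Fourier condition, while $x_{0}=1$ does not.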
We obtain the following results.
Theorem 3.2. If $f$ satisfies assumptions A)-C) and, moreover,
$i_{1}$) $f^{\prime}(x)>0$, $\forall x \in[a, b]$;
$ii_{1}$) $f^{\prime\prime}(x) \geq 0$, $\forall x \in[a, b]$;
$iii_{1}$) $E_{f}(x) \geq 0$, $\forall x \in[a, b]$,
then the elements of the sequences $(x_{n})_{n \geq 0}$, $(y_{n})_{n \geq 0}$ and $(z_{n})_{n \geq 0}$ generated by (1.6) remain in $[a, b]$ and satisfy:
$j_{1}$) $x_{n}>y_{n}>z_{n}>x_{n+1}>x^{*}$, $n=0,1, \ldots$;
$jj_{1}$) $\lim x_{n}=\lim y_{n}=\lim z_{n}=x^{*}$.
Moreover, as soon as the iterates $(x_{n})_{n \geq 0}$ become sufficiently close to the solution, they obey also the conclusions of Theorem 2.1.
Proof. By $i_{1}$) and B) it follows that the solution $x^{*}$ is unique in $]a, b[$. Let $x_{n} \in[a, b]$ be an approximation of $x^{*}$ which satisfies C). From $ii_{1}$) we have $f(x_{n})>0$ and, by $i_{1}$), $x_{n}>x^{*}$. Relations A), $i_{1}$), $ii_{1}$) imply, by the first relation in (1.6), that $y_{n}<x_{n}$ and $y_{n}>x^{*}$. A similar reasoning leads to $x^{*}<z_{n}<y_{n}$ and $f(z_{n})>0$. The last relation in (1.6), together with (2.8), (2.9), $i_{1}$) and $iii_{1}$), implies that $x_{n+1}>x^{*}$. The inequality $x_{n+1}<z_{n}$ is obtained from the equivalent writing of the interpolation polynomial, i.e.
By induction, we show $j_{1}$). It remains to show $jj_{1}$), which follows by passing to the limit in the first relation in (1.6) and taking into account $j_{1}$).
Remark 3.1. We notice that the above result allows a possibly larger convergence domain of the iterates, compared to conditions required by Theorem 2.1 (as is the case when we consider the Newton method and the Fourier condition). The same observation applies to the subsequent results.
Theorem 3.3. If $f$ obeys assumptions A)-C), $iii_{1}$) and, moreover,
$i_{2}$) $f^{\prime}(x)<0$, $\forall x \in[a, b]$;
$ii_{2}$) $f^{\prime\prime}(x) \leq 0$, $\forall x \in[a, b]$,
then the same conclusions hold as in Theorem 3.2.
Proof. Instead of (1.1) we consider $h(x)=0$, with $h=-f$, and we take into account that $E_{f}=E_{-f}$. $\square$
In the case when $f^{\prime}$ and $f^{\prime\prime}$ have different signs, we obtain the following results.
Theorem 3.4. If $f$ satisfies A)-C), $iii_{1}$) and, moreover,
$i_{3}$) $f^{\prime}(x)<0$, $\forall x \in[a, b]$;
$ii_{3}$) $f^{\prime\prime}(x) \geq 0$, $\forall x \in[a, b]$,
then the elements of $(x_{n})_{n \geq 0}$, $(y_{n})_{n \geq 0}$, $(z_{n})_{n \geq 0}$ remain in $[a, b]$ and obey:
$j_{3}$) $x_{n}<y_{n}<z_{n}<x_{n+1}<x^{*}$, $n=0,1, \ldots$;
$jj_{3}$) $\lim x_{n}=\lim y_{n}=\lim z_{n}=x^{*}$.
Moreover, as soon as the iterates $(x_{n})_{n \geq 0}$ become sufficiently close to the solution, they obey also the conclusions of Theorem 2.1.
The proof is similar to the proof of Theorem 3.2.
Theorem 3.5. If $f$ satisfies A)-C), $iii_{1}$) and
$i_{4}$) $f^{\prime}(x)>0$, $\forall x \in[a, b]$;
$ii_{4}$) $f^{\prime\prime}(x) \leq 0$, $\forall x \in[a, b]$,
then the same conclusions hold as in Theorem 3.4.
The proof is obtained as in the proof of Theorem 3.3.
4. Numerical examples
We present some examples, solved using Matlab in double precision, and we compare the studied method to other methods. In order to obtain smaller tables, we used the format short command in Matlab, and for better legibility we used the \numprint LaTeX command and package. It is worth mentioning that such a choice may lead to results that are displayed as integers while they are not (e.g., the value of $y_{4}$ in Table 4 is displayed as 2, although it is not exactly 2, since $f(y_{4})$ is not 0); the explanation resides in the rounding made in the conversion process.
We shall consider the Aitken-Newton method introduced in [16]:
It has the $q$-convergence order 8, the efficiency index $\sqrt[5]{8} \approx 1.51$, and a monotone convergence of the iterates similar to that obtained in Theorems 3.2-3.5. However, the numerical examples performed in double precision arithmetic show a slightly better convergence of this method over the Aitken-Steffensen-Newton method studied in this paper.
Example 4.1. Consider the following equation (see, e.g., [6])
The largest interval on which the monotone convergence of our method can be studied by Theorems 3.2-3.5 is $[a, b]:=[x^{*}, 1.54 \ldots]$, since $f^{\prime \prime}$ vanishes at $b$ (being positive on $[a, b]$). The Fourier condition C) holds on $[a, b]$ (and does not hold for $x<a$), $E_{f}(x)>0$ on $[a, b]$, while the derivatives $f^{\prime}, f^{\prime \prime}$ are positive on this interval. The conclusions of Theorem 3.2 apply.
The Aitken-Steffensen-Newton method leads to the following results, presented in Table 1.
The convergence may not be very fast for initial approximations away from the solution.
It is worth noting that the method converges for $-0.3 \leq x_{0} \leq x_{1}^{*}$ too (local convergence near $x^{*}=0$ assured by Theorem 2.1), even though the Fourier condition does not hold. For $x_{0}=-0.3$ one obtains $y_{0}=-2.4 \ldots<0$, then $z_{0}=-0.14 \ldots<0$, $x_{1}=0.37 \ldots$, and the rest of the iterates remain positive, converging monotonically to $x_{1}^{*}$. For $x_{0}=-0.4$, the method converges to another solution of the equation, $x_{2}^{*}=-0.603 \ldots$
The optimal method introduced by Cordero, Torregrosa and Vassileva in [9] has a smaller convergence domain to the right of $x_{1}^{*}$, since the iterates converge for $x_{0}=1.48$ ($x_{4}=1.3741 \cdot 10^{-32}$), while for $x_{0}=1.49$ the iterates jump over $x_{1}^{*}$ and converge to $x_{2}^{*}$; in fact, the initial approximation 1.442 does not lead to convergence (see [16]).
The Kou-Wang method (formula (25) in [12]) converges for $x_{0}=1.48$ ($x_{4}=0$) and diverges for $x_{0}=1.49$ (see [16]).
Example 4.2. Consider the following equation (see, e.g., [23])
The largest interval on which the monotone convergence of our method can be studied by Theorems 3.2-3.5 is $[a, b]:=[x^{*}, 7.9 \ldots]$, since $f^{\prime \prime}$ vanishes at $b$ (being positive on $[a, b]$).
The Fourier condition C) holds on $[a, b]$ (and does not hold for $x<a$), $E_{f}(x)>0$ on $[a, b]$, while both derivatives $f^{\prime}, f^{\prime \prime}$ are positive on this interval. The conclusions of Theorem 3.2 apply.
It is interesting to note that in [23, Rem. 6] Petković observed that the methods studied there have a small convergence domain to the left of the solution: the choice of $x_{0}=1.8$ caused a bad convergence behavior of those iterates. We believe that this behavior may be explained by the fact that the derivative of $f$ vanishes at $x=1.78 \ldots$
The Aitken-Steffensen-Newton method leads to the results presented in Table 3. The iterates converge even for $x_{0}>7.9$ (and to the left of the solution as well, but for $x_{0}$ higher than 1.81). Of course, the convergence may not be very fast when the initial approximations are away from the solution.
In Table 4 we present the iterates generated by the Aitken-Newton method (4.12).
The optimal method introduced by Cordero, Torregrosa and Vassileva in [9] converges to $x^{*}$ for $x_{0}=6.46$ and does not for $x_{0}=6.47$, as shown in [16].
The optimal method introduced by Liu and Wang (formula (18) in [13]) converges to $x^{*}=2$ for $x_{0}=2.359$ (it needs 5 iterates), but for $x_{0}=2.36$ it converges to another solution, $x_{1}^{*}=1512.626 \ldots$ The results are presented in [16].
Among the optimal methods in [23] (the methods with convergence orders higher than 8 were corrected in a subsequent Corrigendum), the modified Ostrowski and Maheshwari methods behave very well for this example (we have studied the convergence only to the right of the solution). The modified Euler-like method has a small domain of convergence (in $\mathbb{R}$), since it converges to $x^{*}$ for $x_{0}=2.15$, while for $x_{0}=2.16$ it generates square roots of negative numbers. Matlab has the feature of implicitly dealing with complex numbers, and the iterates finally converge (in $\mathbb{C}$) to the solution (see [16]).
Conclusions. The sufficient conditions for the guaranteed convergence of the method studied here may theoretically lead to larger convergence domains (especially sided convergence intervals) than estimates of attraction balls, while a few examples showed that these domains are larger than those corresponding to some optimal methods of order 8. The performances of the studied method are comparable to those of the Aitken-Newton method studied in [16].
References
[1] Amat, S. and Busquier, S., A two-step Steffensen's method under modified convergence conditions, J. Math. Anal. Appl., 324 (2006), 1084-1092
[2] Amat, S., Busquier, S. and Candela, V., A class of quasi-Newton generalized Steffensen methods on Banach space, J. Comput. Appl. Math., 149 (2002), 397-406
[3] Argyros, I. K. and Hilout, S., On the local convergence of fast two-step Newton-like methods for solving nonlinear equations, J. Comput. Appl. Math., 245 (2013), 1-9
[4] Beltyukov, B. A., An analogue of the Aitken-Steffensen method with controlled step, USSR Comput. Math. Math. Phys., 27 (1987), No. 3, 103-112
[5] Cătinaș, E., On some Steffensen-type iterative methods for a class of nonlinear equations, Rev. Anal. Numér. Théor. Approx., 24 (1995), No. 1-2, 37-43
[6] Chun, C., Certain improvements of Chebyshev-Halley methods with accelerated fourth-order convergence, Appl. Math. Comput., 189 (2007), No. 1, 597-601
[7] Chun, C., A geometric construction of iterative formulas of order three, Appl. Math. Lett., 23 (2010), 512-516
[8] Cordero, A. and Torregrosa, R. J., A class of Steffensen type method with optimal order of convergence, Appl. Math. Comput., 217 (2011), 7653-7659
[9] Cordero, A., Torregrosa, J. R. and Vassileva, M. P., A family of modified Ostrowski's methods with optimal eighth order of convergence, Appl. Math. Lett., 24 (2011), No. 12, 2082-2086
[10] Hongmin, R., Qingbiao, W. and Weihong, B., A class of two-step Steffensen type methods with fourth order convergence, Appl. Math. Comput., 209 (2009), No. 2, 206-210
[11] Jain, P., Steffensen type method for solving nonlinear equations, Appl. Math. Comput., 194 (2007), 527-533
[12] Kou, J. and Wang, X., Some improvements of Ostrowski's method, Appl. Math. Lett., 23 (2010), No. 1, 92-96
[13] Liu, L. and Wang, X., Eighth-order methods with high efficiency index for solving nonlinear equations, Appl. Math. Comput., 215 (2010), No. 9, 3449-3454
[14] Ortega, J. M. and Rheinboldt, W. C., Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, New York, 1970
[15] Ostrowski, A. M., Solution of Equations and Systems of Equations, Academic Press, New York, 1982
[16] Păvăloiu, I. and Cătinaș, E., On a robust Aitken-Newton method based on the Hermite polynomial, Appl. Math. Comput., 287-288 (2016), 224-231
[17] Păvăloiu, I., Approximation of the root of equations by Aitken-Steffensen-type monotonic sequences, Calcolo 32 (1995), No. 1-2, 69-82
[18] Păvăloiu, I., Bilateral approximations of solutions of equations by order three Steffensen-type methods, Studia Univ. "Babeş-Bolyai", Mathematica (Cluj), LI (2006), No. 3, 87-94
[19] Păvăloiu, I. and Cătinaș, E., On a Steffensen-Hermite method of order three, Appl. Math. Comput., 215 (2009), 2663-2672
[20] Păvăloiu, I. and Cătinaș, E., Bilateral approximations for some Aitken-Steffensen-Hermite type method of order three, Appl. Math. Comput., 217 (2011), 5838-5846
[21] Păvăloiu, I. and Cătinaș, E., On an Aitken-Newton-type method, Numer. Algor., 62 (2013), No. 2, 253-260
[22] Păvăloiu, I. and Cătinaș, E., On a Newton-Steffensen-type method, Appl. Math. Lett., 26 (2013), No. 6, 659-663
[23] Petković, M. S., On a general class of multipoint root-finding methods of high computational efficiency, SIAM J. Numer. Anal., 47 (2010), No. 6, 4402-4414
[24] Quan, Z., Peng, Z. and Wenchao, M., Variant of Steffensen secant method and applications, Appl. Math. Comput., 216 (2010), No. 12, 3486-3496
[25] Sharma, J. R., A composite third order Newton-Steffensen method for solving nonlinear equations, Appl. Math. Comput., 169 (2005), 342-346
[26] Zhanlav, T., Chuluunbaatar, O. and Ulziibayar, V., Two-sided approximation for some Newton's type methods, Appl. Math. Comput., 236 (2014), 239-246
Tiberiu Popoviciu Institute of Numerical Analysis
Romanian Academy
P.O. Box 68-1, Romania