Abstract

We introduce a new method for solving nonlinear equations in R, which uses three function evaluations at each step and has convergence order four, being, therefore, optimal in the sense of Kung and Traub:
\begin{align*}
y_{n} & =x_{n}-\frac{f\left( x_{n}\right) }{f^{\prime}\left( x_{n}\right)
}\label{f.1.10}\\
x_{n+1} & =y_{n}-\frac{\left[ x_{n},x_{n},y_{n};f\right] f^{2}\left(
x_{n}\right) }{\left[ x_{n},y_{n};f\right] ^{2}f^{\prime}\left(
x_{n}\right) },\ \; \; \; \;n=0,1,\ldots
\end{align*}

The method is based on the Hermite inverse interpolatory polynomial of degree two.

Under certain additional assumptions, we obtain convergence domains (sided intervals) larger than the usual ball convergence sets, and, moreover, with monotone convergence of the iterates. The method has larger convergence domains than the methods which use intermediate points of the type \(y_n=x_n+f(x_n)\) (as the latter may not yield convergent iterates when \(|f|\) grows fast near the solution).

Authors

Ion Păvăloiu
“Tiberiu Popoviciu” Institute of Numerical Analysis, Romanian Academy

Emil Cătinaș
“Tiberiu Popoviciu” Institute of Numerical Analysis, Romanian Academy

Keywords

Nonlinear equations in R; inverse interpolation; Hermite-Steffensen type method; computational convergence order.

Paper coordinates

I. Păvăloiu, E. Cătinaș, A new optimal method of order four of Hermite-Steffensen type, Mediterr. J. Math. 19 (2022), art. no. 147.
https://doi.org/10.1007/s00009-022-02030-5

PDF


About this paper

Journal

Mediterranean Journal of Mathematics

Publisher Name

Springer

Print ISSN

1660-5454

Online ISSN

1660-5446

google scholar link

A new optimal method of order four of Hermite-Steffensen type

A new optimal method of order four
of Hermite-Steffensen type

I. Păvăloiu and E. Cătinaş, “Tiberiu Popoviciu” Institute of Numerical Analysis, Romanian Academy, P.O. Box 68-1, Cluj-Napoca, Romania. Corresponding author: E. Cătinaş, email: ecatinas@ictp.acad.ro.
Abstract

We introduce a new method for solving nonlinear equations in \(\mathbb{R}\), which uses three function evaluations at each step and has convergence order four, being therefore optimal in the sense of Kung and Traub. The method is based on the Hermite inverse interpolatory polynomial of degree two. Under certain additional assumptions, we obtain convergence domains (sided intervals) larger than the usual ball convergence sets, and, moreover, with monotone convergence of the iterates.

The method has larger convergence domains than the methods which use intermediate points of the type \(y_n=x_n+f(x_n)\) (as the latter may not yield convergent iterates when \(|f|\) grows fast near the solution).

Keywords: nonlinear equations in \(\mathbb{R}\); inverse interpolation; Hermite-Steffensen type method; computational convergence order.

MSC: 65H05.

1 Introduction

In this paper we introduce and study a Steffensen-type iterative method for solving nonlinear equations in \(\mathbb{R}\),

\[
f(x)=0,
\]

where \(f:[a,b]\rightarrow\mathbb{R}\), \(a,b\in\mathbb{R}\), \(a<b\).

Such methods consider an additional equation, equivalent to the above one,

\[
g(x)-x=0,
\]

with \(g:[a,b]\rightarrow[a,b]\).

We assume the following usual hypothesis.

H1) there exists \(x^{\ast}\in\,]a,b[\) such that \(f(x^{\ast})=0\) (i.e., \(g(x^{\ast})=x^{\ast}\)).

The Steffensen method and its variants use the inverse interpolation polynomial of degree 1 for the function \(f\), with the nodes controlled with the aid of an auxiliary function \(g\) [1], [2], [3], [5], [9], [11]–[14], [19].

In this paper we consider a Hermite inverse interpolation polynomial of degree 2, which uses nodes generated with the aid of the function \(g\). We shall obtain an iterative method of order 4 which uses three function evaluations at each step, and which is therefore optimal in the sense of Kung and Traub. Under some reasonable additional assumptions on the function \(f\), we shall show that the obtained iterates converge monotonically to the solution.

We shall consider the following hypothesis on the smoothness of the nonlinear mapping:

H2) the function \(f\) is three times differentiable on \([a,b]\), and \(f'(x)\neq 0\) for all \(x\in[a,b]\).

This assumption implies that \(f\) is monotone and continuous on \([a,b]\), and there exists \(f^{-1}:J\rightarrow[a,b]\), \(J=f([a,b])\).

By H1), the solution may be written as

\[
x^{\ast}=f^{-1}(0).
\]

Given two approximations \(a_1,a_2\in[a,b]\) of the solution \(x^{\ast}\), let \(b_1=f(a_1)\), \(b_2=f(a_2)\), and consider

\begin{align*}
a_1 &= f^{-1}(b_1),\\
a_2 &= f^{-1}(b_2),\\
\big[f^{-1}(y)\big]'_{y=b_1} &= \tfrac{1}{f'(a_1)}.
\end{align*}

The polynomial of degree 2 which satisfies

\begin{align*}
P(b_1) &= a_1,\\
P(b_2) &= a_2,\\
P'(b_1) &= \tfrac{1}{f'(a_1)}
\end{align*}

has the expression given by

\[
P(y)=a_1+[b_1,b_1;f^{-1}](y-b_1)+[b_1,b_1,b_2;f^{-1}](y-b_1)^{2},
\]

with the remainder

\[
f^{-1}(y)-P(y)=[y,b_1,b_1,b_2;f^{-1}](y-b_1)^{2}(y-b_2),
\]

where, as usual, \([a_1,a_2;f]\) denotes the divided difference of order one of \(f\) at the nodes \(a_1,a_2\).
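
Recall that the first-order divided difference and its confluent version with a repeated node are given by

\[
[a_1,a_2;f]=\frac{f(a_2)-f(a_1)}{a_2-a_1},\qquad
[a_1,a_1;f]=f'(a_1),
\]

and similarly for the divided differences of \(f^{-1}\) at the nodes \(b_1,b_2\).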

Taking \(y=0\) in the above two relations, we get the next approximation by setting \(a_3=P(0)\):

\[
x^{\ast}\approx a_3=P(0)=a_1-[b_1,b_1;f^{-1}]\,b_1+[b_1,b_1,b_2;f^{-1}]\,b_1^{2},
\]

with the error

\[
x^{\ast}-P(0)=-[0,b_1,b_1,b_2;f^{-1}]\,b_1^{2}\,b_2.
\]

In order to express the two formulae above in terms of \(f\) instead of \(f^{-1}\), we take into account the known relations regarding the divided differences of the inverse function (see, e.g., [13]),

\begin{align*}
[b_1,b_1;f^{-1}] &= \frac{1}{f'(a_1)},\\
[b_1,b_1,b_2;f^{-1}] &= -\frac{[a_1,a_1,a_2;f]}{[a_1,a_2;f]^{2}\,f'(a_1)},
\end{align*}

the known mean value theorem (see, e.g., [13]), which yields

\[
[0,b_1,b_1,b_2;f^{-1}]=\frac{\big[f^{-1}(y)\big]'''_{y=\eta}}{6},\qquad \eta\in\operatorname{int}(J),
\]

and the expression for the derivatives of the inverse function (see, e.g., [18]),

\[
\big[f^{-1}(y)\big]'''=\frac{3f''(x)^{2}-f'(x)f'''(x)}{f'(x)^{5}},\qquad y=f(x).
\]

Therefore

\[
x^{\ast}\approx a_3=a_1-\frac{f(a_1)}{f'(a_1)}-\frac{[a_1,a_1,a_2;f]\,f^{2}(a_1)}{[a_1,a_2;f]^{2}\,f'(a_1)} \tag{1}
\]

with the approximation error

\[
x^{\ast}-a_3=-\frac{E_f(\xi)}{6\,f'(\xi)^{5}}\,f^{2}(a_1)\,f(a_2),
\]

where we denoted

\[
E_f(x)=3f''(x)^{2}-f'(x)f'''(x).
\]

Formula (1) will be used in an iterative fashion, leading to the sequence

\begin{align*}
y_n &= x_n-\frac{f(x_n)}{f'(x_n)}, \tag{2}\\
x_{n+1} &= y_n-\frac{[x_n,x_n,y_n;f]\,f^{2}(x_n)}{[x_n,y_n;f]^{2}\,f'(x_n)},\qquad n=0,1,\ldots
\end{align*}

As can be easily seen, each iteration step requires three function evaluations: \(f(x_n)\), \(f'(x_n)\), \(f(y_n)\).

The error can be evaluated by

\[
x^{\ast}-x_{n+1}=-\frac{E_f(\xi_n)}{6\,f'(\xi_n)^{5}}\,f^{2}(x_n)\,f(y_n). \tag{3}
\]

From this formula we can anticipate that the convergence order of the iterations is 4, i.e., this method is optimal in the sense of Kung and Traub. The efficiency index of the method is \(4^{1/3}\approx 1.587\).
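
For illustration, a minimal Julia sketch of iteration (2) may look as follows; this is our own illustration rather than the code used for the experiments in Section 3, and the names hermite_steffensen, dd1, dd2, the tolerance and the stopping rule are chosen here just for the example:

```julia
# A minimal sketch of iteration (2); illustrative only.
dd1(f, u, v)     = (f(u) - f(v)) / (u - v)           # [u, v; f]
dd2(f, df, u, v) = (dd1(f, u, v) - df(u)) / (v - u)  # [u, u, v; f] (repeated node)

function hermite_steffensen(f, df, x0; maxiter = 10, tol = big"1e-60")
    xn = BigFloat(x0)
    for _ in 1:maxiter
        yn = xn - f(xn) / df(xn)                     # Newton predictor y_n
        xn = yn - dd2(f, df, xn, yn) * f(xn)^2 /
                  (dd1(f, xn, yn)^2 * df(xn))        # correction step of (2)
        abs(f(xn)) < tol && break
    end
    return xn
end

# Example 4 below: f(x) = e^x sin x + ln(x^2 + 1), started at x0 = 1.54
hermite_steffensen(x -> exp(x) * sin(x) + log(x^2 + 1),
                   x -> exp(x) * (sin(x) + cos(x)) + 2 * x / (x^2 + 1),
                   big"1.54")
```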

2 Local convergence and monotone convergence

The following local convergence result holds.

Theorem 1.

Let assumptions H1), H2) hold. If \(E_f(x^{\ast})f''(x^{\ast})\neq 0\), then method (2) converges locally with C-order 4, and the following relations hold:

\[
x^{\ast}-x_{n+1}=Q_4^{(n)}\,(x^{\ast}-x_n)^{4} \tag{4}
\]

with

\[
Q_4^{(n)}=-\frac{E_f(\xi_n)\,f''(\delta_n)\,f'(\mu_n)^{2}\,f'(\upsilon_n)}{12\,f'(\xi_n)^{5}\,f'(x_n)},
\]

where \(\xi_n,\delta_n,\mu_n,\upsilon_n\in\,]a,b[\). The asymptotic constant is therefore (see [6] for notations)

\[
Q_4=\lim_{n\to\infty}Q_4^{(n)}=-\frac{E_f(x^{\ast})\,f''(x^{\ast})}{12\,f'(x^{\ast})^{3}}.
\]

If \(E_f(x^{\ast})f''(x^{\ast})=0\), the method converges locally with Q-order at least 4.

Proof.

Hypothesis H2) and the first relation in (2) show there exists \(\delta_n\in\,]a,b[\) such that

\[
x^{\ast}-y_n=-\frac{f''(\delta_n)}{2f'(x_n)}\,(x^{\ast}-x_n)^{2}. \tag{5}
\]

On the other hand,

\begin{align*}
f(x_n) &= f'(\mu_n)(x_n-x^{\ast}),\\
f(y_n) &= f'(\upsilon_n)(y_n-x^{\ast}),\qquad \mu_n,\upsilon_n\in\,]a,b[.
\end{align*}

These relations imply (4) and the other assertions. ∎

In order to obtain monotone convergence, we shall consider the following additional hypotheses on f:

H3) the expression \(E_f\) satisfies

\[
E_f(x)=3f''(x)^{2}-f'(x)f'''(x)>0,\qquad x\in[a,b];
\]

H4) the initial approximation \(x_0\in[a,b]\) obeys the Fourier condition:

\[
f(x_0)\,f''(x_0)>0.
\]

The following results hold.

Theorem 2.

If the function f and the initial approximation obey the assumptions H1)-H4), and, moreover,

i\(_1\)) \(f'(x)>0\) (resp. \(<0\)), \(x\in[a,b]\);

ii\(_1\)) \(f''(x)>0\) (resp. \(<0\)), \(x\in[a,b]\),

then the iterates (2) converge to the solution and are monotone decreasing:

j\(_1\)) \(x^{\ast}<x_{n+1}<y_n<x_n\), \(n=0,1,\ldots\);

jj\(_1\)) \(\lim x_n=\lim y_n=x^{\ast}\).

Proof.

We shall prove by induction (similarly to [10], [12], [13]). Let \(x_n\in[a,b]\) satisfy H4), i.e., \(f(x_n)f''(x_n)>0\), which implies that \(f(x_n)>0\) and, by i\(_1\)), \(x_n>x^{\ast}\). We also have that \(y_n<x_n\). Relation (5) and assumptions i\(_1\)) and ii\(_1\)) show that \(x^{\ast}-y_n<0\), i.e., \(y_n>x^{\ast}\), so \(f(y_n)>0\). By (3) and the hypotheses, it follows that \(x^{\ast}-x_{n+1}<0\), i.e., \(x_{n+1}>x^{\ast}\). The second relation in (2) and the hypotheses imply that \(x_{n+1}<y_n\), which proves j\(_1\)).

Relation j\(_1\)) shows that the sequences \((x_n)_{n\geq 0}\) and \((y_n)_{n\geq 0}\) are convergent: \(l=\lim x_n=\lim y_n\). The first relation in (2) implies that \(l=l-\frac{f(l)}{f'(l)}\), i.e., \(f(l)=0\), so \(l=x^{\ast}\).

When the inequalities in i\(_1\)) and ii\(_1\)) are satisfied with reversed signs, the proof follows in the same way, by taking the function \(h=-f\) instead of \(f\).

Theorem 3.

If the function f and the initial approximation obey the assumptions H1)-H4), and, moreover,

i\(_2\)) \(f'(x)<0\) (resp. \(>0\)), \(x\in[a,b]\);

ii\(_2\)) \(f''(x)>0\) (resp. \(<0\)), \(x\in[a,b]\),

then the iterates (2) converge to the solution and are monotone increasing:

j\(_2\)) \(x_n<y_n<x_{n+1}<x^{\ast}\), \(n=0,1,\ldots\);

jj\(_2\)) \(\lim x_n=\lim y_n=x^{\ast}\).

The proof is obtained in a similar manner.

3 Numerical examples

In this section we shall consider some examples, for which we run programs in Julia [4], in quadruple precision (128 digits, which is implicit for the BigFloat type) or in higher precision floating point arithmetic (using the setprecision command).
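
In Julia, these precision settings correspond to calls like the following (a short note of ours; the values mirror the ones used in this section):

```julia
# BigFloat precision is set globally:
setprecision(BigFloat, 128)      # quadruple-like precision
# setprecision(BigFloat, 500)    # Example 5 (Table 3)
# setprecision(BigFloat, 1000)   # Example 4, extra iterate
```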

There are many optimal iterative methods having convergence order 4 (see, e.g., the monograph [15]). We shall present only a few of them, for which, in certain situations, we have found better convergence of the above iterates.

Example 4.

Consider the following equation (see, e.g., [8])

\[
f(x)=e^{x}\sin x+\ln(x^{2}+1),\qquad x^{\ast}=0. \tag{6}
\]

The largest interval on which to study the monotone convergence of our method by Theorems 2–3 is \([a,b]:=[x^{\ast},1.54]\), since \(f''\) vanishes at \(b\) (being positive on \([a,b]\)). The Fourier condition H4) holds on \([a,b]\) (and does not hold for \(x<a\)), \(E_f(x)>0\) on \([a,b]\) (\(E_f(0)=46\)), while the derivatives \(f',f''\) are positive on this interval. The conclusions of Theorem 2 apply.
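
These claims are easy to check numerically; the following small Julia snippet (our own verification, with hand-coded derivatives and illustrative names) evaluates \(f'\), \(f''\) and \(E_f\) on a grid of \([0,1.54]\) and tests the Fourier condition at \(x_0=1.54\):

```julia
# Hand-coded derivatives of f(x) = e^x sin x + ln(x^2 + 1) from (6); our own check.
f6(x)   = exp(x) * sin(x) + log(x^2 + 1)
df6(x)  = exp(x) * (sin(x) + cos(x)) + 2 * x / (x^2 + 1)
d2f6(x) = 2 * exp(x) * cos(x) + (2 - 2 * x^2) / (x^2 + 1)^2
d3f6(x) = 2 * exp(x) * (cos(x) - sin(x)) + (4 * x^3 - 12 * x) / (x^2 + 1)^3
Ef6(x)  = 3 * d2f6(x)^2 - df6(x) * d3f6(x)          # E_f from H3); Ef6(0) == 46

xs = range(0, 1.54; length = 1001)
all(df6.(xs) .> 0), all(d2f6.(xs) .> 0), all(Ef6.(xs) .> 0)   # i_1), ii_1), H3)
f6(1.54) * d2f6(1.54) > 0                                      # Fourier condition H4)
```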

The Hermite-Steffensen iterates (2) lead to the results presented in Table 1. Here, as in the other tables, the mantissas are shown truncated.

n xn f(xn) yn f(yn)
0 1.54 5.877 5.123324e-1 1.051
1 2.397156e-1 3.576e-1 5.997938e-2 6.723e-2
2 8.721737e-3 8.874e-3 1.474170e-4 1.474e-4
3 8.200791e-8 8.200e-8 1.345059e-14 1.345e-14
4 6.935204e-28 6.935e-28 9.619411e-55 9.619e-55
5 4.660021e-105 4.660e-105 0 0
Table 1: The Hermite-Steffensen iterates (2) for problem (6), quadruple precision.

In order to verify the C-convergence order, we use the formulae

\begin{align*}
Q_L^{(n)} &= \frac{\ln|x_n-x^{\ast}|}{\ln|x_{n-1}-x^{\ast}|}, &
\bar{Q}_L^{(n)} &= \frac{\ln|x_n-x_{n-1}|}{\ln|x_{n-1}-x_{n-2}|},\\
Q_\Lambda^{(n)} &= \frac{\ln\frac{|x_n-x^{\ast}|}{|x_{n-1}-x^{\ast}|}}{\ln\frac{|x_{n-1}-x^{\ast}|}{|x_{n-2}-x^{\ast}|}}, &
\bar{Q}_\Lambda^{(n)} &= \frac{\ln\frac{|x_n-x_{n-1}|}{|x_{n-1}-x_{n-2}|}}{\ln\frac{|x_{n-1}-x_{n-2}|}{|x_{n-2}-x_{n-3}|}},
\end{align*}

which should converge here to 4 (see, e.g., [6], [7]; the bar denotes the quantities computable at step \(n\), without using \(x^{\ast}\)).

We obtain \(Q_L^{(5)}=3.84\), \(\bar{Q}_L^{(5)}=3.83\), \(Q_\Lambda^{(5)}=3.84\), \(\bar{Q}_\Lambda^{(5)}=3.99\) (\(Q_\Lambda^{(4)}=3.99\) is better at the previous step).

Using setprecision(1000), we get an additional nonzero iterate, and \(Q_L^{(6)}=3.79\), \(\bar{Q}_L^{(6)}=3.95\), \(Q_\Lambda^{(6)}=3.74\), \(\bar{Q}_\Lambda^{(6)}=3.99999998\), while \(Q_L^{(5)}=3.95\), \(\bar{Q}_L^{(5)}=3.83\), \(Q_\Lambda^{(5)}=3.99999998\), \(\bar{Q}_\Lambda^{(5)}=3.993\). The asymptotic constant \(Q_4^{(n)}\) is well approximated apart from the last step, when its computed value is 4.8e+21 (instead of \(46/3\approx 15.33\)).
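
The computable estimates \(\bar{Q}_L\) and \(\bar{Q}_\Lambda\) above can be obtained, for instance, with the following small Julia helpers (our own, with illustrative names), applied to the vector of stored iterates:

```julia
# Computable convergence-order estimates from the iterates alone (no x^* needed).
function qbar_L(x::AbstractVector)          # uses the last 3 iterates
    n = length(x)
    log(abs(x[n] - x[n-1])) / log(abs(x[n-1] - x[n-2]))
end

function qbar_Lambda(x::AbstractVector)     # uses the last 4 iterates
    n = length(x)
    log(abs(x[n] - x[n-1]) / abs(x[n-1] - x[n-2])) /
        log(abs(x[n-1] - x[n-2]) / abs(x[n-2] - x[n-3]))
end
# For the iterates of Table 1 these values should approach 4, as reported above.
```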

It is worth noting that the method converges for \(-0.2\leq x_0<x_1^{\ast}\) too (even though the Fourier condition does not hold), as the local convergence near \(x_1^{\ast}=0\) is assured by Theorem 1. For \(x_0=-0.3\), the method converges to another solution of the equation, \(x_2^{\ast}=-0.603\ldots\)

Now we consider the optimal method of Kanwar et al. [9, (5.9), with \(\beta=1\)]:

\begin{align*}
y_{n+1} &= x_n-\frac{f(x_n)\,f'(x_n)}{f'^{2}(x_n)+f^{2}(x_n)},\\
x_{n+1} &= x_n-\frac{f(x_n)\,f'(x_n)}{f'^{2}(x_n)+f^{2}(x_n)}
\left[1+\frac{f(y_{n+1})\,\{f(x_n)-2f(y_{n+1})\}\,\{f'^{2}(x_n)+f^{2}(x_n)\}^{2}}
{\{f'^{2}(x_n)+f^{2}(x_n)\}^{2}\{f(x_n)-2f(y_{n+1})\}^{2}+\alpha f(y_{n+1})\,f^{2}(x_n)}\right],
\end{align*}

and we took \(\alpha=-0.5\), resp. \(\alpha=-1\), for which the method has a smaller domain of convergence, as shown in Table 2 (the iterations converge with order four when \(x_0\) is chosen closer to \(x^{\ast}=0\)). For \(\alpha=1\), resp. \(\alpha=10\) (values also considered by the authors), the iterations converged for \(x_0=1.54\).

n xn(α=-0.5) f(xn) n xn(α=-1) f(xn)
0 1.45 5.3642 0 1.5 5.6491
1 1.4682 5.4682 1 1.5521 5.9474
228 2.4111 9.3560 8 2.2999 9.2766
229 2.4111 9.3560 9 2.3069 9.2869
Table 2: The Kanwar et al. iterates for problem (6).
Example 5.

Consider the following equation (see, e.g., [14])

\[
f(x)=(x-2)(x^{10}+x+1)e^{-x-1},\qquad x^{\ast}=2. \tag{7}
\]

The largest radius of attraction for the Newton-type iterative methods for computing \(x^{\ast}=2\) must be smaller than 0.22, as \(f'(1.78)=0\) (see [13]).

The largest interval on which to study the monotone convergence of our method by Theorems 2–3 is \([a,b]:=[x^{\ast},7.9]\), since \(f''\) vanishes at \(b\) (being positive on \([a,b]\)).

The Fourier condition H4) holds on \([a,b]\) (and does not hold for \(x<a\)), \(E_f(x)>0\) on \([a,b]\), while both derivatives \(f',f''\) are positive on this interval. The conclusions of Theorem 2 apply.

The iterates (2) lead to the results presented in Table 3 (setprecision(500) was used instead of the implicit quadruple precision). The iterates converge even for \(x_0>7.9\) (and to the left of the solution as well, but only for \(x_0\) higher than 1.72). Of course, the convergence may not be very fast when the initial approximations are far from the solution.

For the convergence orders we obtain \(Q_L^{(9)}=3.91\), \(\bar{Q}_L^{(9)}=3.69\), \(Q_\Lambda^{(9)}=3.9999998\), \(\bar{Q}_\Lambda^{(9)}=3.990\).

n xn-2 f(xn) yn-2 f(yn)
0 7.9 761907.13 3.602809 148982.78
1 2.908710 64158.53 2.184591 20149.42
2 1.701263 7456.63 1.264497 2443.69
3 0.947793 906.17 0.657702 298.30
4 0.445481 108.72 0.257942 34.21
5 1.323053e-1 11.23 4.334529e-2 2.628
6 7.861441e-3 4.147e-1 2.377742e-4 1.216e-2
7 3.481418e-7 1.780e-5 4.831580e-13 2.470e-11
8 1.467014e-24 7.501e-23 8.579185e-48 4.386e-46
9 4.625388e-94 2.365e-92 0 0
Table 3: The Hermite-Steffensen iterates (2) for problem (7).

Let us consider the optimal method of H. Ren, Q. Wu, W. Bi [16]

\begin{align*}
z_n &= x_n+f(x_n), \tag{8}\\
y_n &= x_n-\frac{f(x_n)}{[x_n,z_n;f]},\\
x_{n+1} &= y_n-\frac{f(y_n)}{[x_n,y_n;f]+[y_n,z_n;f]-[x_n,z_n;f]+a(y_n-x_n)(y_n-z_n)},
\end{align*}

for the parameter \(a\) having two given values: 1 and \(-1\) (when \(a=0\) the iterates behave similarly). The convergence domain is smaller, as the iterates do not converge to \(x^{\ast}=2\) for \(x_0=2.3\), as can be seen in Table 4. The explanation resides in the fact that the values of \(z_n\) may increase with the values of \(f\), as the short check after Table 4 illustrates.

n xn (a=1) f(xn) xn (a=-1) f(xn)
0 2.3 45.8747 2.3 45.8747
1 48.1539 1.3906e-3 48.1975 1.3447e-3
2 49.4519 5.0943e-4 49.4957 4.9239e-4
3 50.7395 1.8669e-4 50.7832 1.8042e-4
4 52.0177 6.8443e-5 52.0611 6.6140e-5
Table 4: The Ren-Wu-Bi iterates (8) for problem (7).
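
The remark above on the auxiliary node is easy to make concrete (a one-line check of ours): even for the nearby starting point \(x_0=2.3\), the node \(z_0=x_0+f(x_0)\) of (8) is thrown far from the solution \(x^{\ast}=2\), close to where the iterates in Table 4 end up.

```julia
# Our own illustration of why z_n = x_n + f(x_n) is problematic for problem (7).
f7(x) = (x - 2) * (x^10 + x + 1) * exp(-x - 1)
x0 = 2.3
z0 = x0 + f7(x0)        # ≈ 48.17, far from x^* = 2 (compare with Table 4)
```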

Next we consider the optimal method of Zhongli Liu, Quan Zheng, Peng Zhao [19]

\begin{align*}
z_n &= x_n+f(x_n), \tag{9}\\
y_n &= x_n-\frac{f(x_n)}{[x_n,z_n;f]},\\
x_{n+1} &= y_n-\frac{[x_n,y_n;f]-[y_n,z_n;f]+[x_n,z_n;f]}{[x_n,y_n;f]^{2}}\,f(y_n).
\end{align*}

This method also has a small convergence domain, as the iterates do not converge to \(x^{\ast}=2\) for \(x_0=2.3\) (see Table 5).

n xn f(xn)
0 2.3 45.8747
1 48.1788 1.3642e-3
2 50.6609 1.9854e-4
3 53.1081 2.8922e-5
4 55.5250 4.2161e-6
Table 5: The Liu-Zheng-Zhao iterates (9) for problem (7).

We end with some interesting results given by the optimal method of Sharma-Guha [17]:

\[
y_n=x_n-\frac{f(x_n)}{f'(x_n)},\qquad
x_{n+1}=x_n-\frac{2}{1+\sqrt{1-4f(y_n)/f(x_n)}}\cdot\frac{f(x_n)}{f'(x_n)}. \tag{10}
\]

This method has a pretty large domain of convergence (we took \(x_0=7.9\) and beyond), but the iterates away from the solution (\(x_0\geq 2.2\)) are complex numbers. We illustrate some results in Table 6; a short sketch of how the complex values arise is given after the table.

n   xn (x0=7.9)          f(xn)                       n   xn (x0=2.2)          f(xn)
0   7.9                  761907.13                   0   2.2                  21.6789
1   4.76 + 0.0i          52513.99 + 0.0i             1   1.98 - 6.12e-2i      -1.50 - 2.65i
2   3.68 - 0.558i        2544.21 - 8056.48i          2   1.99 + 3.25e-4i      -1.42e-2 + 1.66e-2i
                                                     3   2.00 - 2.78e-13i     4.38e-11 - 1.42e-11i
9   1.99 - 3.76e-26i     -1.52e-25 - 1.92e-24i       4   1.99 + 1.68e-47i     -2.84e-46 + 8.64e-46i
10  2.0 + 1.78e-101i     -6.50e-200 + 9.13e-100i     5   2.0 + 3.78e-124i     -2.91e-245 + 1.93e-122i
11  2.0 + 0.0i           0.0 + 0.0i                  6   2.0 + 0.0i           2.0 + 0.0i
Table 6: The Sharma-Guha iterates (10) for problem (7).
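
To see where the complex values in Table 6 come from, note that the radicand \(1-4f(y_n)/f(x_n)\) in (10) becomes negative away from the solution. A rough Julia sketch (ours, using the reconstruction of (10) above and a hand-coded derivative) is:

```julia
# How the complex iterates of Table 6 arise (our own sketch, problem (7), x0 = 2.2).
f7(x)  = (x - 2) * (x^10 + x + 1) * exp(-x - 1)
df7(x) = exp(-x - 1) * ((x^10 + x + 1) + (x - 2) * (10 * x^9 + 1) - (x - 2) * (x^10 + x + 1))

let x = complex(big"2.2")                    # promote to complex so sqrt(<0) is allowed
    for _ in 1:6
        y = x - f7(x) / df7(x)
        x = x - 2 / (1 + sqrt(1 - 4 * f7(y) / f7(x))) * f7(x) / df7(x)
    end
    x                                        # should be close to 2 (cf. Table 6)
end
```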

Conclusions. The method studied in this paper presents, under certain circumstances, some advantages over other optimal methods (specifically, over those which use intermediate points of the type \(y_n=x_n+f(x_n)\)). The obtained sufficient conditions for guaranteed convergence may theoretically lead to larger convergence domains (especially sided convergence intervals) than those obtained from estimates of attraction balls, while certain examples show that the attraction domain of the method is larger than for some other optimal methods.

The test problems presented seem to be suitable for testing iterative methods with respect to their convergence domains.

References

  • [1] S. Amat, J. Blanda, S. Busquier, A Steffensen type method with modified functions, Riv. Mat. Univ. Parma, 7 (2007), 125–133.
  • [2] I.K. Argyros, A new convergence theorem for the Steffensen method in Banach space and applications, Rev. Anal. Numér. Théor. Approx., 29 (2000) no. 1, 119–127. accessed online: http://ictp.acad.ro/jnaat/ on May 1st, 2017.
  • [3] I.K. Argyros, Á. Alberto Magreñán, On the convergence of an optimal fourth-order family of methods and its dynamics, Appl. Math. Comput., 252 (2015), 336–346, doi: 10.1016/j.amc.2014.11.074
  • [4] J. Bezanson, A. Edelman, S. Karpinski, V.B. Shah, Julia: A Fresh Approach to Numerical Computing, SIAM Review, 59 (2017), 65–98, doi: 10.1137/141000671.
  • [5] E. Cătinaş, On some Steffensen-type iterative method for a class of nonlinear equations, Rev. Anal. Numér. Théor. Approx., 24 (1995) no. 1-2, 37–43. accessed online: http://ictp.acad.ro/jnaat/ on May 1st, 2017.
  • [6] E. Cătinaş, A survey on the high convergence orders and computational convergence orders of sequences, Appl. Math. Comput., 343 (2019), 1–20, doi: 10.1016/j.amc.2018.08.006.
  • [7] E. Cătinaş, How many steps still left to x*?, SIAM Rev., 63 (2021) no. 3, 585–624, doi: 10.1137/19M1244858.
  • [8] C. Chun, Certain improvements of Chebyshev-Halley methods with accelerated fourth-order convergence, Appl. Math. Comput., 189 (2007) no. 1, 597-601, doi: 10.1016/j.amc.2006.11.118
  • [9] V. Kanwar, Ramandeep Behl, Kapil K. Sharma, Simply constructed family of a Ostrowski’s method with optimal order of convergence, Comput. Math. Appl., 62 (2011), pp. 4021-4027. doi: 10.1016/j.camwa.2011.09.039
  • [10] I. Păvăloiu, Approximation of the root of equations by Aitken-Steffensen-type monotonic sequences, Calcolo, 32 (1995) no. 1-2, 69–82. doi: 10.1007/bf02576543
  • [11] I. Păvăloiu, E. Cătinaş, Bilateral approximation for some Aitken-Steffensen-Hermite type method of order three, Appl. Math. Comput., 217 (2011) no. 12, 5838–5846. doi: 10.1016/j.amc.2010.12.067
  • [12] I. Păvăloiu, E. Cătinaş, On an Aitken-Newton type method, Numer. Algor., 62 (2013) no. 2, 253-260. doi: 10.1007/s11075-012-9577-7
  • [13] I. Păvăloiu, E. Cătinaş, On a robust Aitken–Newton method based on the Hermite polynomial, Appl. Math. Comput., 287-288 (2016), pp. 224-231. doi: 10.1016/j.amc.2016.03.036
  • [14] M.S. Petković, On a general class of multipoint root-finding methods of high computational efficiency, SIAM J. Numer. Anal., 47 (2010) no. 6, pp. 4402–4414. doi: 10.1137/090758763
  • [15] M.S. Petković, B. Neta, L.D. Petković, J. Džunić, Multipoint Methods For Solving Nonlinear Equations, Academic Press, 2013.
  • [16] H. Ren, Q. Wu, W. Bi, A class of two-step Steffensen type methods with fourth-order convergence, Appl. Math. Comput., 209 (2009), pp. 206–210. doi: 10.1016/j.amc.2008.12.039
  • [17] J.R. Sharma, R.K. Guha, Second-derivative free methods of third and fourth order for solving nonlinear equations, Intl. J. Computer Math., 88 (2011) no. 1, pp. 163–170. doi: 10.1080/00207160903365875
  • [18] B.A. Turowicz, Sur le derivées d’ordre superieur d’une fonction inverse, Ann. Polon. Math. 8 (1960), pp. 265–269.
  • [19] Zhongli Liu, Quan Zheng, Peng Zhao, A variant of Steffensen’s method of fourth-order convergence and its applications, Appl. Math. Comput., 216 (2010), pp. 1978–1983. doi: 10.1016/j.amc.2010.03.028


2022
