Abstract
We introduce a new method for solving nonlinear equations in $\mathbb{R}$, which uses three function evaluations at each step and has convergence order four, being, therefore, optimal in the sense of Kung and Traub:
\begin{align*}
y_{n} & =x_{n}-\frac{f\left( x_{n}\right) }{f^{\prime}\left( x_{n}\right) }\\
x_{n+1} & =y_{n}-\frac{\left[ x_{n},x_{n},y_{n};f\right] f^{2}\left( x_{n}\right) }{\left[ x_{n},y_{n};f\right] ^{2}f^{\prime}\left( x_{n}\right) },\qquad n=0,1,\ldots
\end{align*}
The method is based on the Hermite inverse interpolatory polynomial of degree two.
Under certain additional assumptions, we obtain convergence domains (sided intervals) larger than the usual ball convergence sets and, moreover, with monotone convergence of the iterates. The method has larger convergence domains than the methods that use intermediate points of the type \(y_n=x_n+f(x_n)\) (as the latter may not yield convergent iterates when \(f\) grows fast near the solution).
Authors
Ion Păvăloiu
“Tiberiu Popoviciu” Institute of Numerical Analysis, Romanian Academy
Emil Cătinaș
“Tiberiu Popoviciu” Institute of Numerical Analysis, Romanian Academy
Keywords
Nonlinear equations in R; inverse interpolation; Hermite-Steffensen type method; computational convergence order.
Paper coordinates
I. Păvăloiu, E. Cătinaș, A new optimal method of order four of Hermite-Steffensen type, Mediterr. J. Math., 19 (2022), art. no. 147.
https://doi.org/10.1007/s00009-022-02030-5
About this paper
Journal
Mediterranean Journal of Mathematics
Publisher Name
Springer
Print ISSN
1660-5454
Online ISSN
1660-5446
A new optimal method of order four of Hermite-Steffensen type
Abstract
We introduce a new method for solving nonlinear equations in $\mathbb{R},$ which uses three function evaluations at each step and has convergence order four, being therefore optimal in the sense of Kung and Traub. The method is based on the Hermite inverse interpolatory polynomial of degree two. Under certain additional assumptions, we obtain convergence domains (sided intervals) larger than the usual ball convergence sets and, moreover, with monotone convergence of the iterates.
The method has larger convergence domains than the methods that use intermediate points of the type ${y}_{n}={x}_{n}+f({x}_{n})$ (as the latter may not yield convergent iterates when $f$ grows fast near the solution).
Keywords: nonlinear equations in $\mathbb{R}$; inverse interpolation; Hermite-Steffensen type method; computational convergence order.
MSC: 65H05.
1 Introduction
In this paper we introduce and study a Steffensen-type iterative method for solving nonlinear equations in $\mathbb{R}$,
$$f\left(x\right)=0,$$ 
where $f:[a,b]\to \mathbb{R}$, $a,b\in \mathbb{R}$, $a<b$.
Such methods consider an additional equation, equivalent to the above one,
$$g\left(x\right)-x=0$$ 
with $g:[a,b]\to [a,b].$
We assume the following usual hypothesis.
H1) there exists ${x}^{\ast}\in ]a,b[$ such that $f\left({x}^{\ast}\right)=0$ $(\iff $ $g\left({x}^{\ast}\right)={x}^{\ast}).$
The Steffensen method and its variants use the inverse interpolation polynomial of degree 1 for the function $f$, and the nodes controlled with the aid of an auxiliary function $g$ [1], [2], [3], [5], [9], [11]–[14], [19].
In this paper we consider a Hermite inverse interpolation polynomial of degree 2, which uses nodes generated with the aid of the function $g.$ We shall obtain an iterative method of order 4 which uses at each step 3 function evaluations, and which is therefore optimal in the sense of Kung and Traub. Under some reasonable additional assumptions on the function $f,$ we shall show that the obtained iterates converge monotonically to the solution.
We shall consider the following hypothesis on the smoothness of the nonlinear mapping:
H2) the function $f$ is three times differentiable on $[a,b],$ and ${f}^{\prime}\left(x\right)\ne 0$ for all $x\in [a,b].$
This assumption implies that $f$ is monotone and continuous on $[a,b],$ and that there exists ${f}^{-1}:J\to [a,b],$ $J=f\left([a,b]\right).$
By H1), the solution may be written as
$${x}^{\ast}={f}^{-1}\left(0\right).$$ 
Given two approximations ${a}_{1},{a}_{2}\in [a,b]$ of the solution ${x}^{\ast}$, let ${b}_{1}=f\left({a}_{1}\right),$ ${b}_{2}=f\left({a}_{2}\right)$ and consider
$${f}^{-1}\left({b}_{1}\right)={a}_{1},\qquad {f}^{-1}\left({b}_{2}\right)={a}_{2},\qquad {\left[{f}^{-1}\left(y\right)\right]}_{y={b}_{1}}^{\prime}=\frac{1}{{f}^{\prime}\left({a}_{1}\right)}.$$
The polynomial of degree 2 which satisfies
$$P\left({b}_{1}\right)={a}_{1},\qquad P\left({b}_{2}\right)={a}_{2},\qquad {P}^{\prime}\left({b}_{1}\right)=\frac{1}{{f}^{\prime}\left({a}_{1}\right)}$$
has the expression given by
$$P\left(y\right)={a}_{1}+[{b}_{1},{b}_{1};{f}^{-1}]\left(y-{b}_{1}\right)+[{b}_{1},{b}_{1},{b}_{2};{f}^{-1}]{\left(y-{b}_{1}\right)}^{2}$$ 
with the remainder
$${f}^{-1}\left(y\right)-P\left(y\right)=[y,{b}_{1},{b}_{1},{b}_{2};{f}^{-1}]{\left(y-{b}_{1}\right)}^{2}\left(y-{b}_{2}\right),$$ 
where, as usual, $[{a}_{1},{a}_{2};f]$ denotes the divided difference of order one of $f$ at the nodes ${a}_{1},{a}_{2}$ (with the analogous notation for higher-order and confluent divided differences).
Taking $y=0$ in the above two relations, we get the next approximation by setting ${a}_{3}=P\left(0\right)$:
$${x}^{\ast}\simeq P\left(0\right)={a}_{1}-[{b}_{1},{b}_{1};{f}^{-1}]{b}_{1}+[{b}_{1},{b}_{1},{b}_{2};{f}^{-1}]{b}_{1}^{2}$$ 
with the error
$${x}^{\ast}-P\left(0\right)=-[0,{b}_{1},{b}_{1},{b}_{2};{f}^{-1}]{b}_{1}^{2}{b}_{2}.$$ 
In order to express the two formulae above in terms of $f$ instead of ${f}^{-1}$, we take into account the known relations regarding the divided differences of the inverse function (see, e.g., [13])
$$[{b}_{1},{b}_{1};{f}^{-1}]=\frac{1}{{f}^{\prime}\left({a}_{1}\right)};\qquad [{b}_{1},{b}_{1},{b}_{2};{f}^{-1}]=-\frac{[{a}_{1},{a}_{1},{a}_{2};f]}{{[{a}_{1},{a}_{2};f]}^{2}{f}^{\prime}\left({a}_{1}\right)},$$
the known mean value theorem (see, e.g., [13]) which yields
$$[0,{b}_{1},{b}_{1},{b}_{2};{f}^{-1}]=\frac{{\left[{f}^{-1}\left(y\right)\right]}_{y=\eta}^{\prime \prime \prime}}{6},\qquad \eta \in \operatorname{int}\left(J\right),$$ 
and the expression for the derivatives of the inverse function (see, e.g., [18])
$${\left[{f}^{-1}\left(y\right)\right]}^{\prime \prime \prime}=\frac{3{f}^{\prime \prime}{\left(x\right)}^{2}-{f}^{\prime}\left(x\right){f}^{\prime \prime \prime}\left(x\right)}{{f}^{\prime}{\left(x\right)}^{5}},\qquad y=f\left(x\right).$$ 
Therefore
$${x}^{\ast}\simeq {a}_{3}={a}_{1}-\frac{f\left({a}_{1}\right)}{{f}^{\prime}\left({a}_{1}\right)}-\frac{[{a}_{1},{a}_{1},{a}_{2};f]{f}^{2}\left({a}_{1}\right)}{{[{a}_{1},{a}_{2};f]}^{2}{f}^{\prime}\left({a}_{1}\right)}$$  (1) 
with the approximation error
$${x}^{\ast}-{a}_{3}=-\frac{{E}_{f}\left(\xi \right)}{6{f}^{\prime}{\left(\xi \right)}^{5}}f{\left({a}_{1}\right)}^{2}f\left({a}_{2}\right),$$ 
where we denoted
$${E}_{f}\left(x\right)=3{f}^{\prime \prime}{\left(x\right)}^{2}-{f}^{\prime}\left(x\right){f}^{\prime \prime \prime}\left(x\right).$$ 
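As a quick numerical sanity check of the inverse-derivative formula above (an illustrative computation, not part of the paper), one may take $f(x)=e^{x}$, for which ${f}^{-1}(y)=\ln y$ and $(\ln y)'''=2/y^{3}$, and compare it with ${E}_{f}(x)/{f}^{\prime}(x)^{5}$:

```python
# Check [f^{-1}]'''(y) = E_f(x) / f'(x)^5 for f = exp (so f^{-1} = log).
import math

x = 0.7
y = math.exp(x)                   # y = f(x)
fp = fpp = fppp = math.exp(x)     # all derivatives of exp coincide
E_f = 3 * fpp**2 - fp * fppp      # E_f(x) = 3 f''(x)^2 - f'(x) f'''(x)
lhs = 2 / y**3                    # (ln y)''' evaluated at y = f(x)
rhs = E_f / fp**5                 # right-hand side of the formula
print(lhs, rhs)                   # the two values agree
```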
Formula (1) will be used in an iterative fashion, leading to the sequence
$${y}_{n}={x}_{n}-\frac{f\left({x}_{n}\right)}{{f}^{\prime}\left({x}_{n}\right)},$$  (2) 
$${x}_{n+1}={y}_{n}-\frac{[{x}_{n},{x}_{n},{y}_{n};f]{f}^{2}\left({x}_{n}\right)}{{[{x}_{n},{y}_{n};f]}^{2}{f}^{\prime}\left({x}_{n}\right)},\qquad n=0,1,\ldots$$
As can be easily seen, each iteration step requires three function evaluations: $f\left({x}_{n}\right)$, ${f}^{\prime}\left({x}_{n}\right)$, $f\left({y}_{n}\right).$
The error can be evaluated by:
$${x}^{\ast}-{x}_{n+1}=-\frac{{E}_{f}\left({\xi}_{n}\right)}{6{f}^{\prime}{\left({\xi}_{n}\right)}^{5}}{f}^{2}\left({x}_{n}\right)f\left({y}_{n}\right).$$  (3) 
From this formula we can anticipate that the convergence order of the iterations is 4, i.e., this method is optimal in the sense of Kung and Traub. The efficiency index of the method is $\sqrt[3]{4}\approx 1.587.$
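For concreteness, method (2) can be sketched in a few lines of code (an illustrative implementation, not the authors' program; the test equation $x^3-2=0$ and all identifiers are our own choices):

```python
# Sketch of method (2): three evaluations per step: f(x_n), f'(x_n), f(y_n).

def hermite_steffensen(f, fp, x0, steps=8):
    x = x0
    for _ in range(steps):
        fx, dfx = f(x), fp(x)
        y = x - fx / dfx                       # Newton predictor y_n
        if y == x:                             # converged to machine precision
            break
        fy = f(y)
        if fy == fx:                           # stagnation near the root
            break
        dd1 = (fy - fx) / (y - x)              # [x_n, y_n; f]
        dd2 = (dd1 - dfx) / (y - x)            # [x_n, x_n, y_n; f]
        x = y - dd2 * fx**2 / (dd1**2 * dfx)   # corrector of (2)
    return x

# Solve x^3 - 2 = 0 starting from x0 = 1.5.
root = hermite_steffensen(lambda t: t**3 - 2, lambda t: 3 * t**2, 1.5)
```

With double precision the iterates settle on the machine representation of $\sqrt[3]{2}$ after a few steps, consistent with the fourth order of the method.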
2 Local convergence and monotone convergence
The following local convergence result holds.
Theorem 1.
Let assumptions H1), H2) hold. If ${E}_{f}\left({x}^{\ast}\right)\cdot {f}^{\prime \prime}\left({x}^{\ast}\right)\ne 0$, then method (2) converges locally with C-order $4$, and the following relations hold:
$${x}^{\ast}-{x}_{n+1}={Q}_{4}(n){\left({x}^{\ast}-{x}_{n}\right)}^{4}$$  (4) 
with
$${Q}_{4}(n)=-\frac{{E}_{f}\left({\xi}_{n}\right){f}^{\prime \prime}\left({\delta}_{n}\right){f}^{\prime}{\left({\mu}_{n}\right)}^{2}{f}^{\prime}\left({\upsilon}_{n}\right)}{12{f}^{\prime}{\left({\xi}_{n}\right)}^{5}{f}^{\prime}\left({x}_{n}\right)}$$ 
where ${\xi}_{n},{\delta}_{n},{\mu}_{n},{\upsilon}_{n}\in ]a,b[$. The asymptotic constant is therefore (see [6] for notations)
$${Q}_{4}=\underset{n\to \mathrm{\infty}}{\lim}{Q}_{4}(n)=-\frac{{E}_{f}\left({x}^{\ast}\right){f}^{\prime \prime}\left({x}^{\ast}\right)}{12{f}^{\prime}{\left({x}^{\ast}\right)}^{3}}.$$ 
If ${E}_{f}\left({x}^{\ast}\right){f}^{\prime \prime}\left({x}^{\ast}\right)=0$, the method converges locally with Q-order at least $4$.
Proof.
Hypothesis H2) and the first relation in (2) show there exists ${\delta}_{n}\in ]a,b[$ such that
$${x}^{\ast}-{y}_{n}=-\frac{{f}^{\prime \prime}\left({\delta}_{n}\right)}{2{f}^{\prime}\left({x}_{n}\right)}{\left({x}^{\ast}-{x}_{n}\right)}^{2}.$$  (5) 
On the other hand,
$$f\left({x}_{n}\right)={f}^{\prime}\left({\mu}_{n}\right)\left({x}_{n}-{x}^{\ast}\right);\qquad f\left({y}_{n}\right)={f}^{\prime}\left({\upsilon}_{n}\right)({y}_{n}-{x}^{\ast}),\qquad {\mu}_{n},{\upsilon}_{n}\in ]a,b[.$$
These relations imply (4) and the other assertions. ∎
In order to obtain monotone convergence, we shall consider the following additional hypotheses on $f$:
H3) the expression ${E}_{f}$ satisfies
$${E}_{f}\left(x\right)=3{f}^{\prime \prime}{\left(x\right)}^{2}-{f}^{\prime}\left(x\right){f}^{\prime \prime \prime}\left(x\right)>0,\qquad \forall x\in [a,b];$$ 
H4) the initial approximation ${x}_{0}\in [a,b]$ obeys the Fourier condition:
$$f\left({x}_{0}\right){f}^{\prime \prime}\left({x}_{0}\right)>0.$$ 
The following results hold.
Theorem 2.
If the function $f$ and the initial approximation obey the assumptions H1)–H4), and, moreover,
i${}_{1}$) ${f}^{\prime}\left(x\right)>0,$ $\forall x\in [a,b];$
ii${}_{1}$) ${f}^{\prime \prime}\left(x\right)>0,$ $\forall x\in [a,b],$
then the iterates (2) converge to the solution and are monotone decreasing:
j${}_{1}$) ${x}^{\ast}<{x}_{n+1}<{y}_{n}<{x}_{n},$ $n=0,1,\ldots;$
jj${}_{1}$) $\lim {x}_{n}=\lim {y}_{n}={x}^{\ast}$.
Proof.
We shall prove by induction (similarly as in [10], [12], [13]). Let ${x}_{n}\in [a,b]$ satisfy H4), i.e., $f\left({x}_{n}\right){f}^{\prime \prime}\left({x}_{n}\right)>0,$ which, by ii${}_{1}$), implies that $f\left({x}_{n}\right)>0$ and, by i${}_{1}$), ${x}_{n}>{x}^{\ast}$. We also have that ${y}_{n}<{x}_{n}$. Relation (5) and assumptions i${}_{1}$) and ii${}_{1}$) show that ${x}^{\ast}-{y}_{n}<0$, i.e., ${y}_{n}>{x}^{\ast}$, so $f\left({y}_{n}\right)>0$. By (3) and the hypotheses, it follows that ${x}^{\ast}-{x}_{n+1}<0,$ i.e., ${x}_{n+1}>{x}^{\ast}$. The second relation in (2) and the hypotheses imply that ${x}_{n+1}<{y}_{n}$, which proves j${}_{1}$).
Relation j${}_{1}$) shows that the sequences ${\left({x}_{n}\right)}_{n\ge 0}$ and ${\left({y}_{n}\right)}_{n\ge 0}$ are convergent, with $l=\lim {x}_{n}=\lim {y}_{n}$. The first relation in (2) implies that $l=l-\frac{f\left(l\right)}{{f}^{\prime}\left(l\right)}$, i.e., $f\left(l\right)=0$, so $l={x}^{\ast}$.
When the inequalities in i${}_{1}$) and ii${}_{1}$) are satisfied with reversed signs, the proof follows in the same way, by taking the function $h=-f$ instead of $f.$ ∎
Theorem 3.
If the function $f$ and the initial approximation obey the assumptions H1)–H4), and, moreover,
i${}_{2}$) ${f}^{\prime}\left(x\right)>0,$ $\forall x\in [a,b];$
ii${}_{2}$) ${f}^{\prime \prime}\left(x\right)<0,$ $\forall x\in [a,b],$
then the iterates (2) converge to the solution and are monotone increasing:
j${}_{2}$) ${x}_{n}<{y}_{n}<{x}_{n+1}<{x}^{\ast},$ $n=0,1,\ldots;$
jj${}_{2}$) $\lim {x}_{n}=\lim {y}_{n}={x}^{\ast}$.
The proof is obtained in a similar manner.
3 Numerical examples
In this section we shall consider some examples, for which we ran programs in Julia [4], in quadruple (digits128, which is implicit for the BigFloat type) or higher precision floating point arithmetic (using the setprecision command).
There are many optimal iterative methods of convergence order 4 (see, e.g., the monograph [15]). We shall present only a few of them, for which we have found, in certain situations, better convergence of the above iterates.
Example 4.
Consider the following equation (see, e.g., [8])
$$f\left(x\right)={e}^{x}\mathrm{sin}x+\mathrm{ln}({x}^{2}+1),{x}^{\ast}=0.$$  (6) 
The largest interval on which to study the monotone convergence of our method by Theorems 2–3 is $[a,b]:=[{x}^{\ast},1.54\mathrm{\dots}]$, since ${f}^{\prime \prime}$ vanishes at $b$ (being positive on $[a,b]$). The Fourier condition H4) holds on $[a,b]$ (and does not hold for ${x}_{0}<0$), ${E}_{f}(x)>0$ on $[a,b]$ (${E}_{f}(0)=46$), while the derivatives ${f}^{\prime},{f}^{\prime \prime}$ are positive on this interval. The conclusions of Theorem 2 apply.
The Hermite-Steffensen iterates (2) lead to the following results, presented in Table 1. Here, as in the other tables, the mantissas are shown truncated.
$n$  ${x}_{n}$  $f({x}_{n})$  ${y}_{n}$  $f({y}_{n})$ 

0  1.54  5.877  5.123324e−1  1.051 
1  2.397156e−1  3.576e−1  5.997938e−2  6.723e−2 
2  8.721737e−3  8.874e−3  1.474170e−4  1.474e−4 
3  8.200791e−8  8.200e−8  1.345059e−14  1.345e−14 
4  6.935204e−28  6.935e−28  9.619411e−55  9.619e−55 
5  4.660021e−105  4.660e−105  0  0 
In order to verify the C-convergence order, we use the formulae:
$${Q}_{L}(n)=\frac{\ln |{x}_{n}-{x}^{\ast}|}{\ln |{x}_{n-1}-{x}^{\ast}|},\qquad {Q}_{L}^{\prime}(n)=\frac{\ln |{x}_{n}-{x}_{n-1}|}{\ln |{x}_{n-1}-{x}_{n-2}|},$$
$${Q}_{\mathrm{\Lambda}}(n)=\frac{\ln \frac{|{x}_{n}-{x}^{\ast}|}{|{x}_{n-1}-{x}^{\ast}|}}{\ln \frac{|{x}_{n-1}-{x}^{\ast}|}{|{x}_{n-2}-{x}^{\ast}|}},\qquad {Q}_{\mathrm{\Lambda}}^{\prime}(n)=\frac{\ln \frac{|{x}_{n}-{x}_{n-1}|}{|{x}_{n-1}-{x}_{n-2}|}}{\ln \frac{|{x}_{n-1}-{x}_{n-2}|}{|{x}_{n-2}-{x}_{n-3}|}},$$
which should converge here to $4$ (see, e.g., [6], [7]; here we denote the quantities computable at step $n$).
We obtain ${Q}_{L}(5)=3.84$, ${Q}_{L}^{\prime}(5)=3.83$, ${Q}_{\mathrm{\Lambda}}(5)=3.84$, ${Q}_{\mathrm{\Lambda}}^{\prime}(5)=3.99$ (${Q}_{\mathrm{\Lambda}}(4)=3.99$ is better at the previous step).
Using setprecision(1000), we get an additional nonzero iterate ${x}_{n}$, and ${Q}_{L}(6)=3.79$, ${Q}_{L}^{\prime}(6)=3.95$, ${Q}_{\mathrm{\Lambda}}(6)=3.74$, ${Q}_{\mathrm{\Lambda}}^{\prime}(6)=3.99999998$, while ${Q}_{L}(5)=3.95$, ${Q}_{L}^{\prime}(5)=3.83$, ${Q}_{\mathrm{\Lambda}}(5)=3.99999998$, ${Q}_{\mathrm{\Lambda}}^{\prime}(5)=3.993$. The asymptotic constant ${Q}_{4}(n)$ is well approximated apart from the last step, when its computed value is $4.8e+21$ (instead of $\frac{46}{3}=15.33\mathrm{\dots}$).
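The computational orders above are easy to reproduce; the following sketch (our illustration in Python rather than the authors' Julia code) applies the error-free estimates $Q_{L}^{\prime}$ and $Q_{\Lambda}^{\prime}$ to Newton iterates for $x^{2}-2=0$, where both should approach the order $2$:

```python
import math

def comp_orders(xs):
    """Q'_L(n) and Q'_Lambda(n) from the last four iterates in xs."""
    d = [abs(b - a) for a, b in zip(xs, xs[1:])]   # corrections |x_k - x_{k-1}|
    qL = math.log(d[-1]) / math.log(d[-2])
    qLam = math.log(d[-1] / d[-2]) / math.log(d[-2] / d[-3])
    return qL, qLam

# Newton's method for x^2 - 2 = 0 (convergence order 2), four steps.
xs = [1.5]
for _ in range(4):
    x = xs[-1]
    xs.append(x - (x * x - 2) / (2 * x))

qL, qLam = comp_orders(xs)   # both close to 2
```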
It is worth noting that the method converges for ${x}_{0}<0$ too (despite the fact that the Fourier condition does not hold), as the local convergence near ${x}_{1}^{\ast}=0$ is assured by Theorem 1. For ${x}_{0}=-0.3$, the method converges to another solution of the equation, ${x}_{2}^{\ast}=-0.603\mathrm{\dots}$
Now we consider the optimal method of Kanwar et al. [9, (5.9), $\beta =1$]
$${y}_{n+1}={x}_{n}-\frac{f\left({x}_{n}\right){f}^{\prime}\left({x}_{n}\right)}{{f}^{2}\left({x}_{n}\right)+{f}^{\prime 2}\left({x}_{n}\right)}$$
$${x}_{n+1}={x}_{n}-\frac{f\left({x}_{n}\right){f}^{\prime}\left({x}_{n}\right)}{{f}^{2}\left({x}_{n}\right)+{f}^{\prime 2}\left({x}_{n}\right)}\left[1+\frac{f\left({y}_{n+1}\right)\{f\left({x}_{n}\right)-2f\left({y}_{n+1}\right)\}{\{{f}^{2}\left({x}_{n}\right)+{f}^{\prime 2}\left({x}_{n}\right)\}}^{2}}{{\{{f}^{2}\left({x}_{n}\right)+{f}^{\prime 2}\left({x}_{n}\right)\}}^{2}{\{f\left({x}_{n}\right)-2f\left({y}_{n+1}\right)\}}^{2}+\alpha f\left({y}_{n+1}\right){f}^{\prime 2}\left({x}_{n}\right)}\right],$$
and we took $\alpha =-0.5,$ resp. $\alpha =-1,$ for which the method has a smaller domain of convergence, as shown in Table 2 (the iterations converge with order four as ${x}_{0}$ is chosen closer to ${x}^{\ast}=0$). For $\alpha =1,$ resp. $\alpha =10$ (values also considered by the authors), the iterations converged for ${x}_{0}=1.54.$
Example 5.
Consider the following equation (see, e.g., [14])
$$f\left(x\right)=(x-2)({x}^{10}+x+1){e}^{-x-1},\qquad {x}^{\ast}=2.$$  (7) 
The largest radius of attraction of the Newton-type iterative methods for computing ${x}^{\ast}=2$ must be smaller than $0.22,$ as ${f}^{\prime}\left(1.78\mathrm{\dots}\right)=0$ (see [13]).
The largest interval on which to study the monotone convergence of our method by Theorems 2–3 is $[a,b]:=[{x}^{\ast},7.9\mathrm{\dots}]$, since ${f}^{\prime \prime}$ vanishes at $b$ (being positive on $[a,b]$).
The Fourier condition H4) holds on $[a,b]$ (and does not hold for ${x}_{0}<{x}^{\ast}$), ${E}_{f}(x)>0$ on $[a,b]$, while both derivatives ${f}^{\prime},{f}^{\prime \prime}$ are positive on this interval. The conclusions of Theorem 2 apply.
The iterates (2) lead to the results presented in Table 3 (setprecision(500) was used instead of the implicit quadruple precision). The iterates converge even for ${x}_{0}>7.9$ (and to the left of the solution as well, but for ${x}_{0}$ higher than $1.72$). Of course, the convergence may not be very fast when the initial approximations are far from the solution.
For the convergence orders we obtain ${Q}_{L}(9)=3.91$, ${Q}_{L}^{\prime}(9)=3.69$, ${Q}_{\mathrm{\Lambda}}(9)=3.9999998$, ${Q}_{\mathrm{\Lambda}}^{\prime}(9)=3.990$.
$n$  ${x}_{n}-2$  $f({x}_{n})$  ${y}_{n}-2$  $f({y}_{n})$ 

0  7.9  761907.13  3.602809  148982.78 
1  2.908710  64158.53  2.184591  20149.42 
2  1.701263  7456.63  1.264497  2443.69 
3  0.947793  906.17  0.657702  298.30 
4  0.445481  108.72  0.257942  34.21 
5  1.323053e−1  11.23  4.334529e−2  2.628 
6  7.861441e−3  4.147e−1  2.377742e−4  1.216e−2 
7  3.481418e−7  1.780e−5  4.831580e−13  2.470e−11 
8  1.467014e−24  7.501e−23  8.579185e−48  4.386e−46 
9  4.625388e−94  2.365e−92  0  0 
Let us consider the optimal method of H. Ren, Q. Wu, W. Bi [16]
$${z}_{n}={x}_{n}+f({x}_{n})$$  (8) 
$${y}_{n}={x}_{n}-\frac{f({x}_{n})}{[{x}_{n},{z}_{n};f]}$$
$${x}_{n+1}={y}_{n}-\frac{f({y}_{n})}{[{x}_{n},{y}_{n};f]+[{y}_{n},{z}_{n};f]-[{x}_{n},{z}_{n};f]+a({y}_{n}-{x}_{n})({y}_{n}-{z}_{n})}$$
for the parameter $a$ having two given values: $-1$ and $1$ (for $a=0$ the iterates behave similarly). The convergence domain is smaller, as the iterates do not converge to ${x}^{\ast}=2$ for ${x}_{0}=2.3,$ as can be seen in Table 4. The explanation resides in the fact that the values of ${z}_{n}$ may increase with the values of $f$.
$n$  ${x}_{n}$ $(a=-1)$  $f({x}_{n})$  ${x}_{n}$ $(a=1)$  $f({x}_{n})$ 

0  2.3  45.8747  2.3  45.8747 
1  48.1539  1.3906e−3  48.1975  1.3447e−3 
2  49.4519  5.0943e−4  49.4957  4.9239e−4 
3  50.7395  1.8669e−4  50.7832  1.8042e−4 
4  52.0177  6.8443e−5  52.0611  6.6140e−5 
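The divergence mechanism noted above is easy to verify numerically: for $f$ given by (7) and ${x}_{0}=2.3$, already the auxiliary point ${z}_{0}={x}_{0}+f({x}_{0})$ leaves the neighborhood of the solution (an illustrative computation, not the authors' code):

```python
import math

def f(x):
    # test function (7): f(x) = (x - 2)(x^10 + x + 1) e^(-x - 1)
    return (x - 2) * (x**10 + x + 1) * math.exp(-x - 1)

x0 = 2.3
z0 = x0 + f(x0)   # auxiliary point used by methods (8) and (9)
# f(2.3) = 45.8747..., so z0 = 48.17..., far from x* = 2,
# in agreement with the first rows of Tables 4 and 5.
```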
Next we consider the optimal method of Zhongli Liu, Quan Zheng, Peng Zhao [19]
$${z}_{n}={x}_{n}+f({x}_{n})$$  (9) 
$${y}_{n}={x}_{n}-\frac{f({x}_{n})}{[{x}_{n},{z}_{n};f]}$$
$${x}_{n+1}={y}_{n}-\frac{[{x}_{n},{y}_{n};f]-[{y}_{n},{z}_{n};f]+[{x}_{n},{z}_{n};f]}{{[{x}_{n},{y}_{n};f]}^{2}}f({y}_{n}).$$
This method also has a small convergence domain, as the iterates do not converge to ${x}^{\ast}=2$ for ${x}_{0}=2.3$ (see Table 5).
$n$  ${x}_{n}$  $f({x}_{n})$ 

0  2.3  45.8747 
1  48.1788  1.3642e−3 
2  50.6609  1.9854e−4 
3  53.1081  2.8922e−5 
4  55.5250  4.2161e−6 
We end with some interesting results given by the optimal method of Sharma-Guha [17]:
$${y}_{n}={x}_{n}-\frac{f({x}_{n})}{{f}^{\prime}({x}_{n})},\qquad {x}_{n+1}={x}_{n}-\frac{2}{1+\sqrt{1-4f({y}_{n})/f({x}_{n})}}\cdot \frac{f({x}_{n})}{{f}^{\prime}({x}_{n})}.$$  (10) 
This method has a fairly large domain of convergence (we took ${x}_{0}=7.9$ and beyond), but the iterates starting away from the solution (${x}_{0}\ge 2.2$) are complex numbers. We illustrate some results in Table 6.
$n$  ${x}_{n}$  $f({x}_{n})$  $n$  ${x}_{n}$  $f({x}_{n})$ 

0  7.9  761907.13  0  2.2  21.6789 
1  4.76 + 0.0i  52513.99 + 0.0i  1  1.98−6.12e−2i  −1.50−2.65i 
2  3.68−0.558i  2544.21−8056.48i  2  1.99+3.25e−4i  −1.42e−2+1.66e−2i 
$\mathrm{\dots}$      3  2.00−2.78e−13i  4.38e−11−1.42e−11i  
9  1.99−3.76e−26i  −1.52e−25−1.92e−24i  4  1.99+1.68e−47i  −2.84e−46 + 8.64e−46i 
10  2.0+1.78e−101i  −6.50e−200+9.13e−100i  5  2.0 + 3.78e−124i  −2.91e−245 + 1.93e−122i 
11  2.0 + 0.0i  0.0 + 0.0i  6  2.0 + 0.0i  0.0 + 0.0i 
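The appearance of complex iterates in (10) can be traced to the square root: for ${x}_{0}=2.2$ the discriminant $1-4f({y}_{0})/f({x}_{0})$ is negative. A short sketch (our illustration, not the authors' code; Python's cmath takes a fixed branch of the square root, so the sign of the imaginary part need not match Table 6):

```python
import cmath

def f(x):
    # test function (7); cmath.exp makes it work for complex x as well
    return (x - 2) * (x**10 + x + 1) * cmath.exp(-x - 1)

def fp(x):
    # product rule for f = (x - 2) g(x), with g(x) = (x^10 + x + 1) e^(-x - 1)
    g = (x**10 + x + 1) * cmath.exp(-x - 1)
    gp = (10 * x**9 + 1) * cmath.exp(-x - 1) - g
    return g + (x - 2) * gp

x = 2.2
y = x - f(x) / fp(x)             # Newton predictor y_0
disc = 1 - 4 * f(y) / f(x)       # negative real part: the sqrt is imaginary
x1 = x - 2 / (1 + cmath.sqrt(disc)) * f(x) / fp(x)
# x1 is complex, with real part close to 1.98 and |imag| about 6.1e-2,
# consistent with the second column of Table 6.
```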
Conclusions. The method studied in this paper presents, under certain circumstances, some advantages over other optimal methods (specifically, over those which use intermediate points of the type ${y}_{n}={x}_{n}+f({x}_{n})$). The obtained sufficient conditions for guaranteed convergence may theoretically lead to larger convergence domains (especially sided convergence intervals) than those obtained from estimates of attraction balls, while certain examples showed that the attraction domain of the method is larger than for some optimal methods.
The test problems presented seem suitable for testing iterative methods with respect to their convergence domains.
References
 [1] S. Amat, J. Blanda, S. Busquier, A Steffensen type method with modified functions, Riv. Mat. Univ. Parma, 7 (2007), 125–133.
 [2] I.K. Argyros, A new convergence theorem for the Steffensen method in Banach space and applications, Rev. Anal. Numér. Théor. Approx., 29 (2000) no. 1, 119–127. accessed online: http://ictp.acad.ro/jnaat/ on May 1st, 2017.
 [3] I.K. Argyros, Á. Alberto Magreñán, On the convergence of an optimal fourth-order family of methods and its dynamics, Appl. Math. Comput., 252 (2015), 336–346, doi: 10.1016/j.amc.2014.11.074
 [4] J. Bezanson, A. Edelman, S. Karpinski, V.B. Shah, Julia: A Fresh Approach to Numerical Computing, SIAM Review, 59 (2017), 65–98, doi: 10.1137/141000671.
 [5] E. Cătinaş, On some Steffensen-type iterative method for a class of nonlinear equations, Rev. Anal. Numér. Théor. Approx., 24 (1995) no. 1-2, 37–43. accessed online: http://ictp.acad.ro/jnaat/ on May 1st, 2017.
 [6] E. Cătinaş, A survey on the high convergence orders and computational convergence orders of sequences, Appl. Math. Comput., 343 (2019), 1–20, doi: 10.1016/j.amc.2018.08.006.
 [7] E. Cătinaş, How many steps still left to x*?, SIAM Rev., 63 (2021) no. 3, 585–624, doi: 10.1137/19M1244858.
 [8] C. Chun, Certain improvements of Chebyshev-Halley methods with accelerated fourth-order convergence, Appl. Math. Comput., 189 (2007) no. 1, 597–601, doi: 10.1016/j.amc.2006.11.118
 [9] V. Kanwar, Ramandeep Behl, Kapil K. Sharma, Simply constructed family of a Ostrowski's method with optimal order of convergence, Comput. Math. Appl., 62 (2011), pp. 4021–4027. doi: 10.1016/j.camwa.2011.09.039
 [10] I. Păvăloiu, Approximation of the root of equations by Aitken-Steffensen-type monotonic sequences, Calcolo, 32 (1995) no. 1-2, 69–82. doi: 10.1007/bf02576543
 [11] I. Păvăloiu, E. Cătinaş, Bilateral approximation for some Aitken-Steffensen-Hermite type method of order three, Appl. Math. Comput., 217 (2011) no. 12, 5838–5846. doi: 10.1016/j.amc.2010.12.067
 [12] I. Păvăloiu, E. Cătinaş, On an Aitken-Newton type method, Numer. Algor., 62 (2013) no. 2, 253–260. doi: 10.1007/s11075-012-9577-7
 [13] I. Păvăloiu, E. Cătinaş, On a robust Aitken-Newton method based on the Hermite polynomial, Appl. Math. Comput., 287–288 (2016), pp. 224–231. doi: 10.1016/j.amc.2016.03.036
 [14] M.S. Petković, On a general class of multipoint root-finding methods of high computational efficiency, SIAM J. Numer. Anal., 47 (2010) no. 6, pp. 4402–4414. doi: 10.1137/090758763
 [15] M.S. Petković, B. Neta, L.D. Petković, J. Džunić, Multipoint Methods For Solving Nonlinear Equations, Academic Press, 2013.
 [16] H. Ren, Q. Wu, W. Bi, A class of two-step Steffensen type methods with fourth-order convergence, Appl. Math. Comput., 209 (2009), pp. 206–210. doi: 10.1016/j.amc.2008.12.039
 [17] J.R. Sharma, R.K. Guha, Second-derivative free methods of third and fourth order for solving nonlinear equations, Intl. J. Computer Math., 88 (2011) no. 1, pp. 163–170. doi: 10.1080/00207160903365875
 [18] B.A. Turowicz, Sur les dérivées d'ordre supérieur d'une fonction inverse, Ann. Polon. Math., 8 (1960), pp. 265–269.
 [19] Zhongli Liu, Quan Zheng, Peng Zhao, A variant of Steffensen’s method of fourthorder convergence and its applications, Appl. Math. Comput., 216 (2010), pp. 1978–1983. doi: 10.1016/j.amc.2010.03.028