Local convergence analysis
of frozen Steffensen-type methods
under generalized conditions

Ioannis K. Argyros\(^\ast \) Santhosh George\(^\dag \)

Received: September 15, 2017. Accepted: September 22, 2021. Published online: December 22, 2023.

\(^\ast \)Department of Mathematical Sciences, Cameron University, Lawton, OK 73505, USA, e-mail: iargyros@cameron.edu.
\(^\dag \)Department of Mathematical and Computational Sciences, NIT Karnataka, India-575 025, e-mail: sgeorge@nitk.ac.in.

The goal of this study is to present a unified local convergence analysis of frozen Steffensen-type methods under generalized Lipschitz-type conditions for Banach space valued operators. We also use our new idea of restricted convergence domains, by which we find a more precise location of the iterates, leading to at least as tight majorizing functions. Consequently, the new convergence criteria are weaker than in earlier works, resulting in an expansion of the applicability of these methods. The conditions do not necessarily imply the differentiability of the operator involved. This way our method is suitable for solving equations as well as systems of equations.

MSC. 49D15, 65G19, 47H17, 65H10.

Keywords. Banach space, frozen Steffensen-type method, local convergence, generalized Lipschitz conditions.

1 Introduction

Let \(\mathcal{E}\) stand for a Banach space and \(S\) be a nonempty open subset of \(\mathcal{E}.\) By \(\mathcal{L}(\mathcal{E})\) we denote the space of bounded linear operators from \(\mathcal{E}\) into \(\mathcal{E}.\) Let also \(F: S\longrightarrow \mathcal{E}\) be a continuous operator. The problem of locating a zero \(x_*\) of the operator \(F\) is very important in many diverse areas such as inverse theory, optimization, control theory, mathematical physics, chemistry, biology, economics, computational sciences and engineering. A plethora of problems from the aforementioned disciplines can be formulated as finding a zero of \(F\) using mathematical modeling [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]. The exact zero of \(F\) is desirable, but this goal can be achieved only in some special cases. That is why researchers and practitioners generate a sequence converging to \(x_*\) under suitable conditions on the operator \(F.\)

In this study, we introduce the method defined for each \(n=0,1,2,\ldots \) by

\begin{equation} \label{1.1} x_{n+1}=x_n-A_n^{-1}F(x_n), \end{equation}

where \(A_n=A(h_1(y_{t_n}), h_2(y_{t_n})),\) \(x_0\in S\) is an initial point; \(A(\cdot ,\cdot ): S\times S\longrightarrow \mathcal{L}(\mathcal{E});\) \(\{ t_n\} \) is a nondecreasing sequence of integers such that \(t_0=0,\) \(t_n \leq n\) for each \(n=0,1,2,\ldots;\) \(h_1:S\longrightarrow S,\) \(h_2:S\longrightarrow S\) are continuous data operators, and \(y_{t_n}\) stands for the highest indexed point among \(x_0, x_1, \ldots , x_{t_n}\) for which \(A_n^{-1}\) exists, provided \(A(h_1(x_0), h_2(x_0))^{-1}\in \mathcal{L}(\mathcal{E}).\) It is well established that, from the numerical efficiency point of view, it is not advantageous to recompute the operator \(A_n^{-1} \) at each step of the iterative method. If this operator is kept piecewise constant, then we obtain more efficient iterative methods. It is also well known that optimal methods can be obtained based on the dimension of the space. Many widely used iterative methods can be obtained as special cases of (1), say, if \(t_n=n,\) so \(y_{t_n}=x_n:\)

Steffensen-type method [10]:

\begin{equation} \label{1.2} x_{n+1}=x_n-[h_1(x_n), h_2(x_n);F]^{-1}F(x_n)\, \, \textnormal{for each}\, \, n=0,1,2,\ldots , \end{equation}

where \([\cdot ,\cdot ;F]:S\times S\longrightarrow \mathcal{L}(\mathcal{E})\) is a divided difference of order one for \(F,\) i.e., \([x,y;F](x-y)=F(x)-F(y)\) for each \(x, y\in \mathcal{E}\) with \(x\neq y.\)
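For \(F:\mathbb {R}^i\longrightarrow \mathbb {R}^i\) the divided difference is a matrix. One standard entrywise construction satisfying this identity is sketched below in Python; the construction is a common choice used here purely for illustration, and the helper name and test system are ours, not from the paper.

```python
# One standard entrywise construction of the divided difference [x, y; F]
# for F: R^n -> R^n (illustrative; the paper does not prescribe a choice):
#   [x, y; F]_{ij} = ( F_i(x_1,..,x_j, y_{j+1},..,y_n)
#                    - F_i(x_1,..,x_{j-1}, y_j,..,y_n) ) / (x_j - y_j),
# which requires x_j != y_j for every j.

def divided_difference(F, x, y):
    """Return the matrix [x, y; F] satisfying [x, y; F](x - y) = F(x) - F(y)."""
    n = len(x)
    A = [[0.0] * n for _ in range(n)]
    for j in range(n):
        z_hi = x[: j + 1] + y[j + 1:]   # (x_1, ..., x_j, y_{j+1}, ..., y_n)
        z_lo = x[:j] + y[j:]            # (x_1, ..., x_{j-1}, y_j, ..., y_n)
        F_hi, F_lo = F(z_hi), F(z_lo)
        for i in range(n):
            A[i][j] = (F_hi[i] - F_lo[i]) / (x[j] - y[j])
    return A

# check the defining identity on a small nonlinear system
F = lambda v: [v[0] ** 2 + v[1], v[0] - v[1] ** 3]
x, y = [1.0, 2.0], [0.5, -1.0]
A = divided_difference(F, x, y)
lhs = [sum(A[i][j] * (x[j] - y[j]) for j in range(2)) for i in range(2)]
rhs = [F(x)[i] - F(y)[i] for i in range(2)]
```

The sum over \(j\) telescopes, so the identity holds exactly up to rounding.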

Steffensen method:

\begin{equation} \label{1.3} x_{n+1}=x_n-[x_n,x_n+F(x_n);F]^{-1}F(x_n)\, \, \textnormal{for each}\, \, n=0,1,2,\ldots . \end{equation}

That is, we specialize (2) by setting \(h_1(x)=x\) and \(h_2(x)=x+F(x).\)

Backward-Steffensen method:

\begin{equation} \label{1.5} x_{n+1}=x_n-[x_n-F(x_n),x_n;F]^{-1}F(x_n) \, \, \textnormal{for each}\, \, n=0,1,2,\ldots . \end{equation}

Method (2) reduces to (4), if \(h_1(x)=x-F(x)\) and \(h_2(x)=x.\)

Central-Steffensen method:

\begin{equation} \label{1.6} x_{n+1}=x_n-[x_n-F(x_n), x_n+F(x_n);F]^{-1}F(x_n) \, \, \textnormal{for each}\, \, n=0,1,2,\ldots . \end{equation}

This method is obtained from (2), if \(h_1(x)=x-F(x)\) and \(h_2(x)=x+F(x).\)

Generalized Steffensen-type method:

\begin{equation} \label{1.7} x_{n+1}=x_n-[x_n-\theta (x_n)F(x_n), x_n+\tau (x_n)F(x_n);F]^{-1}F(x_n)\, \, \textnormal{for each}\, \, n=0,1,\ldots , \end{equation}

where \(\theta ,\tau :S\longrightarrow \mathbb {R}_+\cup \{ 0\} \) are real functions such that the sequences \(\{ \theta (x_n)\} \) and \(\{ \tau (x_n)\} \) are convergent. Method (6) is a specialization of (2), if \(h_1(x)=x-\theta (x)F(x)\) and \(h_2(x)=x+\tau (x)F(x).\) Many other choices of \(t_n\) and \(A_n\) are possible [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12].
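To fix ideas, here is a minimal scalar sketch (ours, purely illustrative) of method (1) with \(A(u,v)=[u,v;F],\) \(h_1(x)=x,\) \(h_2(x)=x+F(x)\) and the frozen choice \(t_n=m\lfloor n/m\rfloor ,\) so the divided difference is recomputed only once every \(m\) steps; \(m=1\) recovers the Steffensen method (3). All names and parameter values are assumptions for the sketch.

```python
# Scalar sketch of method (1): A(u, v) = [u, v; F] = (F(u) - F(v)) / (u - v),
# h1(x) = x, h2(x) = x + F(x), and the frozen choice t_n = m * (n // m),
# so the divided difference is recomputed only every m steps.
# Function name, m, tol and max_iter are our illustrative assumptions.

def frozen_steffensen(F, x0, m=3, tol=1e-12, max_iter=100):
    x = x0
    A = None
    for n in range(max_iter):
        if n % m == 0:                    # refresh step: t_n = n, y_{t_n} = x_n
            u, v = x, x + F(x)
            if u == v:                    # F(x) == 0: x is already a zero
                return x, n
            A = (F(u) - F(v)) / (u - v)   # A_n = [h1(y_{t_n}), h2(y_{t_n}); F]
        x_new = x - F(x) / A              # x_{n+1} = x_n - A_n^{-1} F(x_n)
        if abs(x_new - x) < tol:
            return x_new, n + 1
        x = x_new
    return x, max_iter

root, iters = frozen_steffensen(lambda x: x * x - 2.0, 1.0, m=3)
```

Between refreshes the iteration reuses \(A_n,\) trading some local speed for fewer operator evaluations, which is the efficiency point made above.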

The motivation and novelty of method (1) are listed below:

  1. Method (1) is always well defined, since we can choose \(y_{t_n}=x_n\) for each \(n=0,1,2,\ldots .\)

  2. Many methods are special cases of method (1), so it is important to unify their convergence analysis.

  3. The local convergence analysis uses generalized Lipschitz-type conditions and the continuity of the operator \(F.\) The differentiability of \(F\) is not assumed or implied by the conditions. This way, method (1) is suitable for solving non-differentiable equations. It is worth noticing that method (2), its aforementioned special cases, and other similar methods using divided differences cannot always be used, e.g., when \(h_1(x_i)=h_2(x_i)\) for some \(i=0,1,2,\ldots ,\) since the divided difference is then undefined. Another possibility is when \(\mathcal{E}=\mathbb {R}^i\) (\(i\) a natural number) and the vectors \(h_1(x_i), h_2(x_i)\) are not the same but have at least one equal entry; then, the classical divided difference cannot be defined. It is also worth noticing that local convergence results are important, since they reveal the degree of difficulty in choosing initial points.

  4. Another problem appearing when we study the convergence of iterative methods is that the ball of convergence is small in general and the error bounds on the distances \(\| x_n-x_*\| \) are pessimistic. We address these problems by using a center-Lipschitz-type condition that helps us determine a subset \(S_0\) of \(S\) also containing the iterates. By concentrating on \(S_0\) instead of \(S,\) the Lipschitz functions are tighter than the ones depending on \(S.\) This way, the convergence ball is at least as large (i.e., we obtain at least as many initial points), the error bounds are at least as tight (i.e., at least as few iterations are needed to obtain an error tolerance \(\varepsilon \)), and the information on the location of the solution is at least as precise. These improvements are obtained under the same computational effort, since in practice the computation of the old Lipschitz functions requires the computation of the new Lipschitz functions as special cases.
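Point 3 above can be illustrated with a small scalar experiment (ours, not from the paper): the Steffensen method (3) applied to \(F(x)=x^2+|x|-6,\) which is not differentiable at the origin yet has the zero \(x_*=2.\) No derivative of \(F\) is evaluated at any step.

```python
# Steffensen method (3) on the non-differentiable F(x) = x^2 + |x| - 6,
# whose positive zero is x_* = 2; no derivative of F is evaluated.
# (Illustrative example of ours; starting point and tolerances are assumptions.)

def steffensen(F, x0, tol=1e-12, max_iter=100):
    x = x0
    for _ in range(max_iter):
        fx = F(x)
        if fx == 0.0:
            return x
        v = x + fx                        # h2(x) = x + F(x)
        dd = (F(x) - F(v)) / (x - v)      # divided difference [x, x + F(x); F]
        x_new = x - fx / dd
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

root = steffensen(lambda x: x * x + abs(x) - 6.0, 2.5)
```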

The rest of the study is developed as follows: section 2 contains the local convergence analysis of method (1), whereas in section 3 we present the conclusion.

2 Local convergence analysis

Let \(U(x,\delta )\) stand for the open ball centered at \(x\in \mathcal{E}\) and of radius \(\delta {\gt} 0.\) Moreover, we denote its closure by \(\bar{U}(x, \delta ).\) Define parameter \(\rho _1\) by \(\rho _1=\sup \{ t\geq 0: {U}(x_*,t)\subseteq S\} .\) The local convergence analysis is based on the conditions (\(\mathcal{A}\)):

  1. There exist \(x_*\in S,\) with \(F(x_*)=0, \alpha {\gt} 0\) and \(\bar{x}\in S\) with \(\| \bar{x}-x_*\| =\alpha ,\) such that \(A_*^{-1}:=A(x_*, \bar{x})^{-1}\in \mathcal{L}(\mathcal{E}).\)

  2. \(\| A_*^{-1}(A(h_1(x),h_2(x))-A_*)\| \leq \varphi _0(\| h_1(x)-x_*\| , \| h_2(x)-\bar{x}\| ) \) for each \(x\in S,\) where \(h_m:S\longrightarrow S,\) \(m=1,2,\) are continuous operators, \(\varphi _0:I\times I\longrightarrow I\) is a continuous function, nondecreasing in both variables, and \(I=\mathbb {R}_+\cup \{ 0\} .\)

  3. \(\| h_m(x)-x_*\| \leq \psi _m(\| x-x_*\| )\) for each \(x\in S,\) for some functions \(\psi _m: I\longrightarrow I\) which are continuous and nondecreasing.

  4. Equation

    \[ \varphi _0(\psi _1(t),\alpha +\psi _2(t))=1 \]

    has at least one positive solution. Denote by \(\rho _2\) the smallest such solution. Set \(S_0=U(x_*,\rho ),\) where \(\rho =\min \{ \rho _1, \rho _2\} .\)

  5. \(\| A_*^{-1}(A(h_1(y),h_2(y))(x-x_*)-F(x))\| \leq \varphi (\| h_1(y)-x\| , \| h_2(y)-x_*\| )\| x-x_*\| \) for each \(x,y\in S_0,\) where \(\varphi : I_0\times I_0\longrightarrow I\) is some continuous non-decreasing function and \(I_0=[0, \rho ].\)

  6. Equation \(\varphi (t+\psi _1(t), \psi _2(t))+\varphi _0(\psi _1(t), \alpha +\psi _2(t))=1\) has at least one solution in \((0, \rho _2).\) Denote by \(\rho _*\) the smallest such solution.
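In practice \(\rho _2\) and \(\rho _*\) are the smallest positive roots of scalar equations, so they can be computed numerically. The sketch below uses bisection under assumed affine choices of \(\varphi _0, \varphi , \psi _1, \psi _2;\) all functions and constants are hypothetical illustrations, not prescribed by the conditions (\(\mathcal{A}\)).

```python
# Compute rho_2 from (a4) and rho_* from (a6) by bisection, under the assumed
# affine choices phi0(s, t) = L0*(s + t), phi(s, t) = L*(s + t) and
# psi1(t) = psi2(t) = (1 + M)*t; the values of L0, L, M, alpha are hypothetical.

def bisect(g, lo, hi, tol=1e-14):
    """Root of g on [lo, hi], assuming g(lo) < 0 < g(hi) and g increasing."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

L0, L, M, alpha = 1.0, 1.5, 0.25, 0.1
phi0 = lambda s, t: L0 * (s + t)
phi = lambda s, t: L * (s + t)
psi = lambda t: (1 + M) * t

# (a4): phi0(psi1(t), alpha + psi2(t)) = 1
rho2 = bisect(lambda t: phi0(psi(t), alpha + psi(t)) - 1, 0.0, 10.0)

# (a6): phi(t + psi1(t), psi2(t)) + phi0(psi1(t), alpha + psi2(t)) = 1
rho_star = bisect(
    lambda t: phi(t + psi(t), psi(t)) + phi0(psi(t), alpha + psi(t)) - 1,
    0.0, rho2)
```

Since both left-hand sides are increasing in \(t\) and lie below 1 at \(t=0\) (because \(L_0\alpha {\lt}1\)), the root found by bisection is the smallest positive one.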

Next, we show the local convergence result for method (1) using the conditions (\(\mathcal{A}\)) and the preceding notation.

Theorem 1

Suppose that the conditions (\(\mathcal{A}\)) hold. Then, the sequence \(\{ x_n\} \) generated by method (1) for any initial point \(x_0\in U(x_*,\rho _*)-\{ x_*\} \) is well defined in \(U(x_*, \rho _*),\) remains in \(U(x_*, \rho _*)\) for each \(n=0,1,2,\ldots \) and \(\lim _{n\longrightarrow \infty }x_n=x_*.\)

Proof.
Let \(x\in U(x_*,\rho _*).\) Using (a1)–(a4) and (a6), we have in turn that

\begin{eqnarray} \nonumber \| A_*^{-1}(A(h_1(x), h_2(x))-A_*)\| & \leq & \varphi _0(\| h_1(x)-x_*\| , \| h_2(x)-\bar{x}\| )\\ \nonumber & \leq & \varphi _0(\psi _1(\| x-x_*\| ), \| h_2(x)-x_*\| +\| x_*-\bar{x}\| )\\ \label{2.1} & \leq & \varphi _0(\psi _1(\rho _*),\alpha +\psi _2(\rho _*)) {\lt}1. \end{eqnarray}

It follows from (7) and the Banach lemma on invertible operators [5, 6] that \(A(h_1(x), h_2(x))\) is invertible and

\begin{equation} \label{2.2} \| A(h_1(x), h_2(x))^{-1}A_*\| \leq \tfrac {1}{1-\varphi _0(\psi _1(\| x-x_*\| ),\alpha +\psi _2(\| x-x_*\| ))}. \end{equation}

Moreover, the iterate \(x_1\) is well defined by method (1), since (8) holds for \(x=x_0.\) By method (1), we can write

\begin{eqnarray} \nonumber x_1-x_*& =& x_0-x_*-A_0^{-1}F(x_0)\\ \nonumber & =& A_0^{-1}[A_0(x_0-x_*)-F(x_0)]\\ \label{2.3} & =& [A_0^{-1}A_*]A_*^{-1}[A_0(x_0-x_*)-F(x_0)]. \end{eqnarray}

By (8), (9), (a3), (a5) and (a6), we obtain in turn that

\begin{eqnarray} \nonumber \| x_1-x_*\| & \leq & \| A_0^{-1}A_*\| \| A_*^{-1}[A_0(x_0-x_*)-F(x_0)]\| \\ \nonumber & \leq & \tfrac {\varphi (\| h_1(x_0)-x_0\| ,\| h_2(x_0)-x_*\| )\| x_0-x_*\| }{1-\varphi _0(\psi _1(\| x_0-x_*\| ), \alpha +\psi _2(\| x_0-x_*\| ))}\\ \nonumber & \leq & \tfrac {\varphi (\| h_1(x_0)-x_*\| +\| x_0-x_*\| ,\| h_2(x_0)-x_*\| )\| x_0-x_*\| }{1-\varphi _0(\psi _1(\| x_0-x_*\| ), \alpha +\psi _2(\| x_0-x_*\| ))}\\ \label{2.4} & \leq & c\| x_0-x_*\| {\lt} \rho _*, \end{eqnarray}

which shows that \(x_1\in U(x_*,\rho _*),\) where

\begin{equation} \label{2.5*} c=\tfrac {\varphi (\| h_1(x_0)-x_*\| +\| x_0-x_*\| ,\| h_2(x_0)-x_*\| )}{1-\varphi _0(\psi _1(\| x_0-x_*\| ), \alpha +\psi _2(\| x_0-x_*\| ))}\in [0,1). \end{equation}

Suppose that \(y_{t_m}\in U(x_*,\rho _*)\) for each \(m=0,1,2,\ldots, k.\) Then, as in (8)-(10), with \(x=y_{t_{k}}\) and with \(x_0, x_1\) replaced by \(x_k, x_{k+1},\) we get

\begin{equation} \label{2.6} \| A(h_1(y_{t_k}), h_2(y_{t_k}))^{-1}A_*\| \leq \tfrac {1}{1-\varphi _0(\psi _1(\| y_{t_k}-x_*\| ), \alpha +\psi _2(\| y_{t_k}-x_*\| ))} \end{equation}


\begin{equation} \label{2.7} \| A_*^{-1}[A_k(x_k-x_*)-F(x_k)]\| \leq \varphi (\| h_1(y_{t_k})-x_k\| , \| h_2(y_{t_k})-x_*\| )\| x_k-x_*\| , \end{equation}

so again

\begin{equation} \label{2.8} \| x_{k+1}-x_*\| \leq \| A_k^{-1}A_*\| \| A_*^{-1}[A_k(x_k-x_*)-F(x_k)]\| \leq c\| x_k-x_*\| < \rho _*, \end{equation}

which implies \(x_{k+1}\in U(x_*,\rho _*).\) Since \(c\in [0,1),\) estimate (14) gives \(\lim _{k\longrightarrow \infty }x_k=x_*.\) \(\Box \)

Concerning the uniqueness of the solution \(x_*,\) suppose:
(a6)' Equation

\[ \varphi (t+\psi _1(t), t+\psi _2(t))+\varphi _0(\psi _1(t), \alpha +\psi _2(t))=1 \]

has at least one solution in \((0, \rho _2).\) Denote by \(\bar{\rho }_*\) the smallest such solution. Replace (a6) by (a6)' in the conditions (\(\mathcal{A}\)), and call (a1)-(a5) together with (a6)' the conditions (\(\mathcal{A}\))'. Then, we have:

Proposition 2

Suppose that the conditions (\(\mathcal{A}\))’ hold. Then, \(x_*\) is the only solution of equation \(F(x)=0\) in \(U(x_*,\bar{\rho }_*)\) provided that \(x_0\in U(x_*, \bar{\rho }_*)-\{ x_*\} .\)

Proof.
Let \(y_*\in U(x_*, \bar{\rho }_*)\) be such that \(F(y_*)=0.\) Instead of (14), we now have the estimate

\begin{equation} \label{2.9} \| x_{k+1}-y_*\| \leq \bar{c}\| x_k-y_*\| , \end{equation}

where \(\bar{c}=\tfrac {\varphi (\| h_1(x_0)-x_*\| +\| x_0-x_*\| ,\| h_2(x_0)-x_*\| +\| x_*-y_*\| )}{1-\varphi _0(\psi _1(\| x_0-x_*\| ), \alpha +\psi _2(\| x_0-x_*\| ))}\in [0,1)\) by (a6)'. Then, it follows from (15) that \(\lim _{k\longrightarrow \infty }x_k=y_*.\) But we showed in Theorem 1 that \(\lim _{k\longrightarrow \infty }x_k=x_*.\) Hence, we conclude that \(x_*=y_*.\) \(\Box \)

Remark 3

Let us look again at method (2) and consider the conditions given in [10], so that we can compare the results. Their conditions were given in nonaffine invariant form, but we present them here in affine invariant form. The advantages of affine invariant results over nonaffine invariant ones are well known [5, 6, 11, 12]. The conditions, denoted (c1)-(c6), are as follows:

  1. Condition (a1) holds.

  2. \(\| A_*^{-1}([x,y;F]-A_*)\| \leq \bar{\varphi }_0(\| x-x_*\| , \| y-x_*\| )\) for each \(x,y\in S\) and \(x\neq y.\)

  3. \(\| A_*^{-1}([x,y;F]-[z,w;F])\| \leq \bar{\varphi }(\| x-z\| , \| y-w\| )\) for each \(x,y,z,w\in S\) and \((x,y), (z,w)\) different pairs.

  4. \(\| h_m(x)-h_m(x_*)\| \leq \bar{\psi }_m(\| x-x_*\| )\) for each \(x\in S,\) \(m=1,2.\)

  5. \(h_m(x_*)=x_*, h_1'(x_*)\neq h_2'(x_*), h_1(x)=h_2(x)\Leftrightarrow x=x_*.\)

  6. \(\bar{\varphi }(\bar{\psi }_1(t)+t, \bar{\psi }_2(t))+\bar{\varphi }_0(\bar{\psi }_1(t), \alpha +\bar{\psi }_2(t))=1\) has at least one positive solution. Denote by \(r_*\) the smallest such solution.

Restrictive condition (c5) was not used in our study. Conditions (a2), (a5) and (a3) are weaker than (c2), (c3) and (c4), respectively. Condition (a2) helps us determine a more precise domain (i.e., \(S_0\)) containing the iterates than \(S,\) and also helps us define the function \(\varphi .\) Moreover, we have

\begin{equation} \label{2.10} \varphi _0(s,t)\leq \bar{\varphi }_0(s,t)\, \, \textnormal{for each}\, \, s,t\in I_0, \end{equation}

\begin{equation} \label{2.11} \varphi (s,t)\leq \bar{\varphi }(s,t)\, \, \textnormal{for each}\, \, s,t\in I_0, \end{equation}

\begin{equation} \label{2.12} c\leq \bar{c}, \end{equation}

\begin{equation} \label{2.13} r_*\leq \rho _* \end{equation}


\begin{equation} \label{2.14} S_0\subseteq S. \end{equation}

Estimates (16)-(19) and the inclusion (20) justify the improvements stated in the introduction. Examples where (16)-(19) hold as strict inequalities can be found in [4, 5, 6].
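A hypothetical computation (all constants assumed, not from the paper) shows the effect quantitatively: with affine choices of the Lipschitz-type functions, the radius equations in (a6) and (c6) have closed-form smallest positive roots, and tighter functions on \(S_0\) (cf. (16), (17)) yield a larger radius, consistent with (19).

```python
# Hypothetical comparison (all constants assumed): with the affine choices
# phi(s, t) = L*(s + t), phi0(s, t) = L0*(s + t), psi1(t) = psi2(t) = t,
# the equation phi(t + psi1(t), psi2(t)) + phi0(psi1(t), alpha + psi2(t)) = 1
# reduces to (3*L + 2*L0)*t + L0*alpha = 1, solvable in closed form.

alpha = 0.1

def radius(L, L0):
    # smallest positive root of (3*L + 2*L0)*t + L0*alpha = 1
    return (1 - L0 * alpha) / (3 * L + 2 * L0)

rho_star = radius(L=1.2, L0=1.0)    # tighter functions on S_0, cf. (16), (17)
r_star = radius(L=2.0, L0=1.5)      # larger functions on S
larger = rho_star > r_star          # the restricted domain enlarges the radius
```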

3 Conclusion

We presented a local convergence analysis of frozen Steffensen-type methods for generating a sequence approximating a solution of Banach space valued equations. This method specializes to many popular methods. If the initial inverse exists, then the method is always well defined. Using our idea of the restricted convergence domain, we provided a more precise domain where the iterates lie. Hence, the majorant functions involved are at least as tight as in previous studies. This way, the convergence criteria are at least as weak, the convergence domain is enlarged, the error bounds on the distances \(\| x_n-x_*\| \) are tighter, and the information on the location of the solution is at least as precise. The results reduce to earlier ones if only divided differences are used. Moreover, the differentiability of the operator \(F\) is not assumed or implied as in previous works, making method (1) suitable for solving non-differentiable equations and systems of equations.


S. Amat, S. Busquier, A. Grau, M. Grau-Sánchez, Maximum efficiency for a family of Newton-like methods with frozen derivatives and some applications, Appl. Math. Comput., 219 (2013), pp. 7954–7963, https://doi.org/10.1016/j.amc.2013.01.047.


S. Amat, I. K. Argyros, S. Busquier, M. A. Hernández, On two high-order families of frozen Newton-type methods, Numer. Linear Algebra Appl., 25 (2018), e2126, pp. 1–13, https://doi.org/10.1002/nla.2126.


I. K. Argyros, A unifying local semi-local convergence analysis and applications for two-point Newton-like methods in Banach space, J. Math. Anal. Appl., 298 (2004) no. 2, pp. 374–397, https://doi.org/10.1016/j.jmaa.2004.04.008.


I.K. Argyros, S. Hilout, Weaker conditions for the convergence of Newton’s method, J. Complexity, 28 (2012), pp. 364–387, https://doi.org/10.1016/j.jco.2011.12.003.


I.K. Argyros, A.A. Magreñán, Iterative Methods and their Dynamics with Applications, CRC Press, New York, USA, 2017, https://doi.org/10.1201/9781315153469.


I.K. Argyros, S. George, Unified convergence analysis of frozen Newton-like methods under generalized conditions, J. Comput. Appl. Math., 347 (2019), pp. 95–107, https://doi.org/10.1016/j.cam.2018.08.010.


J.A. Ezquerro, M.A. Hernández, Newton’s Method: an Updated Approach of Kantorovich’s Theory, Birkhauser, Elsevier, Cham, Switzerland, 2017, https://doi.org/10.1007/978-3-319-55976-6.


O.H. Hald, On a Newton-Moser type method, Numer. Math., 23 (1975), pp. 411–426, https://doi.org/10.1007/bf01437039.


M.A. Hernández, M.J. Rubio, A uniparameteric family of iterative processes for solving nondifferentiable equations, J. Math. Anal. Appl., 275 (2002), pp. 821–834, https://doi.org/10.1016/S0022-247X(02)00432-8.


M.A. Hernández, A.A. Magreñán, M.J. Rubio, Dynamics and local convergence of a family of derivative-free iterative processes, selected papers of CMMSE, J. Comput. Appl. Math., 2018, https://doi.org/10.1016/j.cam.2018.08.032.


F.A. Potra, A characterization of the divided differences of an operator which can be represented by Riemann integrals, Anal. Numer. Theory. Approx., 9 (1980), pp. 251–253.


F.A. Potra, On the convergence of a class of Newton-like methods, Preprint Series in Mathematics, INCREST, Bucharest, Romania, No. 22/1982.