
Two Step Steffensen-Type Methods with
or without Memory for Nondifferentiable Equations

Ioannis K. Argyros and Gagan Deep
(Date: August 05, 2025; accepted: October 02, 2025; published online: October 03, 2025.)
Abstract.

In the current study, two-step Steffensen-type methods with and without memory, which are free from derivatives, are considered for nondifferentiable equations, and the local as well as semilocal convergence analysis is established under more generalized conditions. Numerical applications are provided which demonstrate the theoretical results. Better results in terms of the radii of the convergence balls and the number of iterations are obtained using the proposed approach as compared to the existing ones.

Key words and phrases:
Divided difference, Steffensen-type method, Banach space, convergence, nondifferentiable equation.
2010 Mathematics Subject Classification:
41A58, 65G99, 65H10, 65T20
Department of Computing and Mathematical Sciences, Cameron University, Lawton, OK 73505, USA, e-mail: iargyros@cameron.edu.
Department of Mathematics, Hans Raj Mahila Mahavidyalaya, Jalandhar 144008, Punjab, India, e-mail: gagandeep.mat@hmv.ac.in.

1. Introduction

A significant and interesting challenge in numerical analysis is the problem of solving a nonlinear equation or system of nonlinear equations of the form

(1) F(x)=0,

where $F$ is an operator mapping a Banach space $X$ into itself and $D$ is an open convex subset of $X$. The formulation of problems as an equation of the form (1) through mathematical modeling [2, 6, 11, 14, 24] arises in multiple disciplines of science and engineering. Obtaining a solution $x^{*}\in D$ of (1) in analytic form is rarely possible. Non-analytic and complicated functions are therefore handled with a useful computational tool, namely iterative methods, which approximate the solution $x^{*}$ of (1). To overcome issues such as slow convergence, divergence and inefficiency, an extensive literature can be found on the convergence of iterative methods based on algebraic or geometrical considerations [14, 24]. As a result, researchers all around the world are persistently endeavoring to create higher order iterative methods [1, 3, 5, 7, 8, 9, 13, 16, 17, 18, 19, 20, 21]. Let $L(X)$ denote the space of bounded linear operators from $X$ into itself and let $\xi\in\mathbb{R}\setminus\{0\}$ be a parameter.

We examine the convergence of a general Steffensen-type method free from derivatives developed by Chicharro et al. [7], which is defined for each $m=0,1,2,\ldots$ by

$x_{0}\in D,\qquad u_{m}=x_{m}+\xi F(x_{m}),$
$y_{m}=x_{m}-[u_{m},x_{m};F]^{-1}F(x_{m}),$
(2) $x_{m+1}=y_{m}-[u_{m},y_{m};F]^{-1}F(y_{m}),$

where $[\cdot,\cdot;F]:D\times D\to L(X)$ is a divided difference of order one [2, 14]. The beauty of the method (2) lies in the fact that if $\xi$ is a nonzero constant then the method is without memory, while if $\xi$ is taken to be the Kurchatov operator [7] then it becomes a method with memory. The fifth order of convergence of the method (2) is proved using Taylor series expansions in [7], provided that $X=\mathbb{R}^{k}$ and by assuming the existence of at least the fourth derivative of $F$, which does not appear in the method. Hence, the application is limited to solving nonlinear equations (1) whose operator is at least that many times differentiable. But the method may converge even if $F^{(4)}$ does not exist.

As an instance, consider $D$ to be the interval $[-\frac{5}{2},2]$ and the function $f$ defined on $D$ as
$f(t)=\begin{cases}t^{3}\log(\pi^{2}t^{2})+t^{5}\sin\frac{1}{t}, & t\neq 0,\\ 0, & t=0.\end{cases}$

Clearly, $f^{(3)}(t)$ is not continuous at $t=0$. As a result, the convergence of the method (2) to the solution $t^{*}=\frac{1}{\pi}$ cannot be assured using the results in [7], although the method converges.
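For a concrete illustration, method (2) can be sketched in a few lines for this scalar example; the starting point $t_0=0.35$ and the stopping tolerance below are our own illustrative choices, not taken from [7]:

```python
import math

def f(t):
    # Motivational example: f''' is not continuous at t = 0,
    # yet method (2) converges to the root t* = 1/pi.
    if t == 0:
        return 0.0
    return t**3 * math.log(math.pi**2 * t**2) + t**5 * math.sin(1.0 / t)

def dd(a, b):
    # First-order divided difference [a, b; f].
    return (f(a) - f(b)) / (a - b)

def steffensen_two_step(x, xi=1.0, tol=1e-12, max_iter=25):
    # Method (2) without memory:
    #   u_m     = x_m + xi*f(x_m)
    #   y_m     = x_m - [u_m, x_m; f]^{-1} f(x_m)
    #   x_{m+1} = y_m - [u_m, y_m; f]^{-1} f(y_m)
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        u = x + xi * fx
        y = x - fx / dd(u, x)
        x = y - f(y) / dd(u, y)
    return x

print(steffensen_two_step(0.35))  # approaches 1/pi = 0.31830988...
```

No derivative of $f$ is evaluated anywhere; only first-order divided differences appear, which is the point of the method.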

Moreover, notice that the method (2) does not contain any derivatives. However, the iterates, which contain $u_m=x_m+\xi F(x_m)$, may lead to a loss of convergence when $F$ has large variations near the solution [15].

Some other limitations with local convergence analysis provided in [7] are:

  • (C1)

The initial guess $x_0$ is “a shot in the dark” and no information is available on the uniqueness of the solution.

  • (C2)

A priori upper bounds on $\|x_m-x^{*}\|$ are not given, $x^{*}\in D$ being a solution of the equation (1). Consequently, the number of iterations to be performed to reach a predecided error tolerance is not known.

  • (C3)

The convergence of the method is not assured (although it may converge to $x^{*}$) if $f^{(3)}$ does not exist.

  • (C4)

The results are limited to the case $X=\mathbb{R}^{k}$.

  • (C5)

    The more interesting semi-local convergence is not given in [7].

The same concerns exist for numerous other methods with no derivatives [7, 21]. So, the technique of this study can also be used to extend the applicability of such methods along the same lines.

The main feature of the current study is that it addresses all the aforementioned limitations. In particular, the local convergence is based on the more general $\omega$-continuity condition [1, 2, 17] and uses only information from the operators appearing in the method. Moreover, the semi-local convergence, which utilizes majorizing sequences [1, 2], is also provided.

The novelty of the article lies in the fact that the process leading to the aforementioned benefits does not rely on the particular method (2); it can be utilized on other methods involving inverses of linear operators in a similar manner. The numerical study includes results for both cases of the method (2), i.e., with and without memory. The results show that fewer iterations are required to obtain the solution and larger convergence radii are obtained using the presented approach as compared to existing ones. The efficiency and the computational benefits have been discussed in [7] and hence are not repeated in the present article.

The paper is structured as follows: the local convergence of the method (2) is followed by the semi-local convergence. Numerical applications and concluding remarks complete the paper.

Analysis I: Local

In this section, the local convergence of (2) for solving (1) is established. Let $U(x,r)$ and $U[x,r]$ denote the open and closed balls, respectively, with center $x$ and radius $r$. Let $B=[0,+\infty)$. The hypotheses for the local convergence analysis are: Assume there exist

  • (T1)

Functions $\Phi_0:B\times B\to B$, $w_1:B\to B$ which are continuous, symmetric and nondecreasing (CSNF) such that the equation $\Phi_0(w_1(t),t)-1=0$ admits a minimal positive solution (MPS) denoted as $k_0$. Let $B_0=[0,k_0)$.

  • (T2)

A CSNF $\Phi:B_0\times B_0\to B$ such that for $J_1:B_0\to B$, where

$J_1(t)=\dfrac{\Phi(w_1(t),t)}{1-\Phi_0(w_1(t),t)},$

the equation $J_1(t)-1=0$ admits an MPS in $B_0$ denoted as $k_1$. Let $B_1=[0,k_1)$.

  • (T3)

The equation $\Phi_0(w_1(t),J_1(t)t)-1=0$ admits an MPS in $B_1$ denoted by $k_2$. Let $B_2=[0,k_2)$.

  • (T4)

For $J_2:B_2\to B$, where

$J_2(t)=\dfrac{\Phi(w_1(t),J_1(t)t)}{1-\Phi_0(w_1(t),J_1(t)t)},$

the equation $J_2(t)-1=0$ has an MPS in $B_2$ denoted by $k_3$.

(3) Let $k=\min\{k_1,k_3\}$ and $B_{*}=[0,k)$.

It follows by this definition that for all $t\in B_{*}$

(4) $0\le\Phi_0(w_1(t),t)<1,$
(5) $0\le\Phi_0(w_1(t),J_1(t)t)<1$

and

(6) $0\le J_j(t)<1\quad\text{for } j=1,2.$
  • (T5)

There exist a solution $x^{*}\in D$ of the equation $F(x)=0$ and an invertible operator $L$ such that for all $x\in D$, $u=x+\xi F(x)$,

$\|u-x^{*}\|\le w_1(\|x-x^{*}\|)$

and

$\|L^{-1}([u,x;F]-L)\|\le\Phi_0(\|u-x^{*}\|,\|x-x^{*}\|).$

Define the set $D_0=D\cap U(x^{*},k_0)$.

  • (T6)

For all $x\in D_0$, $u=x+\xi F(x)$,

$\|L^{-1}([u,x;F]-[x,x^{*};F])\|\le\Phi(\|u-x^{*}\|,\|x-x^{*}\|)$

    and

  • (T7)

$U[x^{*},\bar{k}]\subseteq D$, where
$\bar{k}=\max\{k,w_1(k)\}.$

The analysis for the method (2) follows by means of the hypotheses (T1)–(T7).

Theorem 1.

Assume the hypotheses (T1)–(T7) are valid. Then, the following assertions hold:

(7) $\{x_m\}\subset U(x^{*},k),$
(8) $\|y_m-x^{*}\|\le J_1(\|x_m-x^{*}\|)\|x_m-x^{*}\|\le\|x_m-x^{*}\|<k,$
(9) $\|x_{m+1}-x^{*}\|\le J_2(\|x_m-x^{*}\|)\|x_m-x^{*}\|\le\|x_m-x^{*}\|$

and the sequence $\{x_m\}$ converges to $x^{*}$, provided that the initial point $x_0\in U(x^{*},k)\setminus\{x^{*}\}$.

Proof.

The application of the hypotheses (T1), (T5) and of (4), (7) gives

$\|L^{-1}([u_0,x_0;F]-L)\|\le\Phi_0(\|u_0-x^{*}\|,\|x_0-x^{*}\|)\le\Phi_0(w_1(\|x_0-x^{*}\|),\|x_0-x^{*}\|)$
(10) $\le\Phi_0(w_1(k),k)<1.$

The estimate (10) and the celebrated lemma attributed to Banach on linear operators imply the invertibility of $[u_0,x_0;F]$ and the estimate

(11) $\|[u_0,x_0;F]^{-1}L\|\le\dfrac{1}{1-\Phi_0(w_1(\|x_0-x^{*}\|),\|x_0-x^{*}\|)}.$

Thus, the iterate $y_0$ exists and

(12) $y_0-x^{*}=x_0-x^{*}-[u_0,x_0;F]^{-1}F(x_0)$

by the first substep of (2) for $m=0$. It follows from the estimates (3), (6) (for $j=1$), (11), (12) and (T6) that

$y_0-x^{*}=[u_0,x_0;F]^{-1}([u_0,x_0;F]-[x_0,x^{*};F])(x_0-x^{*}),$

so

$\|y_0-x^{*}\|\le\dfrac{\Phi(\|u_0-x^{*}\|,\|x_0-x^{*}\|)\|x_0-x^{*}\|}{1-\Phi_0(\|u_0-x^{*}\|,\|x_0-x^{*}\|)}$
(13) $\le J_1(\|x_0-x^{*}\|)\|x_0-x^{*}\|\le\|x_0-x^{*}\|<k.$

Hence, the iterate $y_0\in U(x^{*},k)$ and the assertion (8) is validated for $m=0$. As in (10), but using (3), (5) and (T5), we obtain

(14) $\|[u_0,y_0;F]^{-1}L\|\le\dfrac{1}{1-\Phi_0(\|u_0-x^{*}\|,\|y_0-x^{*}\|)},$

so the iterate $x_1$ exists.

Moreover, for $m=0$ the method (2) gives

$x_1-x^{*}=[u_0,y_0;F]^{-1}([u_0,y_0;F]-[y_0,x^{*};F])(y_0-x^{*}),$

leading, as in (13), to

$\|x_1-x^{*}\|\le\dfrac{\Phi(\|u_0-x^{*}\|,\|y_0-x^{*}\|)\|y_0-x^{*}\|}{1-\Phi_0(\|u_0-x^{*}\|,\|y_0-x^{*}\|)}$
(15) $\le J_2(\|x_0-x^{*}\|)\|x_0-x^{*}\|\le\|x_0-x^{*}\|.$

So, the iterate $x_1\in U(x^{*},k)$ and the assertions (7), (9) are validated for $m=1$ and $m=0$, respectively. The induction for the assertions (7)–(9) is completed if $x_0$, $u_0$, $y_0$, $x_1$ are exchanged by $x_m$, $u_m$, $y_m$, $x_{m+1}$ in the preceding calculations. Furthermore, the estimate

(16) $\|x_{m+1}-x^{*}\|\le b\|x_m-x^{*}\|<k,$

where $b=J_2(\|x_0-x^{*}\|)\in[0,1)$, gives $\lim_{m\to+\infty}x_m=x^{*}$ and completes the induction for the assertions (7)–(9). ∎

Remark 2.

The real function $w_1$ can be selected using the condition on $\Phi_1$ in Proposition 3 and the following calculation:

$u-x^{*}=x-x^{*}+\xi F(x)=x-x^{*}+\xi[x,x^{*};F](x-x^{*})$
$=\big((I+\xi L)+\xi L L^{-1}([x,x^{*};F]-L)\big)(x-x^{*}).$

Thus, a possible choice for the function $w_1$ is

(17) $w_1(t)=\big(\|I+\xi L\|+|\xi|\,\|L\|\,\Phi_1(t)\big)t.$

The function can be further specified if the linear operator $L$ is precised. A popular choice is $L=F'(x^{*})$. But in this case, although there are no derivatives in the method (2), it cannot be used to solve nondifferentiable equations under the previous assumptions, since we assume that $x^{*}$ is a simple solution (i.e., $F'(x^{*})$ is invertible). Thus, $L$ should be chosen so that the functions “$\Phi$” are as tight as possible, but not $L=F'(x^{*})$ in the case of nondifferentiable equations.

The isolation of the solution domain is specified in the next result.

Proposition 3.

Assume the conditions (T2) and (T5) are validated on the ball $U(x^{*},p)$ for some $p>0$ and there exists $p_1\ge p$ such that

$\|L^{-1}([y,x^{*};F]-L)\|\le\Phi_1(\|y-x^{*}\|),$
$\Phi_1(p_1)<1,$

where $\Phi_1:B_0\to B$ is CSNF.

Then, the equation $F(x)=0$ is uniquely solvable by $x^{*}$ in the domain $D_1=D\cap U[x^{*},p_1]$.

Proof.

Suppose there exists a solution $\lambda\in D_1$ such that $F(\lambda)=0$ and $\lambda\neq x^{*}$. Define the divided difference $[\lambda,x^{*};F]$. Then, we get

(18) $\|L^{-1}([\lambda,x^{*};F]-L)\|\le\Phi_1(\|\lambda-x^{*}\|)\le\Phi_1(p_1)<1,$

so the linear operator $[\lambda,x^{*};F]$ is invertible. Thus, from the identity

$\lambda-x^{*}=[\lambda,x^{*};F]^{-1}(F(\lambda)-F(x^{*}))=[\lambda,x^{*};F]^{-1}(0)=0,$

we conclude $\lambda=x^{*}$. ∎

Remark 4.

Notice that a possible choice for $p$ is $p=k$.

Analysis II: Semi-local

This time the roles of $x^{*}\in D$ and of the functions “$\Phi$” are played by $x_0\in D$ and the functions “$\Psi$” precised below.

Assume there exist:

  • (M1)

CSNF functions $\Psi_0:B\times B\to B$, $w_2:B\to B$ such that the equation $\Psi_0(w_2(t),t)-1=0$ has an MPS in $B$ denoted as $q$. Let $B_3=[0,q)$.

  • (M2)

CSNF functions $\Psi:B_3\times B_3\times B_3\to B$, $\Psi_1:B_3\times B_3\times B_3\times B_3\to B$. Let $D_1=D\cap U(x_0,q)$.

  • (M3)

There exist $x_0\in D$ and an invertible linear operator $L$ such that for all $x\in D$, $u=x+\xi F(x)$,

$\|u-x_0\|\le w_2(\|x-x_0\|)$

and

$\|L^{-1}([u,x;F]-L)\|\le\Psi_0(\|u-x_0\|,\|x-x_0\|).$

It follows by (M1) and (M3) that

$\Psi_0(\|u_0-x_0\|,\|x_0-x_0\|)\le\Psi_0(w_2(0),0)<1,$

so $[u_0,x_0;F]$ is invertible; thus $y_0$ exists and consequently we can set $\|[u_0,x_0;F]^{-1}F(x_0)\|\le\gamma_0$.

Consider the sequence $\{\delta_m\}$ generated for $\delta_0=0$ and each $m=0,1,2,\ldots$ by

$\delta_{m+1}=\gamma_m+\dfrac{\Psi(w_2(\delta_m),\delta_m,\gamma_m)(\gamma_m-\delta_m)}{1-\Psi_0(w_2(\delta_m),\gamma_m)},$
(19) $\zeta_{m+1}=\Psi_1(w_2(\delta_m),\delta_m,\gamma_m,\delta_{m+1})$

and

$\gamma_{m+1}=\delta_{m+1}+\dfrac{\zeta_{m+1}}{1-\Psi_0(w_2(\delta_{m+1}),\delta_{m+1})}.$
  • (M4)
$\Psi_0(w_2(\delta_m),\gamma_m)<1$, $\Psi_0(w_2(\delta_m),\delta_m)<1$ for each $m=0,1,2,\ldots$, and $\delta_m\le q_0$ for some $q_0\in B_3\setminus\{0\}$. It follows from the definition (19) and this hypothesis that $0\le\delta_m\le\gamma_m\le\delta_{m+1}\le q_0$ and that there exists $\delta^{*}\in[0,q_0]$ such that $\lim_{m\to+\infty}\delta_m=\delta^{*}$. Notice that $\delta^{*}$ is the (unique) least upper bound of the scalar sequence $\{\delta_m\}$.

  • (M5)

For each $x,y,v\in D_1$ and $u=x+\xi F(x)$,

$\|L^{-1}([y,x;F]-[u,x;F])\|\le\Psi(\|u-x_0\|,\|x-x_0\|,\|y-x_0\|)$

and

$\|L^{-1}([x,y;F]-[u,v;F])\|\le\Psi_1(\|x-x_0\|,\|y-x_0\|,\|u-x_0\|,\|v-x_0\|).$

    and

  • (M6)

$U(x_0,q^{*})\subseteq D$, where

$q^{*}=\max\{\delta^{*},w_2(\delta^{*})\}.$

Then as in local case we present the semi-local result.

Theorem 5.

Under the assumptions (M1)–(M6) there exists $x^{*}\in U[x_0,\delta^{*}]$ solving the equation $F(x)=0$ such that

(20) $\|x_m-x^{*}\|\le\delta^{*}-\delta_m\quad\text{for all } m=0,1,2,\ldots.$
Proof.

The assertions

(21) $\|y_i-x_i\|\le\gamma_i-\delta_i$

and

(22) $\|x_{i+1}-y_i\|\le\delta_{i+1}-\gamma_i$

must be shown by induction. Notice that (21) holds for $i=0$ by the choice of $\delta_0$, $\gamma_0$ and the first substep of the method (2). Then, as in the local proof, we have in turn the estimates

$\|[u_i,y_i;F]^{-1}L\|\le\dfrac{1}{1-\Psi_0(\|u_i-x_0\|,\|y_i-x_0\|)}\le\dfrac{1}{1-\Psi_0(w_2(\delta_i),\gamma_i)},$

$F(y_i)=F(y_i)-F(x_i)-[u_i,x_i;F](y_i-x_i)=([y_i,x_i;F]-[u_i,x_i;F])(y_i-x_i),$

$\|L^{-1}F(y_i)\|\le\Psi(\|u_i-x_0\|,\|x_i-x_0\|,\|y_i-x_0\|)\|y_i-x_i\|\le\Psi(w_2(\delta_i),\delta_i,\gamma_i)(\gamma_i-\delta_i)=P_i,$

$x_{i+1}-y_i=-[u_i,y_i;F]^{-1}F(y_i),$

$\|x_{i+1}-y_i\|\le\|[u_i,y_i;F]^{-1}L\|\,\|L^{-1}F(y_i)\|\le\dfrac{P_i}{1-\Psi_0(w_2(\delta_i),\gamma_i)}=\delta_{i+1}-\gamma_i,$

$\|x_{i+1}-x_0\|\le\|x_{i+1}-y_i\|+\|y_i-x_0\|\le\delta_{i+1}-\gamma_i+\gamma_i-\delta_0=\delta_{i+1}<\delta^{*},$

$F(x_{i+1})=F(x_{i+1})-F(y_i)-[u_i,y_i;F](x_{i+1}-y_i)=([x_{i+1},y_i;F]-[u_i,y_i;F])(x_{i+1}-y_i),$

$\|L^{-1}F(x_{i+1})\|\le\Psi_1(\|u_i-x_0\|,\|x_i-x_0\|,\|y_i-x_0\|,\|x_{i+1}-x_0\|)$
(23) $\le\Psi_1(w_2(\delta_i),\delta_i,\gamma_i,\delta_{i+1})=\zeta_{i+1},$

$\|[u_{i+1},x_{i+1};F]^{-1}L\|\le\dfrac{1}{1-\Psi_0(w_2(\delta_{i+1}),\delta_{i+1})},$

$\|y_{i+1}-x_{i+1}\|\le\|[u_{i+1},x_{i+1};F]^{-1}L\|\,\|L^{-1}F(x_{i+1})\|\le\dfrac{\zeta_{i+1}}{1-\Psi_0(w_2(\delta_{i+1}),\delta_{i+1})}=\gamma_{i+1}-\delta_{i+1},$

$\|y_{i+1}-x_0\|\le\|y_{i+1}-x_{i+1}\|+\|x_{i+1}-x_0\|\le\gamma_{i+1}-\delta_{i+1}+\delta_{i+1}-\delta_0=\gamma_{i+1}<\delta^{*}.$

Thus, $\{x_i\},\{y_i\}\subset U(x_0,\delta^{*})$, the assertions (21), (22) are validated and, being majorized by the convergent scalar sequence $\{\delta_i\}$, the sequences are Cauchy in the Banach space $X$. Hence, there exists $x^{*}\in U[x_0,\delta^{*}]$ such that $\lim_{i\to+\infty}x_i=x^{*}$. By letting $i\to+\infty$ in (23) we get $F(x^{*})=0$. Finally, for $i\ge 0$ the estimate

$\|x_{i+m}-x_i\|\le\delta_{i+m}-\delta_i$

implies (20) as $m\to+\infty$. ∎

Remark 6.

The function $w_2$ can be determined analogously to the function $w_1$ as follows:

$u-x_0=x-x_0+\xi F(x)=(I+\xi[x,x_0;F])(x-x_0)+\xi F(x_0)$
$=\big[(I+\xi L)+\xi L L^{-1}([x,x_0;F]-L)\big](x-x_0)+\xi F(x_0).$

Assume there exists a nondecreasing continuous function (FCN) $\Psi_2:B_3\to B$ such that

$\|L^{-1}([x,x_0;F]-L)\|\le\Psi_2(\|x-x_0\|)$

for all $x\in D$. Then, we can choose

$w_2(t)=\big(\|I+\xi L\|+|\xi|\,\|L\|\,\Psi_2(t)\big)t+|\xi|\,\|F(x_0)\|.$

The next result determines the isolation of a solution region.

Proposition 7.

Assume there exists a solution $\tau\in U(x_0,\upsilon)$ of the equation $F(x)=0$ for some $\upsilon>0$; the first condition in (M3) is validated on the ball $U(x_0,\upsilon)$, and there exists $\varpi\ge\upsilon$ such that

$\|L^{-1}([\tau,x;F]-L)\|\le\Psi_0(\|\tau-x_0\|,\|x-x_0\|),$
$\Psi_0(\upsilon,\varpi)<1,$

where $\Psi_0:B\times B\to B$ is the function given in (M1).

Let $D_3=D\cap U[x_0,\varpi]$. Then, the only solution of the equation $F(x)=0$ in the region $D_3$ is $\tau$.

Proof.

Let $\tau^{*}\in D_3$ satisfy $F(\tau^{*})=0$ and suppose $\tau^{*}\neq\tau$, so that the divided difference $[\tau^{*},\tau;F]$ can be defined. Then, we obtain in turn by the conditions

$\|L^{-1}([\tau^{*},\tau;F]-L)\|\le\Psi_0(\|\tau-x_0\|,\|\tau^{*}-x_0\|)\le\Psi_0(\upsilon,\varpi)<1,$

thus $[\tau^{*},\tau;F]^{-1}\in L(X)$. But then the identity

$0=F(\tau^{*})-F(\tau)=[\tau^{*},\tau;F](\tau^{*}-\tau)$

gives $\tau^{*}-\tau=0$, a contradiction. Therefore, we conclude that $\tau^{*}=\tau$. ∎

Remark 8.
  • (1)

The point $q_0$ in condition (M4) can also be replaced by $q$.

  • (2)

If the conditions (M1)–(M6) are all validated, set $x^{*}=\tau$ and $\delta^{*}=\upsilon$.

Applications

The method (2) can turn from a method without memory into a method with memory if the constant $\xi$ is replaced by a suitable linear operator. As in [7], choose $\xi$ to be the Kurchatov operator [2, 14]

(24) $\xi_m=[2x_m-x_{m-1},x_{m-1};F]^{-1}.$

That is, we obtain the method with memory derived from (2) for $\xi$ replaced by (24). Then, the local as well as the semi-local results hold, say, if $w_1$, $w_2$ are adjusted. In view of the calculation

$\|[2x_m-x_{m-1},x_{m-1};F]^{-1}L\|\le\dfrac{1}{1-\bar{\Phi}_0(\|2x_m-x_{m-1}-x^{*}\|,\|x_{m-1}-x^{*}\|)}$
$\le\dfrac{1}{1-\bar{\Phi}_0(2\|x_m-x^{*}\|+\|x_{m-1}-x^{*}\|,\|x_{m-1}-x^{*}\|)},$

provided that

$\|L^{-1}([z,x;F]-L)\|\le\bar{\Phi}_0(\|z-x^{*}\|,\|x-x^{*}\|)$

for all $x,z\in D$, where $\bar{\Phi}_0:B\times B\to B$ is CSNF,

$\|\xi_m\|$ can be replaced in Remark 2 by $a=\dfrac{\|L^{-1}\|}{1-\bar{\Phi}_0(3t,t)}$, provided that

$\bar{\Phi}_0(3t,t)-1=0$ has an MPS denoted as $\bar{s}$.

In this case we also require (T7) to be replaced by

  • (T7′)

    $U[x^{*},\varsigma]\subseteq D$, where $\varsigma=\max\{3k,w_1(k)\}$.
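A minimal scalar sketch of this with-memory variant, applied again to the motivational example of the introduction (the seeds $x_{-1}=0.30$, $x_0=0.35$ and the tolerance are our own illustrative choices):

```python
import math

def f(t):
    # Nondifferentiable motivational example with root t* = 1/pi.
    if t == 0:
        return 0.0
    return t**3 * math.log(math.pi**2 * t**2) + t**5 * math.sin(1.0 / t)

def dd(a, b):
    # First-order divided difference [a, b; f].
    return (f(a) - f(b)) / (a - b)

def steffensen_with_memory(x_prev, x, tol=1e-12, max_iter=25):
    # Kurchatov choice (24): xi_m = [2x_m - x_{m-1}, x_{m-1}; f]^{-1};
    # the two substeps of method (2) are otherwise unchanged.
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        xi = 1.0 / dd(2.0 * x - x_prev, x_prev)
        u = x + xi * fx
        y = x - fx / dd(u, x)
        x_prev, x = x, y - f(y) / dd(u, y)
    return x

print(steffensen_with_memory(0.30, 0.35))  # approaches 1/pi
```

Since the Kurchatov divided difference approximates $f'$ at the solution, $\xi_m$ self-tunes toward $1/f'(t^{*})$ without any derivative evaluation.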

Example 9.

Let $X=\mathbb{R}\times\mathbb{R}\times\mathbb{R}$ and $D=U[x^{*},1]$. Consider the mapping on the ball $D$ given for $\rho=(\rho_1,\rho_2,\rho_3)^{T}$ as

$F(\rho)=\Big(\dfrac{e-1}{2}\rho_1^{2}+\rho_1,\ \rho_2,\ e^{\rho_3}-1\Big)^{T}.$

The Jacobian is given by

$F'(\rho)=\begin{pmatrix}(e-1)\rho_1+1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & e^{\rho_3}\end{pmatrix}.$

It follows that the solution is $x^{*}=(0,0,0)^{T}$ and $F'(x^{*})=I$, the identity mapping, with the choice $L=F'(x^{*})$. The divided difference is defined by $[x,y;F]=\int_{0}^{1}F'(x+\theta(y-x))\,d\theta$. Then, the conditions of the local analysis are validated provided that

$\Phi_0(u_1,u_2)=\dfrac{e-1}{2}(u_1+u_2),\qquad \Phi(u_1,u_2)=\dfrac{e-1}{2}u_2,\qquad \Phi_1(u_1)=\dfrac{e-1}{2}u_1.$

Then, taking $\xi=1$, i.e., considering the case of the method (2) without memory (denoted by SM1), the radius of convergence $k$ from (3) is given as

$k=0.2747721282852604.$

If we consider the method (2) with memory (denoted by SM2), then from (24) the radius of convergence $k$ is given by

$k=0.33003245785816543.$
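The radius for SM1 can be reproduced numerically. The sketch below, for $\xi=1$ and $L=I$, finds the minimal positive solutions of $J_1(t)-1=0$ and $J_2(t)-1=0$ by bisection; treating points where a denominator becomes nonpositive as out of domain, and the bisection routine itself, are our own scaffolding:

```python
import math

E = math.e - 1.0

# Majorant functions for Example 9 with L = F'(x*) = I and xi = 1:
#   Phi0(a, b) = (e-1)(a+b)/2,  Phi(a, b) = (e-1)b/2,  Phi1(t) = (e-1)t/2,
#   w1(t) = (||I + xi*L|| + |xi|*||L||*Phi1(t)) t = (2 + Phi1(t)) t.
def w1(t):      return (2.0 + 0.5 * E * t) * t
def phi0(a, b): return 0.5 * E * (a + b)
def phi(a, b):  return 0.5 * E * b

def pos_frac(num, den):
    # Outside the domain (denominator <= 0) return +inf so that the
    # bisection treats such points as violating J_j(t) < 1.
    return num / den if den > 0 else math.inf

def J1(t):
    return pos_frac(phi(w1(t), t), 1.0 - phi0(w1(t), t))

def J2(t):
    s = J1(t) * t
    return pos_frac(phi(w1(t), s), 1.0 - phi0(w1(t), s))

def mps(g, hi, lo=1e-12):
    # Minimal positive solution of g(t) = 0 by bisection on (lo, hi),
    # assuming g < 0 near 0 and g > 0 (possibly +inf) near hi.
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if g(mid) < 0:
            lo = mid
        else:
            hi = mid
    return lo

k1 = mps(lambda t: J1(t) - 1.0, 1.0)
k3 = mps(lambda t: J2(t) - 1.0, 1.0)
k = min(k1, k3)
print(k)  # about 0.2747721282852604
```

In this example $J_1(t)-1=0$ reduces to the quadratic $\frac{e-1}{2}\big(\frac{e-1}{2}t^{2}+4t\big)=1$, whose positive root is exactly the tabulated value.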

We compare the radius of convergence obtained by the fourth order Kung–Traub method (M4) given by Sharma et al. [22], the fifth order Weerakoon method (MWM) in Sharma and Parhi [23], and the fourth and fifth order methods (PM14), (HLM5), (PM25) in Maroju et al. [12].

Method             SM1      SM2      PM14     M4       MWM      HLM5     PM25
Convergence radius 0.274772 0.330032 0.111497 0.131305 0.184350 0.02763  0.118532

Table 1. Comparison of the radii of convergence for Example 9.

Thus, from Table 1 it is clear that an enlarged convergence radius is obtained by our approach for the methods SM1 and SM2 in comparison to the existing methods.

Remark 10.

Notice that using iterates of the type $u_m=x_m+\xi F(x_m)$ may lead to very small radii of attraction balls for the method, unless a small $\xi$ is used.

Example 11.

Let $Q:X\to X$ be a mapping. Recall that the standard divided difference of order one when $X=\mathbb{R}^{k}$ is defined for $\bar{x}=(x_1,x_2,\ldots,x_k)$, $\bar{y}=(y_1,y_2,\ldots,y_k)$ and $i,j=1,2,\ldots,k$ by

$[\bar{y},\bar{x};Q]_{j,i}=\dfrac{Q_j(y_1,\ldots,y_{i-1},y_i,x_{i+1},\ldots,x_k)-Q_j(y_1,\ldots,y_{i-1},x_i,x_{i+1},\ldots,x_k)}{y_i-x_i},$

provided that $y_i\neq x_i$. It is known that for certain pairs of distinct vectors $x,y$ the formula is not applicable, namely when some of their components are equal.
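The componentwise formula above translates directly into code. The following sketch (plain Python, our own illustrative scaffolding) builds the $k\times k$ divided-difference matrix and can be checked against the secant property $[\bar{y},\bar{x};Q](\bar{y}-\bar{x})=Q(\bar{y})-Q(\bar{x})$:

```python
def divided_difference(Q, y, x):
    # Standard first-order divided difference [y, x; Q] for Q: R^k -> R^k.
    # Column i mixes the first i (resp. i-1) components of y with the
    # remaining components of x, as in the formula above; needs y[i] != x[i].
    k = len(x)
    A = [[0.0] * k for _ in range(k)]
    for i in range(k):
        zi = list(y[:i + 1]) + list(x[i + 1:])  # (y1,...,yi, x_{i+1},...,xk)
        wi = list(y[:i]) + list(x[i:])          # (y1,...,y_{i-1}, xi,...,xk)
        qz, qw = Q(zi), Q(wi)
        for j in range(k):
            A[j][i] = (qz[j] - qw[j]) / (y[i] - x[i])
    return A

# Sanity check on Q(v) = (v1^2, v2^2): the matrix is diag(y1+x1, y2+x2).
Q = lambda v: [v[0] ** 2, v[1] ** 2]
print(divided_difference(Q, [2.0, 3.0], [1.0, 1.0]))  # [[3.0, 0.0], [0.0, 4.0]]
```

Note that no differentiability of $Q$ is used, which is what allows the absolute-value terms in the system below.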

The solution is sought for the nonlinear system

$t_1^{2}-t_2+1+\dfrac{1}{9}|t_1-1|=0,$
$t_1+t_2^{2}-7+\dfrac{1}{9}|t_2|=0.$

Let $Q=(Q_1,Q_2)^{T}$ for $(t_1,t_2)\in\mathbb{R}\times\mathbb{R}$, where

$Q_1=t_1^{2}-t_2+1+\dfrac{1}{9}|t_1-1|$ and
$Q_2=t_1+t_2^{2}-7+\dfrac{1}{9}|t_2|.$

Then, the system becomes

$Q(t)=0\quad\text{for } t=(t_1,t_2)^{T}.$

The divided difference $L=[\cdot,\cdot;Q]$ belongs to the space $L(\mathbb{R}^{2})$ and is the standard $2\times 2$ matrix [10]. Let us choose $x_0=(1.15,2.36)^{T}$ and $\xi=1$. Then, the application of the method (2) gives the solution $x^{*}$ after three iterations. The solution is

$x^{*}=(x_1^{*},x_2^{*})^{T}=(1.159360850193\ldots,2.361824342093\ldots)^{T}.$

Taking into account (24) and $x_{-1}=(1.22,2.37)^{T}$, if we consider the method (2) with memory, the solution $x^{*}$ is obtained after two iterations. The numbers of iterations required to obtain the solution $x^{*}$ by the two Kurchatov methods presented in [4] are four and five, respectively. Thus, fewer iterations are required to obtain the solution by the methods SM1 and SM2 in comparison to the Kurchatov methods [4].
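As a sketch of how this run can be reproduced for the without-memory case $\xi=1$ (the stopping tolerance and the small Cramer-rule solver are our own scaffolding, not taken from [4]):

```python
def Q(t):
    # Nondifferentiable system of Example 11.
    t1, t2 = t
    return [t1**2 - t2 + 1 + abs(t1 - 1) / 9.0,
            t1 + t2**2 - 7 + abs(t2) / 9.0]

def dd(y, x):
    # Standard 2x2 divided-difference matrix [y, x; Q].
    A = [[0.0, 0.0], [0.0, 0.0]]
    for i in range(2):
        zi = list(y[:i + 1]) + list(x[i + 1:])
        wi = list(y[:i]) + list(x[i:])
        qz, qw = Q(zi), Q(wi)
        for j in range(2):
            A[j][i] = (qz[j] - qw[j]) / (y[i] - x[i])
    return A

def solve2(A, b):
    # Cramer's rule for a 2x2 linear system A s = b.
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
            (b[1] * A[0][0] - b[0] * A[1][0]) / det]

def method2(x, xi=1.0, tol=1e-12, max_iter=20):
    # Two substeps of method (2), both reusing the same u_m.
    for _ in range(max_iter):
        Fx = Q(x)
        if max(abs(v) for v in Fx) < tol:
            break
        u = [x[0] + xi * Fx[0], x[1] + xi * Fx[1]]
        s = solve2(dd(u, x), Fx)
        y = [x[0] - s[0], x[1] - s[1]]
        s = solve2(dd(u, y), Q(y))
        x = [y[0] - s[0], y[1] - s[1]]
    return x

sol = method2([1.15, 2.36])
print(sol)  # close to (1.159360850193, 2.361824342093)
```

The absolute-value terms make $Q$ nondifferentiable, yet only divided differences are needed, so the iteration runs without modification.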

Example 12.

Consider the following system of ten equations:

$\sum_{j=1,\,j\neq i}^{10}x_j-e^{-x_i}=0,\quad 1\le i\le 10.$

The initial approximations chosen are $x_0=(1,1,\ldots,1)^{T}$ and $x_{-1}=(0,0,\ldots,0)^{T}$ to obtain the solution

$x^{*}=(0.10048840033731707,\ldots,0.10048840033731707)^{T}.$

The comparison of the error and of the norm of the function for the methods SM1 and SM2 over three iterations is presented in Table 2. The stopping criterion used is $\|x_{k+1}-x_k\|+\|F(x_k)\|<10^{-100}$.

Method   k   $\|x_{k+1}-x_k\|$   $\|F(x_k)\|$
SM1      1   0.283001            1.29e-01
         2   1.31e-02            1.61e-07
         3   1.62e-08            3.13e-25
SM2      1   0.284001            6.92e-03
         2   6.98e-04            1.80e-13
         3   1.82e-14            3.11e-48

Table 2. Comparison of the performances of the methods for Example 12.

The numbers of iterations required to obtain the solution $x^{*}$ by the methods SM1 and SM2 are three and two, respectively. Thus, taking $\xi=0.5$, the method (2) with memory requires fewer iterations than the method without memory.
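The SM1 run behind Table 2 can be sketched as follows (plain Python with a small Gaussian-elimination solver as our own scaffolding; a loose tolerance replaces the $10^{-100}$ criterion, which requires multiprecision arithmetic):

```python
import math

def F(x):
    # Example 12: sum_{j != i} x_j - exp(-x_i) = 0, i = 1,...,10.
    s = sum(x)
    return [s - xi - math.exp(-xi) for xi in x]

def dd(y, x):
    # Standard divided-difference matrix [y, x; F], as in Example 11.
    k = len(x)
    A = [[0.0] * k for _ in range(k)]
    for i in range(k):
        zi = y[:i + 1] + x[i + 1:]
        wi = y[:i] + x[i:]
        qz, qw = F(zi), F(wi)
        for j in range(k):
            A[j][i] = (qz[j] - qw[j]) / (y[i] - x[i])
    return A

def solve(A, b):
    # Gaussian elimination with partial pivoting for A s = b.
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            fct = M[r][c] / M[c][c]
            for j in range(c, n + 1):
                M[r][j] -= fct * M[c][j]
    s = [0.0] * n
    for r in range(n - 1, -1, -1):
        s[r] = (M[r][n] - sum(M[r][j] * s[j] for j in range(r + 1, n))) / M[r][r]
    return s

def method2(x, xi=0.5, tol=1e-12, max_iter=15):
    # Method (2) without memory (SM1) on the 10-dimensional system.
    for _ in range(max_iter):
        Fx = F(x)
        if max(abs(v) for v in Fx) < tol:
            break
        u = [a + xi * b for a, b in zip(x, Fx)]
        y = [a - s for a, s in zip(x, solve(dd(u, x), Fx))]
        x = [a - s for a, s in zip(y, solve(dd(u, y), F(y)))]
    return x

sol = method2([1.0] * 10)
print(sol[0])  # about 0.10048840033731707
```

By symmetry all components stay equal along the iteration, so the run effectively tracks the scalar equation $9x=e^{-x}$, whose root is the tabulated value.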

Remark 13.

Summing up, in this study the local and the semilocal convergence of the method (2) are presented without Taylor series expansions. Moreover, the limitations (C1)–(C5) have been addressed as follows:

  • (C1)

The radius of convergence $k$ is provided in (3), so the initial point is selected from a certain ball centered at $x^{*}$ and of radius $k$. Moreover, the uniqueness of the solution is established in Proposition 3 and Proposition 7. Furthermore, the convergence conditions use only information on the operators appearing in the method (see Theorem 1 and Theorem 5).

  • (C2)

Computable error bounds on $\|x_m-x^{*}\|$ or $\|x_{m+1}-x_m\|$ become available from Theorem 1 and Theorem 5, respectively. Thus, the number of iterations to be performed to reach a certain error tolerance can be determined in advance.

  • (C3)

The method (2) converges to $t^{*}=\frac{1}{\pi}$ in the motivational example of the introduction if $t_0$ is chosen close enough to $t^{*}$ and inside $D$, for $\xi=1$.

  • (C4)

The results are established in the more general setting of a Banach space.

  • (C5)

    The more interesting and challenging semi-local convergence analysis is provided using majorizing sequences [2, 14].

Concluding Remarks

The present study is based on the local and semi-local analysis of two step Steffensen-type methods with and without memory. It is applicable when the problems formulated in varied areas of science and engineering are nondifferentiable. However, this methodology can also be applied to differentiable equations and to other methods utilizing inverses of linear operators. Further, numerical experiments are performed on various examples for both cases, i.e., with and without memory, which demonstrate the theoretical results. Enlarged convergence radii have been obtained by the presented approach as compared to the existing ones. In our future research the methodology shall be used to extend the applicability of multipoint and multi-step methods [2, 14, 21, 24].

Acknowledgements.

We would like to express our gratitude to the reviewers for the constructive criticism of this paper.

References