Two Step Steffensen-Type Methods with
or without Memory for Nondifferentiable Equations
Abstract.
In the current study, two-step Steffensen-type methods with and without memory, which are free of derivatives, are considered for nondifferentiable equations, and the local as well as the semi-local convergence analysis is established under generalized conditions. Numerical applications which demonstrate the theoretical results are provided. Better results, in terms of the radii of convergence balls and the number of iterations, are obtained using the proposed approach as compared to the existing ones.
Key words and phrases:
Divided difference, Steffensen-type method, Banach space, convergence, nondifferentiable equation.
2005 Mathematics Subject Classification:
41A58, 65G99, 65H10, 65T20.
1. Introduction
A significant and interesting challenge in numerical analysis is the problem of solving a nonlinear equation or system of nonlinear equations of the form
F(x) = 0, (1)
where F : Ω ⊆ B → B is a Fréchet differentiable operator on a Banach space B, and Ω is an open convex subset of B. Problems formulated as an equation like (1) through mathematical modeling [2, 6, 11, 14, 24] arise in multiple disciplines of science and engineering. Obtaining a solution of (1) in analytic form is rarely possible. Non-analytic and complicated functions are therefore handled using iterative methods, a useful computational tool which approximates the solution of (1). To overcome issues such as slow or no convergence, divergence and inefficiency, an extensive literature has developed on the convergence of iterative methods based on algebraic or geometrical considerations [14, 24]. As a result, researchers all around the world are persistently endeavoring to create higher order iterative methods [1, 3, 5, 7, 8, 9, 13, 16, 17, 18, 19, 20, 21]. Let L(B) denote the space of bounded linear operators from B into itself and let λ be a parameter.
We examine the convergence of a general Steffensen-type method free from derivatives developed by Chicharro et al. [7], which is defined for all n = 0, 1, 2, … by
w_n = x_n + λF(x_n), y_n = x_n − A_n^{-1}F(x_n), x_{n+1} = y_n − A_n^{-1}F(y_n), (2)
where A_n = [w_n, x_n; F] and [·, ·; F] : Ω × Ω → L(B) is a divided difference of order one [2, 14]. The beauty of the method (2) lies in the fact that if λ is a nonzero constant, then the method is without memory, whereas if λ is taken to be the Kurchatov operator [7], then it becomes a method with memory. The fifth convergence order of the method (2) is proved utilizing Taylor series expansions in [7], provided that λ is chosen in the latter way, and by assuming the existence of at least the fourth derivative F⁗, which does not appear in the method. Hence, the application is limited to solving nonlinear equations (1) where the operator F is at least that many times differentiable. But the method may converge even if F⁗ does not exist.
For instance, let T = [−1/2, 3/2] and consider the function f defined on T as
f(t) = t³ ln t² + t⁵ − t⁴ if t ≠ 0, and f(0) = 0.
Then, t* = 1 solves the equation f(t) = 0, and f‴(t) = 6 ln t² + 60t² − 24t + 22. Clearly, f‴ is not continuous at t = 0. As a result, the convergence of the method (2) to the solution t* = 1 cannot be assured using the results in [7], although the method converges.
Moreover, notice that the method (2) does not contain any derivatives. However, the iterates which contain the point w_n = x_n + λF(x_n) may lead to loss of convergence when F has large variations near the solution [15].
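To fix ideas, the following minimal Python sketch implements a scalar version of method (2) in the form reconstructed above (the same divided difference A_n is reused in both substeps); the routine name, the parameter values and the test call are ours, not from [7]:

```python
import math

def steffensen_two_step(F, x, lam=0.1, tol=1e-12, max_iter=50):
    """A sketch of method (2) without memory for a scalar equation F(t) = 0:
    w_n = x_n + lam*F(x_n), y_n = x_n - F(x_n)/A_n, x_{n+1} = y_n - F(y_n)/A_n,
    where A_n = [w_n, x_n; F] is the divided difference of order one."""
    for n in range(1, max_iter + 1):
        Fx = F(x)
        w = x + lam * Fx                  # assumes F(x) != 0, so that w != x
        A = (F(w) - Fx) / (w - x)         # divided difference; no derivatives
        y = x - Fx / A                    # first substep
        x_new = y - F(y) / A              # second substep reuses A
        if abs(x_new - x) < tol:
            return x_new, n
        x = x_new
    return x, max_iter

# Motivational example: f(t) = t^3 ln t^2 + t^5 - t^4, f(0) = 0, solution t* = 1
f = lambda t: t**3 * math.log(t**2) + t**5 - t**4 if t != 0 else 0.0
root, iters = steffensen_two_step(f, 0.9)
```

The sketch evaluates F three times per iteration and uses no derivative information at all, which is what makes schemes of this type attractive for nondifferentiable problems.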
Some other limitations of the local convergence analysis provided in [7] are:
-
The initial guess is “a shot in the dark” and no information is available on the uniqueness of the solution.
-
A priori upper bounds on ‖x_n − x*‖ are not given, x* being a solution of the equation (1). Hence, the number of iterations to be performed to reach a predecided error tolerance is not known.
-
The convergence of the method is not assured (although it may converge to x*) if at least F⁗ does not exist.
-
The results are limited only to the finite-dimensional Euclidean space setting.
-
The more interesting semi-local convergence is not given in [7].
The same concerns exist for numerous other methods with no derivatives [7, 21]. So, the technique of this study can also be used to extend the applicability of such methods along the same lines.
The main feature of the current study is that it addresses all the aforementioned limitations. In particular, the local convergence is based on the more general ω-continuity condition [1, 2, 17] and uses only information on the operators appearing in the method. Moreover, the semi-local convergence, which utilizes majorizing sequences [1, 2], is also provided.
The novelty of the article lies in the fact that the process leading to the aforementioned benefits does not rely on the particular method (2); it can be utilized on other methods involving inverses of linear operators in a similar manner. The numerical study includes results for both cases of the method (2), i.e., with memory and without memory. The results show that fewer iterations are required to obtain the solution and that larger convergence radii are obtained using the presented approach as compared to the existing ones. The efficiency and the computational benefits have been discussed in [7]; hence, they are not repeated in the present article.
The paper is structured as follows: the local convergence of the method (2) is followed by the semi-local convergence. The numerical applications and the concluding remarks complete the paper.
Analysis I: Local
In this section, the local convergence of (2) for solving (1) is established. Let U(x*, r) and U[x*, r] denote the open and closed balls, respectively, with center x* and radius r > 0. Set M = [0, +∞). The hypotheses for the local convergence analysis are: Assume there exist
-
Functions , , which are continuous, symmetric and nondecreasing (CSNF) such that admits a minimal positive solution (MPS) denoted as . Let .
-
A CSNF such that for , where
admits MPS in denoted as . Let .
-
admits a MPS in denoted by . Let .
-
For , where
has a MPS in denoted by .
The radius of convergence r is defined as the minimum of the minimal positive solutions introduced above. (3) It follows by this definition that for all t ∈ [0, r)
(4) (5) and
(6) -
There exist a solution x* ∈ Ω of the equation (1) and an invertible operator T so that for all x, y ∈ Ω,
and
Define the set .
-
For all ,
and
-
, where
.
The analysis for the method (2) follows by means of the above hypotheses.
Theorem 1.
Assume the above hypotheses are valid. Then, the following assertions hold:
(7)
(8)
(9)
and the sequence is convergent to , provided that the initial point .
Proof.
The estimate (10) and the celebrated Banach lemma on invertible linear operators imply the invertibility of A₀ and the estimate
(11)
Thus, the iterate y₀ exists and
(12)
by the first substep of (2). It follows by the estimates (3), (6), (11) and (12) that
so
(13)
Remark 2.
The real function “ω” can be selected using the assumption (18) and the following calculations:
Thus, a possible choice for the function is
(17)
The function can be further specified once the linear operator λ is made precise.
A popular choice is λ = −F′(x*)^{-1}. But in this case, although there are no derivatives in the method (2), it cannot be used to solve nondifferentiable equations under the previous assumptions, since we assume that x* is a simple solution (i.e., F′(x*) is invertible). Thus, λ should be chosen so that the functions “ω” are as tight as possible, but not in this way in the case of nondifferentiable equations.
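Numerically, the radii appearing in the hypotheses are minimal positive solutions of scalar equations built from the majorant functions. The sketch below locates such a solution by a sign scan followed by bisection; the Lipschitz-type majorants `g1`, `g2` and the constants `L0`, `L` are illustrative assumptions of ours, not data from the hypotheses above:

```python
def minimal_positive_solution(g, t_max=1.0, n=10_000, bisect_iters=60):
    """Return the smallest t in (0, t_max] with g(t) = 0, located by a sign
    scan over a uniform grid followed by bisection; None if g keeps one sign."""
    prev_t, prev_g = 1e-12, g(1e-12)
    for k in range(1, n + 1):
        t = k * t_max / n
        gt = g(t)
        if (prev_g < 0 <= gt) or (prev_g > 0 >= gt):
            a, b = prev_t, t
            for _ in range(bisect_iters):
                m = 0.5 * (a + b)
                if (g(a) < 0) == (g(m) < 0):
                    a = m
                else:
                    b = m
            return 0.5 * (a + b)
        prev_t, prev_g = t, gt
    return None

# Hypothetical Lipschitz-type majorants with constants L0, L (our assumption):
L0, L = 2.0, 2.5
g1 = lambda t: L0 * t - 1.0                    # equation of the type w0(t) - 1 = 0
g2 = lambda t: L * t / (1.0 - L0 * t) - 1.0    # equation of the type h(t) - 1 = 0
rho0 = minimal_positive_solution(g1)                    # = 1/L0 = 0.5
r = minimal_positive_solution(g2, t_max=0.999 * rho0)   # = 1/(L0 + L)
```

The radius of convergence is then the minimum of the solutions so obtained, as in (3).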
The isolation of the solution domain is specified in the next result.
Proposition 3.
Assume the conditions of the local analysis are validated on the ball U(x*, ρ) for some ρ > 0, and there exists ρ₁ ≥ ρ such that
ω₀(ρ, ρ₁) < 1,
where ω₀ is CSNF.
Then, the equation F(x) = 0 is uniquely solvable by x* in the domain Ω₁ = Ω ∩ U[x*, ρ₁].
Proof.
Suppose there exists a solution q ∈ Ω₁ of the equation F(x) = 0 with q ≠ x*. Define the divided difference Q = [q, x*; F]. Then, we get
(18)
so Q is invertible. Thus, from the identity
0 = F(q) − F(x*) = Q(q − x*)
we conclude q = x*. ∎
Remark 4.
Notice that a possible choice is ρ = r.
Analysis II: Semi-local
This time, the roles of x* and the functions “ω” are taken over by x₀ and the functions “v” made precise below.
Assume there exist:
-
Functions CSNF , so that has MPS in denoted as . Let .
-
Functions CSNF , . Let
-
There exist a point x₀ ∈ Ω and an invertible linear operator T so that for all x, y ∈ Ω,
and
It follows by and that
so it is invertible; thus y₀ exists and consequently we can let t₁ = ‖y₀ − x₀‖.
Consider the scalar sequence {t_n} generated for t₀ = 0 and each n = 0, 1, 2, … as
(19) and
-
and t_n ≤ b for some b > 0. It follows from the definition (19) and this hypothesis that the sequence {t_n} is nondecreasing and bounded from above by b, and there exists t* ∈ (0, b] so that lim_{n→∞} t_n = t*. Notice that t* is the least upper bound of the scalar sequence {t_n}, which is unique.
-
For each ,
and
and
-
, where
Then, as in the local case, we present the semi-local result.
Theorem 5.
Under the above assumptions, there exists a solution x* ∈ U[x₀, t*] of the equation F(x) = 0 so that
‖x* − x_n‖ ≤ t* − t_n. (20)
Proof.
Remark 6.
The function “v” can be determined analogously to the function “ω” as follows:
Assume: There exist FCN such that
for all points involved. Then, we can choose the function “v” accordingly.
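Since the concrete recursion (19) did not survive the layout, the following sketch illustrates the majorizing-sequence technique with a generic Kantorovich-style recursion (a stand-in under our own assumptions, not the paper's (19)): one verifies that {t_n} is nondecreasing and bounded above, so that its limit t* supplies the bound (20).

```python
def majorizing_sequence(L, eta, n_terms=15):
    """Generic Kantorovich-style majorizing sequence (illustrative stand-in):
    t_0 = 0, t_1 = eta, t_{n+1} = t_n + L*(t_n - t_{n-1})**2 / (2*(1 - L*t_n)).
    If nondecreasing and bounded above, it converges to a limit t*, and the
    iterates then obey ||x* - x_n|| <= t* - t_n, as in (20)."""
    t = [0.0, eta]
    for _ in range(n_terms - 2):
        t.append(t[-1] + L * (t[-1] - t[-2]) ** 2 / (2.0 * (1.0 - L * t[-1])))
    return t

seq = majorizing_sequence(L=1.0, eta=0.4)           # L*eta = 0.4 <= 1/2: converges
assert all(s <= r for s, r in zip(seq, seq[1:]))    # nondecreasing
t_star = seq[-1]                                    # approximately 0.5528
```

This recursion is Newton's method applied to the scalar majorant p(t) = Lt²/2 − t + η, whose smallest positive root is the limit t*; the method-specific recursion (19) plays exactly this role in Theorem 5.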
The next result determines the isolation of a solution region.
Proposition 7.
Assume there exists a solution y* ∈ U(x₀, ρ₂) of the equation F(x) = 0 for some ρ₂ > 0; the first condition of the semi-local analysis is validated on the ball U(x₀, ρ₂), and there exists ρ₃ ≥ ρ₂ such that
v₀(ρ₂, ρ₃) < 1,
where v₀ is FCN.
Set Ω₂ = Ω ∩ U[x₀, ρ₃]. Then, the only solution of the equation F(x) = 0 in the region Ω₂ is y*.
Proof.
Let z* ∈ Ω₂ satisfy F(z*) = 0. Define the divided difference Q₁ = [z*, y*; F]. This is possible if z* ≠ y*. Then, we obtain in turn by the above conditions that
v₀(ρ₂, ρ₃) < 1,
thus Q₁ is invertible. But then the identity
0 = F(z*) − F(y*) = Q₁(z* − y*)
leads to z* = y*, a contradiction, since then the divided difference cannot be defined. Therefore, we conclude that z* = y*. ∎
Remark 8.
-
The point can also be replaced by in condition .
-
If conditions – are all validated, let and .
Applications
Method (2) can turn from a method without memory into a method with memory if the constant λ turns into a suitable linear operator. As in [7], choose λ to be based on the Kurchatov operator [2, 14]
λ_n = −B_n^{-1}, B_n = [2x_n − x_{n−1}, x_{n−1}; F]. (24)
That is, we obtain the method with memory derived from (2) with λ replaced by (24). Then, the local as well as the semi-local results hold, say, if the majorant functions are adjusted. In view of the calculation
provided that
where is CSNF for all .
Then, can be replaced in Remark 2 by provided that
has a MPS denoted as .
In this case we also require to be replaced by
-
where .
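For a concrete impression of the role of memory, here is a scalar sketch of the with-memory variant, assuming the reconstruction (24), i.e., λ_n = −B_n^{-1} with B_n the Kurchatov divided difference; the function names and the starting values are ours:

```python
def steffensen_with_memory(F, x0, x_prev, tol=1e-12, max_iter=50):
    """A sketch of method (2) with memory for a scalar equation, with lambda_n
    built from the Kurchatov divided difference
    B_n = [2x_n - x_{n-1}, x_{n-1}; F] as in (24); requires x0 != x_prev."""
    x = x0
    for n in range(1, max_iter + 1):
        B = (F(2 * x - x_prev) - F(x_prev)) / (2 * (x - x_prev))  # Kurchatov
        lam = -1.0 / B                   # lambda_n approximates -1/F'(x_n)
        Fx = F(x)
        w = x + lam * Fx
        A = (F(w) - Fx) / (w - x)
        y = x - Fx / A
        x_new = y - F(y) / A
        if abs(x_new - x) < tol:
            return x_new, n
        x_prev, x = x, x_new
    return x, max_iter

# Example run on F(t) = t^3 - t - 1, with a root near t = 1.3247
root, iters = steffensen_with_memory(lambda t: t**3 - t - 1, 1.0, 0.9)
```

Reusing the already computed iterate x_{n−1} costs no additional function evaluations per step, which is where the efficiency gain of the memory variant comes from.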
Example 9.
Let B = R³ and Ω = U[0, 1]. Consider the mapping F on the ball Ω given for v = (v₁, v₂, v₃)ᵀ as
F(v) = (e^{v₁} − 1, ((e − 1)/2)v₂² + v₂, v₃)ᵀ.
The Jacobian is given by
F′(v) = diag(e^{v₁}, (e − 1)v₂ + 1, 1).
It follows that the solution is x* = (0, 0, 0)ᵀ and that F′(x*) is the identity mapping, with the choice T = F′(x*). The divided difference is defined by [x, y; F] = ∫₀¹ F′(y + θ(x − y)) dθ. Then, the above conditions are validated provided that the majorant functions are chosen accordingly.
Then, taking λ to be a constant, i.e., considering the case of the method (2) without memory (denoted by M1), the radius of convergence from (4) is given as
If we consider the method (2) with memory (denoted by M2), then from (24) the radius of convergence is given by
We compare the radii of convergence with those obtained by the fourth-order Kung–Traub method given by Sharma et al. [22], the fifth-order Weerakoon method in Sharma and Parhi [23], and the fourth- and fifth-order methods in Maroju et al. [12].
| Method | M1 | M2 | [22] | [23] | [12] (fourth order) | [12] (fifth order) |
|---|---|---|---|---|---|---|
| Convergence radius | | | | | | |
Thus, from Table 1 it is clear that an enlarged convergence radius is obtained by our approach for the methods M1 and M2 in comparison to the existing methods.
Remark 10.
Notice that using iterates of the type x_n + λF(x_n) may lead to very small radii of the attraction balls for the method, unless a small λ is used.
Example 11.
Let F : D ⊆ Rᵐ → Rᵐ be a mapping. Recall that the standard divided difference of order one is defined for x = (x₁, …, xₘ)ᵀ, y = (y₁, …, yₘ)ᵀ, x ≠ y, by
[x, y; F]_{i,j} = (F_i(x₁, …, x_j, y_{j+1}, …, yₘ) − F_i(x₁, …, x_{j−1}, y_j, …, yₘ)) / (x_j − y_j), 1 ≤ i, j ≤ m,
provided that x_j ≠ y_j for each j ∈ {1, …, m}. It is known that for certain pairs of distinct vectors x, y the formula is not applicable, namely when some of their components are equal.
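The componentwise formula above translates directly into code; the helper below (our naming, assuming x_j ≠ y_j for every j) builds the standard matrix of [10]:

```python
import numpy as np

def divided_difference(F, x, y):
    """Standard first-order divided difference matrix [x, y; F] of [10]:
    column j uses F at (x_1,...,x_j, y_{j+1},...,y_m) and at
    (x_1,...,x_{j-1}, y_j,...,y_m), divided by x_j - y_j."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    m = x.size
    M = np.empty((m, m))
    for j in range(m):
        u = np.concatenate((x[:j + 1], y[j + 1:]))
        v = np.concatenate((x[:j], y[j:]))
        M[:, j] = (F(u) - F(v)) / (x[j] - y[j])
    return M

# Sanity check: the secant property [x, y; F](x - y) = F(x) - F(y)
F = lambda u: np.array([u[0]**2 + u[1], u[1] * u[2], u[2]**2 - u[0]])
x, y = np.array([1.0, 2.0, 3.0]), np.array([0.5, 1.5, 2.5])
assert np.allclose(divided_difference(F, x, y) @ (x - y), F(x) - F(y))
```

The telescoping construction guarantees the secant property, while equal components x_j = y_j make the corresponding column undefined, which is precisely the restriction noted above.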
The solution is sought for the nonlinear system
Let for , where
Then, the system becomes
The divided difference [·, ·; F] belongs to the space L(Rᵐ) and is the standard matrix in [10]. Let us choose λ and the initial point accordingly. Then, the application of the method (2) without memory gives the solution after three iterations.
Taking into account (24), if we consider the method (2) with memory, the solution is obtained after two iterations. The numbers of iterations required to obtain the solution by the two Kurchatov methods presented in [4] are four and five, respectively. Thus, fewer iterations are required to obtain the solution by the methods M1 and M2 in comparison to the Kurchatov methods [4].
Example 12.
Consider the following system of ten equations:
The initial approximations chosen are and to obtain the solution
The comparison of the error norms and the function norms for the methods M1 and M2 taking three iterations is presented in Table 2. The stopping criterion used is a prescribed tolerance on ‖x_{n+1} − x_n‖.
| Methods | n | ‖x_{n+1} − x_n‖ | ‖F(x_{n+1})‖ |
|---|---|---|---|
| M1 | | | |
| M2 | | | |
The numbers of iterations required to obtain the solution by the methods M1 and M2 are three and two, respectively. Thus, the method (2) with memory requires fewer iterations than the method without memory.
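The iteration counts can be reproduced in outline with the harness below, which compares M1 (λ a small constant) and M2 (Kurchatov memory as in (24)). Since the ten-equation system did not survive extraction, the coupled quadratic system `F` and all parameter values here are hypothetical stand-ins; `divided_difference` is the helper sketched in Example 11:

```python
import numpy as np

def divided_difference(F, x, y):
    # Standard first-order divided difference matrix of [10] (see Example 11)
    m = x.size
    M = np.empty((m, m))
    for j in range(m):
        u = np.concatenate((x[:j + 1], y[j + 1:]))
        v = np.concatenate((x[:j], y[j:]))
        M[:, j] = (F(u) - F(v)) / (x[j] - y[j])
    return M

def two_step(F, x, shift):
    """One sweep of method (2); 'shift' maps F(x) to lambda*F(x)."""
    Fx = F(x)
    w = x + shift(Fx)
    A = divided_difference(F, w, x)
    y = x - np.linalg.solve(A, Fx)
    return y - np.linalg.solve(A, F(y))

def run(F, x0, with_memory, lam=0.01, tol=1e-12, max_iter=50):
    x, x_prev = np.asarray(x0, float), np.asarray(x0, float) + 0.1
    for n in range(1, max_iter + 1):
        if with_memory:   # lambda_n = -B_n^{-1}, as in the reconstruction (24)
            B = divided_difference(F, 2 * x - x_prev, x_prev)
            shift = lambda v: -np.linalg.solve(B, v)
        else:             # constant lambda: the method without memory
            shift = lambda v: lam * v
        x_prev, x_new = x, two_step(F, x, shift)
        if np.linalg.norm(x_new - x) < tol:
            return x_new, n
        x = x_new
    return x, max_iter

# Hypothetical coupled quadratic system in R^10 (a stand-in for Example 12)
F = lambda u: u**2 + u - 1.0 + 0.1 * (u.sum() - u)
x0 = 0.5 * np.ones(10)
for memory in (False, True):
    _, iters = run(F, x0, with_memory=memory)
    print("with memory:" if memory else "without memory:", iters, "iterations")
```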
Remark 13.
Summing up, in this study the local and semi-local convergence analyses of the method (2) are presented without Taylor series expansions. Moreover, the limitations listed in the introduction have been addressed as follows:
-
The radius of convergence r is provided in (3). So, the initial point is selected from a certain ball centered at x* and of radius r. Moreover, the uniqueness of the solution is established in Proposition 3 and Proposition 7. Furthermore, the convergence conditions use only the operators appearing in the method (see Theorem 1 and Theorem 5).
-
The method (2) converges in the motivational example of the introduction if the initial point is chosen close enough to t* = 1 and inside the ball U(t*, r).
-
The results are established in the more general setting of a Banach space.
Concluding Remarks
The present study is based on the local and semi-local analysis of two-step Steffensen-type methods with and without memory. It is applicable when the problems formulated from varied areas of science and engineering are nondifferentiable. However, this methodology can also be applied to solve differentiable equations and to other methods utilizing the inverses of linear operators. Further, numerical experiments are performed on various examples for both cases, i.e., with and without memory, that demonstrate the theoretical results. Enlarged convergence radii have been obtained by the presented approach as compared to the existing ones. In our future research, the methodology shall be used to extend the applicability of multipoint and multi-step methods [2, 14, 21, 24].
Acknowledgements.
We would like to express our gratitude to the reviewers for the constructive criticism of this paper.
References
- [1] I.K. Argyros, G. Deep and S. Regmi, Extended Newton-like Midpoint Method for Solving Equations in Banach Space, Foundations, 3 (2023) no. 1, pp. 82–98. http://doi.org/10.3390/foundations3010009
- [2] I.K. Argyros and Á.A. Magreñán, Iterative Methods and their Dynamics with Applications, CRC Press, New York, 2017.
- [3] I.K. Argyros and H. Ren, Efficient Steffensen-type algorithms for solving nonlinear equations, International Journal of Computer Mathematics, 90 (2013) no. 3, pp. 691–704. http://doi.org/10.1080/00207160.2012.737461
- [4] I.K. Argyros and S. Shakhno, Extended Two-Step-Kurchatov Method for Solving Banach Space Valued Nondifferentiable Equations, Int. J. Appl. Comput. Math., 6 (2020), Article 32. http://doi.org/10.1007/s40819-020-0784-y
- [5] I.K. Argyros and G. Deep, Improved Higher Order Compositions for Nonlinear Equations, Foundations, 3 (2023) no. 1, pp. 25–36. http://doi.org/10.3390/foundations3010003
- [6] S.C. Chapra and R.P. Canale, Numerical Methods for Engineers, Sixth Edition, McGraw-Hill Book Company, New York, 2010.
- [7] F.I. Chicharro, A. Cordero, N. Garrido and J.R. Torregrosa, On the improvement of the order of convergence of iterative methods for solving nonlinear systems by means of memory, Appl. Math. Lett., 104 (2020), Article 106277.
- [8] A. Cordero and J. R. Torregrosa, Variants of Newton’s method using fifth order quadrature formula, Appl. Math. Comput., 190 (2007), pp. 686–698.
- [9] G. Deep, R. Sharma and I. K. Argyros, On convergence of a fifth-order iterative method in Banach spaces, Bulletin of Mathematical Analysis and Applications, 13 (2021) no. 1, pp. 16–40.
- [10] M. Grau-Sánchez, A. Grau and M. Noguera, Frozen divided difference scheme for solving systems of nonlinear equations, J. Comput. Appl. Math., 235 (2011), pp. 1739–1743.
- [11] J.D. Hoffman and S. Frankel, Numerical Methods for Engineers and Scientists, Second Edition, Marcel Dekker, Inc. New York, 1992.
- [12] P. Maroju, Á.A. Magreñán, Í. Sarría and A. Kumar, Local convergence of fourth and fifth order parametric family of iterative methods in Banach spaces, J. Math. Chem., 58 (2020), pp. 686–705.
- [13] M. Narang, S. Bhatia, A.S. Alshomrani and V. Kanwar, General efficient class of Steffensen type methods with memory for solving systems of nonlinear equations, J. Comput. Appl. Math., 352 (2019), pp. 23–39.
- [14] J.M. Ortega and W.C. Rheinboldt, Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, New York, USA, 1970.
- [15] I. Păvăloiu and E. Cătinaș, A New Optimal Method of Order Four of Hermite–Steffensen Type, Mediterr. J. Math., 19 (2022). http://doi.org/10.1007/s00009-022-02030-5
- [16] M.S. Petković and J.R. Sharma, On some efficient derivative-free iterative methods with memory for solving systems of nonlinear equations, Numer. Algor., 71 (2016), pp. 447–457.
-
[17]
S. Regmi, I. K. Argyros, G. Deep and L. Rathour,
A Newton-like Midpoint Method for Solving Equations in Banach Space, Foundations, 3 (2023),
pp. 154–166. http://doi.org/10.3390/foundations3020014
- [18] V.E. Shamanskii, On a modification of the Newton method, Ukrain. Mat. Zh., 19 (1967), pp. 133–138.
- [19] J.R. Sharma, R.K. Guha and R. Sharma, An efficient fourth order weighted-Newton method for systems of nonlinear equations, Numer. Algor., 62 (2013), pp. 307–323.
- [20] R. Sharma, G. Deep and A. Bahl, Design and Analysis of an Efficient Multi step Iterative Scheme for systems of Nonlinear Equations, Journal of Mathematical Analysis, 12 (2021) no. 2, pp. 53–71.
- [21] R. Sharma and G. Deep, A study of the local convergence of a derivative free method in Banach spaces, J. Anal., 31 (2022), pp. 1257–1269.
- [22] R. Sharma, S. Kumar and I.K. Argyros, Generalized Kung–Traub method and its multi-step iteration in Banach spaces, J. Complexity, 54 (2019), Article 101400. http://doi.org/10.1016/j.jco.2019.02.003
- [23] D. Sharma and S.K. Parhi, On the local convergence of modified Weerakoon's method in Banach spaces, J. Anal., 28 (2019) no. 1. http://doi.org/10.1007/s41478-019-00216-x
- [24] J.F. Traub, Iterative Methods for the Solution of Equations, Prentice-Hall, Englewood Cliffs, New Jersey, 1964.
