ON THE CONVERGENCE OF STEFFENSEN–TYPE METHODS USING RECURRENT FUNCTIONS

Abstract. We introduce the new idea of recurrent functions to provide a new semilocal convergence analysis for Steffensen-type methods (STM) in a Banach space setting. It turns out that our sufficient convergence conditions are weaker, and the error bounds are tighter, than in earlier studies in many interesting cases [1]–[5], [12], [14]–[17], [23], [24], [26]. Applications and numerical examples, involving a nonlinear integral equation of Chandrasekhar type and a differential equation, are also provided in this study.


INTRODUCTION
In this study we are concerned with the problem of approximating a locally unique solution x^* of the equation
(1) F(x) + G(x) = 0,
where F is a Fréchet-differentiable operator defined on a convex subset D of a Banach space X with values in X, and G : D −→ X is a continuous operator.
A large number of problems in applied mathematics and engineering are solved by finding the solutions of certain equations. For example, dynamic systems are mathematically modeled by difference or differential equations, and their solutions usually represent the states of the systems. For the sake of simplicity, assume that a time-invariant system is driven by the equation ẋ = T(x), for some suitable operator T, where x is the state. Then the equilibrium states are determined by solving equation (1). Similar equations are used in the case of discrete systems. The unknowns of engineering equations can be functions (difference, differential, and integral equations), vectors (systems of linear or nonlinear algebraic equations), or real or complex numbers (single algebraic equations with single unknowns). Except in special cases, the most commonly used solution methods are iterative: starting from one or several initial approximations, a sequence is constructed that converges to a solution of the equation. Iteration methods are also applied to optimization problems, in which case the iteration sequences converge to an optimal solution of the problem at hand. Since all of these methods have the same recursive structure, they can be introduced and discussed in a general framework.
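As a concrete instance of such an iterative method, here is a minimal numerical sketch of the classical scalar Steffensen iteration, a special case of the methods studied in this paper, with the common choice q(x) = x + f(x); the test function and starting point are illustrative assumptions, not taken from the paper.

```python
import math

def steffensen(f, x0, tol=1e-12, max_iter=50):
    # x_{n+1} = x_n - f(x_n)^2 / (f(x_n + f(x_n)) - f(x_n)):
    # the divided difference (f(q(x)) - f(x)) / (q(x) - x), with
    # q(x) = x + f(x), replaces the derivative f'(x).
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        denom = f(x + fx) - fx
        if denom == 0:
            raise ZeroDivisionError("degenerate divided difference")
        x -= fx * fx / denom
    return x

# example: the fixed point of cos, i.e. f(x) = cos(x) - x
root = steffensen(lambda x: math.cos(x) - x, 0.5)
```

Like Newton's method, the iteration is quadratically convergent near a simple root, but it requires no derivative: the divided difference plays the role of f′(x).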
Numerical examples and special cases are also given in this study to show that our results apply to equations that earlier ones cannot handle, and also provide tighter error bounds.

SEMILOCAL CONVERGENCE ANALYSIS OF (STM)
We need the following result on majorizing sequences for (STM).
Moreover, the following estimates hold for all n ≥ 1: (11). Proof. We shall show, using induction on the integer m, that L t_{m+1} < 1.
Finally, the sequence {t_n} is increasing and bounded above by t^{**}, and as such it converges to its unique least upper bound t^*.
That completes the proof of Lemma 1.
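To illustrate the behaviour asserted in Lemma 1 (an increasing majorizing sequence, bounded above, converging to its least upper bound t^*), here is a small numerical sketch. The recurrence below is a simplified stand-in, not the paper's exact estimates (11), which are not legible in this extraction; the values of L and η are also illustrative assumptions.

```python
def majorizing(L, eta, n_terms=30):
    # hypothetical model recurrence (NOT the paper's estimates (11)):
    #   t_0 = 0,  t_1 = eta,
    #   t_{n+1} = t_n + L * (t_n - t_{n-1})**2 / (1 - L * t_n),
    # which is increasing and bounded above when L * eta is small enough
    seq = [0.0, eta]
    for _ in range(n_terms):
        t_prev, t = seq[-2], seq[-1]
        seq.append(t + L * (t - t_prev) ** 2 / (1.0 - L * t))
    return seq

seq = majorizing(L=1.0, eta=0.1)
t_star = seq[-1]   # numerically, the least upper bound t^* of {t_n}
```

The increments shrink so fast that the sequence settles numerically at its least upper bound after only a few terms, which is exactly the mechanism the semilocal analysis exploits.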
Remark 2. The hypotheses of Lemma 1 have been left as uncluttered as possible. Note that these hypotheses involve computations only at the initial point x_0. Below, we shall provide simpler but stronger hypotheses under which the hypotheses of Lemma 1 hold.
Theorem 4. Let F : D ⊆ X −→ X be a Fréchet-differentiable operator, G : D −→ X be a continuous operator, [x, y; F] be a divided difference of order one of F on D satisfying (3), q : D −→ X a continuous operator, and let A(x) ∈ L(X, X) given in (2) be an approximation of F'(x). Assume that there exist an open convex subset D of X, x_0 ∈ D, a bounded inverse A_0^{-1} of A_0, and constants b and β such that the stated conditions hold for all x, y ∈ D, and the hypotheses of Lemma 1 or Lemma 3 hold with L = (1 + b) β. Then the sequence {x_n} (n ≥ 0) generated by (STM) is well defined, remains in U(x_0, t^*) for all n ≥ 0, and converges to a solution x^* of equation (1). Moreover, the following estimates hold for all n ≥ 0: (35)

and
(36) where the sequence {t_n} (n ≥ 0) and t^* are given in Lemma 1. Furthermore, the solution x^* of equation (1) is unique in U(x_0, t^*) provided that: Proof. We shall show, using induction on m ≥ 0: (37). For every z ∈ U(x_1, t^* − t_1), it follows that z ∈ U(x_0, t^* − t_0). We also have: That is, (37) and (38) hold for m = 0. Assuming they hold for all n ≤ m, we obtain:
Thus, for every z ∈ U(x_{m+2}, t^* − t_{m+2}), we have: which shows (38) for all m ≥ 0. Lemma 1 or 3 implies that the sequence {t_n} is Cauchy. Moreover, it follows from (37) and (38) that {x_n} (n ≥ 0) is also a Cauchy sequence in the Banach space X, and as such it converges to some x^* ∈ U(x_0, t^*) (since U(x_0, t^*) is a closed set).
We also have by (42) and (43): Hence, by the continuity of the operators F and G we obtain: Furthermore, estimate (36) is obtained from (35) by using standard majorization techniques [2], [4], [14]. Finally, to show that x^* is the unique solution of equation (1) in U(x_0, t^*), as in (42) and (43), we obtain in turn, for y^* ∈ U(x_0, t^*) with F(y^*) + G(y^*) = 0, the estimate: Note that t^* can be replaced by t^{**} given by (10) in the uniqueness hypothesis, provided that U(x_0, t^{**}) ⊆ D, or in all hypotheses of the theorem. Remark 5. A direct comparison between earlier results and ours is not possible at this level of generality. Let us then set G = 0 on D. Păvăloiu [15]–[17] has carried out extensive research on Steffensen's method in this case, under various conditions. In particular, in [17], Păvăloiu extended some results of ours [1] from the Secant method to Steffensen's method, using a set of hypotheses including ‖x − q(x)‖ ≤ c_7 ‖F(x)‖, to provide a semilocal convergence theorem for Steffensen's method.
Here are some advantages of our approach: (a) Our results are given in affine invariant form. The advantages of affine invariant over non-affine invariant results have been explained in [4], [12]. (b) Hypotheses (28)–(30) and (32) are simpler and weaker than (45)–(48). In a future paper, we hope to find a result similar to Lemma 1, but where conditions (45)–(48) are used, so that we can make a more direct comparison between our results and the corresponding ones in [17].
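For G ≠ 0, the iteration analyzed in Theorem 4 can be sketched numerically in R^2. The particular choices of q, F, and G below are illustrative assumptions (the paper does not fix them); A(x) = [x, q(x); F] is built from the standard componentwise divided difference of order one.

```python
def divided_difference(F, x, y):
    # standard componentwise divided difference of order one, [x, y; F]:
    # column j uses values of F that differ only in the j-th argument, so
    # [x, y; F](y - x) = F(y) - F(x) holds (assumes y[j] != x[j]).
    n = len(x)
    A = [[0.0] * n for _ in range(n)]
    for j in range(n):
        u = list(y[:j + 1]) + list(x[j + 1:])
        v = list(y[:j]) + list(x[j:])
        Fu, Fv = F(u), F(v)
        for i in range(n):
            A[i][j] = (Fu[i] - Fv[i]) / (y[j] - x[j])
    return A

def solve2(A, b):
    # Cramer's rule for the 2x2 linear system A h = b
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

def stm(F, G, x0, iters=30):
    # x_{n+1} = x_n - A(x_n)^{-1} (F(x_n) + G(x_n)),  A(x) = [x, q(x); F]
    x = list(x0)
    for _ in range(iters):
        r = [Fi + Gi for Fi, Gi in zip(F(x), G(x))]
        if max(abs(ri) for ri in r) < 1e-10:
            break                                  # residual small: stop
        q = [xi + ri for xi, ri in zip(x, r)]      # q(x) = x + (F + G)(x)
        h = solve2(divided_difference(F, x, q), r)
        x = [xi - hi for xi, hi in zip(x, h)]
    return x

# smooth part F and a continuous, nondifferentiable part G (illustrative)
F = lambda x: [x[0] ** 2 + x[1] - 3.0, x[0] + x[1] ** 2 - 3.0]
G = lambda x: [0.1 * abs(x[0] - 1.0), 0.0]
sol = stm(F, G, [1.5, 1.5])
```

Note that only F enters the divided difference; the nondifferentiable part G appears only in the residual, which is what lets (STM) handle equations with a continuous but non-smooth term.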

SPECIAL CASES AND APPLICATIONS
Application 6. Let q(x) = x, G(x) = 0 (x ∈ D). We can now compare our Theorem 4 with the Newton–Kantorovich theorem for solving equations in the interesting case of Newton's method [4], [14].
The Newton–Kantorovich hypothesis for solving nonlinear equations, famous for its simplicity and clarity, is (49). Note that in this case the functions f_m (m ≥ 1) should be defined accordingly. It is simple algebra to show that condition (8) reduces to (52), which holds in general, and K/L can be arbitrarily large [4].
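Condition (49) is not legible in this extraction; for the reader's convenience, in the standard notation of [4], [14] the Newton–Kantorovich hypothesis reads, in affine invariant form:

```latex
% standard Newton--Kantorovich hypothesis (supplied for the reader, since
% (49) is not legible in this extraction); affine invariant form:
\left\| F'(x_0)^{-1}\left( F'(x) - F'(y) \right) \right\| \le K \,\| x - y \|
\quad (x, y \in D), \qquad
\eta \ge \left\| F'(x_0)^{-1} F(x_0) \right\|, \qquad
h_K = K \,\eta \le \frac{1}{2}.
```

This is the condition that the weaker hypothesis (52) improves upon.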
In view of (49), (52), and (53), we get: In the example that follows, we show that K/L can be arbitrarily large. Indeed: Example 7. Let X = Y = R, x_0 = 1, and define scalar functions F and G by (55), where c_i, i = 0, 1, 2, 3, are given parameters. Using (55), it can easily be seen that for c_3 large and c_2 sufficiently small, K/L can be arbitrarily large.
In the next examples, we show that (49) is violated while (52) holds.
Example 8. Let X = Y = R^2, equipped with the max-norm, and consider equation (56). The Fréchet derivative of the operator F is given by: Using the hypotheses of Theorem 4, we get: Hence, there is no guarantee that Newton's method (2) converges to x^*. However, our condition (52) is true for all c ∈ I = [.450339002, 1/2]. Hence, the conclusions of our Theorem 4 apply to equation (56) for all c ∈ I.
q(s, t) u(t) dt + y(s) − θ.
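The Chandrasekhar-type integral equation mentioned in the abstract can be treated numerically. Since the specific equation is only partially legible here, the sketch below uses the classical Chandrasekhar H-equation x(s) = 1 + (c/2) x(s) ∫₀¹ s/(s + t) x(t) dt as a representative instance; the value of c, the midpoint-rule discretization, and the plain fixed-point iteration are all assumptions of this illustration.

```python
# discretize [0, 1] by the midpoint rule
N = 50
c = 0.475                                  # illustrative value inside I
nodes = [(i + 0.5) / N for i in range(N)]
w = 1.0 / N                                # midpoint-rule weight

def half_c_integral(x):
    # vector of (c/2) * \int_0^1 s_i / (s_i + t) x(t) dt, midpoint rule
    return [0.5 * c * sum(w * s / (s + t) * xt for t, xt in zip(nodes, x))
            for s in nodes]

x = [1.0] * N
for _ in range(100):
    # rearranged fixed-point form: x(s) = 1 / (1 - (c/2) \int s/(s+t) x(t) dt)
    x = [1.0 / (1.0 - Ki) for Ki in half_c_integral(x)]

# residual of the discretized equation at the computed solution
residual = max(abs(xi * (1.0 - Ki) - 1.0)
               for xi, Ki in zip(x, half_c_integral(x)))
```

The rearranged form x = 1/(1 − (c/2)Kx) is a contraction for the moderate values of c considered here, so plain iteration already drives the residual to machine precision; a Steffensen-type scheme would reach the same solution with fewer iterations.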
by the uniqueness hypothesis. It follows from (44) that lim_{m→∞} x_m = y^*. But we showed lim_{m→∞} x_m = x^*. Hence, we deduce x^* = y^*. That completes the proof of Theorem 4.