
Abstract

We study the conditions under which the well-known Aitken-Steffensen method for solving equations leads to monotonic sequences whose terms approximate (from the left and from the right) the root of an equation. The convergence order and efficiency index of this method are also studied in the general case and then in various particular cases.

Authors

Ion Păvăloiu
(Tiberiu Popoviciu Institute of Numerical Analysis)

Keywords

nonlinear equations in R; Aitken-Steffensen method; monotone sequences; iterative methods; convergence order.

PDF

Scanned paper (soon).

Latex version of the paper (soon).

Cite this paper as:

I. Păvăloiu, Approximation of the roots of equations by Aitken-Steffensen-type monotonic sequences, Calcolo, 32 (1995) nos. 1-2, pp. 69-82.

About this paper

Journal

Calcolo

Publisher Name

Springer

Print ISSN

0008-0624

Online ISSN

1126-5434

References

[1] M. Balázs, A bilateral approximating method for finding the real roots of real equations, Rev. Anal. Numér. Théor. Approx., 21 (1992) no. 2, 111–117.
[2] V. Casulli, D. Trigiante, The convergence order for iterative multipoint procedures, Calcolo, 13 (1977) no. 1, 25–44.
[3] A. M. Ostrowski, Solution of Equations and Systems of Equations, Academic Press, New York and London, 1960.
[4] I. Păvăloiu, On the monotonicity of the sequences of approximations obtained by Steffensen's method, Mathematica (Cluj), 35 (58) (1993) no. 1, 71–76.
[5] I. Păvăloiu, Bilateral approximations for the solutions of scalar equations, Rev. Anal. Numér. Théor. Approx., 23 (1994) no. 1, 95–100.
[6] J. F. Traub, Iterative Methods for the Solution of Equations, Prentice-Hall, Englewood Cliffs, N.J., 1964.

Paper (preprint) in HTML form

Approximation of the Roots of Equations by Aitken-Steffensen-Type Monotonic Sequences(¹)

I. Păvăloiu
Abstract.

The aim of this paper is to study the conditions under which the well-known Aitken-Steffensen method for solving equations leads to monotonic sequences whose terms approximate (from the left and from the right) the root of an equation. The convergence order and efficiency index of this method are also studied in the general case and then in various particular cases.

- Received: 27 March 1995.
This work has been supported by the Romanian Academy of Sciences.
Romanian Academy of Sciences, "T. Popoviciu" Institute of Numerical Analysis, P.O. Box 68, 3400 Cluj-Napoca 1, Romania.

1. Introduction

From a practical standpoint, in order to approximate the roots of equations it is advantageous to use methods which lead to monotonic sequences. In this paper we shall use a single iterative process to determine an increasing sequence (x_n)_{n≥0} and a decreasing one (y_n)_{n≥0}, both converging to a root x̄ of the equation f(x) = 0. Such a procedure has the advantage of allowing, at each iteration step, a check of the approximation error, i.e.

max{x̄ − x_n, y_n − x̄} ≤ y_n − x_n.

As is well known, such sequences can be generated by applying two methods simultaneously, e.g. Newton's method and the chord method. In the sequel we shall show that, under conditions similar to those imposed on the above-mentioned methods, the Aitken-Steffensen method generates two sequences which fulfil the above inequality.

In Section 2 we give sufficient conditions for the general method of Aitken-Steffensen ((3) below) to generate two monotonic sequences, both converging to the solution x̄ of equation (1) below. In this way some similar results obtained in [4] are completed, and the proofs given in [5] are simplified to a certain extent.

In Section 3 we study some particular cases of the method (3), namely the methods (7) and (9) below. In Section 4 we indicate a way to construct the auxiliary functions g1, g2, or g, required by (3) or (7), under relatively simple conditions such as the monotonicity and convexity of the function f. In Section 5 the convergence orders and the efficiency indices of the methods (3) and (7) are studied, concluding that the method (7) has the higher efficiency index. This index is the same as that of Newton's method but, under the given conditions, the methods (7), (3) and (9) provide in addition bilateral approximations for the root of the equation (1).

So, let I=[a,b],a<b, be an interval of the real axis, and consider the equation

(1) f(x)=0

where f : I → ℝ. Besides (1), consider two more equations

(2) x − g1(x) = 0,
    x − g2(x) = 0,

where g1, g2 : I → ℝ.

The Aitken-Steffensen method consists in constructing the sequences (g2(g1(x_n)))_{n≥0} and (g1(x_n))_{n≥0} by means of the following iterative process

(3) x_{n+1} = g1(x_n) − f(g1(x_n)) / [g1(x_n), g2(g1(x_n)); f], n = 0, 1, …, x_0 ∈ I,

where [u, v; f] stands for the first-order divided difference of f at the points u, v ∈ I. We shall denote by [u, v, w; f] the second-order divided difference of f at u, v, w ∈ I.

As is well known [6, pp. 268–269], and as will also be shown in Section 5, the convergence order of the sequence (x_n)_{n≥0} generated by (3) is at least 2, but it generally depends on the convergence orders of the sequences (y_n)_{n≥0} and (z_n)_{n≥0} generated by y_{n+1} = g1(y_n), y_0 ∈ I; z_{n+1} = g2(z_n), z_0 ∈ I; n = 0, 1, …; [2], [5].

Another advantage of the iteration method (3) is the following: if equation (1) is given, then (as we shall see in Section 4) we have at our disposal many possibilities to construct the functions g1 and g2 from equations (2). On the other hand, hypotheses concerning the differentiability of f on the whole interval I are not needed.
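To make the procedure concrete, iteration (3) can be sketched in a few lines. This is a minimal illustration, not the paper's own program: f, g1, g2, the tolerance and the iteration cap are placeholders, and the stopping rule uses the width of the bracket formed by g1(x_n) and g2(g1(x_n)):

```python
def aitken_steffensen(f, g1, g2, x0, tol=1e-12, max_iter=50):
    """General Aitken-Steffensen iteration (3).

    Assumes g1 is increasing, g2 is decreasing, and the pair
    (g1(x), g2(g1(x))) brackets the root, as in Theorem 2.1."""
    x = x0
    for _ in range(max_iter):
        u = g1(x)                     # left (lower) approximation
        v = g2(u)                     # right (upper) approximation
        if abs(v - u) < tol:          # bracket width bounds the error
            return u
        # x_{n+1} = u - f(u) / [u, v; f]
        x = u - f(u) * (u - v) / (f(u) - f(v))
    return x
```

With the data of Example 1 in Section 6 (f(x) = x − 2 arctan x on [3/2, 3]) this converges to x̄ ≈ 2.33112237 in a few steps.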

If g : I → ℝ is a function, we shall adopt the following definitions concerning the monotonicity and convexity of g:

Definition 1.1.

The function g : I → ℝ is said to be increasing (nondecreasing; decreasing; nonincreasing) on I if for every x, y ∈ I the relations [x, y; g] > 0 (≥ 0; < 0; ≤ 0) hold, respectively.

Definition 1.2.

The function g : I → ℝ is convex (nonconcave; concave; nonconvex) on I if for every x, y, z ∈ I the relations [x, y, z; g] > 0 (≥ 0; < 0; ≤ 0) hold, respectively.

2. Convergence of the Aitken-Steffensen method

We shall adopt the following hypotheses concerning the functions f, g1 and g2:

  • (a)

    f,g1,g2 are continuous on I;

  • (b)

    g1 is increasing on I;

  • (c)

    g2 is decreasing on I;

  • (d)

equations (1) and (2) have only one common root x̄ ∈ ]a, b[;

  • (e)

for every x, y ∈ I, the relation 0 < [x, y; g1] ≤ 1 holds.

Concerning the problem stated in Section 1, the following theorem holds:

Theorem 2.1.

Suppose that f,g1 and g2 fulfil conditions (a)–(e) and, in addition,

  • (i1)

f is increasing and convex on I, and there exists f′(x̄);

  • (ii1)

there exists x_0 ∈ I such that f(x_0) < 0 and g2(g1(x_0)) ∈ I.

Then the sequences (x_n)_{n≥0}, (g1(x_n))_{n≥0}, (g2(g1(x_n)))_{n≥0} generated by (3), where x_0 fulfils condition (ii1), have the following properties:

  • (j1)

the sequences (x_n)_{n≥0} and (g1(x_n))_{n≥0} are increasing and bounded;

  • (jj1)

the sequence (g2(g1(x_n)))_{n≥0} is decreasing and bounded;

  • (jjj1)

lim x_n = lim g1(x_n) = lim g2(g1(x_n)) = x̄;

  • (jv1)

x_n ≤ g1(x_n) ≤ x_{n+1} ≤ x̄ ≤ g2(g1(x_n)), n = 0, 1, …,

    and max{x̄ − x_{n+1}, g2(g1(x_n)) − x̄} ≤ g2(g1(x_n)) − x_{n+1}.

Proof.

Since f is increasing on I and f(x_0) < 0, it follows that x_0 < x̄. For x < y, by (e) we get g1(y) − g1(x) ≤ y − x which, for y = x̄, leads to x − g1(x) ≤ 0 if x < x̄ and, analogously, x − g1(x) ≥ 0 if x > x̄. By (b) and x_0 < x̄ we obtain g1(x_0) < g1(x̄), i.e. g1(x_0) < x̄, while from the above inequalities it follows that x_0 ≤ g1(x_0). By (c) and g1(x_0) < x̄ we get g2(g1(x_0)) > g2(x̄), that is, g2(g1(x_0)) > x̄. By (i1) and g1(x_0) < x̄ we obtain f(g1(x_0)) < 0 which, together with [g1(x_0), g2(g1(x_0)); f] > 0 and (3), shows that x_1 > g1(x_0).

It is easy to see that the following identities

(4) g1(x) − f(g1(x)) / [g1(x), g2(g1(x)); f] = g2(g1(x)) − f(g2(g1(x))) / [g1(x), g2(g1(x)); f]
(5) f(z) = f(x) + [x, y; f](z − x) + [x, y, z; f](z − x)(z − y)

hold for every x,y,zI.

Since g2(g1(x_0)) > x̄, it follows that f(g2(g1(x_0))) > 0 and, using (4) and (3), we get x_1 < g2(g1(x_0)). If we put z = x_1, x = g1(x_0), y = g2(g1(x_0)) in (5) and take into account (3), we get

f(x_1) = [g1(x_0), g2(g1(x_0)), x_1; f] (x_1 − g1(x_0)) (x_1 − g2(g1(x_0))),

from which, considering the convexity of f and the inequalities x_1 − g1(x_0) > 0 and x_1 − g2(g1(x_0)) < 0, one obtains f(x_1) < 0, that is, x_1 < x̄.

In this way we have obtained

(2.2) x_0 ≤ g1(x_0) < x_1 < x̄ < g2(g1(x_0)).

In order that the above reasoning may be repeated, we still have to show that x_1 verifies the last condition of (ii1), namely g2(g1(x_1)) ∈ I. Indeed, from x_0 < x_1 and (b) it follows that g1(x_0) < g1(x_1), which, by (c), leads to g2(g1(x_1)) < g2(g1(x_0)). It remains to show that g2(g1(x_1)) > x̄. Suppose that g2(g1(x_1)) ≤ x̄, that is, g2(g1(x_1)) ≤ g2(x̄); since g2 is decreasing, this implies g1(x_1) ≥ x̄, which contradicts (2.2) (g1 being increasing, x_1 < x̄ gives g1(x_1) < x̄).

Consider x_n ∈ I for which f(x_n) < 0 and g2(g1(x_n)) ∈ I, n ∈ ℕ. Repeating the above reasoning, we obtain the inequalities

(6) x_n ≤ g1(x_n) < x_{n+1} < x̄ < g2(g1(x_n)), n = 0, 1, …

In this way conclusions (j1), (jj1) and (jv1) of Theorem 2.1 are proved. To prove (jjj1), denote l = lim x_n, l_1 = lim g1(x_n), l_2 = lim g2(g1(x_n)). We shall show that l = l_1 = l_2 = x̄.

By (6) and (a), we deduce:

l ≤ g1(l) ≤ l ≤ x̄ ≤ g2(g1(l));

but l_1 = g1(l), therefore l = l_1, i.e. g1(l) = l. From this relation and (3), using the fact that f′(x̄) exists and f is increasing, we get f(l) = 0, that is, l = x̄. Finally, since x̄ is the only common root of equations (1) and (2), we obtain l_2 = g2(g1(l)) = g2(x̄) = x̄. ∎

It can be shown analogously that the following theorems hold, too:

Theorem 2.2.

Suppose that f,g1,g2 fulfil conditions (a)–(e), and in addition

  • (i2)

f is increasing and concave on I, and there exists f′(x̄);

  • (ii2)

there exists x_0 ∈ I for which f(x_0) > 0 and g2(g1(x_0)) ∈ I.

Then the sequences (x_n)_{n≥0}, (g1(x_n))_{n≥0}, (g2(g1(x_n)))_{n≥0} generated by (3) have the following properties:

  • (j2)

(x_n)_{n≥0}, (g1(x_n))_{n≥0} are decreasing and bounded;

  • (jj2)

(g2(g1(x_n)))_{n≥0} is increasing and bounded;

  • (jjj2)

lim x_n = lim g1(x_n) = lim g2(g1(x_n)) = x̄, and the following relations hold

    g2(g1(x_n)) < x̄ < x_{n+1} < g1(x_n) ≤ x_n, n = 0, 1, …,
    max{x̄ − g2(g1(x_n)), x_{n+1} − x̄} ≤ x_{n+1} − g2(g1(x_n)).
Theorem 2.3.

Suppose that f,g1,g2 fulfil conditions (a)–(e) and in addition

  • (i3)

f is decreasing and convex on I, and there exists f′(x̄);

  • (ii3)

there exists x_0 ∈ I for which f(x_0) < 0 and g2(g1(x_0)) ∈ I.

Then the sequences (x_n)_{n≥0}, (g1(x_n))_{n≥0}, (g2(g1(x_n)))_{n≥0} generated by (3) fulfil the conclusions of Theorem 2.2.

Theorem 2.4.

Suppose that f,g1,g2 fulfil conditions (a)–(e) and in addition

  • (i4)

f is decreasing and concave on I, and there exists f′(x̄);

  • (ii4)

there exists x_0 ∈ I for which f(x_0) > 0 and g2(g1(x_0)) ∈ I.

Then the sequences (x_n)_{n≥0}, (g1(x_n))_{n≥0}, (g2(g1(x_n)))_{n≥0} generated by (3) fulfil the conclusions of Theorem 2.1.

3. Particular cases

An interesting particular case of the procedure (3) is obtained by taking g1(x) = x for every x ∈ I. In this way one obtains Steffensen's method, namely

(7) x_{n+1} = x_n − f(x_n) / [x_n, g(x_n); f], n = 0, 1, …, x_0 ∈ I,

where g stands for g2.

One easily observes that in this case hypotheses (a), (b), (d) and (e) are automatically fulfilled by g1. For the study of the convergence of the sequence (x_n)_{n≥0} generated by (7), we adopt the following hypotheses:

  • (a1)

    the functions f and g are continuous on I;

  • (b1)

    the function g is decreasing on I;

  • (c1)

equations (1) and x = g(x) have only one common root x̄ ∈ ]a, b[.
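Written out, procedure (7) is the following short loop (a sketch under the hypotheses above; f, g, x0 and the tolerance are placeholders):

```python
def steffensen(f, g, x0, tol=1e-12, max_iter=50):
    """Steffensen iteration (7): x_{n+1} = x_n - f(x_n) / [x_n, g(x_n); f]."""
    x = x0
    for _ in range(max_iter):
        y = g(x)              # auxiliary point; (x, y) brackets the root
        if abs(y - x) < tol:  # bracket width bounds the error
            return x
        x = x - f(x) * (x - y) / (f(x) - f(y))
    return x
```

Under Corollary 3.1 below, the iterates x_n increase toward x̄ while g(x_n) decreases toward it, so |g(x_n) − x_n| is a computable error bound.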

With these specifications, the theorems stated in Section 2 yield:

Corollary 3.1.

Suppose that f and g fulfil conditions (a1)–(c1) and, in addition, f is increasing and convex on I; there exists f′(x̄), and the initial point x_0 in (7) can be chosen such that f(x_0) < 0 and g(x_0) ∈ I. Then the sequence (x_n)_{n≥0} is increasing and bounded, while the sequence (g(x_n))_{n≥0} is decreasing and bounded. Moreover, the relations x̄ = lim x_n = lim g(x_n) and

x_n ≤ x̄ ≤ g(x_n);
max{x̄ − x_n, g(x_n) − x̄} ≤ g(x_n) − x_n

hold.

Corollary 3.2.

Suppose that f and g fulfil conditions (a1)–(c1) and, in addition, f is increasing and concave on I; there exists f′(x̄), and x_0 in (7) can be chosen such that f(x_0) > 0 and g(x_0) ∈ I. Then (x_n)_{n≥0} is decreasing and bounded, while (g(x_n))_{n≥0} is increasing and bounded. Moreover, the relations x̄ = lim x_n = lim g(x_n) and

g(x_n) ≤ x̄ ≤ x_n;
max{x̄ − g(x_n), x_n − x̄} ≤ x_n − g(x_n)

hold.

Corollary 3.3.

Suppose that f and g fulfil conditions (a1)–(c1) and, in addition, f is decreasing and convex on I; there exists f′(x̄), and x_0 in (7) can be chosen such that f(x_0) < 0 and g(x_0) ∈ I. Then (x_n)_{n≥0} and (g(x_n))_{n≥0} verify the conclusions of Corollary 3.2.

Corollary 3.4.

Suppose that f and g fulfil conditions (a1)–(c1) and, in addition, f is decreasing and concave on I; there exists f′(x̄), and x_0 in (7) can be chosen such that f(x_0) > 0 and g(x_0) ∈ I. Then (x_n)_{n≥0} and (g(x_n))_{n≥0} verify the conclusions of Corollary 3.1.

Another interesting particular case is that in which f has the form:

(8) f(x) = x − g(x) = 0.

In this case the iterative method (7) will have the form

(9) x_{n+1} = x_n − (x_n − g(x_n))² / (g(g(x_n)) − 2g(x_n) + x_n), n = 0, 1, …, x_0 ∈ I,

whose convergence follows easily from Corollaries 3.1–3.4. The following results simplify the conditions imposed on g and x_0 in [1]:

Corollary 3.5.

Suppose that g is continuous, decreasing and convex on I; equation (8) has a root x̄ ∈ ]a, b[; there exists g′(x̄), and x_0 ∈ I can be chosen such that x_0 < g(x_0) and g(x_0) ∈ I. Then the sequences (x_n)_{n≥0} and (g(x_n))_{n≥0} verify the conclusions of Corollary 3.1.

Corollary 3.6.

Suppose that g is continuous, decreasing and concave on I; equation (8) has a root x̄ ∈ ]a, b[; there exists g′(x̄), and x_0 ∈ I can be chosen such that x_0 > g(x_0) and g(x_0) ∈ I. Then the sequences (x_n)_{n≥0} and (g(x_n))_{n≥0} verify the conclusions of Corollary 3.2.

The restriction imposed on f in Corollaries 3.3 and 3.4 (that f be decreasing) can no longer be met in this case: in those corollaries the condition that g be decreasing is essential, and it is easy to see that if g is decreasing then f = x − g is increasing.
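For the fixed-point form (8), iteration (9) needs only values of g. A sketch (g, x0 and the tolerance are placeholders; the loop also guards against a vanishing denominator):

```python
def aitken(g, x0, tol=1e-12, max_iter=50):
    """Iteration (9):
    x_{n+1} = x_n - (x_n - g(x_n))^2 / (g(g(x_n)) - 2 g(x_n) + x_n)."""
    x = x0
    for _ in range(max_iter):
        y = g(x)
        z = g(y)
        denom = z - 2.0 * y + x
        if abs(y - x) < tol or denom == 0.0:
            return x
        x = x - (x - y) ** 2 / denom
    return x
```

For instance, g(x) = exp(−x) is continuous, decreasing and convex, and with x0 = 0.5 one has x0 < g(x0), so Corollary 3.5 applies; the iteration converges to the fixed point x̄ ≈ 0.567143.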

4. Determination of auxiliary functions

In what follows we shall show that, under reasonable hypotheses concerning the monotonicity and convexity of f, we have at our disposal various ways to choose g1,g2 and g such that the hypotheses of the theorems and corollaries stated above are verified.

1. Let us admit that f : [a, b] → ℝ is differentiable on [a, b], and denote by f′(a) the right-hand derivative at a and by f′(b) the left-hand derivative at b; we also admit that f is increasing and convex on [a, b]. Suppose that the equation f(x) = 0 has a root x̄ ∈ ]a, b[. Then we choose g1(x) = x − f(x)/f′(b) and g2(x) = x − f(x)/f′(a).

Since f is convex, it follows that f′ is increasing on [a, b], hence 0 < f′(a) < f′(x) < f′(b) for every x ∈ ]a, b[. In this case we have g1′(x) = 1 − f′(x)/f′(b) > 0 for every x ∈ ]a, b[ and g2′(x) = 1 − f′(x)/f′(a) < 0 for every x ∈ ]a, b[. We then choose a subinterval [α, β] ⊂ ]a, b[ for which x̄ ∈ [α, β]. If we suppose in addition that there exists x_0 ∈ [α, β] for which f(x_0) < 0 and g2(g1(x_0)) ∈ [α, β], then the above constructed functions f, g1 and g2 verify the hypotheses of Theorem 2.1. It is easy to see that if we choose g1(x) = x − f(x)/λ_1 and g2(x) = x − f(x)/λ_2, with λ_1, λ_2 ∈ ℝ, λ_1 > f′(b) and 0 < λ_2 < f′(a), then g1 and g2 also verify the hypotheses of Theorem 2.1.
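In code, the case-1 construction amounts to dividing f by its extreme slopes. The following sketch rebuilds g1 and g2 of Example 1 in Section 6 (f, its derivative and the interval endpoints are the only inputs):

```python
import math

def auxiliary_case1(f, fprime, a, b):
    """Case 1 (f increasing and convex): divide by the steepest slope f'(b)
    so that g1 is increasing, and by the shallowest slope f'(a) so that
    g2 is decreasing."""
    g1 = lambda x: x - f(x) / fprime(b)
    g2 = lambda x: x - f(x) / fprime(a)
    return g1, g2

# Example 1: f(x) = x - 2*arctan(x), increasing and convex on [3/2, 3]
f = lambda x: x - 2.0 * math.atan(x)
fp = lambda x: 1.0 - 2.0 / (1.0 + x * x)
g1, g2 = auxiliary_case1(f, fp, 1.5, 3.0)
# f'(3) = 4/5 gives g1(x) = (10*arctan(x) - x)/4, and
# f'(3/2) = 5/13 gives g2(x) = (26*arctan(x) - 8*x)/5
```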

2. If f is differentiable, decreasing and convex on [a, b], then we may put g1(x) = x − f(x)/f′(a) and g2(x) = x − f(x)/f′(b) (here f′(a) is the steepest slope, so g1 is increasing and g2 decreasing). If there exists x_0 ∈ [α, β] such that f(x_0) < 0 and g2(g1(x_0)) ∈ [α, β], then the hypotheses of Theorem 2.3 are verified.

3. If f is differentiable, decreasing and concave on [a, b], then g1 and g2 can be chosen as in case 1, i.e. g1(x) = x − f(x)/f′(b) and g2(x) = x − f(x)/f′(a). If there exists x_0 ∈ [α, β] for which f(x_0) > 0 and g2(g1(x_0)) ∈ [α, β], then f, g1 and g2 fulfil the conditions of Theorem 2.4.

4. If f is differentiable, increasing and concave on [a, b], then we may put g1(x) = x − f(x)/f′(a) and g2(x) = x − f(x)/f′(b), as in case 2. If there exists x_0 ∈ [α, β] such that f(x_0) > 0 and g2(g1(x_0)) ∈ I, then f, g1 and g2 verify the hypotheses of Theorem 2.2.

5. The function g appearing in procedure (7) of Section 3 can be chosen in the following ways:

a) If f is differentiable, increasing and convex on [a, b], then we can choose g(x) = x − f(x)/f′(a); if x_0 verifies f(x_0) < 0 and g(x_0) ∈ [α, β], then the hypotheses of Corollary 3.1 are verified. If f is decreasing and convex, we choose instead g(x) = x − f(x)/f′(b), and if there exists x_0 ∈ [α, β] for which f(x_0) < 0 and g(x_0) ∈ [α, β], the hypotheses of Corollary 3.3 are verified.

b) If f is differentiable, decreasing and concave on [a, b], we choose g(x) = x − f(x)/f′(a); if there exists x_0 ∈ [α, β] such that f(x_0) > 0 and g(x_0) ∈ [α, β], then the hypotheses of Corollary 3.4 are verified. If f is increasing and concave, we choose g(x) = x − f(x)/f′(b), and if there exists x_0 ∈ [α, β] such that f(x_0) > 0 and g(x_0) ∈ [α, β], the hypotheses of Corollary 3.2 are verified.

5. Convergence order and efficiency index

To fix the ideas, we shall consider hereafter, besides (1), another equation, equivalent to (1), of the form

(10) x − h(x) = 0

where h : I → ℝ. We shall also consider a sequence (x_n)_{n≥0}, x_n ∈ I, which, together with h and f, verifies the properties:

  • (a)

h(x_n) ∈ I for every n;

  • (b)

(x_n)_{n≥0} and (h(x_n))_{n≥0} are convergent and lim x_n = lim h(x_n) = x̄, where x̄ is the root of equation (1);

  • (c)

[x, y; f] ≠ 0 for every x, y ∈ I;

  • (d)

    f is differentiable at x=x¯.

We shall adopt the following definition of the convergence order of the sequence (x_n)_{n≥0}:

Definition 5.1.

The sequence (x_n)_{n≥0} is said to have the convergence order p with respect to h if there exists

(11) α = lim ln|h(x_n) − x̄| / ln|x_n − x̄|

and α=p.

If x_{n+1} = h(x_n), n = 0, 1, …, the above definition reduces to the usual one [2], [3].

The following theorem holds:

Theorem 5.1.

If h, f and (x_n)_{n≥0} verify the properties (a)–(d), then a necessary and sufficient condition for (x_n)_{n≥0} to have the convergence order p, p > 0, with respect to h is that there exist

(12) β = lim ln|f(h(x_n))| / ln|f(x_n)|

and β=p.

Proof.

Supposing (11) or (12) to be true and taking into account (a)–(d), it follows

lim ln|h(x_n) − x̄| / ln|x_n − x̄| = lim ( ln|f(h(x_n))| − ln|[h(x_n), x̄; f]| ) / ( ln|f(x_n)| − ln|[x_n, x̄; f]| )
= lim [ ln|f(h(x_n))| / ln|f(x_n)| ] · [ 1 − ln|[h(x_n), x̄; f]| / ln|f(h(x_n))| ] / [ 1 − ln|[x_n, x̄; f]| / ln|f(x_n)| ] = lim ln|f(h(x_n))| / ln|f(x_n)|.

The equality of these two limits proves the validity of the theorem. ∎

Consider now the functions g1,g2 appearing in equations (2) and let the function h be given by

(13) h(x) = g1(x) − f(g1(x)) / [g1(x), g2(g1(x)); f].
Concerning the convergence order of the sequence (x_n)_{n≥0} generated by (3), the following theorem holds:

Theorem 5.2.

Suppose that the functions f, g1 and g2 and the initial point x_0 fulfil the conditions of Theorem 2.1. If there exists f″(x̄) and, in addition, the sequence (x_n)_{n≥0} has the convergence order p_1 with respect to g1, while (g1(x_n))_{n≥0} has the convergence order p_2 with respect to g2, then the sequence (x_n)_{n≥0} has the convergence order p_1(p_2 + 1) with respect to the function h given by (13).

Proof.

At first observe that, under the stated hypotheses, we may use (12) to determine the convergence order. The same hypotheses also lead to

(14) lim ln|f(g1(x_n))| / ln|f(x_n)| = p_1;
(15) lim ln|f(g2(g1(x_n)))| / ln|f(g1(x_n))| = p_2.

Using (4), (5) and the procedure (3), we obtain

f(x_{n+1}) = [g1(x_n), g2(g1(x_n)), x_{n+1}; f] · f(g1(x_n)) f(g2(g1(x_n))) / [g1(x_n), g2(g1(x_n)); f]²,

from which, by (14) and (15), there easily follows the equality

lim ln|f(h(x_n))| / ln|f(x_n)| = p_1(1 + p_2). ∎

Remark 5.1.

Theorem 5.2 remains valid if the hypotheses of Theorem 2.1 are replaced by those of Theorems 2.2, 2.3, or 2.4.

An analogous theorem holds in the case of the procedure (7), too.

Theorem 5.3.

If f and g verify the hypotheses of Corollary 3.1, if there exists f″(x̄), and if, in addition, the sequence (x_n)_{n≥0} generated by (7) has the convergence order p with respect to g, then this sequence has the convergence order p + 1 with respect to the function h given by

h(x) = x − f(x) / [x, g(x); f].
Remark 5.2.

Theorem 5.3 remains valid if the hypotheses of Corollary 3.1 are replaced by those of Corollaries 3.2, 3.3, or 3.4.

Remark 5.3.

Returning to the functions g1 and g2 determined in Section 4 (g1(x) = x − f(x)/A, g2(x) = x − f(x)/B, where A = f′(b), B = f′(a), or A = f′(a), B = f′(b), according to the case under consideration), it is easy to see that the convergence order of the sequence generated by (3) is equal to 2.

In the case of the procedure (7), for g(x) = x − f(x)/A or g(x) = x − f(x)/B (according to the considered situation), the convergence order is also equal to 2.
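The order stated in Remark 5.3 can be checked numerically through Theorem 5.1, by monitoring the ratios ln|f(x_{n+1})| / ln|f(x_n)| along iteration (3). The sketch below uses the equation of Example 1 in Section 6; in double precision only the first few ratios are meaningful, and they decrease toward the order:

```python
import math

f  = lambda x: x - 2.0 * math.atan(x)           # Example 1 of Section 6
g1 = lambda x: (10.0 * math.atan(x) - x) / 4.0
g2 = lambda x: (26.0 * math.atan(x) - 8.0 * x) / 5.0

x = 1.5
ratios = []
for _ in range(2):
    u = g1(x)
    v = g2(u)
    x_new = u - f(u) * (u - v) / (f(u) - f(v))  # iteration (3)
    # ratio ln|f(x_{n+1})| / ln|f(x_n)|, which tends to the order as n grows
    ratios.append(math.log(abs(f(x_new))) / math.log(abs(f(x))))
    x = x_new
```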

In the sequel we shall deal with the efficiency index of the iterative procedures studied along the previous sections.

According to A. M. Ostrowski's [3] definition of the efficiency index of an iterative procedure, and taking into account Theorems 5.1, 5.2 or 5.3 for the iterative procedures (3), (7), or (9), the efficiency index is expressed as

E = p^{1/m}.

Here p is the convergence order of the sequence generated by one of these procedures, while m stands for the number of function values to be computed at each iteration step.

Taking into consideration Remark 5.3, it follows that, choosing g1, g2, or g as in Section 4, the efficiency index of the procedure (7) is E = 2^{1/2}, while that of (9) is also E = 2^{1/2}.

From this standpoint, the procedure (7) has the efficiency index equal to that of Newton’s method.

Under the conditions of Theorem 5.2, for the generalized Aitken-Steffensen procedure (3) the efficiency index is given by E = (p_1(p_2 + 1))^{1/4}. Analogously, under the conditions of Theorem 5.3, the efficiency index of the method (7) is E = (p + 1)^{1/3}.
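As a small worked comparison of these indices (using E = p^{1/m}; the values p_1 = p_2 = 1 for procedure (3) are an illustrative assumption, corresponding to merely linear auxiliary iterations):

```python
# Efficiency index E = p**(1/m): p = convergence order,
# m = number of function values computed per iteration step.
newton  = 2.0 ** (1.0 / 2.0)  # p = 2, m = 2 (one value of f, one of f')
method7 = 2.0 ** (1.0 / 2.0)  # p = 2, m = 2 when g is built from f (Section 4)
method3 = 2.0 ** (1.0 / 4.0)  # p1 = p2 = 1: p1*(p2+1) = 2, but m = 4
```

So the simple Steffensen form (7) matches Newton's efficiency while still delivering the bilateral bounds.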

6. Numerical examples

1. Consider the equation

f(x) := x − 2 arctan x = 0,

for x ∈ [3/2, 3]. Since f is increasing and convex on [3/2, 3], we can choose g1 and g2 as in case 1 (Section 4). We obtain

g1(x) = (1/4)(10 arctan x − x);
g2(x) = (1/5)(26 arctan x − 8x).

Taking x_0 = 3/2, the functions f, g1 and g2 verify the hypotheses of Theorem 2.1 on the interval [3/2, 3]. Applying the procedure (3), we get the following results:

n    x_n                 g1(x_n)             g2(g1(x_n))         f(x_n)
0    1.50000000000000    2.08198430811832    2.50854785469606    −4.6·10^−1
1    2.32357265230323    2.33006829103803    2.33195667567199    −5.1·10^−3
2    2.33112222668589    2.33112235050042    2.33112238618252    −9.9·10^−8
3    2.33112237041442    2.33112237041442    2.33112238041442    −3.5·10^−17

2. Consider the equation

f(x):=xarcsinx12(x2+1)=0,

for x ∈ [−2, −1]. Observe that f is differentiable on [−2, −1], and the left-hand derivative of f at x = −1 is 3/2. The function f is increasing and convex on [−2, −1], and we may take g(x) = (1/6)(x + 5 arcsin x12(x2+1)). An elementary calculation shows that f(−2) < 0 and g(−2) ∈ [−2, −1]; therefore the hypotheses of Corollary 3.1 are verified. The procedure (7) leads to the following results:

n    x_n                   g(x_n)              f(x_n)
0    −2.000000000000000    −1.37420481033188    −7.85398163397448·10^−1
1    −1.406051288716128    −1.40401615840899    −7.50954227601746·10^−1
2    −1.404223647476550    −1.40422359726392    −2.44215636856504·10^−3
3    −1.404223602391970    −1.40422360239197    −6.02551550546058·10^−8
4    −1.404223602391970    −1.40422360239197    −3.71881345162528·10^−17

